
Appier highlights groundbreaking AI research with three papers accepted at NeurIPS and EMNLP

TAIPEI, TAIWAN – Media OutReach Newswire – 17 October 2024 – Appier, a software-as-a-service (SaaS) company leveraging artificial intelligence (AI) to drive business decision-making, is excited to announce that all three research papers from its AI Research Team have been accepted at two of the world’s most prestigious AI conferences, NeurIPS[1] and EMNLP[2]. This remarkable achievement highlights Appier’s advanced AI research capabilities, particularly in the development of Large Language Models (LLMs), and reinforces the company’s leadership in cutting-edge technology and innovation.

As part of its ongoing commitment to AI innovation and academic collaboration, Appier established a dedicated AI research team in February 2024 to further strengthen its technical capabilities. By presenting research at globally recognized academic forums, Appier continues to demonstrate its deep expertise. As one of the few Asia-based companies to have all of its submissions accepted at NeurIPS and EMNLP this year, Appier is earning well-deserved international recognition for its excellence and leadership in AI and Natural Language Processing (NLP).

These research findings will be integrated across Appier’s full product suite, including its advertising, personalization, and data cloud SaaS platforms. Examples of applications include creative generation and performance optimization in advertising, knowledge bots, real-time product advisors and e-commerce customer service, hyper-personalized marketing solutions, autonomous report generation for customer data platforms, and industry-specific model optimizations. These innovations align with Appier’s mission to turn AI into measurable ROI for its clients, driving tangible business growth.

Chih-Han Yu, CEO and co-founder of Appier, said, “AI has always been at the heart of Appier’s DNA, driving us to explore groundbreaking research in AI and LLMs, and their limitless potential in new frontiers. The acceptance of all three of our papers is a tremendous validation of the hard work and talent of our AI research team. With our strong R&D foundation, we are committed to accelerating data utilization and model optimization to unlock new business value and opportunities, bringing AI to the forefront of business success.”

NeurIPS and EMNLP are among the most prestigious academic conferences in the fields of AI and NLP, attracting leading experts and scholars from around the world. NeurIPS, often referred to as the “Olympics of AI,” has been held annually since 1987 and covers a broad range of topics, including neural networks, deep learning, and statistics. In 2024, NeurIPS received 15,600 submissions, with an acceptance rate of around 25.3% for its Datasets and Benchmarks Track. EMNLP, established in 1996, is a key conference in the NLP domain, focusing on technical breakthroughs and empirical research. This year, it received 6,105 submissions for its Main Track, with an acceptance rate of approximately 20.8%, while the Industry Track had an acceptance rate of 36.53%.

As Appier continues to lead in AI innovation, the company remains deeply invested in pioneering AI technologies and advancing LLM research. With AI constantly evolving, Appier is committed to collaborating with top academic experts and industry leaders to explore transformative technologies, delivering practical, cutting-edge applications that will transform digital advertising and marketing.

Appier is actively recruiting research scientists, engineers, and MarTech professionals to accelerate product innovation and development, addressing the growing business needs of our clients. We warmly invite talented candidates to join us in shaping the future of AI!

###

Appendix: Introduction to Accepted Papers

The first paper, “StreamBench: Towards Benchmarking Continuous Improvement of Language Agents,” has been accepted to the Datasets and Benchmarks Track of NeurIPS, the conference often regarded as the “Olympics of AI.” The paper introduces StreamBench, a pioneering benchmark designed to evaluate how LLM agents improve continuously over a sequence of inputs and feedback. While most benchmarks measure innate LLM capabilities, StreamBench targets ongoing improvement by simulating an online learning environment in which LLMs receive a stream of feedback and optimize their performance over time. The research proposes a simple yet effective benchmarking method, provides a comprehensive analysis of the factors behind successful streaming strategies, establishes effective baselines, and paves the way for more adaptive AI systems in dynamic, real-time scenarios.
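
For a concrete picture of this kind of setup, the following is a minimal sketch of an online input-feedback loop in the spirit of what StreamBench evaluates. The `agent`, `stream`, and `score` names are illustrative placeholders, not the benchmark's actual interface or Appier's implementation.

```python
# Illustrative sketch of a streaming evaluation loop: the agent acts on each
# incoming input, receives feedback, and is allowed to learn from it before
# the next input arrives. All interfaces below are hypothetical.
from typing import Any, Callable, Iterable, Tuple

def run_stream(
    agent,                              # hypothetical agent with predict()/update()
    stream: Iterable[Tuple[Any, Any]],  # sequence of (task_input, reference) pairs
    score: Callable[[Any, Any], float], # task-specific metric, e.g. exact match
) -> float:
    """Feed inputs one at a time, let the agent learn from feedback, track the average score."""
    total, count = 0.0, 0
    for task_input, reference in stream:
        prediction = agent.predict(task_input)           # act on the current input
        feedback = score(prediction, reference)          # environment returns a signal
        agent.update(task_input, prediction, feedback)   # agent improves from the feedback
        total += feedback
        count += 1
    return total / max(count, 1)                         # running performance over the stream
```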

The second paper, “I Need Help! Evaluating LLM’s Ability to Ask for Users’ Support: A Case Study on Text-to-SQL Generation,” has been accepted to the Main Track of the renowned EMNLP conference. The research examines LLMs’ ability to proactively seek user support to improve their performance, using text-to-SQL (Structured Query Language) generation as a case study. The Appier AI research team set out to understand the trade-off between the performance gains from asking additional questions and the burden those questions place on users. The study also explores whether LLMs can identify when they need user assistance and how different levels of available information affect their performance. Experimental results show that, without external feedback, many LLMs struggle to recognize when support is needed, underscoring the importance of external signals and offering valuable insights for future research on optimizing support-seeking strategies.
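
As a rough illustration of the support-seeking behavior studied here, the sketch below prompts a model to either return a SQL query or ask one clarifying question. The `call_llm` stub and the prompt wording are hypothetical stand-ins for illustration only, not the paper's protocol.

```python
# Illustrative sketch: ask an LLM to answer a text-to-SQL request, or to request
# user support when the schema and question are insufficient. call_llm() is a
# placeholder for any chat-completion client.
import json

ASK_OR_ANSWER_PROMPT = """You translate questions into SQL for the schema below.
If the schema and question give you enough information, reply with
{{"action": "answer", "sql": "<query>"}}.
If you are missing information, reply with
{{"action": "ask", "question": "<one clarifying question for the user>"}}.

Schema:
{schema}

User question:
{question}
"""

def call_llm(prompt: str) -> str:
    raise NotImplementedError("Plug in your preferred LLM client here.")

def answer_or_ask(schema: str, question: str) -> dict:
    """Return either a SQL answer or a clarification request, as a parsed dict."""
    raw = call_llm(ASK_OR_ANSWER_PROMPT.format(schema=schema, question=question))
    return json.loads(raw)  # assumes the model followed the JSON instruction
```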

The third paper, “Let Me Speak Freely? A Study on the Impact of Format Restrictions on Performance of Large Language Models,” has been accepted to the Industry Track of EMNLP 2024. The research compares structured generation, in which output is confined to standardized formats such as JSON or XML, with freeform generation, to assess how such constraints affect LLM performance[3], particularly in reasoning and domain-knowledge comprehension. Through extensive evaluations, the study reveals a surprising insight: strict format constraints significantly impair LLMs’ reasoning capabilities, highlighting a trade-off between easy information extraction through structured output and the quality of the models’ reasoning.
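
To make the comparison concrete, the sketch below contrasts the two prompting regimes at issue: a strict JSON-only instruction versus freeform reasoning followed by a final answer. The prompts and the `call_llm` stub are illustrative assumptions, not the paper's exact experimental setup.

```python
# Illustrative sketch of the two prompting regimes compared in the paper:
# strict structured output versus freeform reasoning. call_llm() is a
# placeholder for any LLM client.
QUESTION = "A store sells pens at $2 each. How much do 7 pens cost?"

STRICT_JSON_PROMPT = (
    "Answer the question. Respond with JSON only, in the exact form "
    '{"answer": <number>} and nothing else.\n\nQuestion: ' + QUESTION
)

FREEFORM_PROMPT = (
    "Answer the question. Think through the problem step by step, then give "
    "the final number on the last line.\n\nQuestion: " + QUESTION
)

def call_llm(prompt: str) -> str:
    raise NotImplementedError("Plug in your preferred LLM client here.")

# Running both prompts over a benchmark of such questions and comparing accuracy
# is the kind of evaluation the paper reports; its finding is that the strict
# format can hurt reasoning-heavy tasks.
```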


[1] NeurIPS (Conference on Neural Information Processing Systems)

[2] EMNLP (Empirical Methods in Natural Language Processing)

[3] In practical applications, standardized formats (such as JSON or XML) are widely used for extracting key output information from LLMs.

Hashtag: #appier #ai #neurips #emnlp


The issuer is solely responsible for the content of this announcement.

About Appier

Appier is a software-as-a-service (SaaS) company that uses artificial intelligence (AI) to power business decision-making. Appier was founded in 2012 with a vision of democratizing AI, and its mission is turning AI into ROI by making software intelligent. Appier now has 17 offices across APAC, Europe, and the U.S., and is listed on the Tokyo Stock Exchange (ticker number: 4180). Visit the Appier website for more information.
