ServiceNow Research

Research Services

Montreal, Quebec 43,748 followers

Unlock work experiences of the future. Follow ServiceNow Research as we advance the state-of-the-art in Enterprise AI.

About us

ServiceNow (NYSE: NOW) makes the world work better for everyone. Our cloud-based platform and solutions help digitize and unify organizations so that they can find smarter, faster, better ways to make work flow. So employees and customers can be more connected, more innovative, and more agile. And we can all create the future we imagine. The world works with ServiceNow. For more information, visit www.servicenow.com.

ServiceNow Research, part of the Advanced Technology Group at ServiceNow, advances the state of the art in Enterprise AI. In equal measure, we innovate, research, experiment, and mature AI technologies that create compelling user experiences of the future so that every ServiceNow user benefits from AI. We believe AI should be built responsibly, without compromising fairness, ethics, accountability, and transparency.

ServiceNow Research programs drive innovation in Low Data Learning, Human Decision Support, and Human-Machine Interaction Through Language. Low Data Learning studies machine learning methods that adapt efficiently to varied and changing datasets, across downstream tasks such as language understanding, computer vision, robotic automation, and learning workflows. Human Decision Support aims to assist decision-makers and increase customer productivity by 1) directing requests to the right sequence of decision-makers, 2) presenting appropriate information to aid decisions, and 3) suggesting possible courses of action. Human-Machine Interaction Through Language advances AI technology to enable the next generation of Language User Interfaces, with a focus on natural conversational human-computer interaction and AI-assisted programming. The AI Trust and Governance Lab guides ServiceNow and its customers in their AI strategy and deployment via governance frameworks and applied research in trustworthiness.

We're hiring!

Website
https://www.servicenow.com/research/
Industry
Research Services
Company size
10,001+ employees
Headquarters
Montreal, Quebec
Type
Public Company
Specialties
Artificial Intelligence, Machine Learning, Software, Operations Research, Natural Language Processing, Neural Network, Research, Deep Learning, Computer Vision, Climate Change, Trustworthy AI, Responsible AI, AI for Good, Natural Language Understanding, Human Decision Support, Computer Science, Fundamental Research, Applied Research, Research Transfer, Data Science, Reinforcement Learning, and Research Collaboration

Updates

  • ServiceNow Research

    Thrilled to announce that ServiceNow has won VentureBeat's AI Innovation Award for best enterprise software implementation of GenAI! This award recognizes our work with Hugging Face and the BigCode community on StarCoder, a unique family of code LLMs built following open scientific development principles. StarCoder empowers organizations to customize LLMs for their specific needs, fostering a truly collaborative and open AI ecosystem. Huge congrats to the Hugging Face and ServiceNow Research teams for their collaboration and stewardship of BigCode. We look forward to many more exciting collaborations that push the boundaries of open innovation and open-source AI. #AI #OpenSource #GenAI #StarCoder #BigCode #ServiceNow #HuggingFace

  • ServiceNow Research reposted this

    View profile for Sathwik Tejaswi Madhusudan

    Architect and Principal Research Scientist @ ServiceNow AI

    M2Lingual 🔈 🔥 New Dataset Alert ❗❗❗ ⭐ MULTILINGUAL + MULTI-TURN + SYNTHETIC + TAXONOMY-GUIDED + EVOLS ⭐
    🥇 182K multilingual instructions
    🗣 71 languages
    🌐 37 low-resource languages
    🤹 19 diverse multilingual NLP tasks
    🏆 Improves performance by up to 30% on multilingual tasks
    🔍 Demonstrates that multilingual SFT improves language abilities significantly
    Evol mutations to generate more IFT pairs are guided by:
    - An NLP task-specific evol taxonomy.
    - Reasoning, slang, dialect, transliteration, etc., for general instructions.
    - A multi-turn evol taxonomy for generating different types of conversational follow-ups: {follow-up, refinement, expansion, recollection}. 🔄
    For more, check out our paper below! Authors: Rishabh Maheshwary, Vikas Yadav, PhD, Hoang Nguyen, Khyati Mahajan, Sathwik Tejaswi Madhusudan. 🌟 We are also actively hiring if you want to join us on our journey 🚀 Dataset: https://lnkd.in/e5E-VrN8 #genai #LLMs #research #machinelearning #deeplearning #nlp #huggingface #dataset #multilingual #sft #instructionfollowing #ai #chatgpt #gpt4 #servicenow #hiring #jobs

  • ServiceNow Research reposted this

    View profile for Sai Rajeswar

    Senior Research Scientist at ServiceNow Research

    🚨 It is time for foundation models for Embodied AI. Excited to introduce GenRL, an AI agent that learns multimodal foundation world models 🌍 By connecting the multimodal knowledge of foundation models with the embodied knowledge of world models for RL, GenRL turns vision and language prompts into actions! We release the paper, code, datasets, models, and a Gradio interactive demo. Visit the project website to learn more: https://lnkd.in/e8TenzbP Pietro Mazzaglia, Aaron Courville, Tim Verbelen, ServiceNow Research, Mila - Quebec Artificial Intelligence Institute. Bonus: we also tested GenRL on the Minecraft learning environment. The agent can identify biomes easily but struggles with precise tasks, e.g. crafting objects. In the future, we will extend GenRL to more complex Minecraft tasks, e.g. using MineCLIP from Jim Fan.

  • ServiceNow Research

    📢 New paper out! We release RepLiQA: a Q&A dataset designed to evaluate language models in a truly unseen context. RepLiQA covers made-up entities so that knowledge obtained during training cannot be reused to answer its questions. #LLM #Evaluation @ServiceNow @ServiceNowRSRCH

    🚀 RepLiQA comprises 90k question-answer pairs and 18k reference documents across 17 categories. The data is partitioned into 5 splits to limit the exposure of our documents and annotations to potential scraping. We've just released the first split!

    🔍 Current evaluation benchmarks for LLMs may be compromised by data leakage into training sets, or because they cover well-documented facts. Evaluation can thus be confounded by knowledge from pre-training, posing challenges in assessing performance. #RepLiQA #DataIntegrity

    🔎 We conducted a large-scale set of benchmarking experiments with 18 state-of-the-art LLMs. Our study reveals insights into how LLMs rely on internal memory versus external prompts. We relied on @OpenRouterAI to easily evaluate multiple models. See https://lnkd.in/g5pkqvtx

    🔗 Access RepLiQA: available splits can be accessed via @huggingface at https://lnkd.in/gznfmHN8. Stay tuned for upcoming releases of new test splits every two months starting December 2024! 📆📥

    🔗 Find more details and results in our paper or blog post:
    - https://lnkd.in/gdHCT8Y4
    - https://lnkd.in/gbJZmfNq
    We also provide code for getting the data and running inference at:
    - https://lnkd.in/g5pkqvtx

    GitHub - ServiceNow/repliqa: A Question-Answering Dataset on Unseen Content [Details Coming Soon!]

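    As a minimal sketch of getting the data, assuming the Hugging Face dataset id is `ServiceNow/repliqa` (taken from the GitHub repo name above) and that the first released split is named `repliqa_0` (an assumption; verify the split names on the dataset card behind the shortened links):

    ```python
    # Hedged sketch: load the first released RepLiQA split from the
    # Hugging Face Hub. The dataset id matches the GitHub repo above;
    # the split name "repliqa_0" is an assumption to verify on the hub.
    from datasets import load_dataset

    repliqa = load_dataset("ServiceNow/repliqa", split="repliqa_0")

    print(len(repliqa))       # number of question-answer pairs in this split
    print(repliqa[0].keys())  # inspect the available fields
    ```

    Requires the `datasets` package and network access; later splits would be loaded the same way as they are released.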

  • ServiceNow Research reposted this

    View profile for Kevin Klyman

    AI Policy @ Stanford + Harvard

    🚨 Publication alert 🚨 Today we are releasing the next version of the Foundation Model Transparency Index! In addition to our paper, we are publishing transparency reports for 14 top AI companies.

    A bit of background: in October 2023, my team at the Stanford Institute for Human-Centered Artificial Intelligence (HAI) partnered with researchers at the Princeton Center for Information Technology Policy and the MIT Media Lab to assess the transparency of 10 major foundation model developers like OpenAI and Meta. We set out to score the transparency of these companies based on publicly available information about their most prominent AI models. The rubric was 100 transparency indicators covering the data, labor, and compute used to build foundation models, the risks and capabilities of these models, and their downstream impact. We found a fundamental lack of transparency among companies that build foundation models: companies scored just 37 points out of 100 on average, with the top score barely eclipsing 50. They disclosed little if any information about the data used to build their AI models or their real-world impact.

    📜 We have just published a follow-up study six months on. For this version of the Foundation Model Transparency Index, we reached out to companies and asked them to prepare reports disclosing their practices against our 100 transparency indicators. Fourteen companies agreed, and today we are making these transparency reports publicly available, along with a paper describing our findings on how transparency has changed in the foundation model ecosystem. These reports include information the developers had not made public until we began this follow-up study; on average, each developer shared information related to 16 indicators that had not previously been public.

    📊 We find that while there is substantial room for improvement, transparency has increased in some areas. The average score rose from 37 to 58 out of 100, with improved transparency around risks and how companies enforce their policies to prevent such risks. Companies scored points on just 17% of the compute-related indicators in October 2023, whereas they now score 51%, reflecting the fact that several additional companies now disclose the amount of compute, hardware, and energy required to build their flagship foundation model. However, there is a systemic lack of transparency in some areas of the AI supply chain. Companies lack transparency on issues relating to data, like the copyright status of, and presence of PII in, the data used to build foundation models. Companies also do not share information about their models' downstream impact, such as the number of users and how their models are used.

    Let me know what you think of the paper! Thanks to my coauthors Rishi Bommasani, Sayash Kapoor, Shayne Longpre, Betty Xiong, Nestor Maslej, and Percy Liang.

  • ServiceNow Research reposted this

    Interested in AI trustworthiness and governance? Come join our amazing team at ServiceNow Research!

    View profile for Jason Stanley

    AI trust, safety, governance @ ServiceNow

    We are hiring two Applied Research Scientists in the Trust & Governance Applied Research Lab at ServiceNow Research. Come work with us! Link in the comment thread below. Today, our major focus is on model and system evaluation: what kinds of test & eval need to happen, at what cadence, by whom, and with what kind of governance system to ensure high-quality, trustworthy AI is being produced? Benchmarks are useful, but they're like the mini-tests of evaluation. How do we combine best-in-class, risk-aware, well-rounded approaches to evaluating LLMs and AI-powered applications? And, importantly for applied research, how do we prioritize and de-risk the most promising nascent approaches appearing on the horizon to further bolster that broad evaluation system? In my mind, this is one of the most important and complex challenges in AI today. Come help us tackle that challenge! Link to the job board in the comments. #artificialintelligence #trustworthyai #llm #aisafety

  • ServiceNow Research reposted this

    View profile for Arnab Mondal

    Researcher at Mila - Quebec Artificial Intelligence Institute

    I'll be presenting our paper on efficient dynamics modeling for RL and planning using Koopman theory next week at #ICLR2024 with Siba Smarak Panigrahi. Looking forward to the discussions. Many thanks to Mila - Quebec Artificial Intelligence Institute and ServiceNow Research. Details: https://lnkd.in/ee8J9Vj6

    Efficient Dynamics Modeling in Interactive Environments with Koopman Theory

    iclr.cc

  • ServiceNow Research reposted this

    View profile for Alexandre Lacoste

    Staff Research Scientist at ServiceNow Research. Remote Sensing, Climate Change, Causal Inference, Reinforcement Learning

    🌐 Next-Gen Web-Agent Internship 🚀 ServiceNow Research is offering a research internship focused on the development of web agents and UI assistants. This project aims to develop agents capable of accomplishing complex tasks over web UIs; specifically, the developed agents should boost 📈 performance on WorkArena and WebArena. A significant portion of our research agenda is dedicated to fine-tuning Large Language Models (LLMs) for improved comprehension of HTML-based UIs. Concurrently, we aim to address the ethical and cybersecurity implications of web agents. Lastly, multimodal agents that understand UIs through pixels, and reinforcement-learning-based tuning, are also part of the agenda. Candidates must be affiliated with a 🇨🇦 Canadian university to be eligible for Mitacs internships. 🔗 Apply using this form: https://bit.ly/3UQCk3X

