Last updated: April 2026
At Glaut, we believe transparency is a prerequisite for trust. That's why we have taken the initiative to ask - and answer - these 20 Questions ourselves. Our goal is to give researchers a clear, honest view of how our AI-based services operate, where their limits are, and how we ensure ethical, high-quality research at scale.
Glaut is an AI-native market research platform purpose-built for experienced researchers - specifically research professionals working at full-service market research agencies and in-house research teams at brands and organisations. We are not a general-purpose AI tool: every design decision is grounded in real research methodology and built to meet the standards of professionals who run research for a living.
To date, researchers using Glaut have conducted more than 200,000 AI-moderated interviews across hundreds of projects and markets worldwide. Our customers include leading research organisations such as IPSOS, Global Strategy Group, Augur, and Bthrough Research.
Our multidisciplinary team combines deep research expertise with specialised engineering capabilities in large language model (LLM) orchestration, natural language processing (NLP), and conversational AI. This combination - researchers and AI engineers working in the same product loop - is intentional: it ensures the platform speaks the language of a professional researcher, not just a data scientist.
Since inception, Glaut has been continuously improving its AI agents - including the moderator - using learnings from thousands of interviews run on the platform. This iterative, real-world feedback loop is the foundation of our agent quality.
Glaut has been recognised by the research industry for its methodological contribution: in 2025, we received the ESOMAR Breakthrough Research Methodology Award, the most significant independent validation of the AI-Moderated Interview (AIMI) methodology as a credible new approach to research.
Glaut holds ISO/IEC 27001 certification, independently audited and renewed annually, and is a proud ESOMAR Corporate Member.
AI addresses two structural constraints that have defined research tradeoffs for decades: the choice between scale and depth, and the cost of manual analysis.
Glaut's platform operates across two distinct but complementary capabilities:
AI-Moderated Interviews (AIMIs) replace closed-ended questionnaires with conversational, AI-led interviews at scale. Like a survey, AIMI reaches thousands of respondents through a single link and produces structured, coded, quantitative outputs - dashboards, CSV/SPSS exports, cross-tabs. Unlike a survey, it captures open-ended verbatim responses with dynamic, personalised follow-up probing. The result is a methodology that gives agency researchers and in-house teams quantitative efficiency alongside qualitative depth - within a single workflow, without the cost or time burden of a traditional hybrid design.
AI-Powered Analysis applies the same analytical engine to large volumes of open-ended data regardless of how it was collected - including data from traditional surveys, existing databases, or third-party datasets. Experienced researchers can use Glaut to apply automated thematic coding, entity recognition, sentiment analysis, and interpretative frameworks to thousands of verbatim responses - eliminating weeks of manual coding. This capability is particularly valuable for agency teams handling large-scale tracker studies, or in-house research functions processing CX open-ends or employee feedback at volume.
The evidence for these claims is not just internal. Independent research validates the AIMI approach:
On project speed: for a significant portion of projects run on Glaut, researchers complete both fieldwork and coded analysis within 24 hours - a turnaround that is structurally impossible with traditional qualitative workflows. Actual timelines depend on project complexity, sample size, and length of interview.
What has worked well
The core AI moderation capability - dynamic, personalised follow-up probing in real time, across 50+ languages - has proven highly effective in the field across hundreds of projects. Experienced researchers at agencies and in-house teams consistently find that AIMI delivers results comparable to qualitative depth studies at a fraction of the time and cost. Completion rates are strong, verbatim quality is high, and the automated coding pipeline produces analysis that researchers can work with immediately rather than spending days on manual processing.
The decision to build on a model-agnostic, multi-provider LLM architecture rather than a proprietary model has also proven correct. It gives researchers access to best-in-class AI performance per task without locking Glaut - or its customers - into a single provider's capability curve.
The Research on Research programme - our series of independent academic and industry studies evaluating AIMI against established methods - has worked well as a trust-building mechanism with professional buyers. Experienced researchers are rightly sceptical of vendor claims. Independent evidence from Curtin University, University of Mannheim, and Human Highway provides the empirical grounding that professional researchers require before adopting a new methodology.
What has worked less well, and why
The most persistent challenge is methodology positioning. AIMI is a genuinely new research methodology - not a survey, not a traditional qualitative study, and not simply a chatbot. Researchers trained in either paradigm initially struggle to place it correctly. Some attempt to replicate advanced survey logic (MaxDiff, conjoint, discrete choice) that AIMI is not designed for; others compare it directly to human-moderated IDIs and are disappointed it does not replicate every dimension of a trained human facilitator. Both comparisons miss the point: AIMI is a distinct methodology that sits between the two, combining quantitative scalability with conversational depth.
UX for researcher-facing configuration - designing interview flows, setting analytical objectives, reviewing coded outputs - has required significant ongoing investment. Experienced researchers have high expectations for professional tooling, and meeting that standard is a continuous area of development.
Glaut is built for experienced researchers at market research agencies and in-house research teams who need to collect and analyse large volumes of open-ended data efficiently and rigorously.
The platform's AI does three things:
Conducts interviews at scale. A researcher designs the interview - the questions, the topics, the flow, any closed-ended components - and Glaut's AI moderator conducts simultaneous interviews with all respondents. It listens to each answer and generates an appropriate follow-up question in real time, producing the personalised probing depth of a 1-to-1 interview at survey scale. Researchers retain full control over the interview design; the AI operates strictly within the structure they define.
Analyses open-ended data. Once fieldwork is complete - or when a researcher uploads existing open-ended data from any source - Glaut's analysis agents automatically code responses into themes and sub-codes, identify entities (brands, products, competitors), apply sentiment and emotion analysis, and execute interpretative frameworks defined by the researcher (e.g., inferred purchase intent, churn likelihood, or NPS without directly asking). This pipeline applies equally to AIMI data and to large volumes of open-ended responses from traditional surveys, trackers, or CX programmes - making Glaut a standalone analysis tool for agency teams managing ongoing studies at scale.
Generates structured outputs. All outputs are immediately usable: an interactive dashboard for exploration and segmentation, a CSV or SPSS file for statistical analysis in external tools, and a modular executive report built around the researcher's stated objectives. Every output is fully editable and remains under the researcher's review and approval before any client delivery.
Glaut's platform is built on state-of-the-art LLMs from leading AI providers, accessed through enterprise-grade APIs with the highest available security and data privacy guarantees. Specifically, we currently use OpenAI models via Microsoft Azure, Google Gemini models via Google Cloud Platform (GCP), and Anthropic models via AWS.
Glaut is deliberately model-agnostic. We evaluate and deploy the best-performing model for each specific task - moderation, coding, analysis, reporting - and can switch or combine models as the landscape evolves. This architecture protects customers from single-provider risk and ensures access to the most capable available technology at all times.
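As a simplified illustration of what task-level model routing can look like in practice, the sketch below shows one possible shape of such a routing layer. The task names, provider labels, and model identifiers are hypothetical placeholders, not Glaut's actual configuration.

```python
# Illustrative sketch only: task-based model routing in a model-agnostic architecture.
# Task names, provider labels, and model identifiers are hypothetical placeholders,
# not Glaut's actual configuration.
from dataclasses import dataclass


@dataclass(frozen=True)
class ModelChoice:
    provider: str   # e.g. "azure-openai", "gcp-gemini", "aws-anthropic"
    model: str      # provider-specific model identifier


# One routing entry per task; swapping a provider is a configuration change, not a rewrite.
TASK_ROUTING: dict[str, ModelChoice] = {
    "moderation": ModelChoice("azure-openai", "placeholder-chat-model"),
    "coding":     ModelChoice("aws-anthropic", "placeholder-analysis-model"),
    "reporting":  ModelChoice("gcp-gemini", "placeholder-longform-model"),
}


def resolve_model(task: str) -> ModelChoice:
    """Return the currently preferred model for a given task."""
    try:
        return TASK_ROUTING[task]
    except KeyError:
        raise ValueError(f"No model configured for task '{task}'") from None
```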
All three providers operate under enterprise agreements that explicitly prohibit customer data from being used to train their foundation models. Their data policies are publicly documented on their respective security and compliance sites.
Glaut does not build proprietary foundation models. Our core IP lies in the orchestration layer: the prompt engineering, agent design, evaluation pipelines, and research methodology logic built on top of these models - refined continuously using learnings from thousands of interviews conducted on the platform.
Glaut's AI agents operate through a layered architecture. A researcher-defined interview script or analytical objective provides the structure and goals. LLMs handle the conversational moderation and analysis. Glaut's orchestration layer applies proprietary prompting, context management, and output formatting to ensure results are research-grade rather than generic AI outputs.
For moderation: the AI interprets each respondent's answer in full context - drawing on the conversation history and the researcher's stated objectives - and generates an appropriate follow-up probe in real time. Independent research by Curtin University confirmed that this approach produces disclosure and trustworthiness outcomes statistically equivalent to trained human moderators, without increasing participant discomfort.
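In outline, each moderation turn combines the researcher's objective for the current question with the conversation so far and asks the underlying LLM for a single, non-leading follow-up. The sketch below is a minimal illustration under assumed names - `call_llm` stands in for whichever provider API the routing layer selects - and the prompt wording is illustrative, not Glaut's production prompt.

```python
# Minimal illustration of context-aware follow-up probing.
# `call_llm` is a placeholder for an enterprise LLM API call; the prompt wording
# is illustrative, not Glaut's production prompt.
def generate_follow_up(objective: str, history: list[dict], latest_answer: str,
                       call_llm) -> str:
    transcript = "\n".join(f"{turn['role']}: {turn['text']}" for turn in history)
    prompt = (
        "You are moderating a research interview.\n"
        f"Researcher objective for this question: {objective}\n"
        f"Conversation so far:\n{transcript}\n"
        f"Latest answer: {latest_answer}\n"
        "Ask one neutral, non-leading follow-up question that probes for detail "
        "already hinted at by the respondent. Do not introduce new topics."
    )
    return call_llm(prompt)
```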
For analysis: agents apply multi-level thematic coding (codes and sub-codes), entity recognition, and interpretative analysis. Researchers can use pre-built templates (sentiment, emotion, top-of-mind) or define fully custom analytical frameworks and codebooks. The same analytical pipeline applies to both AIMI data and large volumes of open-ended data from any source - enabling agency teams to analyse tracker verbatims, CX open-ends, or employee feedback datasets at scale without manual coding.
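As a rough sketch of codebook-constrained coding - again with `call_llm` as a placeholder for the routed LLM call, and a JSON output contract we assume purely for illustration - the key idea is that the model may only select from the researcher-defined codes, and anything unparseable is returned for human review rather than guessed:

```python
import json


# Illustrative sketch of codebook-constrained thematic coding; not Glaut's pipeline.
# `call_llm` is a placeholder for a routed LLM call returning text.
def code_verbatim(verbatim: str, codebook: dict[str, str], call_llm) -> list[str]:
    """Assign zero or more codes from a fixed, researcher-defined codebook."""
    options = "\n".join(f"- {code}: {definition}" for code, definition in codebook.items())
    prompt = (
        "Code the response below using ONLY the codes listed. "
        "Return a JSON array of code names; return [] if none apply.\n"
        f"Codebook:\n{options}\n"
        f"Response: {verbatim}"
    )
    raw = call_llm(prompt)
    try:
        selected = json.loads(raw)
    except json.JSONDecodeError:
        return []  # surfaced for researcher review rather than guessed
    if not isinstance(selected, list):
        return []
    return [code for code in selected if code in codebook]  # never invent codes
```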
On training data: Glaut uses only proprietary platform data - aggregated and anonymised, never customer project data - to improve its agents. No respondent data from client projects is ever used in model training, and no LLM provider trains their foundation models on customer data under our enterprise agreements.
On multilingual capability: all three LLM providers we use are trained on extensive non-English corpora, and Glaut has validated performance across 50+ languages and regional dialect variants through real fieldwork, including Spanish variants (Spain, Mexico, Colombia, Argentina) and English variants (US, UK, Australia, South Africa).
Glaut employs a multi-layer validation approach designed for professional research standards:
Agent evaluations are the primary systematic QA mechanism. These are AI-driven assessments that run continuously across moderation behaviour, coding consistency, and analysis accuracy - measuring response relevance, thematic coding fidelity, and adherence to researcher-defined objectives at a scale and precision that manual review alone cannot achieve.
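One concrete example of what such an evaluation can measure is coding fidelity against a researcher-approved gold set. The sketch below is illustrative only - the metric choice and any alerting threshold are our assumptions, not Glaut's internal evaluation suite.

```python
# Illustrative sketch of one evaluation in a continuous agent-eval suite:
# coding fidelity against a researcher-approved gold set. The metric choice and
# regression threshold are assumptions, not Glaut's internal targets.
def coding_fidelity(gold: list[set[str]], predicted: list[set[str]]) -> dict[str, float]:
    tp = sum(len(g & p) for g, p in zip(gold, predicted))
    fp = sum(len(p - g) for g, p in zip(gold, predicted))
    fn = sum(len(g - p) for g, p in zip(gold, predicted))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}

# An eval run would compare agent output against the gold set and alert
# if f1 drops below an agreed regression threshold.
```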
Human researcher oversight is the final gate at every stage. Experienced researchers review all coded outputs and reports in a fully editable interface before any client delivery. They can modify, merge, split, or reject any code or analytical output. Glaut explicitly does not produce final client deliverables autonomously - the researcher's professional judgment is always the deciding authority.
Bias and consistency controls include automated detection of inconsistent or contradictory respondent responses during fieldwork, flagged for researcher review. The analysis agents apply defined codebooks consistently across all responses, eliminating the inter-rater variability endemic to human coding at scale.
Hallucination risk: Glaut's analysis agents operate on real respondent verbatim - they interpret and code actual data rather than generating synthetic content. This substantially limits the hallucination risk inherent to open-ended generative AI applications. Where interpretative analysis is applied (e.g., inferred sentiment or purchase intent), outputs are clearly marked as AI-generated inferences and are subject to researcher review before use.
Fit-for-purpose assurance: Glaut's analysis is goal-based. Researchers define their research objectives and analytical lenses upfront; the system orients outputs to answer those specific questions. QA steps and best practices are documented in Glaut's internal project execution playbook, available to customers on request.
Glaut operates with full transparency about the boundaries of its methodology:
Methodology scope: AIMI is not appropriate for all research designs. Advanced quantitative techniques such as MaxDiff, conjoint analysis, or discrete choice experiments are outside its scope. We communicate this proactively, particularly with agency teams transitioning from survey-heavy workflows.
AI moderation vs. human moderation - a nuanced distinction: Independent biometric research by Curtin University confirmed that AI interviewers match human moderators on disclosure, trust, and comfort for most research contexts. However, the same study confirmed that human interviewers generate stronger emotional connection and rapport. Human moderation remains the appropriate choice for emotionally sensitive topics - grief research, clinical patient interviews, or any context where relational warmth is central to the research goal rather than incidental to it.
Highly specialised or technical content: LLMs can struggle with very niche domain vocabulary (advanced clinical pharmacology, specialist engineering). For such projects, researchers should provide domain-specific context in their interview scripts and apply additional scrutiny to analysis outputs.
Mitigation strategies: model-agnostic architecture enables deployment of the best available model per task; agent evaluation pipelines provide continuous systematic quality monitoring; human researcher review is always the final gate before client delivery; and Glaut's Customer Success team is available to support researchers on complex or sensitive projects.
Ethical responsibility is foundational to Glaut's design at every layer:
Regulatory compliance: Glaut is fully compliant with GDPR and CCPA. As a Data Processor, Glaut executes a Data Processing Agreement (DPA) with every customer before any project begins. Customers retain full ownership and control of their respondents' data.
No PII collection: Glaut does not collect any personally identifiable information from respondents. No names, emails, or identifying details are required or stored. Where customers need to track respondents for quota management, anonymised IDs can be passed via URL parameters - these are never exposed to Glaut as personal data.
No AI training on respondent data: zero interview data from client projects is used to train any AI model - either Glaut's agents or our LLM providers' foundation models. This is guaranteed under our enterprise API agreements with all three providers.
EU AI Act alignment: Based on the nature of Glaut's product - a decision-support tool used exclusively by professional researchers with full human oversight of study design, interpretation, and outputs - Glaut does not fall under the "high-risk AI system" categories defined by the EU AI Act. The platform is designed in line with the Act's principles for limited-risk AI systems: transparency, human oversight, auditability, and responsible use of AI-generated outputs. The EU AI Act does not currently provide for an official compliance certification; we monitor regulatory developments continuously and will align with any future harmonised standards as they become applicable.
Information security: ISO/IEC 27001 certified, independently audited annually. Controls include RBAC, MFA, TLS 1.2+ encryption for all data in transit, encrypted storage at rest, and EU-only server infrastructure.
Human oversight by design: every output - coded analysis, interpretative inference, executive report - is designed to be reviewed, edited, and validated by a professional researcher before reaching any end client or informing any decision.
Glaut operates on a principle of explicit AI disclosure at every level of the research process.
For respondents: the interview makes no attempt to simulate a human interviewer. Respondents hear a natural AI voice; there is no avatar, no visual character, and no video component. The AI nature of the interaction is not concealed. Research by Curtin University found this transparency did not reduce participant willingness to disclose or their comfort during the interview.
For researchers and buyers: every output produced by Glaut - transcripts, coded themes, interpretative analyses, report sections - is identified as AI-generated. The platform maintains a clear separation between raw respondent data and AI-processed outputs. Researchers understand these outputs are inputs to their professional judgment, not finished deliverables.
For end clients: it is the responsibility of the research agency or in-house team (as Data Controller) to disclose the methodologies and tools used in producing research outputs. Glaut provides full documentation of its AI-based methods to support this disclosure.
We publicly disclose the LLM providers powering our infrastructure (OpenAI via Azure, Gemini via GCP, Anthropic via AWS) and provide references to their data policies for independent review by buyers.
Glaut's ethical framework for AI behaviour is built around five operating principles, each with concrete implementation:
1. Human primacy. AI outputs are never final. Every coded output, interpretative inference, and report section is subject to researcher review and approval. The AI accelerates; the researcher decides.
2. Transparency over persuasion. The moderator AI is designed to probe for understanding, not to lead respondents. Interview structure is fully researcher-defined. The AI cannot introduce topics, framings, or angles not sanctioned by the researcher's brief.
3. No data exploitation. Respondent data is used exclusively for the project it was collected under. No aggregation, resale, or secondary use of respondent-level data occurs. This applies equally to AIMI data and to data uploaded for analysis from external sources.
4. Cultural and linguistic respect. Multilingual capability is not limited to translation. Glaut's agents are tested across regional dialect variants to avoid cultural flattening - a respondent answering in Colombian Spanish receives the same quality of probing as one answering in Castilian Spanish.
5. Scope honesty. We are explicit about what AIMI and AI analysis are not suited for. We do not oversell the methodology's scope to win projects that would produce inferior results. This includes proactively steering agency and in-house clients away from project designs where a different methodology would serve them better.
Human oversight is integrated at four distinct stages of every Glaut project:
Stage 1 - Study design. The researcher defines all interview questions, probing objectives, analytical frameworks, and output goals. The AI operates strictly within this structure. No autonomous topic generation occurs outside the researcher's defined scope.
Stage 2 - Fieldwork monitoring. During live data collection, researchers have real-time access to incoming interview data. Automated consistency checks flag anomalous respondent behaviour for human review.
Stage 3 - Analysis review. All AI-generated coded outputs - themes, sub-codes, entity tags, interpretative inferences - are presented to the researcher in a fully editable interface. Researchers can modify, merge, split, or reject any code before outputs are finalised.
A key feature of this stage is the flexibility in how analytical frameworks are constructed. Researchers can choose from three approaches: a fully human-engineered codebook (the researcher defines every code and sub-code before analysis runs); a fully AI-generated thematic framework (the system inductively builds themes from the verbatim); or a hybrid approach where human-defined ontologies are combined with AI-generated ones - allowing researchers to anchor the analysis around known constructs while still surfacing emergent themes the brief did not anticipate. This hybrid capability is particularly powerful for tracking studies, where stability of core codes matters alongside the ability to detect shifts in language and new concepts over time. A simplified illustration of how such a hybrid codebook can be structured is sketched after the four-stage overview below.
Stage 4 - Report sign-off. The Report Builder produces modular reports structured around researcher-defined goals. The researcher reviews, edits, and approves all content before any client delivery. Glaut does not produce final client artefacts autonomously.
This four-stage model aligns with the Human-in-the-Loop (HITL) principle defined by ESOMAR: humans are present throughout, retain override capability at every stage, and make all final decisions.
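To make the hybrid codebook idea from Stage 3 concrete, the sketch below shows one possible way to represent it - the field names and the promote step are hypothetical, not Glaut's schema. Human-defined core codes stay locked for wave-on-wave stability, while AI-proposed emergent codes sit in a separate pool until a researcher explicitly promotes them.

```python
# Illustrative data-structure sketch of a hybrid codebook (hypothetical field
# names, not Glaut's schema): human-defined codes are locked for wave-on-wave
# stability; AI-proposed emergent codes are kept separate until a researcher
# promotes them.
from dataclasses import dataclass, field


@dataclass
class Code:
    name: str
    definition: str
    origin: str           # "human" or "ai"
    locked: bool = False  # locked codes cannot be renamed or merged by the AI


@dataclass
class HybridCodebook:
    core: list[Code] = field(default_factory=list)      # researcher-defined, locked
    emergent: list[Code] = field(default_factory=list)  # AI-proposed, pending review

    def promote(self, code_name: str) -> None:
        """Researcher accepts an emergent theme into the stable core."""
        for code in list(self.emergent):
            if code.name == code_name:
                code.locked = True
                self.core.append(code)
                self.emergent.remove(code)
```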
Data quality is managed at two levels:
Agent improvement data: Glaut uses only proprietary platform data - aggregated and anonymised - to improve its agents. This corpus spans thousands of real interviews across diverse markets, industries, languages, and research objectives. Before use for agent improvement, data undergoes quality filtering to exclude low-quality completions. The multi-market, multilingual composition of this corpus means our agents are not optimised exclusively for English-language or Western-market patterns.
Client project data: Glaut's automated consistency checks during fieldwork detect and flag responses that are implausibly brief, internally contradictory, or behaviourally anomalous. These are surfaced to researchers for review before inclusion in analysis. The University of Mannheim study independently confirmed the effectiveness of these checks: in a controlled experiment, the AIMI condition produced zero gibberish responses, versus a 10% gibberish rate in the equivalent survey condition.
Respondent sampling and demographic representativeness are managed by the panel provider or distribution channel selected by the customer. Glaut supports quota enforcement through dynamic parameter-based ID tracking without requiring PII exposure.
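As a minimal illustration of PII-free ID tracking, a panel provider can append an opaque respondent ID and quota cell to the interview link. The parameter names and URL below are hypothetical examples, not Glaut's documented interface.

```python
# Illustrative sketch of PII-free quota tracking via URL parameters.
# Parameter names ("ext_id", "quota_cell") and the URL are hypothetical examples,
# not Glaut's documented parameters.
from urllib.parse import urlencode


def build_interview_link(base_url: str, panel_respondent_id: str, quota_cell: str) -> str:
    # The panel provider passes only an opaque, panel-side ID - no name or email.
    params = urlencode({"ext_id": panel_respondent_id, "quota_cell": quota_cell})
    return f"{base_url}?{params}"


# e.g. build_interview_link("https://example.com/interview/abc123", "R-98431", "f_25_34")
# -> "https://example.com/interview/abc123?ext_id=R-98431&quota_cell=f_25_34"
```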
For agent training data: exclusively Glaut proprietary data generated on the Glaut platform. No third-party datasets, scraped public data, or synthetic data are used. The lineage is: real interviews conducted on Glaut → aggregation and anonymisation → agent improvement pipelines. Documentation is available to enterprise customers on request.
For client project data: fully transparent. Data originates from respondents recruited by the customer via their chosen distribution channel. It is processed exclusively within Glaut's secure EU infrastructure. Data segregation is enforced at the Organisation level within the platform, with additional Workspace and Project-level access controls, ensuring no co-mingling across client accounts.
Available on request: sub-processor list, GDPR compliance summary, full Data Processing Agreement (DPA) template.
For respondent-facing privacy communication: Glaut can display its own privacy notice at the start of each interview, or link to the customer's privacy notice - ensuring transparency obligations toward respondents are met regardless of which entity acts as Data Controller for a given project.
Glaut's compliance framework is built around five principles:
1. Data minimisation. No personal identifiers are required to participate in a Glaut interview. Respondents access interviews via a link with no login, no email, and no name collection required.
2. Pseudonymisation. Where customers pass respondent IDs for quota management, these are pseudonymised identifiers - they do not reveal respondent identity to Glaut.
3. Secure storage. All data is encrypted at rest and in transit (TLS 1.2+). Storage uses MongoDB Atlas; services are hosted on Heroku and AWS, all within EU infrastructure.
4. European hosting. All current servers are located within the EU. US-based infrastructure is available through our third-party providers for customers specifically requiring it.
5. Controller/Processor alignment. Glaut operates as a Data Processor; the research agency or in-house team operates as Data Controller. This division of responsibility is formally documented in a DPA executed before any project begins.
On consent: the responsibility for obtaining respondent consent lies with the Data Controller (the customer), consistent with GDPR obligations. Glaut supports this by enabling customers to link to their own consent or privacy notices within the interview flow.
Glaut's security posture is anchored by ISO/IEC 27001 certification, independently audited and renewed annually. Specific controls include:
Access controls: role-based access control (RBAC) at Organisation, Workspace, and Project levels. Superuser access is restricted to essential development team members. MFA is required for all system access.
Secrets management: secrets are stored as GitHub secrets for pipelines and as Config Vars on Heroku. Access provisioning and deprovisioning follow a documented process aligned with onboarding and offboarding procedures.
Data in transit and at rest: all network communications are encrypted using TLS 1.2+ across frontend-to-backend, backend-to-database (MongoDB Atlas), and backend-to-LLM provider connections. Data is encrypted at rest in all storage layers.
Infrastructure resilience: Glaut runs on enterprise-grade cloud infrastructure (Heroku, AWS, MongoDB Atlas) with the uptime and redundancy guarantees inherent to those platforms. All three LLM providers (Azure, GCP, AWS) operate with enterprise SLAs and their own independently certified security frameworks.
Adversarial input handling: the AI moderation system detects and handles uncooperative or low-quality respondent inputs (e.g., non-sequitur responses, repeated entries, implausibly brief answers) through automated flagging mechanisms that surface these cases to researchers for review before inclusion in analysis.
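A minimal sketch of what rule-based flagging of this kind can look like follows; the rules and thresholds are illustrative assumptions, not Glaut's production checks, and flagged responses are routed to researcher review rather than silently dropped.

```python
# Illustrative sketch of rule-based flagging for low-quality respondent input.
# Rule names and thresholds are hypothetical assumptions, not Glaut's production checks.
# Flagged responses are surfaced for researcher review, not silently discarded.
def flag_response(answer: str, previous_answers: list[str]) -> list[str]:
    flags = []
    text = answer.strip()
    if len(text.split()) < 3:
        flags.append("implausibly_brief")
    if text.lower() in (prev.strip().lower() for prev in previous_answers):
        flags.append("repeated_entry")
    alpha = sum(ch.isalpha() for ch in text)
    if text and alpha / len(text) < 0.5:
        flags.append("possible_gibberish")
    return flags
```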
Incident response: periodic security audits are conducted in alignment with ISO 27001 requirements. Documented procedures cover data deletion, access revocation, and breach notification consistent with GDPR Article 33 obligations.
Data ownership is unambiguous in Glaut's framework:
Respondent data: owned by the customer (Data Controller). Glaut processes it on the customer's behalf as a Data Processor under a DPA, with no rights to use, share, or commercially exploit that data independently.
Research outputs (coded analysis, reports, dashboards, exports): owned by the customer. Glaut makes no claim to intellectual property in outputs generated from customer project data.
Platform improvement: Glaut may use aggregated, anonymised, non-attributable patterns from platform usage to improve its agents and products. This is disclosed in our Terms and Conditions and DPA. No individual respondent data or project-specific content is used in this process.
LLM provider terms: under Glaut's enterprise agreements with OpenAI (via Azure), Google (Gemini via GCP), and Anthropic (via AWS), customer data processed through these APIs is not used to train the providers' foundation models. Customers can review these policies directly on the providers' enterprise documentation sites.
Yes. Glaut enforces several layers of data sovereignty:
Geographic restriction: all current infrastructure is located within the EU. Data does not leave EU jurisdictions in standard deployments. US-based infrastructure is available for customers specifically requiring it.
No co-mingling: customer data is segregated at the Organisation level within the platform. Data from one customer's projects is never accessible to another customer's team.
No secondary use: respondent data is processed exclusively for the research project it was collected under. Glaut does not aggregate, sell, or transfer respondent data to any third party beyond the sub-processors required to operate the platform (listed on request).
LLM provider restrictions: Glaut's enterprise API agreements with all three LLM providers explicitly prohibit those providers from training their foundation models on customer data. Customers can confirm these restrictions through Glaut's DPA and the providers' own enterprise policy documentation.
Customer contractual controls: customers can specify in their commercial agreement and DPA any additional restrictions on data processing - including topic exclusions, geographic restrictions, or requirements for enhanced data isolation.
The customer owns all outputs produced through Glaut - including interview transcripts, coded analysis, dashboard visualisations, CSV/SPSS exports, and executive reports - in their entirety.
Glaut makes no claim to intellectual property in any customer-specific output. There is no licensing restriction on how customers use, publish, share, or commercialise insights derived from their Glaut projects.
The research agency or in-house team retains full professional and legal responsibility for the outputs they deliver to their end clients, consistent with standard market research practice. Glaut's role is to support and accelerate that work, not to insert itself into the IP chain between researcher and client.
Glaut is a proud ESOMAR Corporate Member and recipient of the 2025 ESOMAR Breakthrough Research Methodology Award. We are committed to continuous transparency about our methods, our technology, and our limitations. For questions not addressed here, contact us at hello@glaut.com or visit our FAQ at glaut.com/faqs.
