
AI POLICY 

At InnovationX, we are committed to protecting your privacy during your visits to our website and recognise our responsibility to hold your information securely and confidentially at all times. You can read more about our commitment to your privacy in the policy below.


  • Information gathering and usage

  • Ethical AI Principles 

  • Legal & Regulatory Compliance

  • Transparency & Explainability

  • Privacy & Data Protection

  • Data Deletion & AI Model Training 

  • Security & System Integrity

  • Bias Mitigation & Fairness

  • User Awareness & Control 

  • Accountability & Governance 

  • Continuous Improvement

  • Questions

 

INFORMATION GATHERING AND USAGE

 

We collect the email addresses of those who communicate with us via email, aggregate information on what pages users access or visit, and information volunteered by the user (such as survey responses and/or site registrations). The information we collect is used to improve the content of our web pages and the quality of our service. It is not shared with or sold to other organisations for commercial purposes, except as necessary to provide products or services you have requested, and only where we have your permission.

 

We collect information for the following general purposes:

  • Products and services provision

  • Identification and authentication

  • Services improvement

  • Contact

  • Research


ETHICAL AI PRINCIPLES 

 

InnovationX's use of AI is guided by core ethical principles drawn from leading industry and regulatory frameworks (such as the UK ICO's guidance and the OECD AI Principles). We are committed to: 

 

  • Our AI systems are developed to respect human rights, dignity, and autonomy. We strive to avoid any unfair bias or discrimination in AI outcomes, ensuring that all users are treated equitably.

  • We operate AI openly, informing stakeholders about when and how AI is used. Wherever possible, our AI’s decision-making processes are explainable in plain language to promote understanding and trust.

  • We uphold privacy by design, embedding data protection principles throughout the AI lifecycle. Personal data is handled responsibly and in compliance with data protection laws (see below), with safeguards to preserve user privacy.

  • We prioritise the security of AI systems and the safety of their outputs. Our AI is tested and monitored to prevent misuse or harm, and we implement state-of-the-art measures to ensure robustness and reliability.

  • InnovationX remains accountable for its AI services. We establish governance processes to oversee AI activities and take responsibility for the outcomes produced by our AI, implementing traceability and auditability as appropriate.


LEGAL & REGULATORY COMPLIANCE 


InnovationX adheres to all applicable laws and regulations governing AI and data use, including the UK General Data Protection Regulation (UK GDPR) and Data Protection Act 2018, the EU GDPR where applicable, the California Consumer Privacy Act (CCPA) and its amendments, and any other relevant international regulations. Our AI practices are designed to meet key legal requirements in these frameworks: 


  • We ensure a lawful basis (such as user consent or legitimate interests) for any processing of personal data in AI. Data is used only for purposes that have been communicated to users, and not in unexpected ways that would conflict with those original purposes (in line with GDPR’s purpose limitation principle).

  • We comply with regulations regarding automated decision-making and profiling. Notably, in line with Article 22 of UK/EU GDPR, we do not subject individuals to solely automated decisions that have legal or similarly significant effects without appropriate safeguards. Such processing is only done if it is legally permitted (e.g. based on explicit user consent, or necessary for a contract). Even when we use AI to assist in decision-making, we incorporate human oversight for critical outcomes to ensure fairness and compliance.

  • We continually monitor guidance from authorities (such as the UK Information Commissioner’s Office (ICO), the European Data Protection Board, and other global regulators) to ensure our AI implementations meet the latest compliance recommendations. This includes conducting Data Protection Impact Assessments (DPIAs) for high-risk AI use cases and abiding by any reporting or assessment obligations under emerging AI regulations.


TRANSPARENCY & EXPLAINABILITY 


Transparency is fundamental to our AI approach. We believe users have a right to know when AI is involved in services or decisions that affect them. We implement the following measures: 


  • We inform users when content or outcomes are generated by AI. For example, if a marketing email’s content was AI-generated or if a matchmaking recommendation was derived from an algorithm, we will make this clear to the user.

  • Wherever feasible, we provide plain-language explanations of how our AI reached a given result. This may include outlining the key factors or data points the AI considered in making a recommendation or decision. Our goal is to ensure that outputs are not “black boxes” – users should be able to understand the basis of AI-driven suggestions.

  • We communicate the limitations of AI to users. AI-generated content and analyses are intended to assist, not replace, human judgment. Users are advised that while AI is a powerful tool, it may occasionally produce incorrect or irrelevant results. We encourage users to review and validate important AI outputs (such as campaign content or partner matches) and we provide guidance to interpret AI suggestions appropriately.


PRIVACY & DATA PROTECTION 


Protecting personal data is central to our AI usage. We follow strict data protection practices in line with the UK GDPR, the EU GDPR, the CCPA, and other privacy regulations. 


  • We limit the personal data used in AI training and operations to what is necessary for the specific purpose. Our systems do not ingest more data than needed, thereby reducing exposure to sensitive information.

  • Personal data collected for one purpose (e.g., signing up for our service) is not repurposed for fundamentally different AI processing without user knowledge or consent. We do not use user data to train AI models in ways that conflict with the original purpose of collection.

  • Whenever our AI features process personal data (for example, analysing customer behaviour for recommendations), we ensure there is a lawful basis. If required, we obtain user consent (especially for any use of sensitive data or for direct marketing profiling) and honour the right to withdraw consent. If we rely on legitimate interests, we perform assessments to confirm that our AI-driven processing does not override users’ rights.

  • We apply privacy-by-design principles. This includes pseudonymising or anonymising data where feasible in AI processing, implementing access controls to limit who can view personal data used by AI, and not storing personal identifiers in AI outputs unless necessary. Our training datasets are reviewed to filter out highly sensitive personal information.

  • InnovationX respects all data subject rights. Users may request access to their data, including data used by AI systems, and we will transparently provide such information. We also accommodate requests for data deletion, correction, or objection to processing as required by law. For example, if a user does not want their data to be included in AI-driven analyses or personalisation, we will exclude it upon request (in compliance with opt-out rights under laws like the CCPA, and opt-out/objection rights under the GDPR).

  • We do not sell personal information to third parties. Any sharing of data with third-party AI service providers (if we utilise external AI tools) is done under strict agreements that the data used is only for authorised purposes and in compliance with privacy laws. 


DATA DELETION & AI MODEL TRAINING 


InnovationX abides by data retention principles: we retain personal data only as long as necessary for the purpose for which it was collected and to comply with our legal obligations. When a user requests deletion of their data, or when data is no longer required, we remove it from our active databases and cease using it for any AI processing. 


However, we also acknowledge a unique challenge: once personal data has been used to train or improve an AI model, it becomes embedded in the model's parameters. Even after the original data record is deleted, the model may still indirectly retain patterns learned from that data. Training is effectively irreversible: data cannot ordinarily be unlearned or extracted from the AI model. 


To be transparent, we include this information in our user-facing disclosures: if a user's data was used in the AI training process, we inform them that while the data can be deleted from our storage, the AI model may still retain the learned patterns. We mitigate this by minimising the use of personal data in training and by regularly updating models; where feasible, we retrain models to phase out old data over time or use techniques that reduce the influence of deleted data. Additionally, we refrain from using personal data in training unless we are confident in our legal basis for doing so, given the irreversible nature of AI learning. This approach aligns with individual rights under the GDPR while being honest about technical limitations. 


SECURITY & SYSTEM INTEGRITY


InnovationX takes extensive measures to ensure the security of AI systems and the data they process. We recognise that robust security and safety are crucial to prevent unauthorised access, data breaches, and malicious misuse of AI. Key security practices include:


  • Our AI models and infrastructure are developed following secure coding practices and are hosted in secure environments. We apply encryption to data in transit and at rest, especially for any personal data used in AI processing, and strictly control access to AI systems. Only authorised personnel can access training data, models, and outputs, and all access is logged and monitored.

  • We design AI systems to be robust and reliable under expected conditions. We conduct thorough testing (including edge-case scenarios) to ensure AI outputs remain within acceptable and safe parameters. We also have fail-safe mechanisms—if an AI system behaves unexpectedly or generates potentially harmful content, automated and human-in-the-loop safeguards can intervene to override or shut down the system safely.

  • Our team performs regular security risk assessments of all AI components. We maintain an inventory of AI systems and evaluate potential vulnerabilities or failure modes for each. We also use techniques like “model debugging” and stress-testing to identify and fix flaws in AI behaviour. In addition, continuous monitoring is in place to detect anomalies or attacks (e.g. attempts at adversarial inputs) so that we can respond swiftly.

  • The data used for AI analysis (such as customer datasets) is isolated in secure environments. We ensure that production AI systems do not inadvertently expose personal data. Any debugging or analysis on AI models using actual user data is done in controlled settings to prevent leakage of sensitive information.

  • Employees working with AI receive training on secure data handling and operational security. If we integrate third-party AI services or tools, we vet their security measures and require them to meet our security standards and contractual privacy/security commitments.


BIAS MITIGATION & FAIRNESS


Ensuring fairness and preventing bias in AI outcomes is a paramount concern. We take proactive steps to identify and mitigate bias at every stage of our AI development: 


  • We use diverse and representative datasets for training our AI, to the extent possible, to avoid skewed results. Our team evaluates whether the data reflects the demographics of the users and communities affected by the AI. We check that the AI’s predictions or recommendations do not systematically disadvantage any particular group.

  • We regularly perform bias and fairness audits on our AI models. This involves measuring outcomes for fairness and adjusting the model or its inputs if we detect biases. We also simulate various scenarios to see how the AI responds, ensuring that its decisions remain within ethical and legal fairness bounds.

  • Where appropriate, we enable oversight of the AI’s decision logic to help identify potential bias. Internally, we document the design of algorithms (including what factors they consider) so that our compliance team can review and explain why a given decision was made. This traceability supports accountability and the opportunity to correct any unjustified outcomes.

  • Bias mitigation is not a one-time effort. We stay updated on best practices and tools for reducing AI bias. Our developers receive training on ethical AI development, including how to recognise and address bias. If a user or stakeholder raises a concern about a potentially biased or unfair outcome, we investigate and take corrective action, using those insights to improve the system. 


USER AWARENESS & CONTROL 


InnovationX empowers users by giving them knowledge and control regarding AI’s role in our services. In addition to the transparency measures described above, we provide the following assurances:


  • Users are informed about AI-driven features and have the choice to engage with them. We obtain consent where required (for example, if we were to send AI-personalized content beyond what users expect). Our user interface indicates when an AI recommendation or generated content is being presented.

  • Wherever feasible, we offer opt-out mechanisms for AI features. For instance, a user can choose not to receive AI-generated suggestions or can disable certain AI-based personalisation in their account settings. If a user exercises their legal right to object to profiling or automated processing (under the GDPR or similar laws), we will honour that choice and exclude their data from those AI processes.

  • We ensure that users are not left at the mercy of algorithms. If an individual is affected by an AI-driven decision or recommendation and desires further review, we provide a channel for inquiry or appeal. Users can request human intervention or a reevaluation of any significant AI-driven outcome.

  • We encourage users to give feedback on AI outputs. Our platform may include features like “Was this recommendation helpful?” or flags to report problematic AI-generated content. This feedback is taken seriously and fed into our improvement process. By involving users in assessing AI performance, we create a feedback loop that helps the AI become more accurate and fair over time.


ACCOUNTABILITY & GOVERNANCE


InnovationX maintains strong governance over AI activities to ensure compliance and ethical conduct. We have designated roles and committees responsible for reviewing and guiding our AI initiatives. The organisation is accountable for the impacts of its AI: we do not excuse harmful outcomes as algorithmic errors, but treat them as issues we must address and prevent. 


To support accountability, we implement traceability and documentation for our AI systems. We keep detailed records of AI model design, data sources, training methodology, and criteria used in automated decisions. This allows for internal audits and for explaining AI behaviours to regulators or users if needed. We conduct periodic audits of our AI processes to ensure ongoing adherence to this policy. 


All staff involved in developing or using AI are trained on this policy and the relevant legal and ethical standards. We foster a culture of responsibility where anyone can escalate concerns about AI usage. Additionally, before deploying significant new AI features, we use an internal review process to evaluate potential risks and compliance requirements. 


When we employ third-party AI systems or datasets, we perform due diligence to ensure those tools meet our standards. Contracts with AI vendors include provisions requiring compliance with data protection, security and fairness criteria. We remain accountable for vendor-provided AI services as if they were developed in-house.

 

CONTINUOUS IMPROVEMENT 


The field of AI is rapidly evolving, and InnovationX is committed to continuously improving both our AI systems and our governance policies. We stay informed about advancements in AI ethics, technology and regulations, including forthcoming laws and updated guidelines from bodies like the ICO, OECD and other global organisations. This policy will be reviewed regularly and updated as needed to reflect new best practices or legal requirements. Significant changes to the policy will be communicated openly. 


Likewise, we continuously monitor and refine our AI models to improve their performance and trustworthiness. By analysing outcomes and incorporating user feedback, we aim to reduce errors and bias on an ongoing basis. Our commitment to responsible AI is an active process: we learn from incidents, audit findings, and emerging research to make our AI fairer, more transparent, and more effective over time. In doing so, we ensure InnovationX's use of AI remains aligned with our ethical values and the expectations of our users and regulators. 

 

QUESTIONS

 

If you have any questions about this policy or the data we collect, please email success@innovationxuk.com.
