I’m so happy to see this! Yesterday, the ISO published a new standard, ISO/IEC 42001:2023 for AI Management Systems. My suspicion is that it will become as important to the AI world as ISO/IEC 27001 arguably became the most important standard for information security management systems. The standard provides a comprehensive framework for establishing, implementing, maintaining, and improving an artificial intelligence management system within organisations. It aims to ensure responsible AI development, deployment, and use, addressing ethical implications, data quality, and risk management. This set of guidelines is designed to integrate AI management with organisational processes, focusing on risk management and offering detailed implementation controls. Key aspects of the standard include performance measurement, emphasising both quantitative and qualitative outcomes, and the importance of AI systems’ effectiveness in achieving intended results. It mandates conformity to requirements and systematic audits to assess AI systems. The standard also highlights the need for thorough assessment of AI's impact on society and individuals, stressing data quality to meet organisational needs. Organisations are required to document controls for AI systems and rationalise their decisions, underscoring the role of governance in ensuring performance and conformance. The standard calls for adapting management systems to include AI-specific considerations like ethical use, transparency, and accountability. It also requires continuous performance evaluation and improvement, ensuring AI systems' benefits and safety. ISO/IEC 42001:2023 aligns closely with the EU AI Act. The AI Act classifies AI systems into prohibited and high-risk categories, each with distinct compliance obligations. ISO/IEC 42001:2023's focus on ethical AI management, risk management, data quality, and transparency aligns with these categories, providing a pathway for meeting the AI Act’s requirements. The AI Act's prohibitions include specific AI systems like biometric categorisation and untargeted scraping for facial recognition. The standard may help guide organisations in identifying and discontinuing such applications. For high-risk AI systems, the AI Act mandates comprehensive risk management, registration, data governance, and transparency, which the ISO/IEC 42001:2023 framework could support. It could assist providers of high-risk AI systems in establishing risk management frameworks and maintaining operational logs, ensuring non-discriminatory, rights-respecting systems. ISO/IEC 42001:2023 may also aid users of high-risk AI systems in fulfilling obligations like human oversight and cybersecurity. It could potentially assist in managing foundation models and General Purpose AI (GPAI), necessary under the AI Act. This new standard offers a comprehensive approach to managing AI systems, aiding organisations in developing AI that respects fundamental rights and ethical standards.
AI Trends and Innovations
Explore top LinkedIn content from expert professionals.
-
-
In the worlds of data management and AI, it's time to embrace a more integrated perspective: AI needs data, and just as crucially, data needs AI. This symbiotic relationship underpins a shift in how we should approach our strategies, moving away from viewing AI and data management as two separate operations and starting to think about them as two sides of the same coin. In an enterprise context, AI models unlock their true potential not merely through the vast quantities of information they process, but when that information is relevant, connected, cleaned, structured, and enriched with semantics—all tasks that lie at the heart of data management expertise. Yet, the reverse is equally true: our data strategies gain direction and sophistication when guided by the insights and capabilities AI brings to the table. Moreover, AI can help you connect, clean, structure, and enrich your data with semantic metadata. The synergy between Large Language Models like ChatGPT and Knowledge Graphs is a testament to this interdependence. Knowledge Graphs have travelled through the Gartner hype cycle twice, once on the data side and once on the AI side, but really Knowledge Graphs are just one thing: a way of structuring your data that makes it ready for AI. Our business differentiation and competitive edge lie not in the data or AI capabilities alone but in our proficiency in merging the two harmoniously. The challenge of "garbage in, garbage out" transforms under this lens, reminding us that disorganised data not only hampers AI's effectiveness but also that AI can play a pivotal role in connecting and cleansing data. Knowledge Graphs sit at this nexus; they offer a disciplined way of structuring your data so that your AI models can use it, and in this sense, they are a way to bring the two worlds closer together. ⭕ Embrace Complexity: https://lnkd.in/eE7fvv3S
-
A nice review article "Transforming Science with Large Language Models: A Survey on AI-assisted Scientific Discovery, Experimentation, Content Generation, and Evaluation" covers the scope of tools and approaches for how AI can support science. Some of areas the paper covers: (link in comments) 🔎 Literature search and summarization. Traditional academic search engines rely on keyword-based retrieval, but AI-powered tools such as Elicit and SciSpace enhance search efficiency with semantic analysis, summarization, and citation graph-based recommendations. These tools help researchers sift through vast scientific literature quickly and extract key insights, reducing the time required to identify relevant studies. 💡 Hypothesis generation and idea formation. AI models are being used to analyze scientific literature, extract key themes, and generate novel research hypotheses. Some approaches integrate structured knowledge graphs to ground hypotheses in existing scientific knowledge, reducing the risk of hallucinations. AI-generated hypotheses are evaluated for novelty, relevance, significance, and verifiability, with mixed results depending on domain expertise. 🧪 Scientific experimentation. AI systems are increasingly used to design experiments, execute simulations, and analyze results. Multi-agent frameworks, tree search algorithms, and iterative refinement methods help automate complex workflows. Some AI tools assist in hyperparameter tuning, experiment planning, and even code execution, accelerating the research process. 📊 Data analysis and hypothesis validation. AI-driven tools process vast datasets, identify patterns, and validate hypotheses across disciplines. Benchmarks like SciMON (NLP), TOMATO-Chem (chemistry), and LLM4BioHypoGen (medicine) provide structured datasets for AI-assisted discovery. However, issues like data biases, incomplete records, and privacy concerns remain key challenges. ✍️ Scientific content generation. LLMs help draft papers, generate abstracts, suggest citations, and create scientific figures. Tools like AutomaTikZ convert equations into LaTeX, while AI writing assistants improve clarity. Despite these benefits, risks of AI-generated misinformation, plagiarism, and loss of human creativity raise ethical concerns. 📝 Peer review process. Automated review tools analyze papers, flag inconsistencies, and verify claims. AI-based meta-review generators assist in assessing manuscript quality, potentially reducing bias and improving efficiency. However, AI struggles with nuanced judgment and may reinforce biases in training data. ⚖️ Ethical concerns. AI-assisted scientific workflows pose risks, such as bias in hypothesis generation, lack of transparency in automated experiments, and potential reinforcement of dominant research paradigms while neglecting novel ideas. There are also concerns about the overreliance on AI for critical scientific tasks, potentially compromising research integrity and human oversight.
-
13 national cyber agencies from around the world, led by #ACSC, have collaborated on a guide for secure use of a range of "AI" technologies, and it is definitely worth a read! "Engaging with Artificial Intelligence" was written with collaboration from Australian Cyber Security Centre, along with the Cybersecurity and Infrastructure Security Agency (#CISA), FBI, NSA, NCSC-UK, CCCS, NCSC-NZ, CERT NZ, BSI, INCD, NISC, NCSC-NO, CSA, and SNCC, so you would expect this to be a tome, but it's only 15 pages! It is refreshing to see that the article is not solely focused on LLMs (eg. ChatGPT), but defines Artificial Intelligence to include Machine Learning, Natural Language Processing, and Generative AI (LLMs), while acknowledging there are other sub-fields as well. The challenges identified (with actual real-world examples!) are: 🚩 Data Poisoning of an AI Model: manipulating an AI model's training data, leading to incorrect, biased, or malicious outputs 🚩 Input Manipulation Attacks: includes prompt injection and adversarial examples, where malicious inputs are used to hijack AI model outputs or cause misclassifications 🚩 Generative AI Hallucinations: generating inaccurate or factually incorrect information 🚩 Privacy and Intellectual Property Concerns: challenges in ensuring the security of sensitive data, including personal and intellectual property, within AI systems 🚩 Model Stealing Attack: creating replicas of AI models using the outputs of existing systems, raising intellectual property and privacy issues The suggested mitigations include generic (but useful!) cybersecurity advice as well as AI-specific advice: 🔐 Implement cyber security frameworks 🔐 Assess privacy and data protection impact 🔐 Enforce phishing-resistant multi-factor authentication 🔐 Manage privileged access on a need-to-know basis 🔐 Maintain backups of AI models and training data 🔐 Conduct trials for AI systems 🔐 Use secure-by-design principles and evaluate supply chains 🔐 Understand AI system limitations 🔐 Ensure qualified staff manage AI systems 🔐 Perform regular health checks and manage data drift 🔐 Implement logging and monitoring for AI systems 🔐 Develop an incident response plan for AI systems This guide is a great practical resource for users of AI systems. I would interested to know if there are any incident response plans specifically written for AI systems - are there any available from a reputable source?
-
This paper offers a comprehensive analysis of AI-driven business model innovation (BMI), identifying six key research dimensions crucial for understanding and advancing the field. 1️⃣ Triggers: Various factors trigger AI-driven BMI, including customer demand for AI-based solutions, technological advancements, data democratization, ecosystem developments, competitive pressures, regulatory compliance, and societal trends. These triggers drive companies to adopt AI to create new value propositions and enhance business model efficiency. 2️⃣ Restraints: Several barriers hinder AI implementation in business models. These include ethical concerns (such as algorithmic bias and misuse of AI), safety and security issues, legal and regulatory challenges, employee resistance, and the opaque nature of AI (the "black box" problem). These restraints can lead to hesitation or failure in fully adopting AI-driven BMI. 3️⃣ Resources and Capabilities: Successful AI-driven BMI requires extensive resources and capabilities, including a robust data strategy, skilled digital talents, adequate system infrastructure, and sufficient financial resources. These elements are essential for collecting, processing, and leveraging data to drive AI applications and business model innovations. 4️⃣ Application of AI: Implementing AI in business models involves understanding the current model, formulating an AI strategy, and selecting appropriate AI tools and technologies. Multidisciplinary teams play a crucial role in managing AI projects, ensuring effective rollout, communication, visualization, and continuous improvement of AI initiatives. 5️⃣ Implications: AI can support, enable, innovate, or disrupt business models. It enhances existing processes, redefines operations, creates new value propositions, and can lead to industry-wide transformations. The implications of AI-driven BMI are profound, offering incremental improvements, fundamental operational changes, innovative new services, and disruptive market shifts. 6️⃣ Management and Organizational Issues: Effective management is critical for driving AI initiatives and facilitating business model changes. This includes cultivating an AI-centric organizational culture, acquiring practical AI experience, rethinking governance structures, and aligning AI initiatives with company strategy. Addressing cultural deficits, fostering agility, and democratizing AI within the organization are essential for successful AI-driven BMI. ✍🏻 Philip Jorzik, Sascha P. Klein, Dominik K. Kanbach, Sascha Kraus, AI-driven business model innovation: A systematic review and research agenda, Journal of Business Research, Volume 182, 2024, 114764, ISSN 0148-2963. DOI: 10.1016/j.jbusres.2024.114764
-
The Unseen Threat: Is AI Making Our Cybersecurity Weaknesses Easier to Exploit? AI in cybersecurity is a double-edged sword. On one hand, it strengthens defenses. On the other, it could unintentionally expose vulnerabilities. Let’s break it down. The Good: - Real-time Threat Detection: AI identifies anomalies faster than human analysts. - Automated Response: Reduces time between detection and mitigation. - Behavioral Analytics: AI monitors network traffic and user behavior to spot unusual activities. The Bad: But, AI isn't just a tool for defenders. Cybercriminals are exploiting it, too: - Optimizing Attacks: Automated penetration testing makes it easier for attackers to find weaknesses. - Automated Malware Creation: AI can generate new malware variants that evade traditional defenses. - Impersonation & Phishing: AI mimics human communication, making scams more convincing. Specific Vulnerabilities AI Creates: 👉 Adversarial Attacks: Attackers manipulate data to deceive AI models. 👉 Data Poisoning: Malicious data injected into training sets compromises AI's reliability. 👉 Inference Attacks: Generative AI tools can unintentionally leak sensitive info. The Takeaway: AI is revolutionizing cybersecurity but also creating new entry points for attackers. It's vital to stay ahead with: 👉 Governance: Control over AI training data. 👉 Monitoring: Regular checks for adversarial manipulation. 👉 Security Protocols: Advanced detection for AI-driven threats. In this evolving landscape, vigilance is key. Are we doing enough to safeguard our systems?
-
The Symbiotic Relationship: AI × Data Engineering × Data Science Let's not assume that data engineering, AI, and data science aren't separate lanes. It's more like a feedback loop where each one makes the others better. 1️⃣ 𝗤𝘂𝗮𝗹𝗶𝘁𝘆 𝗗𝗮𝘁𝗮 𝗣𝗶𝗽𝗲𝗹𝗶𝗻𝗲𝘀 𝗙𝘂𝗲𝗹 𝗔𝗜 & 𝗦𝗰𝗶𝗲𝗻𝗰𝗲 Your data pipelines are the foundation. Without clean, reliable data flowing through, nothing else works. → Engineering to Science: Data Engineers build the high-quality, structured pipelines that deliver the training data. → Example: Making sure all customer records are deduplicated and financial data is validated before it hits the Data Scientist's workspace. Bad pipeline means a garbage model. 2️⃣ 𝗔𝗜 𝗣𝗼𝘄𝗲𝗿𝘀 𝗘𝗻𝗴𝗶𝗻𝗲𝗲𝗿𝗶𝗻𝗴 𝗣𝗿𝗼𝗱𝘂𝗰𝘁𝗶𝘃𝗶𝘁𝘆 AI isn't just the end product. It's becoming a tool that helps engineers build better pipelines faster. → AI to Engineering: AI tools automate the tedious, repetitive work of the data team itself. → Example: Using machine learning models to automatically detect anomalies in a production data stream or applying AI to auto-generate documentation for complex ETL jobs. 3️⃣ 𝗦𝗰𝗶𝗲𝗻𝗰𝗲-𝗗𝗿𝗶𝘃𝗲𝗻 𝗢𝗽𝘁𝗶𝗺𝗶𝘇𝗮𝘁𝗶𝗼𝗻 Data scientists aren't just consumers of data. Their insights tell engineers what actually matters and where to focus. → Science to Engineering: Data Science insights guide the optimization of data flows and storage. → Example: An analysis shows that 80% of business value comes from five specific data fields. The Data Engineer then prioritizes making those five fields near real-time, while slowing down less-critical flows to save cost. 4️⃣ 𝗖𝗼𝗻𝘁𝗶𝗻𝘂𝗼𝘂𝘀 𝗜𝗻𝗻𝗼𝘃𝗮𝘁𝗶𝗼𝗻 𝗟𝗼𝗼𝗽 This is where it gets interesting. Once everything's connected, the system starts getting smarter on its own. → Interconnected Flow: The performance of the live AI models provides feedback directly to the Data Infrastructure. → Example: A deployed prediction model shows a specific data source is drifting in quality. The system alerts the Data Engineer to rebuild that source's validation checks, leading to a better pipeline, which leads to a better model. None of these roles shine alone, here's my 2 cents - 📍Your data pipelines only matter if someone's using the data. 📍Your AI models are only as good as the data feeding them. 📍Your data science insights are worthless if engineering can't implement them. #data #engineering #AI #datascience
-
Here's my latest published by the World Economic Forum and it seems like an extremely relevant topic after humanoid robots just ran the marathon in China and the unveiling of Google's XR glasses at TED 2025. We’re moving into the next frontier of computing: Where intelligence doesn’t just live in the cloud. It lives around us and expands into the physical world. On our faces, in our ears, in our pockets, walking next to us, and soon… maybe even working for us. This is the era of: 🔹 On-device AI: fast, hopefully private, and context-aware 🔹 Spatial computing: blending physical and digital realities and expanding computing into the physical world while enabling the spatial web. 🔹 Smartglasses & wearables: interfaces you wear, not just touch 🔹 Agentic AI systems that act, decide, and adapt 🔹 Vision-action models: the brain behind embodied AI And it’s not just about smartglasses like the ones Google XR, Meta, OpenAI or Meta are working on. It’s about the rise of Physical AI, where robots and spatially aware machines use spatial intelligence to understand and operate in the physical world. Think AI that can see, move, and collaborate in physical space... not just generate words. Our current LLMs are truly revolutionary, but vision action models will have an even bigger impact on our daily lives. This convergence of AI + XR + robotics is reshaping business: From how we access information… To how we work, care, learn, create, and connect. If you’re a founder, leader, designer, or investor...this is your moment to build and design for what’s next. It's never been easier to build And if you’re curious to go deeper, I’m working on a course with one of the top minds in AI to help leaders get ready for what’s coming. This quote in the article from Boston Consulting Group (BCG)'s Kristi Woolsey sums it up well: “AI has put us on the hunt for a new device that will let us move AI collaboration off-screen and into the world. Hands-free XR devices do that. AI is also hungry for data, and the cameras, location sensors and voice inputs of XR devices can feed that need.” This hardware shift makes AI more accessible and integrated into daily life, not just as a tool on a screen, but of use to use in the physical world. 👀 Check out the article here and feel free to share with your communities: https://lnkd.in/eSHWJdpR Would love to hear — which of these frontier tech shifts are you tracking most closely right now? #PhysicalAI #AI #SpatialComputing #OnDeviceAI #AgenticAI #Technology #TED2025 #Smartglasses #SpatialIntelligence
-
Yesterday, the National Security Agency Artificial Intelligence Security Center published the joint Cybersecurity Information Sheet Deploying AI Systems Securely in collaboration with the Cybersecurity and Infrastructure Security Agency, the Federal Bureau of Investigation (FBI), the Australian Signals Directorate’s Australian Cyber Security Centre, the Canadian Centre for Cyber Security, the New Zealand National Cyber Security Centre, and the United Kingdom’s National Cyber Security Centre. Deploying AI securely demands a strategy that tackles AI-specific and traditional IT vulnerabilities, especially in high-risk environments like on-premises or private clouds. Authored by international security experts, the guidelines stress the need for ongoing updates and tailored mitigation strategies to meet unique organizational needs. 🔒 Secure Deployment Environment: * Establish robust IT infrastructure. * Align governance with organizational standards. * Use threat models to enhance security. 🏗️ Robust Architecture: * Protect AI-IT interfaces. * Guard against data poisoning. * Implement Zero Trust architectures. 🔧 Hardened Configurations: * Apply sandboxing and secure settings. * Regularly update hardware and software. 🛡️ Network Protection: * Anticipate breaches; focus on detection and quick response. * Use advanced cybersecurity solutions. 🔍 AI System Protection: * Regularly validate and test AI models. * Encrypt and control access to AI data. 👮 Operation and Maintenance: * Enforce strict access controls. * Continuously educate users and monitor systems. 🔄 Updates and Testing: * Conduct security audits and penetration tests. * Regularly update systems to address new threats. 🚨 Emergency Preparedness: * Develop disaster recovery plans and immutable backups. 🔐 API Security: * Secure exposed APIs with strong authentication and encryption. This framework helps reduce risks and protect sensitive data, ensuring the success and security of AI systems in a dynamic digital ecosystem. #cybersecurity #CISO #leadership
-
New Research on AI-Generated Content: Adoption, Impact, and Detection Challenges A large-scale empirical study from Stanford and the University of Washington has tracked the adoption and impact of LLM-generated writing across key domains—consumer communications, corporate documents, recruitment materials, and academic peer reviews—offering crucial insights into detection methodologies and adoption patterns through late 2024. Key findings: 📌 Widespread but uneven adoption – Higher use in consumer-facing content and recruitment materials, while corporate and government communications show more restraint. 📌 Adoption surged in early 2024 but moderated later in the year – This could reflect more sophisticated usage patterns or AI’s growing ability to mimic human writing more seamlessly. 📌 Bias in AI detection tools – Current models disproportionately flag non-native English writers, raising concerns about fairness and equity in content evaluation. 📌 Detection rates underestimate actual AI use – As AI-generated text becomes more human-like and edited, detection methodologies may significantly undercount adoption. 📌 Demographic disparities in AI adoption – Readiness to use LLM tools varies across user groups, influencing information credibility and communication diversity. As AI writing tools evolve, the implications for trust, fairness, and technological literacy are profound. How organizations adapt policies and detection methods in light of these findings will be important. #AI #LLMs #FutureOfWork #AIWriting #TechnologyEthics
Explore categories
- Hospitality & Tourism
- Productivity
- Finance
- Soft Skills & Emotional Intelligence
- Project Management
- Education
- Technology
- Leadership
- Ecommerce
- User Experience
- Recruitment & HR
- Customer Experience
- Real Estate
- Marketing
- Sales
- Retail & Merchandising
- Science
- Supply Chain Management
- Future Of Work
- Consulting
- Writing
- Economics
- Artificial Intelligence
- Healthcare
- Employee Experience
- Workplace Trends
- Fundraising
- Networking
- Corporate Social Responsibility
- Negotiation
- Communication
- Engineering
- Career
- Business Strategy
- Change Management
- Organizational Culture
- Design
- Event Planning
- Training & Development