Cloudsec.ai

Technology, Information and Internet

San Francisco, CA

Fractional CISO services for software-first companies.

About us

We help ambitious software companies build strategic security programs aligned with their business's unique goals and context.

Website
https://cloudsec.ai
Industry
Technology, Information and Internet
Company size
1 employee
Headquarters
San Francisco, CA
Type
Self-Owned
Founded
2023
Specialties
Information security, vCISO, Fractional CISO, Retained CISO, CISO, Security leadership, and Security Consulting

Updates

  • Cloudsec.ai reposted this

    Nate Lee

    CISO - AI, security and risk - Helping ambitious software companies develop technically focused, business aligned security strategies.

    I'm really excited to share that the Cloud Security Alliance just published the paper that my co-author Laura Cristiana Voicu and I have been working on for the past several months! The paper looks at the security concerns around protecting data when building systems that use LLMs. In particular, we cover patterns and best practices for cases where the LLM integrates with external information sources such as vector databases, SQL databases and external APIs, while also touching on more advanced use cases such as using LLMs to write dynamic code and autonomous agents.

    ❤️ This is my first time as the lead on a peer-reviewed paper and it couldn't have happened without all the feedback and contributions from many others. In particular, Malte Højmark-Bertelsen, Erik Hajnal, Jason Garman, Damian Hasse, and Tim Michaud all shared a wealth of knowledge that made this possible ❤️

    A big thank you to Josh Buker, Mark Yanalitis and Michael Roza from the CSA for helping guide us through the process every step of the way!

    Let me know if you're working on interesting LLM-based projects and want to chat more about how to best address your security concerns!
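    To make one of those integration patterns concrete, here's a minimal sketch (illustrative only, not code from the paper) of treating LLM-generated SQL as untrusted input: allow only a single SELECT statement and execute it over a read-only connection, so even a bypassed check can't mutate data. The function name and checks are assumptions for the example.

        # Illustrative guardrail for LLM-generated SQL (Python, sqlite3).
        import sqlite3

        def run_llm_sql(db_path: str, generated_sql: str):
            stmt = generated_sql.strip().rstrip(";")
            # Intentionally strict: one statement, and it must be a SELECT.
            if ";" in stmt or not stmt.lower().startswith("select"):
                raise ValueError("only single SELECT statements are allowed")
            # mode=ro opens the database read-only, so even a check that is
            # bypassed upstream cannot write or alter data.
            conn = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
            try:
                return conn.execute(stmt).fetchall()
            finally:
                conn.close()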

  • Cloudsec.ai

    Nate Lee

    I'll be speaking on building secure, AI-based systems at the Cloud Security Alliance's flagship Sectember.ai conference on Sept 10th in Seattle(-ish)! I'll be focusing on authorization concerns when building systems involving vector databases, dynamically generated SQL, function calls and agents.

    I'm really looking forward to the agenda; it's full of top-tier AI security folks like Caleb Sima, Steve Wilson, Ken Huang, and Sounil Yu (congrats on that Knostic win at Black Hat!).

    Special thanks to the many others who helped with the research and review - Laura Cristiana Voicu, Erik Hajnal, Malte Højmark-Bertelsen, Damian Hasse, Tim Michaud, Jason Garman, Michael Roza, Srinivas Inguva, Ravin Kumar, Walter Haydock, Adam Lundqvist, Mark Yanalitis and others.

    Let me know if you're planning to attend, I'd love to meet up!
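    As a taste of the authorization topic, here's a minimal sketch (names and the retrieval API are illustrative, not from the talk) of one pattern: filter vector search results against the requesting user's entitlements before they ever reach the model, rather than hoping the LLM withholds text the user shouldn't see.

        # Illustrative document-level authorization on retrieved context.
        from dataclasses import dataclass

        @dataclass
        class Document:
            doc_id: str
            text: str
            allowed_groups: frozenset  # ACL metadata stored with the embedding

        def authorized_context(results, user_groups, max_docs=5):
            """Keep only documents the requesting user may see."""
            permitted = [d for d in results if d.allowed_groups & user_groups]
            return permitted[:max_docs]

        results = [
            Document("d1", "Public runbook", frozenset({"everyone"})),
            Document("d2", "M&A plans", frozenset({"execs"})),
        ]
        # A user in "everyone" gets d1 only; d2 never enters the prompt.
        context = authorized_context(results, frozenset({"everyone"}))
        assert [d.doc_id for d in context] == ["d1"]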

  • Cloudsec.ai reposted this

    Nate Lee

    Snowflake's incident highlights that it doesn't matter if a breach is "the customer's fault" when enough customers are simultaneously breached. Your company's name gets dragged through the mud alongside the B word regardless of fault.

    Is it the customer's fault that they didn't enable MFA and were susceptible to credentials stolen elsewhere being used to access their data? Does it matter? A large portion of employees at Snowflake are now distracted from delivering value and are stuck dealing with the fallout. Snowflake's name is being mentioned next to the word "breach" in every tech journal and newspaper. It's not exactly where the marketing and brand teams want to be messaging from.

    This was absolutely foreseeable. It was also preventable, even with the shared responsibility model. Snowflake is now paying the price for the choice to risk its own reputation by tying it to customers' proven inability to secure individual accounts. It's almost like the lessons from the 23andMe breach just vanished into the ether. I'm sure there was some mention of "friction" when the authentication requirements were set, but there's more to it than a binary discussion of forcing MFA or not.

    How can you prevent something similar from occurring to your business?

    💰 Stop charging extra money for SAML and SSO - Customer account security is too critical to your own brand's well-being. Why are you paywalling it?

    ✅ Make MFA a requirement for accounts with broad access to data or admin rights - Requiring MFA checks only for sessions from new IP addresses would stop most credential stuffing attacks with minimal user friction.

    🔑 Ensure that MFA is easy to set up and use - Passkeys aren't phishable and are impossibly easy for users. Outside of major providers, however, most sites have yet to implement them. The sooner we move away from making people pull out their phones, unlock them, open their authenticator app, find the right account and manually copy a number to their computer, the better.

    👀 Check passwords as they are created against databases of known compromised credentials - A lot of the passwords used in breaches were already known to be compromised or were commonly used. This is easily preventable with checks at password creation time against a service like haveibeenpwned (see the sketch below).

    🍪 Device-bound session credentials - Session cookie theft from malware is a growing threat that can be mitigated by ensuring that session cookies only work in the browser they were generated from.

    ✈️ Bind sessions to an IP and force re-authentication when the address changes - It doesn't take a security engineer to point out that a user always in Tacoma showing up 12 minutes later in Azerbaijan might be indicative of an account issue.

    Service providers holding customer data should be taking steps to minimize the risk of a similarly unenviable situation rather than resorting to blaming their own customers, who are already suffering from an account breach.
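    On the password-checking point, here's what that check can look like against the Pwned Passwords k-anonymity range API; only the first five hex characters of the hash ever leave your server. A minimal sketch, with the function name and policy as assumptions:

        # Reject passwords already present in known breach corpora.
        import hashlib
        import urllib.request

        def password_is_compromised(password: str) -> bool:
            sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
            prefix, suffix = sha1[:5], sha1[5:]
            url = f"https://api.pwnedpasswords.com/range/{prefix}"
            with urllib.request.urlopen(url, timeout=5) as resp:
                body = resp.read().decode("utf-8")
            # Each response line is "<hash suffix>:<times seen in breaches>"
            return any(line.split(":")[0] == suffix for line in body.splitlines())

        # Check at password creation time, before anything is stored.
        if password_is_compromised("P@ssw0rd"):
            print("Pick a different password: this one appears in breach data.")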

  • Cloudsec.ai

    Nate Lee

    It's frequently said that you should let the engineers closest to the work choose the best tools (platforms, languages, frameworks, etc.) to get the work done, but is that always the case? What's often left out of the discussion is aligning on what exactly "the best tool to get the work done" means. More specifically, I'm talking about the difference between the best tool for the work needed to deliver now vs. the best tool for the work that spans the lifecycle of the output.

    Just because I know a given tool and it works better for me doesn't mean it's the right choice for the business. It might be the best tool for me because I'm very familiar with it and can deliver quickly when using it. But short-term delivery speed is just one part of the overall lifecycle to be considered. Do others on the team know the tool? Is there a steep learning curve? Is the tool actively developed, supported and well known? Does its functionality best support our business goals in the future?

    It might be that, all things considered, the best tool is also the engineer's personal preference. On the other hand, what they personally want to use could come with serious long-term considerations. In that case, you need to work together on an eyes-wide-open decision that considers the tradeoffs of each option. Maybe they'll have to learn something new, slowing their delivery speed but better setting the project up for long-term success. Conversely, you could make the decision to have it delivered as quickly as possible, with the full knowledge that the next poor soul will have to learn the intricacies of Visual Basic before they can make any changes.

    What's important is transparency into the factors that need to weigh into the decision and into the tradeoffs made by each potential path.

  • Cloudsec.ai reposted this

    Nate Lee

    Maybe it's time for companies deriving trillions of dollars in value from open source software to think about pitching in a bit more? Yes, we can point to contributions that the big tech companies make, but most of that work goes into developing features that directly benefit them commercially.

    In aggregate, the value provided by the OSS ecosystem is a one-way street. Most companies use vast amounts of open source software without giving anything material back, despite their business models depending on free software's existence.

    A global, concerted effort by businesses to put even a fraction of a percent of the value they gain back into efforts like OpenSSF would be a great stride toward a more proactive security stance for the core infrastructure that every company relies on.

  • Cloudsec.ai reposted this

    Nate Lee

    No, you don't need to allowlist all your apps or have a data loss prevention system to be compliant. Do you find yourself doing work because your compliance app told you to?

    I've worked with multiple customers recently who have spent time and resources on work they thought they needed to do to be compliant when, in reality, it barely moved the needle and saddled them with the additional burdens of tuning lists and responding to false positives.

    Software that helps companies get ready for infosec audits is hugely helpful for tracking the work to be done and ensuring you're ready for an upcoming audit. I don't think anyone using these tools wishes they could go back to tracking controls in a spreadsheet. But in their quest to make things easier for users, they provide generic control suggestions for a given compliance objective. The problem is that many companies don't realize these are suggestions and implement them as is, rather than defining appropriate controls based on risk, business objectives and context.

    Don't blindly follow what your compliance app tells you to do. You know your systems and processes better than anyone. Look at the objectives for your compliance framework and ensure that the controls that map to them are ones that make sense in the context of your business.

    Wondering if your security compliance program is actually aligned with your needs? Send me a DM and let's talk!

  • Cloudsec.ai reposted this

    Nate Lee

    Why do security programs often focus on the wrong priorities, chasing headline-grabbing threats while proven fundamentals remain neglected? We have well-established practices for managing risk effectively, but many organizations still struggle with a misalignment between perceived and actual risks, often without realizing it. Examples below:

    💎 Shiny objects have a special pull on those interested in technology, including security professionals. The media covers dramatic incidents and state-sponsored attacks using 0-days, and vendors constantly present new acronyms to register in the Gartner Dictionary of Security Tools You Need (you do have a CSNAARP, don't you?). This can lead teams to prioritize high-profile threats or new tools over more mundane but more significant risks, like unpatched vulnerabilities or weak authentication.

    🧠 Cognitive biases significantly influence decision-making, and unless your team actively accounts for their effects, they'll affect how you consider risks. Availability bias causes overestimation of the likelihood of a class of attack if it happened recently and received significant attention. Confirmation bias leads teams to focus on confirming existing beliefs, discounting contradictory evidence. Optimism bias, the illusion of control and the sunk cost fallacy are a few more that cloud decision-making if you don't work to counter them.

    🎱 Quantifying the value of preventative measures was a difficult challenge long before the complexity of technological systems was added. Traditional ROI calculations rely on assumptions which vary widely depending on the methodology (see the toy example below). When they're generated without extensive collaboration or aren't communicated well, executives won't have confidence in the results. To overcome this, make your risk assessment processes transparent and collaborative. By enabling participation, you build trust across the org, setting the foundation for a shared understanding of the organization's security posture.

    🗺️ Dry statistics don't convey risk in terms that the human brain can easily absorb, and decision makers need to understand the business value of your recommendations. Translate statistics into a compelling story that maps the risks to critical business goals. Don't underestimate the value of stories to make numbers more real. Make sure they're grounded in real life though 👇🏼

    👹 While it's crucial to communicate potential consequences, don't rely on fear-based tactics or overhyped statistics. Using generic figures, like the average cost of a breach from some report, is a quick way to show you don't understand statistics, and it undermines credibility. Instead, provide realistic, specific-to-your-org assessments of the potential impacts to help drive informed decisions about risk tolerance and resource allocation.

    What do you see most often causing a misalignment between security and reality? Looking for help with your security program? DM me or check out my blog linked above!
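    For the quantification point, the toy numbers below (entirely made up for illustration) show the kind of back-of-the-envelope ROI math in question, and how heavily the answer leans on the assumed incident rate:

        # Annualized loss expectancy (ALE) and return on security investment
        # (ROSI) with hypothetical figures. ALE = SLE * ARO.
        sle = 250_000          # single loss expectancy: cost of one incident ($)
        aro_before = 0.40      # assumed incidents/year without the control
        aro_after = 0.10       # assumed incidents/year with the control
        control_cost = 30_000  # annual cost of the control ($)

        ale_before = sle * aro_before  # $100,000/yr
        ale_after = sle * aro_after    # $25,000/yr
        rosi = (ale_before - ale_after - control_cost) / control_cost

        print(f"Risk reduced: ${ale_before - ale_after:,.0f}/yr, ROSI: {rosi:.0%}")
        # Prints: Risk reduced: $75,000/yr, ROSI: 150%. Nudge aro_after to
        # 0.25 and ROSI drops to 25%: same control, different assumption.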

  • Cloudsec.ai

    Nate Lee

    Why is pentesting still a customer requirement? 🤔

    It stems from a legacy focus on outputs rather than outcomes. Pentests shine where they can be focused on specific areas like new or reworked high-risk services. But the pentests often run to meet contractual requirements frequently fail to fulfill the desired outcome: finding and closing the vulnerabilities on a platform.

    What can you do to drive the outcome you're looking for here? Bug bounty programs properly align incentives with outcomes. They also provide continuous testing from a global pool of experts rather than a point-in-time test from a few people, and you pay for actual findings rather than spending money on hours.

    Maximizing the value means finding the balance between targeted pentests and a robust bug bounty program that makes sense for your context:

    * Use pentests for focused, high-risk areas
    * Allocate budget towards bounties for comprehensive, ongoing testing

    Check out the full article! 👇

    #cybersecurity #pentesting #bugbounty #softwaresecurity #infosec

    Why is pentesting still a customer requirement?

    Nate Lee on LinkedIn

  • Cloudsec.ai

    Nate Lee

    We're not far from a future where interactions between semi-autonomous LLM agents will be the glue that connects disparate systems to get work done. How are you planning to enable agents in your organization? Check out the article for more details on the below 👇

    Identity Management: Agents blur the lines between agent identities and the users they act on behalf of, necessitating verifiable identity mechanisms for agents, whether acting independently or for a user.

    Containment: The risk of compromised agents highlights the need for least-privilege principles and segmented permissions (see the sketch below).

    Non-Determinism: The unpredictability of agents demands adaptive security analytics and human oversight.

    Upskilling Employees: Without an understanding of how these systems operate, identifying risks, effectively mitigating them and recognizing potential value-add opportunities becomes significantly harder.

    Explainability and Accountability: Tracing decisions and actions becomes more challenging and increasingly critical. This may require developing new mechanisms to capture the nuanced and potentially complex context of agent decisions.

    Reassessing Priorities: Security teams must evolve with AI technologies, advocating for security-by-design and building an adaptive culture focused on resilience.

    Proactive Planning: Future-proofing against AI-related security challenges involves threat modeling, exploring AI-specific security tools, and engaging with the security and AI communities.

    #ai #llmsecurity #llm #security #infosecurity #infosec #ciso #fractionalexecutive
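    On the containment point, here's a minimal sketch (all names are illustrative) of one pattern: agents never call tools directly, and a broker checks every call against the agent's grants intersected with the acting user's scopes, logging each action along the way, which also serves the explainability point.

        # Illustrative least-privilege broker for agent tool calls.
        class PermissionDenied(Exception):
            pass

        class ToolBroker:
            def __init__(self, tools, grants):
                self._tools = tools    # tool name -> callable
                self._grants = grants  # agent_id -> set of permitted tools

            def call(self, agent_id, user_scopes, tool_name, **kwargs):
                # Effective permission = agent grants AND acting user's scopes.
                allowed = self._grants.get(agent_id, set()) & user_scopes
                if tool_name not in allowed:
                    raise PermissionDenied(f"{agent_id} may not call {tool_name}")
                print(f"audit: {agent_id} -> {tool_name}({kwargs})")
                return self._tools[tool_name](**kwargs)

        broker = ToolBroker(
            tools={"read_ticket": lambda ticket_id: f"body of ticket {ticket_id}"},
            grants={"support-agent": {"read_ticket"}},
        )
        broker.call("support-agent", {"read_ticket"}, "read_ticket", ticket_id=42)
        # Any call to a tool outside the grant raises PermissionDenied.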

    LLM agents are coming, are you ready?

    Nate Lee on LinkedIn

  • Cloudsec.ai

    Nate Lee

    The combination of vulnerabilities against LLM-based applications will garner much more attention in the future. The OWASP LLM Top 10 list is a great guide to familiarize developers with the most likely vectors (pun?!) for attack.

    What's especially interesting about LLM security is the way many practical attacks involve multiple items from the list. Especially with multi-modal models, there are a ton of interesting ways that apps could be compromised involving combinations of the major classes of vulnerabilities: prompt injection via images with text hidden in the pixels leaking PII, training data poisoning causing misleading output from an internal tool that users trust and rely on, or a compromised upstream model or dataset taking advantage of the excessive permissions granted to an internal agent in order to leak sensitive data. Many attacks will take advantage of the lack of preventative controls at multiple layers.

    Spotting these combinations requires system-level thinking and an understanding of the end-to-end process of building an LLM-based app, from training to implementation to practical usage of the end product. While the same can be said of common web app vulnerabilities, it's often more straightforward to think about how to prevent XSS, SQLi, CSRF, etc., and there are certainly more mature and well-defined patterns and best practices to help out there.

    This highlights the need to be hands-on and curious with the tools in order to connect the dots and protect the next generation of systems. With the power available from LLM apps like Copilot and ChatGPT to help write the foundational code needed to stand up basic applications and test functionality, there's never been a better time to dive in and get your hands dirty, even for those without deep programming experience.

    #ai #security #llm #llmsecurity #appsec
