6 comments

  • rao-v 1 hour ago
    I don’t really think this reflects the current era of challenges?

    The “enforcement layer” is the hardest and most important part, and is barely addressed.

    - is the answer structurally / syntactically valid?

    - is it appropriately grounded and evidenced?

    - is it accurate? In what ways does it fall short?

    Each of these should trigger the agent to rework and resubmit, or failing that, a disclosure to the user about how the answer falls short and should be reviewed / remediated.

    This feels like it’s from the era of trying to oneshot a good enough answer.
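    A minimal sketch of what I mean by an enforcement loop (all names here are hypothetical stand-ins; real checks for grounding and accuracy would be far more involved than the toy JSON checks below):

    ```python
    import json

    def check_structure(raw):
        """Structural/syntactic check: answer must be JSON with an 'answer' key."""
        try:
            parsed = json.loads(raw)
        except ValueError:
            return False
        return isinstance(parsed, dict) and "answer" in parsed

    def check_grounding(raw, sources):
        """Crude grounding check: every cited source id must actually exist."""
        cited = json.loads(raw).get("citations", [])
        return all(c in sources for c in cited)

    def enforce(generate, sources, max_retries=2):
        """Call generate(feedback), re-prompting on failed checks.

        If retries are exhausted, disclose the failure instead of
        silently returning a bad answer.
        """
        feedback = None
        for _ in range(max_retries + 1):
            raw = generate(feedback)
            if not check_structure(raw):
                feedback = "output was not valid JSON with an 'answer' field"
                continue
            if not check_grounding(raw, sources):
                feedback = "answer cited sources that were not provided"
                continue
            return {"status": "ok", "answer": json.loads(raw)["answer"]}
        return {"status": "needs_review", "reason": feedback}
    ```

    The point is the control flow, not the checks: each failed check feeds a concrete reason back into the next generation attempt, and the fallback is a disclosure to the user rather than a oneshot answer.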

  • newsdeskx 29 minutes ago
    enforcement is the hard part. most context engineering stuff describes what should happen, not what actually stops it from happening. curious how your enforcement layer handles runtime checks vs just descriptive ones
  • slashdave 2 hours ago
    > the information an AI system needs to produce accurate ... outputs

    I would have stuck a qualifier in there

  • r4ge 2 hours ago
    I feel like AI is going to be doing all the fun stuff and I will just be left organizing the data and docs it needs to generate code.
  • tmpz22 2 hours ago
    Putting "engineering" after a term doesn't make it engineering.
    • jryio 1 hour ago
      Software engineering is certainly not engineering, even at the highest levels. Real engineering has infinitely more complex interactions with the physical world than symbolic instructions for machines.
    • slashdave 2 hours ago
      Probably just using the convention started by the term "prompt engineering", which is forgivable.
      • sroussey 2 hours ago
        not sure i forgive "prompt engineering"