A draft of my content, organized, and some thoughts from the process

In organizing my own content, I’m working chronologically from both the start and the end, making my way toward the middle.

Starting with early posts and working forward, I can move quickly and decide whether something is relevant or not. This also makes it easier to decide whether whole swaths of content are irrelevant, e.g. topics I used to write about a lot that I know aren’t where I’m going – all of that content can be searched out and removed, consolidated, or archived.

Starting with the newest posts and working backwards, I can see within about an hour or two what’s been most important to me over the past few months.

TBH, I am naturally very scattered. I have some trouble with executive functioning and organization. So I think a lot about organizing, about how to reduce cognitive load and make things more doable, and I have to really break things down into small parts because I cannot multitask effectively at all.

Below is the table of contents for how I’m initially organizing posts for some semblance of cohesion.

Of course, this still isn’t really ready for users because it’s still about me and what I’m doing vs them and what they need, but the process has led me to more clarity on how all the things I’ve been writing about are related (or not).

I’ve been tormented by the “what’s your big idea” exercise. What’s the north star you can keep coming back to as you wander off the path? Some of the cues are things like “I felt really strongly about this when I was writing it.”

I can look at those emotionally charged posts and find patterns.

I can also look at the work I was doing while learning how to develop better content processes for experts with lots of content. Some of the technical posts fit together quite well to that end. Then there are the more reflective posts, like asking what data I really wanted to surface technically. And there are posts about further possibilities opened up by something I learned, like how you can mine voice assistant text and load the nouns and verbs into a graph that can then be queried for facts.
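To make that last idea concrete, here’s a minimal sketch of what I mean, assuming you have spaCy installed with the small English model. It pulls rough subject–verb–object triples out of transcript text and prints Cypher MERGE statements you could pipe into Neo4j via cypher-shell – a toy illustration of the pattern, not the pipeline from any particular post, and the node labels and relationship names are placeholders.

```python
# Toy sketch: extract rough subject–verb–object triples from transcript
# text with spaCy, then emit Cypher MERGE statements for Neo4j.
# Assumes: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

def triples(text):
    """Yield (subject, verb, object) lemma tuples from simple sentences."""
    for sent in nlp(text).sents:
        for token in sent:
            if token.pos_ == "VERB":
                subjects = [c for c in token.children if c.dep_ in ("nsubj", "nsubjpass")]
                objects = [c for c in token.children if c.dep_ in ("dobj", "attr")]
                for s in subjects:
                    for o in objects:
                        yield (s.lemma_, token.lemma_, o.lemma_)

def to_cypher(triple):
    """Render one triple as a Cypher MERGE runnable in cypher-shell."""
    s, v, o = (t.replace('"', "") for t in triple)  # naive quote stripping
    return (f'MERGE (a:Noun {{name: "{s}"}}) '
            f'MERGE (b:Noun {{name: "{o}"}}) '
            f'MERGE (a)-[:ACTION {{verb: "{v}"}}]->(b);')

transcript = "Alexa set a timer. The dog chased the mailman."
for t in triples(transcript):
    print(to_cypher(t))
```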

Learning about the power of graphs led me to better understand network effects and topology. I started thinking about that in the context of a person with limited resources but really rich assets: ideas and value hidden in archived piles of content, meaningful human relationships leaving footprints in the form of mentions, links, and podcast interviews, and other unexpected connections.

Well, all that led me to think about why we’re so limited as individuals when we have more and better tools than ever. We literally live in the future.

I can do things with a couple of clicks that would have cost an enterprise tens of millions in R&D ten years ago. And yet, I don’t do those things. I think you’re like this too.

We can only attend to so much, so that kind of personal infrastructure needs a lot of initial care to set up.

Anyway, those are the thoughts that bubbled up while organizing my content.

And here is a working draft of how some of the better, or at least more original, thinking-type content might fit together, going from today back to March.

And then, for the technical content, going from the beginning up to now.

Thinking through product development for a content decision-making tool

(by exploring approaches to content analysis and organization)

Planning / Designing the content recommendations service
Analyzing content
Organizing content

Why this work matters

On knowledge graphs (relevant for experts, e.g. personal KGs)
On leveraging your personal/professional/web-based networks

Solving common problems with expert content through content organization

Exploring the underlying problems of experts who publish lots of content
Exploring related solutions for experts who publish lots of content

Technical side of research-project-related product development

  1. Early notes on visualizing a website with Neo4j
  2. Preprocessing data with Python for NLP
  3. Prepping Website Content Data for Graphing
  4. Neo4j from the command line – a walkthrough on using cypher-shell to work with a series of website data load scripts
  5. Visualizing a website with Neo4j and Screaming Frog data (Loading Screaming Frog Website Crawl into Neo4j Tutorial)
  6. Conducting a website audit with Screaming Frog and Neo4j – basically a series of Cypher database queries that can act as a clone of an SEO site-audit crawl report like Moz Site Crawl (see the sketch after this list)
    1. Prioritizing internal redirects to fix – a more specific example query that should probably get rolled into the guide above at some point
    2. Less related, but a look at how to slice and aggregate issues by size from a SEMrush crawl report master CSV export in Google Sheets
  7. Crawl and scrape sites that require your login (ConvertKit example)
  8. Edit and write Neo4j Cypher load scripts for website data (dev plan)
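And to give a flavor of the audit queries in item 6, here’s a minimal sketch of the prioritize-redirects idea from item 6.1, run from Python with the official neo4j driver. The schema here is an assumption on my part – (:Page {url, status}) nodes joined by [:LINKS_TO] relationships, with placeholder connection details – so adjust it to whatever your load scripts actually create.

```python
# Sketch: rank redirecting pages by inbound internal links so the
# highest-impact redirects get fixed first. Schema is assumed:
# (:Page {url, status}) nodes and [:LINKS_TO] relationships.
from neo4j import GraphDatabase

QUERY = """
MATCH (src:Page)-[:LINKS_TO]->(p:Page)
WHERE p.status >= 300 AND p.status < 400  // assumes integer status codes
RETURN p.url AS url, p.status AS status, count(src) AS inbound_links
ORDER BY inbound_links DESC
LIMIT 25
"""

driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "password"))  # placeholder creds
with driver.session() as session:
    for record in session.run(QUERY):
        print(record["inbound_links"], record["status"], record["url"])
driver.close()
```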