Some new posts

2024-10-03 20:59:01 -04:00
parent 7b1899a010
commit 9f533ffd1a
7 changed files with 126 additions and 285 deletions


@@ -24,18 +24,14 @@ know about is AWS CloudWatch Metric Filters. If you're already on AWS
then you should consider these because it requires only that your
application logs to CloudWatch.
If you're on ECS then the [[https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_awslogs.html][awslogs]] log driver for Docker gets you that
nearly for free. By "free" I mean that your application itself can
have /zero/ dependencies on AWS services and not require any AWS
credentials or libraries to start pumping out metrics that you can
visualize, alert on and record over time.
The [[https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/MonitoringLogData.html][AWS docs]] themselves offer the canonical reference for configuring
these so I won't go into detail here.
However, the gist is that for a log filter you define the following
properties
@@ -47,13 +43,8 @@ properties
- And finally a log group to extract the metric values from
After that you just run the application and as the logs roll in the
metric values get pumped out. Then you can [[https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Create-alarm-on-metric-math-expression.html][define alarms for alerting]]
on them, [[https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Dashboards.html][graph them]], [[https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scaling-simple-step.html#policy-creating-alarm-console][define autoscaling rules]] from them and more.
To conclude - AWS is big and hairy. While there are benefits to staying
platform agnostic, some AWS services don't require much or any coupling

big-companies.org Normal file

@@ -0,0 +1,15 @@
:PROPERTIES:
#+SETUPFILE: setup.org
:END:
Organizing people is a difficult problem which only gets more difficult as more people need to be organized.
The larger a company is, the more of its internal structures, rules, policies, history, etc. are devoted _just_ to organizing people.
For me, realizing this was like the first time you hear a fluorescent light buzzing in an otherwise quiet room.
Reasonable people can differ on this point, but for my own sake I'd much rather avoid all the people-organizing baggage that comes with large companies.
I don't have a hard-and-fast rule about the size of a place I want to work, but the larger a place is, the more reason I generally need to want to be there.
Of course, this is all kind of theoretical at this point, as [[https://flipstone.com][Flipstone]] is my forever home.


@@ -5,7 +5,8 @@
:END:
* TODO [[file:let-people-fail.org][Let people fail]]
* [[file:big-companies.org][The problem with large organizations]]
* TODO [[file:job-description.org][Just what is it you do here?]]
* TODO [[file:managing-expectations.org][Managing Expectations]]
* [[file:https-at-home.org][HTTPS @ Homelab]]
* [[file:multi-room-audio.org][Multi-room audio setup]]


@@ -1,6 +1,10 @@
:PROPERTIES:
#+SETUPFILE: setup.org
:END:
** Just what is it you do here?
I've never liked working at [[file:big-companies.org][larger companies]].
But working somewhere smaller doesn't _always_ simplify things.
Sometimes at smaller companies you end up defining your own job, to some degree.

posts.org

@@ -1,259 +0,0 @@
:PROPERTIES:
#+SETUPFILE: setup.org
:END:
* Homelab :homelab:
** HTTPS @ Home
:PROPERTIES:
#+keywords: homelab
#+export_file_name: https-at-home
#+subtitle:
:END:
I run a lot of services at home.
This includes, but isn't limited to
- [[https://archivebox.io/][ArchiveBox]]
- [[https://github.com/dani-garcia/vaultwarden][VaultWarden]]
- [[https://github.com/navidrome/navidrome][Navidrome]]
- [[https://plex.tv][Plex]]
- [[https://github.com/LibrePhotos/librephotos][LibrePhotos]]
- This blog
and a lot more.
Pretty much anything that's served up over HTTP is nice, if not
necessary, to have behind TLS.
[[https://letsencrypt.org/][LetsEncrypt]] long ago brought free certs to
the masses and there are a lot of tools for automating that nowadays.
My preferred approach for getting all the unnecessary nonsense I
self-host at home behind TLS is [[https://caddyserver.com][Caddy]].
I have a super straightforward setup, generally:
- Run Caddy in a docker container
- Create a wildcard CNAME record in my DNS pointing at my home's
  (effectively) static IP
- Add an entry in my Caddyfile for each service I'm running at home on
  its own subdomain
- If it's a service then I add it with a =reverse_proxy= block
- If it's a static site (like this) then there's a block with =root= and
  =file_server= directives
- If it's something I want only accessible on my home network then I put
  a block like
#+BEGIN_EXAMPLE
@local_network {
    path *
    remote_ip    # followed by my LAN's CIDR range
}
#+END_EXAMPLE
inside that site's block, referencing the matcher from the directive I want to restrict (see the sketch below). And voila.
Then tell Caddy to reload the config and I'm done.
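For concreteness, here's a trimmed-down sketch of the shape the Caddyfile takes. The domains, container names, ports and the subnet below are placeholders for illustration, not my real values:
#+BEGIN_EXAMPLE
# A service gets proxied on its own subdomain
navidrome.example.com {
    reverse_proxy navidrome:4533
}

# A static site (like this blog) gets served from a directory
blog.example.com {
    root * /srv/blog
    file_server
}

# Something that should only be reachable from the home network
vaultwarden.example.com {
    @local_network {
        path *
        remote_ip 192.168.1.0/24
    }
    handle @local_network {
        reverse_proxy vaultwarden:80
    }
    # requests that didn't match the handle above fall through to a 404
    handle {
        respond "Not found" 404
    }
}
#+END_EXAMPLE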
** My multiroom audio setup
:PROPERTIES:
#+keywords: homelab snapcast audio
#+export_file_name: home-multiroom-audio
#+subtitle:
:END:
I've put my home audio solution together out of the following
components.
- [[https://github.com/badaix/snapcast][Snapcast]]
- [[https://www.musicpd.org/][MPD]]
- [[https://github.com/librespot-org/librespot][Librespot]]
- [[https://github.com/mikebrady/shairport-sync][Shairport-sync]]
- A mini-PC in my closet running the above software
- Two Raspberry Pi 4s
- Four Raspberry Pi Zero Ws
- Some desktop speakers and some Bluetooth speakers (wired to the Pis)
Each of the Raspberry Pis is in a room or porch attached to a speaker.
Snapcast lets me take an audio source and synchronize it across multiple
clients. Each of the Raspberry Pis is running a =snapclient= instance
and plays whatever the =snapserver= instance tells it to.
Snapcast is set up to send whichever of the streams (MPD, Spotify,
Shairport-sync/AirPlay) is playing audio to each of the clients that are
connected to it.
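For the curious, the heart of that is the stream list in =snapserver.conf=. Mine looks roughly like this - the binary paths and names here are illustrative rather than copied verbatim from my setup:
#+BEGIN_EXAMPLE
[stream]
# MPD is configured to write raw audio into this FIFO; snapserver reads it back out
source = pipe:///tmp/snapfifo?name=MPD

# snapserver spawns librespot itself to act as a Spotify Connect endpoint
source = librespot:///usr/bin/librespot?name=Spotify&devicename=Snapcast

# shairport-sync provides the AirPlay endpoint
source = airplay:///usr/bin/shairport-sync?name=AirPlay&devicename=Snapcast

# A meta stream plays whichever of the listed sources is currently active
source = meta:///MPD/Spotify/AirPlay?name=Everything
#+END_EXAMPLE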
This lets me or anyone else on my WiFi network play audio directly to
one or more of the speakers - each named for the room it's in - using
Spotify, AirPlay, my own music collection or a URL (like a podcast
episode).
This works out great and we've used it at home for the past year.
I'd like to get the podcast experience to a more seamless place but it's
pretty OK right now using AirMusic on my phone to play audio to the
speakers over AirPlay.
* Tooling :tooling:
** vi modal editing in most places
:PROPERTIES:
#+keywords: vim
#+export_file_name: vi-editing-everywhere
#+subtitle:
:END:
For my sake, I prefer to have Vim bindings in as many places as
possible.
Most shells can be configured to use Vim bindings by putting =set -o vi=
somewhere in your shell startup script.
If you're using ZSH then you'll probably want an additional binding to
restore CTRL-R reverse history search.
=bindkey '^R' history-incremental-search-backward=
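Putting those ZSH bits together, the relevant lines of =~/.zshrc= end up looking something like this:
#+BEGIN_SRC sh
# vi-style keybindings on the command line
set -o vi
# vi mode drops the default CTRL-R binding, so restore reverse history search
bindkey '^R' history-incremental-search-backward
#+END_SRC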
For CLI tools that use the =readline= library, you can configure its
input mode using a =.inputrc= file in your =$HOME= directory.
This affects REPLs like =ghci= and tools like =psql=.
#+begin_src txt
set editing-mode vi
$if mode=vi
set keymap vi-command
# these are for vi-command mode
Control-l: clear-screen
set keymap vi-insert
# these are for vi-insert mode
Control-l: clear-screen
$endif
#+end_src
* AWS :aws:
** Structured and passively collected metrics via AWS CloudWatch
:PROPERTIES:
#+keywords: aws
#+export_file_name: aws-cloudwatch-metric-filters
#+subtitle:
:END:
AWS is a vast and sprawling set of services. It can be hard to find
hidden gems like this one, so I wanted to point it out.
Structured metrics are very helpful for monitoring the health and
function of a software system.
- Do you want to know how long a particular transaction typically takes?
- How fast your database queries are?
- How long external APIs take to respond?
- Fire an alert when a particular function on the site happens too many
times? Or too few times?
...plus a million other things specific to whatever system you're
working on.
There are a lot of great tools for doing this and one that you might not
know about is AWS CloudWatch Metric Filters. If you're already on AWS
then you should consider these because it requires only that your
application logs to CloudWatch.
If you're on ECS then the
[[https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_awslogs.html][awslogs]]
log driver for Docker gets you that nearly for free. By "free" I mean
that your application itself can have /zero/ dependencies on AWS
services and not require any AWS credentials or libraries to start
pumping out metrics that you can visualize, alert on and record over
time.
The
[[https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/MonitoringLogData.html][AWS
docs]] themselves offer the canonical reference for configuring these so
I won't go into detail here.
However, the gist is that for a log filter you define the following
properties
- A filter pattern for extracting a discrete metric value out of a log
entry
- A metric name to store the value in
- An optional dimension for sub-classifying the value
- And finally a log group to extract the metric values from
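As a concrete sketch, here's one way to create such a filter with the AWS CLI. The log group, filter name, metric names and the JSON field below are hypothetical stand-ins for whatever your application actually logs:
#+BEGIN_SRC sh
# Hypothetical: extract a duration metric from JSON log lines that look like
#   {"event": "db_query", "duration_ms": 42}
aws logs put-metric-filter \
  --log-group-name /ecs/my-app \
  --filter-name db-query-duration \
  --filter-pattern '{ $.event = "db_query" }' \
  --metric-transformations \
      metricName=QueryDurationMs,metricNamespace=MyApp,metricValue='$.duration_ms'
#+END_SRC
The same thing can be declared in CloudFormation or Terraform if you'd rather manage it as code.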
After that you just run the application and as the logs roll in the
metric values get pumped out. Then you can
[[https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Create-alarm-on-metric-math-expression.html][define
alarms for alerting]] on them,
[[https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Dashboards.html][graph
them]],
[[https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scaling-simple-step.html#policy-creating-alarm-console][define
autoscaling rules]] from them and more.
To conclude - AWS is big and hairy. While there are benefits to staying
platform agnostic, some AWS services don't require much or any coupling
of your application code to take advantage of. Cloudwatch Metrics is one
of those services and you can get a lot of value out of it with not much
effort.
* Musings :musings:
** TODO My job description
** TODO Managing expectations
** TODO Just let people be wrong
:PROPERTIES:
#+keywords: advice relationships people
#+export_file_name: let-people-fail
#+subtitle:
:END:
Warning: This, like most things, will involve a fair bit of projection.
I have some thoughts about collaboration.
While a lot of this is obvious and well accepted, I think there are some fine points worth elaborating on.
The obvious part is that people work better together when they believe they are trusted. Trust breeds initiative and independence. Distrust breeds resentment and inaction.
Consider the flip side of trust, for a moment.
A common way that people show _distrust_ when collaborating is either micromanaging or just coming in behind someone and redoing their work.
If that demonstrates distrust, then the way to demonstrate trust is the opposite: step back and let their work stand, even when you would have done it differently.
It's not enough that you simply _do_ trust someone else to get the benefits, you need to show it. I think this is the part that many people skip or ignore.
This is, of course, true in general.
** Very simple CSS frameworks
:PROPERTIES:
#+keywords: CSS
#+export_file_name: css-frameworks
#+subtitle:
:END:
*** Minimal CSS / fancy resets
I really like simple drop-in CSS resets like the one I use for this site.
At the time of writing, I'm using [[https://picocss.com/][Pico]] but I also considered [[https://yegor256.github.io/tacit/][tacit]].
The idea is that they provide nice default styling of HTML elements out of the box without the need to reference any specific classes.
The idea works well for sites that are much more content than layout - like this one.
Using Pico is a matter of including this link tag in the page's HEAD element:
#+BEGIN_SRC html
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/@picocss/pico@2/css/pico.min.css">
#+END_SRC
#+BEGIN_SRC haskell
doThings :: String -> IO ()
doThings name = do
  putStrLn name
#+END_SRC


@@ -2,17 +2,19 @@
#+author: James Brechtel
#+email: me@jamesbrechtel.com
#+bind: org-export-publishing-directory "./public"
#+html_headx: <link rel="stylesheet" href="https://cdn.jsdelivr.net/gh/kimeiga/bahunya/dist/bahunya.min.css">
#+html_headxx: <link rel="stylesheet" href="//writ.cmcenroe.me/1.0.4/writ.min.css" type="text/css">
#+html_head_bahunya: <link rel="stylesheet" href="https://cdn.jsdelivr.net/gh/kimeiga/bahunya/dist/bahunya.min.css">
#+html_head: <link rel="stylesheet" href="https://unpkg.com/awsm.css/dist/awsm.min.css" type="text/css">
#+html_head_writ: <link rel="stylesheet" href="//writ.cmcenroe.me/1.0.4/writ.min.css" type="text/css">
#+html_head_awsm: <link rel="stylesheet" href="https://unpkg.com/awsm.css/dist/awsm.min.css" type="text/css">
#+html_head_simple: <link rel="stylesheet" href="https://cdn.simplecss.org/simple.min.css">
#+html_head_holiday: <link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/holiday.css@0.11.2" />
#+html_head_mvp: <link rel="stylesheet" href="https://unpkg.com/mvp.css">
#+html_head_pico: <link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/@picocss/pico@2/css/pico.min.css">
#+html_head: <link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/@picocss/pico@2/css/pico.min.css">
#+html_head_tacit: <link rel="stylesheet" href="https://cdn.jsdelivr.net/gh/yegor256/tacit@gh-pages/tacit-css-1.8.1.min.css"/>
#+options: html-link-use-abs-url:nil html-postamble:nil
#+options: html-preamble:t html-scripts:nil html-style:nil
#+options: html5-fancy:t tex:t
#+options: author:t broken-links:mark c:nil creator:nil f:t tasks:t toc:nil todo:t
#+OPTIONS: num:nil


@@ -4,6 +4,8 @@
#+subtitle:
:END:
** vi modal editing in most places
#+HTML: <section>
#+HTML: <article>
For my sake, I prefer to have Vim bindings in as many places as
possible.
@@ -19,6 +21,8 @@ For CLI tools that use the =readline= library then you can configure its
input mode using a =.inputrc= file in your =$HOME= directory.
This affects REPLs like =ghci= and tools like =psql=.
#+HTML: </article>
#+HTML: </section>
#+begin_src txt
set editing-mode vi
@@ -34,3 +38,86 @@ Control-l: clear-screen
$endif
#+end_src
#+BEGIN_SRC haskell
module Data.Validation.Aeson where

import Control.Monad.Identity
import Data.Aeson
import qualified Data.Aeson.Key as Key
import qualified Data.Aeson.KeyMap as KeyMap
import qualified Data.ByteString as BS
import qualified Data.ByteString.Lazy as LazyBS
import qualified Data.Map.Strict as Map
import qualified Data.Set as Set
import qualified Data.Text as Text
import qualified Data.Vector as Vec

import Data.Validation.Types

-- Decode a lazy ByteString and run a pure validator over the parsed Value
decodeValidJSON :: Validator Value a -> LazyBS.ByteString -> ValidationResult a
decodeValidJSON validator input =
  runIdentity (decodeValidJSONT (liftV validator) input)

decodeValidJSONStrict :: Validator Value a -> BS.ByteString -> ValidationResult a
decodeValidJSONStrict validator input =
  runIdentity (decodeValidJSONStrictT (liftV validator) input)

-- Monadic variants; a JSON parse failure becomes an Invalid result
decodeValidJSONT ::
  Applicative m =>
  ValidatorT Value m a ->
  LazyBS.ByteString ->
  m (ValidationResult a)
decodeValidJSONT validator input =
  case eitherDecode input of
    Left err -> pure $ Invalid (errMessage $ Text.pack err)
    Right value -> runValidatorT validator (value :: Value)

decodeValidJSONStrictT ::
  Applicative m =>
  ValidatorT Value m a ->
  BS.ByteString ->
  m (ValidationResult a)
decodeValidJSONStrictT validator input =
  case eitherDecodeStrict input of
    Left err -> pure $ Invalid (errMessage $ Text.pack err)
    Right value -> runValidatorT validator (value :: Value)

-- How to pull primitive values and children out of an aeson Value
instance Validatable Value where
  inputText (String text) = Just text
  inputText _ = Nothing

  inputNull Null = IsNull
  inputNull _ = NotNull

  inputBool (Bool True) = Just True
  inputBool (Bool False) = Just False
  inputBool _ = Nothing

  arrayItems (Array items) = Just items
  arrayItems _ = Nothing

  scientificNumber (Number sci) = Just sci
  scientificNumber _ = Nothing

  lookupChild attrName (Object hmap) =
    LookupResult $
      KeyMap.lookup (Key.fromText attrName) hmap
  lookupChild _ _ = InvalidLookup

-- Render accumulated validation errors back out as JSON
instance ToJSON Errors where
  toJSON (Messages set) =
    Array
      . Vec.fromList
      . map toJSON
      . Set.toList
      $ set
  toJSON (Group attrs) =
    Object
      . KeyMap.fromList
      . Map.toList
      . Map.mapKeys Key.fromText
      . Map.map toJSON
      $ attrs
#+END_SRC