Everything posted by gcs
Thanks. Yeah, it does make sense that you wouldn't want to do that work up front if you weren't sure it was going to be necessary. As I alluded to earlier though, this is for a game framework, which is a case where you generally do know in advance exactly what features are needed. That said, even the game frameworks I've looked at (which generally do a lot of feature detection up front) don't do event normalization up front, even though it's known in advance that it will be needed. Maybe it's just that 'on demand' event normalization is so common that people do it that way by default, even when doing it up front would be a viable alternative. In any case, thanks for your feedback on the issue. Obviously it's not a critical design issue in the given context; it was just something I was curious about.
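To make the "up front" alternative concrete, here's a minimal sketch of what I have in mind: run the feature test once at startup and bake the result into the handler, rather than branching inside every event callback. The function names and the fallback logic are illustrative, not taken from any particular framework.

```javascript
// Build an event normalizer ONCE, based on a feature test, instead of
// re-checking event properties on every event.
//
// `sampleEvent` stands in for a real event object; in the browser you
// could probe MouseEvent.prototype or a synthetic event instead.
function makeNormalizer(sampleEvent) {
  if ('pageX' in sampleEvent) {
    // Modern path: pageX/pageY are available directly.
    return function (e) {
      return { x: e.pageX, y: e.pageY };
    };
  }
  // Fallback path: derive page coordinates from clientX/clientY
  // plus the document's scroll offsets.
  return function (e) {
    var doc = (e.target && e.target.ownerDocument)
      ? e.target.ownerDocument.documentElement
      : { scrollLeft: 0, scrollTop: 0 };
    return { x: e.clientX + doc.scrollLeft, y: e.clientY + doc.scrollTop };
  };
}

// At startup, alongside the other feature checks:
var normalize = makeNormalizer({ pageX: 0, pageY: 0 });
```

The per-event cost is then a single prebuilt function call, which is why performance isn't really the motivation here; the appeal is purely that the detection lives in one place.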
Doing it all in one place would be a 'pro' for me, architecturally at least. Performance isn't much of a concern because whether the work is done up front or on a per-event basis, it's not likely to be a bottleneck. The reason doing it up front is appealing to me is that all other feature detection (canvas, audio, WebGL, Web Audio, and so on) is done up front, and it'd be nice if event properties could be handled there also.

In the given context there won't be multiple page loads (this is for a canvas-based game framework), so I don't think that will be an issue. Per-event normalization is actually what I'm doing currently, but as noted above I'm interested in centralizing all feature detection in one place if it's practical to do so.

The main reason I'm curious about this is that I haven't seen it done this way elsewhere, even in frameworks that do all other feature detection up front, and I just wonder if there's a reason for that. In any case, any further thoughts or suggestions as to whether this method is viable (and why it doesn't appear to be used in practice) would be welcome.
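For what it's worth, the centralized version I'm picturing looks roughly like this: one detection pass at startup that covers rendering, audio, and event properties together. The `caps` shape and the specific checks are just an assumed sketch for illustration, not any framework's actual API (and taking `global` as a parameter is only so the probe is easy to exercise outside a browser).

```javascript
// Sketch: a single startup pass that gathers ALL capability checks,
// including event-property detection, into one object.
function detectCapabilities(global) {
  var caps = {};

  // Rendering / audio checks, same style as the usual up-front tests.
  caps.canvas = typeof global.HTMLCanvasElement !== 'undefined';
  caps.webAudio = typeof global.AudioContext !== 'undefined' ||
                  typeof global.webkitAudioContext !== 'undefined';

  // Event-property detection done here too, once, rather than per
  // event: probe a synthetic event for the fields we care about.
  var probe = (typeof global.MouseEvent === 'function')
    ? new global.MouseEvent('mousemove')
    : {};
  caps.eventHasPageXY = 'pageX' in probe;

  return caps;
}

// At startup: var caps = detectCapabilities(window);
// Handlers then branch on caps (or on functions built from it),
// never on the environment directly.
```

The open question from the post stands: nothing here looks impractical, so it's unclear why frameworks that do the first two checks up front still do the event-property check lazily.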