
Working through the single responsibility principle (SRP) in Python when calls are expensive

Some base points:

  • Python method calls are "expensive" due to the language's interpreted nature. In theory, if your code is simple enough, breaking Python code down into small functions hurts performance and only buys readability and reuse (which is a big gain for developers, not so much for users).
  • The single responsibility principle (SRP) keeps code readable and makes it easier to test and maintain.
  • The project has a particular background in which we want readable code, tests, and runtime performance.

For instance, code like the following, which makes four method calls, is slower than the version after it that uses just one call.

from operator import add

class Vector:
    def __init__(self, list_of_3):
        self.coordinates = list_of_3

    def move(self, movement):
        # Translate the vector by adding the movement component-wise.
        self.coordinates = list(map(add, self.coordinates, movement))
        return self.coordinates

    def revert(self):
        # Reverse the order of the coordinates.
        self.coordinates = self.coordinates[::-1]
        return self.coordinates

    def get_coordinates(self):
        return self.coordinates

## Operation with one vector: four separate calls
vec3 = Vector([1, 2, 3])
vec3.move([1, 1, 1])
vec3.revert()
vec3.get_coordinates()

In comparison to this:

from operator import add

def move_and_revert_and_return(vector, movement):
    # The same work as the four calls above, collapsed into one function call.
    return list(map(add, vector, movement))[::-1]

move_and_revert_and_return([1, 2, 3], [1, 1, 1])

If I am to parallelize something like that, it is fairly clear that I lose performance. Mind that this is just an example; my project has several small math routines like it. While that style is much easier to work with, our profilers dislike it.
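
For a rough sense of how that difference shows up, a small timeit sketch like the one below could be used (numbers vary by machine and interpreter, the iteration count is arbitrary, and the setup string just re-declares the two versions from above):

import timeit

setup = """
from operator import add

class Vector:
    def __init__(self, list_of_3):
        self.coordinates = list_of_3

    def move(self, movement):
        self.coordinates = list(map(add, self.coordinates, movement))
        return self.coordinates

    def revert(self):
        self.coordinates = self.coordinates[::-1]
        return self.coordinates

    def get_coordinates(self):
        return self.coordinates

def move_and_revert_and_return(vector, movement):
    return list(map(add, vector, movement))[::-1]
"""

# Each iteration constructs a Vector and makes four more calls,
# versus a single fused function call.
split = timeit.timeit(
    "v = Vector([1, 2, 3]); v.move([1, 1, 1]); v.revert(); v.get_coordinates()",
    setup=setup, number=1_000_000)
fused = timeit.timeit(
    "move_and_revert_and_return([1, 2, 3], [1, 1, 1])",
    setup=setup, number=1_000_000)

print(f"methods: {split:.2f}s  fused: {fused:.2f}s")

On CPython the split version usually comes out slower, which matches what the profilers report, even though the cost per call is tiny in absolute terms.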


How and where do we embrace the SRP without compromising performance in Python, given that the language's implementation directly penalizes extra calls?

Are there workarounds, like some sort of pre-processor that inlines things for release?

Or is Python simply poor at handling code breakdown altogether?
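
To illustrate the workaround I have in mind above (purely a hand-rolled sketch, not an existing pre-processor; the RELEASE flag and function names are made up for illustration), the idea would be to keep the readable SRP-style composition for development and tests, and bind the public name to a hand-fused version for release:

import os
from operator import add

def _move(vector, movement):
    # One responsibility: translate the vector.
    return list(map(add, vector, movement))

def _revert(vector):
    # One responsibility: reverse the coordinates.
    return vector[::-1]

def _move_and_revert_readable(vector, movement):
    # SRP-friendly composition: two small calls, easy to test in isolation.
    return _revert(_move(vector, movement))

def _move_and_revert_fused(vector, movement):
    # Hand-inlined equivalent for the hot path: a single call, no composition.
    return list(map(add, vector, movement))[::-1]

# Bind the public name once at import time so the choice adds no per-call cost.
# The RELEASE environment variable is purely hypothetical.
move_and_revert = (_move_and_revert_fused
                   if os.environ.get("RELEASE")
                   else _move_and_revert_readable)

move_and_revert([1, 2, 3], [1, 1, 1]) then behaves the same in both modes; only the amount of per-call indirection differs.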
