Difference of Sigmoidal¶
The Difference of Sigmoidal membership function implements μ(x) = s₁(x) - s₂(x), where each sᵢ is a logistic curve with its own slope and center parameters. Combining two sigmoids in this way produces membership shapes, such as smooth plateaus, that a single sigmoid cannot.
The Difference of Sigmoidal membership function is defined as:
$$\mu(x) = s_1(x) - s_2(x)$$
Where each sigmoid function is:
$$s_1(x) = \frac{1}{1 + e^{-a_1(x - c_1)}}$$ $$s_2(x) = \frac{1}{1 + e^{-a_2(x - c_2)}}$$
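As a minimal sketch of this definition (a hand-rolled NumPy helper, not the toolbox class; the name dsig and the parameter values are purely illustrative), the formula can be evaluated directly:

```python
import numpy as np

def dsig(x, a1, c1, a2, c2):
    """mu(x) = s1(x) - s2(x), with each s_i a logistic curve."""
    s1 = 1.0 / (1.0 + np.exp(-a1 * (x - c1)))
    s2 = 1.0 / (1.0 + np.exp(-a2 * (x - c2)))
    return s1 - s2

x = np.linspace(0, 10, 11)
print(dsig(x, a1=2.0, c1=4.0, a2=5.0, c2=8.0))
```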
The function is characterized by four parameters (two for each sigmoid):
a₁: Slope parameter for the first sigmoid (s₁)
- Positive values: standard sigmoid (0 → 1 as x increases)
- Negative values: inverted sigmoid (1 → 0 as x increases)
- Larger |a₁|: steeper transition for s₁
c₁: Center parameter for the first sigmoid (s₁)
- Controls the inflection point where s₁(c₁) = 0.5
- Shifts s₁ left/right along the x-axis
a₂: Slope parameter for the second sigmoid (s₂)
- Same interpretation as a₁ but for s₂
c₂: Center parameter for the second sigmoid (s₂)
- Same interpretation as c₁ but for s₂
Constraints: a₁ ≠ 0 and a₂ ≠ 0 (a zero slope would make the corresponding sigmoid constant at 0.5); otherwise all parameters can be any real number.
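For concreteness, here is a small numeric illustration of how the parameters interact (reusing a hand-rolled helper rather than the toolbox class; the values are only examples): with a₁ = a₂ > 0 and c₁ < c₂, μ passes through roughly 0.5 at each center and stays near 1 between well-separated centers.

```python
import numpy as np

def dsig(x, a1, c1, a2, c2):
    # mu(x) = s1(x) - s2(x), as defined above
    s1 = 1.0 / (1.0 + np.exp(-a1 * (x - c1)))
    s2 = 1.0 / (1.0 + np.exp(-a2 * (x - c2)))
    return s1 - s2

# Evaluate at the two centers and midway between them
print(dsig(np.array([4.0, 6.0, 8.0]), a1=2.0, c1=4.0, a2=2.0, c2=8.0))
# -> approximately [0.50, 0.96, 0.50]
```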
Partial Derivatives¶
For optimization in ANFIS networks, we need the gradients of the membership function with respect to each parameter. Since μ(x) = s₁(x) - s₂(x), the derivatives follow from the chain rule:
Derivatives w.r.t. Parameters of the First Sigmoid (s₁)
For s₁(x) = 1/(1 + exp(-a₁(x - c₁))):
- ∂μ/∂a₁ = ∂s₁/∂a₁ = s₁(x) · (1 - s₁(x)) · (x - c₁)
- ∂μ/∂c₁ = ∂s₁/∂c₁ = -a₁ · s₁(x) · (1 - s₁(x))
Derivatives w.r.t. Parameters of the Second Sigmoid (s₂)
For s₂(x) = 1/(1 + exp(-a₂(x - c₂))), and since μ(x) = s₁(x) - s₂(x):
- ∂μ/∂a₂ = -∂s₂/∂a₂ = -s₂(x) · (1 - s₂(x)) · (x - c₂)
- ∂μ/∂c₂ = -∂s₂/∂c₂ = -(-a₂ · s₂(x) · (1 - s₂(x))) = a₂ · s₂(x) · (1 - s₂(x))
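These expressions can be sanity-checked numerically. The sketch below uses plain NumPy rather than the toolbox's own gradient code (dsig, analytic_grads, and the chosen point and parameter values are illustrative only) and compares the analytic gradients with central finite differences:

```python
import numpy as np

def dsig(x, a1, c1, a2, c2):
    s1 = 1.0 / (1.0 + np.exp(-a1 * (x - c1)))
    s2 = 1.0 / (1.0 + np.exp(-a2 * (x - c2)))
    return s1 - s2

def analytic_grads(x, a1, c1, a2, c2):
    s1 = 1.0 / (1.0 + np.exp(-a1 * (x - c1)))
    s2 = 1.0 / (1.0 + np.exp(-a2 * (x - c2)))
    return {
        "a1": s1 * (1 - s1) * (x - c1),   # d(mu)/d(a1)
        "c1": -a1 * s1 * (1 - s1),        # d(mu)/d(c1)
        "a2": -s2 * (1 - s2) * (x - c2),  # d(mu)/d(a2)
        "c2": a2 * s2 * (1 - s2),         # d(mu)/d(c2)
    }

x = 5.0
params = {"a1": 2.0, "c1": 4.0, "a2": 5.0, "c2": 8.0}
eps = 1e-6
for name, grad in analytic_grads(x, **params).items():
    shifted_up = {**params, name: params[name] + eps}
    shifted_dn = {**params, name: params[name] - eps}
    numeric = (dsig(x, **shifted_up) - dsig(x, **shifted_dn)) / (2 * eps)
    print(f"{name}: analytic={grad:.6f}  finite-diff={numeric:.6f}")
```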
Derivative w.r.t. Input (Optional)
For chaining in neural networks:
$$\frac{d\mu}{dx} = \frac{ds_1}{dx} - \frac{ds_2}{dx}$$
Where: $$\frac{ds_1}{dx} = a_1 \cdot s_1(x) \cdot (1 - s_1(x))$$ $$\frac{ds_2}{dx} = a_2 \cdot s_2(x) \cdot (1 - s_2(x))$$
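Substituting these two expressions gives the input derivative directly in terms of the two sigmoids:
$$\frac{d\mu}{dx} = a_1 \cdot s_1(x) \cdot (1 - s_1(x)) - a_2 \cdot s_2(x) \cdot (1 - s_2(x))$$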
Gradient Computation Details
The gradients are computed using the fundamental sigmoid derivative property: $$\frac{d}{dz}\left(\frac{1}{1+e^{-z}}\right) = \frac{1}{1+e^{-z}} \cdot \left(1 - \frac{1}{1+e^{-z}}\right) = \sigma(z) \cdot (1 - \sigma(z))$$
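Writing zᵢ = aᵢ(x - cᵢ), so that sᵢ(x) = σ(zᵢ), the parameter gradients listed above follow from this property by the chain rule:
$$\frac{\partial s_i}{\partial a_i} = \sigma'(z_i)\,\frac{\partial z_i}{\partial a_i} = s_i(x)\,(1 - s_i(x))\,(x - c_i) \qquad \frac{\partial s_i}{\partial c_i} = \sigma'(z_i)\,\frac{\partial z_i}{\partial c_i} = -a_i\, s_i(x)\,(1 - s_i(x))$$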
This property is used extensively in neural network backpropagation and makes the DiffSigmoidalMF computationally efficient for optimization.
Python Example¶
Let's create a difference of sigmoidal membership function and visualize its shape:
```python
import numpy as np
import matplotlib.pyplot as plt
from anfis_toolbox.membership import DiffSigmoidalMF

# Rising edge centered at c1 = 4, falling edge centered at c2 = 8
diff_sigmoid = DiffSigmoidalMF(a1=2, c1=4, a2=5, c2=8)

# Evaluate the membership function over the input range and plot it
x = np.linspace(0, 10, 100)
y = diff_sigmoid(x)

plt.plot(x, y)
plt.xlabel("x")
plt.ylabel("membership degree")
plt.show()
```
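To also see the two sigmoid components alongside the combined curve, here is a sketch that computes s₁ and s₂ by hand with NumPy (the components are computed directly from the definition, since exposing them through DiffSigmoidalMF is not documented above):

```python
import numpy as np
import matplotlib.pyplot as plt
from anfis_toolbox.membership import DiffSigmoidalMF

a1, c1, a2, c2 = 2, 4, 5, 8
x = np.linspace(0, 10, 200)

# The two components, computed by hand from the definition above
s1 = 1.0 / (1.0 + np.exp(-a1 * (x - c1)))
s2 = 1.0 / (1.0 + np.exp(-a2 * (x - c2)))

plt.plot(x, s1, "--", label="s1 (center c1=4)")
plt.plot(x, s2, "--", label="s2 (center c2=8)")
plt.plot(x, DiffSigmoidalMF(a1=a1, c1=c1, a2=a2, c2=c2)(x), label="mu = s1 - s2")
plt.xlabel("x")
plt.ylabel("membership degree")
plt.legend()
plt.show()
```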
Visualization¶
The following interactive plot shows different difference of sigmoidal membership functions with varying parameter combinations. Each subplot demonstrates how the combination of two sigmoids creates complex membership shapes.
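If the interactive plot is not available in your environment, a static grid of shapes can be reproduced along these lines (the parameter sets here are only examples, not the ones used by the interactive figure):

```python
import numpy as np
import matplotlib.pyplot as plt
from anfis_toolbox.membership import DiffSigmoidalMF

# Example parameter sets (a1, c1, a2, c2): varying slope and center spacing
param_sets = [(2, 3, 2, 7), (5, 3, 5, 7), (1, 4, 3, 6), (2, 2, 2, 8)]
x = np.linspace(0, 10, 300)

fig, axes = plt.subplots(2, 2, figsize=(8, 6))
for ax, (a1, c1, a2, c2) in zip(axes.ravel(), param_sets):
    mf = DiffSigmoidalMF(a1=a1, c1=c1, a2=a2, c2=c2)
    ax.plot(x, mf(x))
    ax.set_title(f"a1={a1}, c1={c1}, a2={a2}, c2={c2}")
fig.tight_layout()
plt.show()
```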