Double Integration of Six Functions Using Three Different Quadrature Algorithms (c) January 15, 2024, by James Pate Williams, Jr. All Applicable Rights Reserved

I utilized six functions of two independent variables. The integration methods were a probabilistic Monte Carlo procedure, TRICUB from “A Numerical Library in C for Scientists and Engineers” by H. T. Lau, PhD, and an ACM-published algorithm that I translated from FORTRAN to C. See the following online references:

https://math.libretexts.org/Bookshelves/Calculus/Vector_Calculus_(Corral)/03%3A_Multiple_Integrals/3.04%3A_Numerical_Approximation_of_Multiple_Integrals
https://tutorial.math.lamar.edu/Classes/CalcIII/IteratedIntegrals.aspx

== Menu ==

1 8 * x + 6 * y
2 6 * x * y * y
3 2 * x - 4 * y * y * y
4 x * x * y * y + cos(pi * x) + sin(pi * y)
5 pow(2 * x + 3 * y, -2)
6 x * exp(x * y)
7 Exit
Option: 1
f(x, y) = 8x + 6y
[0.000000, 1.000000] x [0.000000, 2.000000]
N Integral +-Error % Error
10 18.398218 2.687822 8.008911
100 19.546981 0.838888 2.265096
1000 19.859442 0.266101 0.702789
10000 20.152819 0.082772 0.764093
100000 20.014598 0.026273 0.072988
1000000 20.013430 0.008323 0.067150
NUMAL Integral Value and Percent Error: 22.666667 13.333333
nQuadrature Integral Value and Percent Error: 19.999968 0.000160
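For reference, the exact value follows by iterated integration:

```latex
\int_0^1 \int_0^2 (8x + 6y)\, dy\, dx
  = \int_0^1 (16x + 12)\, dx = 8 + 12 = 20
```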
== Menu ==
1 8 * x + 6 * y
2 6 * x * y * y
3 2 * x - 4 * y * y * y
4 x * x * y * y + cos(pi * x) + sin(pi * y)
5 pow(2 * x + 3 * y, -2)
6 x * exp(x * y)
7 Exit
Option: 2
f(x, y) = 6 * x * y * y
[1.000000, 2.000000] x [2.000000, 4.000000]
N Integral +-Error % Error
10 6.400786 2.133333 92.380017
100 7.528110 0.951714 91.037965
1000 7.951653 0.294436 90.533746
10000 8.083377 0.094959 90.376933
100000 8.001852 0.029859 90.473986
1000000 8.010457 0.009474 90.463742
NUMAL Integral Value and Percent Error: 204.800000 143.809524
nQuadrature Integral Value and Percent Error: 167.999730 99.999679
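For reference, the exact value is:

```latex
\int_1^2 \int_2^4 6xy^2\, dy\, dx
  = \int_1^2 112x\, dx = 56x^2 \Big|_1^2 = 168
```

Note that the percent-error columns for this case appear to be computed against a reference value of 84 rather than 168 (for example, |167.999730 − 84| / 84 ≈ 100%), so they should be read with care.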
== Menu ==
1 8 * x + 6 * y
2 6 * x * y * y
3 2 * x - 4 * y * y * y
4 x * x * y * y + cos(pi * x) + sin(pi * y)
5 pow(2 * x + 3 * y, -2)
6 x * exp(x * y)
7 Exit
Option: 3
f(x, y) = 2 * x - 4 * y * y * y
[-5.000000, 4.000000] x [0.000000, 3.000000]
N Integral +-Error % Error
10 -645.214459 239.379239 14.654172
100 -436.170436 84.138767 42.305498
1000 -471.478924 25.861533 37.635063
10000 -486.066536 8.338088 35.705485
100000 -484.301805 2.644031 35.938915
1000000 -486.527596 0.839338 35.644498
NUMAL Integral Value and Percent Error: -765.000000 1.190476
nQuadrature Integral Value and Percent Error: -755.998734 0.000167
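For reference, the exact value is:

```latex
\int_{-5}^{4} \int_0^3 (2x - 4y^3)\, dy\, dx
  = \int_{-5}^{4} (6x - 81)\, dx
  = \left(3x^2 - 81x\right) \Big|_{-5}^{4} = -276 - 480 = -756
```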
== Menu ==
1 8 * x + 6 * y
2 6 * x * y * y
3 2 * x - 4 * y * y * y
4 x * x * y * y + cos(pi * x) + sin(pi * y)
5 pow(2 * x + 3 * y, -2)
6 x * exp(x * y)
7 Exit
Option: 4
f(x, y) = x * x * y * y + cos(pi * x) + sin(pi * y)
[-2.000000, -1.000000] x [0.000000, 1.000000]
N Integral +-Error % Error
10 1.103184 0.173977 22.003269
100 0.708005 0.063869 49.942995
1000 0.757961 0.021546 46.411062
10000 0.744889 0.006866 47.335224
100000 0.748463 0.002171 47.082576
1000000 0.746892 0.000686 47.193609
NUMAL Integral Value and Percent Error: 1.449491 2.481177
nQuadrature Integral Value and Percent Error: 1.414395 0.000160
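For reference, the exact value is:

```latex
\int_{-2}^{-1} \int_0^1 \left(x^2 y^2 + \cos \pi x + \sin \pi y\right) dy\, dx
  = \int_{-2}^{-1} \left(\frac{x^2}{3} + \cos \pi x + \frac{2}{\pi}\right) dx
  = \frac{7}{9} + \frac{2}{\pi} \approx 1.414397
```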
== Menu ==
1 8 * x + 6 * y
2 6 * x * y * y
3 2 * x - 4 * y * y * y
4 x * x * y * y + cos(pi * x) + sin(pi * y)
5 pow(2 * x + 3 * y, -2)
6 x * exp(x * y)
7 Exit
Option: 5
f(x, y) = pow(2 * x + 3 * y, -2)
[0.000000, 1.000000] x [1.000000, 2.000000]
N Integral +-Error % Error
10 0.555876 0.235032 1394.669315
100 0.752953 0.323309 1924.579264
1000 0.826769 0.275474 2123.058629
10000 0.927229 0.177631 2393.180482
100000 0.995193 0.097358 2575.927399
1000000 1.291253 0.127325 3371.987619
NUMAL Integral Value and Percent Error: 0.034125 8.243058
nQuadrature Integral Value and Percent Error: 0.037190 0.000639
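For reference, the exact value is:

```latex
\int_0^1 \int_1^2 \frac{dy\, dx}{(2x + 3y)^2}
  = \int_0^1 \frac{1}{3}\left(\frac{1}{2x+3} - \frac{1}{2x+6}\right) dx
  = \frac{1}{6}\ln\frac{5}{4} \approx 0.037190
```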
== Menu ==
1 8 * x + 6 * y
2 6 * x * y * y
3 2 * x - 4 * y * y * y
4 x * x * y * y + cos(pi * x) + sin(pi * y)
5 pow(2 * x + 3 * y, -2)
6 x * exp(x * y)
7 Exit
Option: 6
f(x, y) = x * exp(x * y)
[0.000000, 1.000000] x [-1.000000, 2.000000]
N Integral +-Error % Error
10 3.217934 1.437426 inf
100 5.562365 0.873166 inf
1000 5.325378 0.225123 inf
10000 5.497069 0.075770 inf
100000 5.362919 0.023344 inf
1000000 5.371802 0.007410 inf
NUMAL Integral Value and Percent Error: 6.162907 inf
nQuadrature Integral Value and Percent Error: 2.562339 inf
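For reference, the exact value is:

```latex
\int_0^1 \int_{-1}^{2} x e^{xy}\, dy\, dx
  = \int_0^1 \left(e^{2x} - e^{-x}\right) dx
  = \frac{e^2 - 1}{2} + \frac{1}{e} - 1 \approx 2.562407
```

The "inf" entries in the percent-error columns suggest the stored reference value for this case was zero, producing a division by zero.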
Unknown option, please try again.
== Menu ==
1 8 * x + 6 * y
2 6 * x * y * y
3 2 * x - 4 * y * y * y
4 x * x * y * y + cos(pi * x) + sin(pi * y)
5 pow(2 * x + 3 * y, -2)
6 x * exp(x * y)
7 Exit
Option:

Revisiting One-dimensional Definite Integration (c) January 15, 2024, by James Pate Williams, Jr.

These tests involve two integration methods from “A Numerical Library in C for Scientists and Engineers” by H. T. Lau, PhD, and two home-grown integration algorithms: Trapezoidal Rule and Simpson’s Rule. The second of the Lau algorithms performs the best.

Integral of f(x) = 10 / (x * x) for various a and b

INTEGRAL delivers:
-5.000000e+00 0 -5.000000e+00 -2 2
-7.499999e+00 0 -7.499999e+00 -4 1
-9.499999e+00 0 -9.499999e+00 -20 0
-9.999990e+00 1 -9.999990e+00 0 0
Integral of f(x) = sin(x) from 0 to pi
Delivers: 2.000000e+00 0
Approximation of the integral
x * x * exp(x) * sin(x) from 0 to pi
53.566456 0.000000
Steps Trapezoidal Simpson Err Trap Err Simp
16 52.835728 53.554295 1.364174 0.022724
32 53.383198 53.565685 0.342135 0.001460
48 53.484917 53.566315 0.152242 0.000285
64 53.520687 53.566395 0.085464 0.000135
80 53.537106 53.566418 0.054814 0.000093
96 53.546146 53.566395 0.037936 0.000135
Exact integral 53.566467

Lau’s Merge-Sort Versus Cormen et al. Quick-Sort (c) January 13-14, 2024, by James Pate Williams, Jr.

The merge-sort I used was from “A Numerical Library in C for Scientists and Engineers” by H. T. Lau, translated from the ALGOL 60 library NUMAL. The quick-sort algorithm was from “Introduction to Algorithms” by Thomas H. Cormen, Charles E. Leiserson, and Ronald L. Rivest.

== Menu ==

1 Side-by-Side Tests
2 Timing Comparisons
3 Exit
Enter an option: 1
Enter a PRNG Seed >= 1: 1
0.001251 0.001251 0.001251
0.563585 0.014985 0.014985
0.193304 0.174108 0.174108
0.808740 0.193304 0.193304
0.585009 0.303995 0.303995
0.479873 0.350291 0.350291
0.350291 0.479873 0.479873
0.895962 0.513535 0.513535
0.822840 0.563585 0.563585
0.746605 0.585009 0.585009
0.174108 0.710501 0.710501
0.858943 0.746605 0.746605
0.710501 0.808740 0.808740
0.513535 0.822840 0.822840
0.303995 0.858943 0.858943
0.014985 0.895962 0.895962
== Menu ==

1 Side-by-Side Tests
2 Timing Comparisons
3 Exit
Enter an option: 2
Enter a PRNG Seed >= 1: 1
mergesort mean runtime = 0.358999
QuickSort mean runtime = 0.290000
mergesort std dev = 0.005610
QuickSort std dev = 0.004532

Persides Solution of a Well-Known Space-Time Metric and the Legendre Functions (Polynomials) of the First and Second Kind

https://www.sciencedirect.com/science/article/pii/0022247X73902771
https://en.wikipedia.org/wiki/Laplace%27s_equation
https://math.libretexts.org/Bookshelves/Differential_Equations/A_First_Course_in_Differential_Equations_for_Scientists_and_Engineers_(Herman)/04%3A_Series_Solutions/4.05%3A_Legendre_Polynomials

Fast and Slow Matrix Multiplication by James Pate Williams, Jr. © January 13, 2024, All Applicable Rights Reserved

Note: For comparison with a paper on matrix operations, we use column-major order, the layout used in Fortran; C uses row-major order.

Here is the supposedly slow C source code:

void ColMajorSlowMatMul(double** a, double** b, double** c, int n)
{
    // Inner-product ("ijk") form: t accumulates row i of a times column j of b.
    for (int j = 1; j <= n; j++)
        for (int k = 1; k <= n; k++)
            c[j][k] = 0.0;

    for (int j = 1; j <= n; j++)
    {
        for (int i = 1; i <= n; i++)
        {
            double t = 0;

            for (int k = 1; k <= n; k++)
                t += a[i][k] * b[k][j];

            c[i][j] = t;
        }
    }
}

And now we present the allegedly fast multiplication code:

void ColMajorFastMatMul(double** a, double** b, double** c, int n)
{
    // Outer-product ("jki") form: hold b[k][j] in t and stream down column j of c.
    for (int j = 1; j <= n; j++)
        for (int k = 1; k <= n; k++)
            c[j][k] = 0.0;
    
    for (int j = 1; j <= n; j++)
    {
        for (int k = 1; k <= n; k++)
        {
            double t = b[k][j];

            for (int i = 1; i <= n; i++)
                c[i][j] += a[i][k] * t;
        }
    }
}

Finally, we have another algorithm for matrix multiplication:

void ColMajorSemiMatMul(double** a, double** b, double** c, int n)
{
    // Same loop order as the slow version but accumulating directly into c[i][j].
    for (int j = 1; j <= n; j++)
        for (int k = 1; k <= n; k++)
            c[j][k] = 0.0;

    for (int j = 1; j <= n; j++)
    {
        for (int i = 1; i <= n; i++)
        {
            for (int k = 1; k <= n; k++)
                c[i][j] += a[i][k] * b[k][j];
        }
    }
}

We performed the following experiments for n = 100, 200, 300, 400, and 500; the "equal" counts verify that all three routines produce identical products.

slow runtime in microseconds: 2736
semi runtime in microseconds: 4001
fast runtime in microseconds: 5293
n * n = 10000
equal = 10000
n * n = 10000
equal = 10000
slow runtime in microseconds: 21962
semi runtime in microseconds: 29585
fast runtime in microseconds: 28701
n * n = 40000
equal = 40000
n * n = 40000
equal = 40000
slow runtime in microseconds: 67256
semi runtime in microseconds: 101554
fast runtime in microseconds: 107969
n * n = 90000
equal = 90000
n * n = 90000
equal = 90000
slow runtime in microseconds: 183839
semi runtime in microseconds: 268616
fast runtime in microseconds: 244832
n * n = 160000
equal = 160000
n * n = 160000
equal = 160000
slow runtime in microseconds: 428263
semi runtime in microseconds: 638721
fast runtime in microseconds: 650535
n * n = 250000
equal = 250000
n * n = 250000
equal = 250000

// FastMatrixMultiplication.cpp : This file contains the 'main' function. Program execution begins and ends there.
// https://www-users.york.ac.uk/~mijp1/teaching/grad_HPC_for_MatSci/Lecture4.pdf
// https://stackoverflow.com/questions/25483620/how-to-measure-running-time-of-specific-function-in-c-very-accurate

#include <stdlib.h>
#include <chrono>
#include <iostream>
using namespace std;

typedef chrono::high_resolution_clock Clock;

void ColMajorFastMatMul(double** a, double** b, double** c, int n)
{
    for (int j = 1; j <= n; j++)
        for (int k = 1; k <= n; k++)
            c[j][k] = 0.0;
    
    for (int j = 1; j <= n; j++)
    {
        for (int k = 1; k <= n; k++)
        {
            double t = b[k][j];

            for (int i = 1; i <= n; i++)
                c[i][j] += a[i][k] * t;
        }
    }
}

void ColMajorSemiMatMul(double** a, double** b, double** c, int n)
{
    for (int j = 1; j <= n; j++)
        for (int k = 1; k <= n; k++)
            c[j][k] = 0.0;

    for (int j = 1; j <= n; j++)
    {
        for (int i = 1; i <= n; i++)
        {
            for (int k = 1; k <= n; k++)
                c[i][j] += a[i][k] * b[k][j];
        }
    }
}

void ColMajorSlowMatMul(double** a, double** b, double** c, int n)
{
    for (int j = 1; j <= n; j++)
        for (int k = 1; k <= n; k++)
            c[j][k] = 0.0;

    for (int j = 1; j <= n; j++)
    {
        for (int i = 1; i <= n; i++)
        {
            double t = 0;

            for (int k = 1; k <= n; k++)
                t += a[i][k] * b[k][j];

            c[i][j] = t;
        }
    }
}

void GenerateMatrix(double** a, int n, int seed)
{
    srand(seed);

    for (int j = 1; j <= n; j++)
    {
        for (int k = 1; k <= n; k++)
        {
            a[j][k] = (double)rand() / RAND_MAX;
        }
    }
}

int main()
{
    for (int n = 100; n <= 500; n += 100)
    {
        double** a = new double* [n + 1];
        double** b = new double* [n + 1];
        double** c = new double* [n + 1];
        double** d = new double* [n + 1];
        double** e = new double* [n + 1];

        for (int j = 0; j < n + 1; j++)
        {
            a[j] = new double[n + 1];
            b[j] = new double[n + 1];
            c[j] = new double[n + 1];
            d[j] = new double[n + 1];
            e[j] = new double[n + 1];
        }

        GenerateMatrix(a, n, 1);
        GenerateMatrix(b, n, 2);

        auto clock1 = Clock::now();
        ColMajorSlowMatMul(a, b, c, n);
        auto clock2 = Clock::now();
        long long microseconds1 = chrono::duration_cast<chrono::microseconds>
            (clock2 - clock1).count();
    
        auto clock3 = Clock::now();
        ColMajorSemiMatMul(a, b, d, n);
        auto clock4 = Clock::now();
        long long microseconds2 = chrono::duration_cast<chrono::microseconds>
            (clock4 - clock3).count();

        auto clock5 = Clock::now();
        ColMajorFastMatMul(a, b, e, n);
        auto clock6 = Clock::now();
        long long microseconds3 = chrono::duration_cast<chrono::microseconds>
            (clock6 - clock5).count();

        cout << "slow runtime in microseconds: " << microseconds1 << endl;
        cout << "semi runtime in microseconds: " << microseconds2 << endl;
        cout << "fast runtime in microseconds: " << microseconds3 << endl;

        long long equal = 0;

        for (int j = 1; j <= n; j++)
            for (int k = 1; k <= n; k++)
                if (c[j][k] == d[j][k])
                    equal++;

        cout << "n * n = " << n * n << endl;
        cout << "equal = " << equal << endl;

        equal = 0;

        for (int j = 1; j <= n; j++)
            for (int k = 1; k <= n; k++)
                if (c[j][k] == e[j][k])
                    equal++;

        cout << "n * n = " << n * n << endl;
        cout << "equal = " << equal << endl;

        for (int j = 0; j < n + 1; j++)
        {
            delete[] a[j];
            delete[] b[j];
            delete[] c[j];
            delete[] d[j];
            delete[] e[j];
        }

        delete[] a;
        delete[] b;
        delete[] c;
        delete[] d;
        delete[] e;
    }
}

New Jacobi Polynomials Application January 8, 2024, by James Pate Williams, Jr.

The two primary references used to create my application were: “A Numerical Library in C for Scientists and Engineers” by H. T. Lau and the following website: https://en.wikipedia.org/wiki/Jacobi_polynomials.

Using the Jacobi parameters alpha = 0 and beta = 0, we have the Legendre polynomials for degrees 4 and 6 and their associated roots:

Using alpha = 0.5 and beta = 0.5, we obtain for degrees 4 and 6:

Electron Probability Distribution Function Etc. (c) James Pate Williams, Jr. December 2023

More than Four Dimensions, Why Worry? A Blog Entry by James Pate Williams, Jr., December 27, 2023

Some modern physical models of our universe require more than Einstein’s four dimensions: three spatial dimensions and one time dimension. Why do people worry about introducing more dimensions into our understanding of chemistry and physics? When Erwin Schrödinger introduced his famous quantum mechanical two-body solution of the time-independent hydrogen-like atom wave equation, he reduced the problem from four dimensions to three spatial dimensions. Later, Wolfgang Pauli espoused his famous Pauli Exclusion Principle, which simply states that no two electrons (fermions) in an atomic orbital can have the same spin quantum number. Atoms live in a four-dimensional quantum-number space augmented by three spatial dimensions and one time dimension.