Search results for: dimensional accuracy
1337 Iterative Method for Lung Tumor Localization in 4D CT
Authors: Sarah K. Hagi, Majdi Alnowaimi
Abstract:
In the last decade, there have been immense advances in medical imaging modalities, which can now scan the whole volume of the lung in high-resolution images within a short time. Thanks to this performance, physicians can clearly identify the complicated anatomical and pathological structures of the lung. These advances therefore create large opportunities for all available types of lung cancer treatment and can increase the survival rate. However, lung cancer is still one of the major causes of death, accounting for around 19% of all cancer patients. Several factors may affect the survival rate. One serious effect is the breathing process, which can degrade the accuracy of diagnosis and of the lung tumor treatment plan. We have therefore developed a semi-automated algorithm to localize the 3D lung tumor position across all respiratory data during respiratory motion. The algorithm can be divided into two stages. First, the lung tumor is segmented in the first phase of the 4D computed tomography (CT) data using an active contours method. Then, the tumor's 3D position is localized across all subsequent phases using an affine transformation with 12 degrees of freedom. Two data sets were used in this study: a computer-simulated 4D CT data set generated with the extended cardiac-torso (XCAT) phantom, and clinical 4D CT data sets. The error is reported as the root mean square error (RMSE); the average error over the data sets is 0.94 mm ± 0.36. Finally, an evaluation and quantitative comparison of the results with a state-of-the-art registration algorithm is presented. The results obtained from the proposed localization algorithm show promise for localizing a lung tumor in 4D CT data.
Keywords: automated algorithm, computed tomography, lung tumor, tumor localization
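The 12 degrees of freedom mentioned above correspond to the nine entries of a 3×3 linear map (rotation, scaling, shear) plus a three-component translation. A minimal NumPy sketch (not the authors' implementation) of that transform and of the RMSE metric behind the error figures, with hypothetical point values:

```python
# Sketch: apply a 12-DOF affine map to 3D tumor contour points and score
# localization error as RMSE. All point values are hypothetical.
import numpy as np

def apply_affine(points, A, t):
    """Map Nx3 points with a 3x3 matrix A (9 DOF) and a translation t (3 DOF)."""
    return points @ A.T + t

def rmse(predicted, reference):
    """Root mean square error between two Nx3 point sets, in mm."""
    return np.sqrt(np.mean(np.sum((predicted - reference) ** 2, axis=1)))

rng = np.random.default_rng(0)
tumor_phase0 = np.array([[10.0, 22.5, 31.0], [11.2, 23.1, 30.4]])  # mm
A = np.eye(3) + 0.01 * rng.normal(size=(3, 3))   # linear part (9 DOF)
t = np.array([0.5, -0.3, 1.1])                   # translation part (3 DOF)
predicted = apply_affine(tumor_phase0, A, t)
reference = predicted + 0.3 * rng.normal(size=predicted.shape)  # stand-in truth
print(f"RMSE = {rmse(predicted, reference):.2f} mm")
```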
1336 Elastic Behaviour of Graphene Nanoplatelets Reinforced Epoxy Resin Composites
Authors: V. K. Srivastava
Abstract:
Graphene has recently attracted increasing attention in nanocomposite applications because it has 200 times greater strength than steel, making it the strongest material ever tested. Graphene, as the fundamental two-dimensional (2D) carbon structure with exceptionally high crystal and electronic quality, has emerged as a rapidly rising star in the field of materials science. Graphene, defined as a 2D crystal, is composed of monolayers of carbon atoms arranged in a honeycombed network of six-membered rings and is of interest to both theoretical and experimental researchers worldwide. The name comes from graphite and alkene; graphite itself consists of many graphene sheets stacked together by weak van der Waals forces, each a monolayer of carbon atoms densely packed into a honeycomb structure. Owing to the superior inherent properties of graphene nanoplatelets (GnP) over other nanofillers, GnP particles were added to epoxy resin at varying weight percentages. The DMA results for the storage modulus, the loss modulus, and tan δ (the ratio of the loss modulus to the storage modulus) versus temperature were all affected by the addition of GnP to the epoxy resin. In epoxy resin, damping (tan δ) is usually caused by movement of the molecular chain. The tan δ of the graphene nanoplatelets/epoxy resin composite is much lower than that of epoxy resin alone, which suggests that the addition of graphene nanoplatelets effectively impedes movement of the molecular chain. The decrease in storage modulus can be interpreted by an increasing susceptibility to agglomeration, leading to less energy dissipation in the system under viscoelastic deformation. The results indicate that tan δ increased with temperature, which confirms that tan δ is associated with magnetic field strength. Also, the results show that the nanohardness increases marginally with increasing elastic modulus. GnP-filled epoxy resin gives higher values than the neat epoxy resin, because GnP improves the mechanical properties of the epoxy. Debonding of GnP is clearly observed in micrographs showing agglomeration of fillers and inhomogeneous distribution. Therefore, the DMA and nanohardness studies indicate that the elastic modulus of epoxy resin is increased by the addition of GnP fillers.
Keywords: agglomeration, elastic modulus, epoxy resin, graphene nanoplatelet, loss modulus, nanohardness, storage modulus
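As a worked illustration of the DMA quantities above: tan δ is obtained pointwise from the storage modulus E′ and the loss modulus E″. A minimal sketch with hypothetical sweep values:

```python
# Hypothetical DMA temperature sweep: tan(delta) = E'' / E' at each point.
import numpy as np

temperature_C = np.array([30, 60, 90, 120, 150])
storage_modulus_GPa = np.array([3.2, 3.0, 2.6, 1.8, 0.9])    # E'
loss_modulus_GPa = np.array([0.10, 0.12, 0.20, 0.45, 0.50])  # E''

tan_delta = loss_modulus_GPa / storage_modulus_GPa
for T, td in zip(temperature_C, tan_delta):
    print(f"{T:4d} C  tan delta = {td:.3f}")
```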
1335 Safe Zone: A Framework for Detecting and Preventing Drones Misuse
Authors: AlHanoof A. Alharbi, Fatima M. Alamoudi, Razan A. Albrahim, Sarah F. Alharbi, Abdullah M Almuhaideb, Norah A. Almubairik, Abdulrahman Alharby, Naya M. Nagy
Abstract:
Recently, drones have received rapid interest in different industries worldwide due to their powerful impact. However, limitations still exist in this emerging technology, especially privacy violation. These aircraft consistently threaten the security of entities by entering restricted areas, accidentally or deliberately. Therefore, this research project aims to develop a drone detection and prevention mechanism to protect restricted areas. Until now, none of the existing solutions has met the optimal requirements for detection, which are cost-effectiveness, high accuracy, long range, convenience, immunity to noise, and generalization. In terms of prevention, existing methods focus on impractical solutions such as catching a drone with a larger drone, training an eagle, or using a gun. In addition, the practical solutions have limitations, such as the No-Fly Zone and PITBULL jammers. According to our study and analysis of previous related works, none of the solutions combines detection and prevention at the same time. The proposed solution is a combination of detection and prevention methods. To implement the detection system, a passive radar will be used to properly distinguish a drone from other possible flying objects. As for prevention, jamming signals and a forced safe landing of the drone are integrated to stop the drone’s operation. We believe that applying this mechanism will limit drone invasions of privacy against highly restricted properties and, consequently, will effectively accelerate drone usage at personal and governmental levels.
Keywords: detection, drone, jamming, prevention, privacy, RF, radar, UAV
1334 Airline Choice Model for Domestic Flights: The Role of Airline Flexibility
Authors: Camila Amin-Puello, Lina Vasco-Diaz, Juan Ramirez-Arias, Claudia Munoz, Carlos Gonzalez-Calderon
Abstract:
Operational flexibility is a fundamental aspect in the field of airlines because, although demand is constantly changing, it is the duty of companies to provide users a service that satisfies their needs efficiently without sacrificing factors such as comfort, safety, and other perception variables. The objective of this research is to understand the factors that describe and explain operational flexibility by implementing advanced analytical methods such as exploratory factor analysis and structural equation modeling, examining multiple levels of operational flexibility, and understanding how these variables influence users' decision-making when choosing an airline and, in turn, how this affects the airlines themselves. The use of a hybrid model with latent variables improves the efficiency and accuracy of airline performance prediction in the unpredictable Colombian market. This pioneering study delves into traveler motivations and their impact on domestic flight demand, offering valuable insights to optimize resources and improve the overall traveler experience. Applying these methods, it was identified that low-cost airlines are not perceived as useful in terms of flexibility, while users, especially women, found airlines with greater flexibility in ticket costs and flight schedules to be more useful. All of this allows airlines to anticipate and adapt to their customers' needs efficiently: to plan flight capacity appropriately, adjust pricing strategies, and improve the overall passenger experience.
Keywords: hybrid choice model, airline, business travelers, domestic flights
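A minimal sketch of the exploratory-factor-analysis step named above, using scikit-learn on synthetic survey-style responses (the structural equation model and the full hybrid choice model are beyond a short example; all data here are simulated):

```python
# Exploratory factor analysis on simulated Likert-style indicator data:
# rows = respondents, columns = observed indicators of flexibility/comfort.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_respondents = 200
latent = rng.normal(size=(n_respondents, 2))        # two latent factors
loadings = rng.normal(size=(2, 6))                  # six observed items
observed = latent @ loadings + 0.5 * rng.normal(size=(n_respondents, 6))

fa = FactorAnalysis(n_components=2, random_state=0).fit(observed)
print("estimated loadings (items x factors):")
print(fa.components_.T.round(2))
```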
1333 Rapid Identification and Diagnosis of the Pathogenic Leptospiras through Comparison among Culture, PCR and Real Time PCR Techniques from Samples of Human and Mouse Feces
Authors: S. Rostampour Yasouri, M. Ghane, M. Doudi
Abstract:
Leptospirosis is one of the most significant infectious and zoonotic diseases, with global spread. The disease causes economic losses and human fatalities in various countries, including the northern provinces of Iran. Given the multifaceted clinical manifestation of the disease and the risk of premature death of patients, the aim of this research is to identify pathogenic leptospiras and compare rapid diagnostic techniques for them. In the spring and summer of 2020-2022, 25 fecal samples were collected from suspected leptospirosis patients and 25 fecal samples from mice residing in the rice fields and factories of Tonekabon city. Samples were prepared by centrifugation and passage through membrane filters. The culture technique used liquid and solid EMJH media during one month of incubation at 30°C, after which the media were examined microscopically. DNA extraction was conducted with an extraction kit. Leptospiras were diagnosed by PCR and real-time PCR (SYBR Green) techniques using the lipL32-specific primer. Among the patients, 11 samples (44%) and 8 samples (32%) were determined to be pathogenic Leptospira by real-time PCR and PCR, respectively. Among the mice, 9 samples (36%) and 3 samples (12%) were determined to be pathogenic Leptospira by the same techniques, respectively. Although culture is considered the gold standard technique, it is not fast, owing to the slow growth of pathogenic Leptospira and the lack of colony formation in some species. Real-time PCR allowed rapid diagnosis with much higher accuracy than PCR, because PCR could not reliably identify samples with a lower microbial load.
Keywords: culture, pathogenic leptospiras, PCR, real time PCR
1332 Navigating the Nexus of HIV/AIDS Care: Leveraging Statistical Insight to Transform Clinical Practice and Patient Outcomes
Authors: Nahashon Mwirigi
Abstract:
The management of HIV/AIDS is a global challenge, demanding precise tools to predict disease progression and guide tailored treatment. CD4 cell count dynamics, a crucial indicator of immune function, play an essential role in understanding HIV/AIDS progression and enhancing patient care through effective modeling. While several models assess disease progression, existing methods often fall short in capturing the complex, non-linear nature of HIV/AIDS, especially across diverse demographics. A need exists for models that balance predictive accuracy with clinical applicability, enabling individualized care strategies based on patient-specific progression rates. This study utilizes patient data from Kenyatta National Hospital (2003–2014) to model HIV/AIDS progression across six CD4-defined states. The Exponential, 2-Parameter Weibull, and 3-Parameter Weibull models are employed to analyze failure rates and explore progression patterns by age and gender. Model selection is based on the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC) to identify the models best representing disease progression variability across demographic groups. The 3-Parameter Weibull model emerges as the most effective, accurately capturing HIV/AIDS progression dynamics, particularly by incorporating delayed progression effects. This model reflects age- and gender-specific variations, offering refined insights into patient trajectories and facilitating targeted interventions. One key finding is that older patients progress more slowly through the CD4-defined stages, with a delayed onset of the advanced stages. This suggests that older patients may benefit from extended monitoring intervals, allowing providers to optimize resources while maintaining consistent care. Recognizing slower progression in this demographic helps clinicians reduce unnecessary interventions, prioritizing care for faster-progressing groups. Gender-based analysis reveals that female patients exhibit more consistent progression, while male patients show greater variability. This highlights the need for gender-specific treatment approaches, as men may require more frequent assessments and adaptive treatment plans to address their variable progression. Tailoring treatment by gender can improve outcomes by addressing the distinct risk patterns of each group. The model’s ability to account for both accelerated and delayed progression equips clinicians with a robust tool for estimating the duration of each disease stage. This supports individualized treatment planning, allowing clinicians to optimize antiretroviral therapy (ART) regimens based on demographic factors and expected disease trajectories. Aligning ART timing with specific progression patterns can enhance treatment efficacy and adherence. The model also has significant implications for healthcare systems, as its predictive accuracy enables proactive patient management, reducing the frequency of advanced-stage complications. For resource-limited providers, this capability facilitates strategic intervention timing, ensuring that high-risk patients receive timely care while resources are allocated efficiently. Anticipating progression stages enhances both patient care and resource management, reinforcing the model’s value in supporting sustainable HIV/AIDS healthcare strategies. This study underscores the importance of models that capture the complexities of HIV/AIDS progression, offering insights to guide personalized, data-informed care.
The 3-Parameter Weibull model’s ability to accurately reflect delayed progression and demographic risk variations presents a valuable tool for clinicians, supporting the development of targeted interventions and resource optimization in HIV/AIDS management.
Keywords: HIV/AIDS progression, 3-parameter Weibull model, CD4 cell count stages, antiretroviral therapy, demographic-specific modeling
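A minimal sketch of the model-selection step described above: fitting 2- and 3-parameter Weibull distributions to hypothetical stage-duration data with SciPy and comparing them by AIC and BIC. The extra parameter of the 3-parameter form is the location (delay) term, which is what lets it capture delayed progression:

```python
# Fit 2-parameter (location fixed at 0) and 3-parameter Weibull models to
# hypothetical stage-duration data (months) and compare by AIC/BIC.
import numpy as np
from scipy import stats

durations = stats.weibull_min.rvs(c=1.8, loc=6.0, scale=24.0, size=300,
                                  random_state=1)

def information_criteria(loglik, n_params, n_obs):
    aic = 2 * n_params - 2 * loglik
    bic = n_params * np.log(n_obs) - 2 * loglik
    return aic, bic

# 2-parameter fit: location pinned to zero.
c2, loc2, scale2 = stats.weibull_min.fit(durations, floc=0)
ll2 = np.sum(stats.weibull_min.logpdf(durations, c2, loc2, scale2))

# 3-parameter fit: location free (the delayed-onset term).
c3, loc3, scale3 = stats.weibull_min.fit(durations)
ll3 = np.sum(stats.weibull_min.logpdf(durations, c3, loc3, scale3))

for name, ll, k in [("2-parameter", ll2, 2), ("3-parameter", ll3, 3)]:
    aic, bic = information_criteria(ll, k, len(durations))
    print(f"{name}: AIC={aic:.1f}  BIC={bic:.1f}")
```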
1331 Identification, Isolation and Characterization of Unknown Degradation Products of Cefprozil Monohydrate by HPTLC
Authors: Vandana T. Gawande, Kailash G. Bothara, Chandani O. Satija
Abstract:
The present research work aimed to determine the stability of cefprozil monohydrate (CEFZ) under the stress degradation conditions recommended by the International Conference on Harmonization (ICH) guideline Q1A (R2). Forced degradation studies were carried out under hydrolytic, oxidative, photolytic, and thermal stress conditions, and the drug was found susceptible to degradation under all of them. Separation was carried out using a High Performance Thin Layer Chromatography (HPTLC) system. Aluminum plates pre-coated with silica gel 60F254 were used as the stationary phase. The mobile phase consisted of ethyl acetate: acetone: methanol: water: glacial acetic acid (7.5:2.5:2.5:1.5:0.5 v/v). Densitometric analysis was carried out at 280 nm. The system was found to give a compact spot for cefprozil monohydrate (Rf 0.45). Linear regression analysis showed a good linear relationship over the concentration range of 200-5,000 ng/band for cefprozil monohydrate. Percent recovery for the drug was found to be in the range of 98.78-101.24%. The method was found to be reproducible, with a percent relative standard deviation (%RSD) for intra- and inter-day precision of < 1.5% over the said concentration range. The method was validated for precision, accuracy, specificity, and robustness, and it has been successfully applied to the analysis of the drug in tablet dosage form. Three unknown degradation products formed under the various stress conditions were isolated by preparative HPTLC and characterized by mass spectroscopic studies.
Keywords: cefprozil monohydrate, degradation products, HPTLC, stress study, stability indicating method
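A minimal sketch of the calibration arithmetic behind such a validation: a least-squares line over the stated working range, with percent recovery and %RSD computed from hypothetical peak-area readings:

```python
# Hypothetical densitometric calibration: peak area vs. amount per band (ng).
import numpy as np

amount_ng = np.array([200, 1000, 2000, 3000, 4000, 5000])
peak_area = np.array([410, 2010, 3980, 6050, 7990, 10020])

slope, intercept = np.polyfit(amount_ng, peak_area, 1)
pred = slope * amount_ng + intercept
r2 = 1 - np.sum((peak_area - pred) ** 2) / np.sum((peak_area - peak_area.mean()) ** 2)

# Recovery: amount back-calculated from a spiked sample's area.
spiked_true_ng, spiked_area = 2500.0, 5030.0
recovery_pct = 100 * ((spiked_area - intercept) / slope) / spiked_true_ng

# Precision: %RSD of replicate areas at one level.
replicates = np.array([3975, 4010, 3990, 4005, 3982])
rsd_pct = 100 * replicates.std(ddof=1) / replicates.mean()

print(f"r^2 = {r2:.4f}, recovery = {recovery_pct:.2f}%, %RSD = {rsd_pct:.2f}%")
```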
1330 The Mediating Role of Masculine Gender Role Stress on the Relationship between the EFL Learners’ Self-Disclosure and English Class Anxiety
Authors: Muhammed Kök & Adem Kantar
Abstract:
Learning a foreign language can be affected by various factors such as age, aptitude, motivation, L2 disposition, etc. Among these factors, the masculine gender role stress (MGRS) that male learners experience is the least explored area so far. MGRS can be defined as the stress male learners feel when their traditionally adopted masculinity norms are threatened. Traditional masculine norms include toughness, accuracy, completeness, and faultlessness. From this perspective, these norms are diametrically opposed to the language learning process, since learning a language, by its nature, involves making mistakes and errors, failing to recall words, pronouncing sounds incorrectly, producing wrong sentences, etc. Considering the potential impact of MGRS on the language learning process, the main purpose of this study is to investigate the mediating role of MGRS in the relationship between EFL learners’ self-disclosure and English class anxiety. Data were collected from Turkish EFL learners (N=282) studying different majors at various state universities across Turkey. Data were analyzed by means of the bootstrapping method using the SPSS PROCESS macro plugin. The findings show that the indirect effect of self-disclosure on English class anxiety via MGRS was significant. We conclude that one of the reasons why Turkish EFL learners experience English class anxiety might be the pressure they feel because of their traditional gender role stress.
Keywords: masculine gender role stress, English class anxiety, self-disclosure, masculinity norms
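A minimal sketch of a simple-mediation bootstrap of the kind run by the PROCESS macro, written here in Python with simulated data: the indirect effect is the product of the X→M path (a) and the M→Y path controlling for X (b), and its confidence interval comes from resampling:

```python
# Bootstrap CI for the indirect effect a*b in a simple mediation model
# X (self-disclosure) -> M (MGRS) -> Y (class anxiety). Data are simulated.
import numpy as np

rng = np.random.default_rng(7)
n = 282
x = rng.normal(size=n)
m = -0.4 * x + rng.normal(size=n)           # path a (true value -0.4)
y = 0.5 * m + 0.1 * x + rng.normal(size=n)  # path b (true value 0.5)

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                     # regress M on X
    X = np.column_stack([np.ones_like(x), m, x])   # regress Y on M and X
    b = np.linalg.lstsq(X, y, rcond=None)[0][1]
    return a * b

boot = np.array([
    indirect_effect(x[idx], m[idx], y[idx])
    for idx in (rng.integers(0, n, n) for _ in range(5000))
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(x, m, y):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```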
1329 Development of Star Image Simulator for Star Tracker Algorithm Validation
Authors: Zoubida Mahi
Abstract:
A successful satellite mission in space requires a reliable attitude and orbit control system to command, control, and position the satellite in appropriate orbits. Several sensors are used for attitude control, such as magnetic sensors, earth sensors, horizon sensors, gyroscopes, and solar sensors. The star tracker is the most accurate of these sensors, and it is able to offer high-accuracy attitude control without the need for prior attitude information. There are three main approaches in star sensor research: digital simulation, hardware-in-the-loop simulation, and field tests of star observation. In the digital simulation approach, all of the processing is done in software, including star image simulation; hence, it is necessary to develop star image simulation software that can simulate real space environments and various star sensor configurations. In this paper, we present a new stellar image simulation tool that is used to test and validate stellar sensor algorithms. The developed tool can simulate stellar images with several types of noise, such as background noise, Gaussian noise, Poisson noise, and multiplicative noise, and with several scenarios that occur in space, such as the presence of the moon, optical system problems, illumination, and false objects. We also present a new star extraction algorithm based on a new centroid calculation method. We compared our algorithm with other star extraction algorithms from the literature, and the results obtained show the star extraction capability of the proposed algorithm.
Keywords: star tracker, star simulation, star detection, centroid, noise, scenario
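A minimal sketch of the two ingredients above, with hypothetical parameters: rendering a star as a Gaussian point-spread function with background, Poisson, and Gaussian noise, then recovering its position with an intensity-weighted centroid:

```python
# Simulate one star on a small detector and recover it with an
# intensity-weighted centroid. All parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(42)
size, true_x, true_y, sigma, flux = 64, 30.3, 41.7, 1.5, 5000.0

yy, xx = np.mgrid[0:size, 0:size]
psf = flux * np.exp(-((xx - true_x) ** 2 + (yy - true_y) ** 2) / (2 * sigma**2))
image = rng.poisson(psf + 20.0).astype(float)   # 20.0 = background level
image += rng.normal(0.0, 3.0, image.shape)      # readout (Gaussian) noise

signal = np.clip(image - np.median(image), 0, None)  # crude background removal
cx = (signal * xx).sum() / signal.sum()
cy = (signal * yy).sum() / signal.sum()
print(f"true ({true_x:.2f}, {true_y:.2f})  recovered ({cx:.2f}, {cy:.2f})")
```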
1328 Evaluation of 18F Fluorodeoxyglucose Positron Emission Tomography, MRI, and Ultrasound in the Assessment of Axillary Lymph Node Metastases in Patients with Early Stage Breast Cancer
Authors: Wooseok Byon, Eunyoung Kim, Junseong Kwon, Byung Joo Song, Chan Heun Park
Abstract:
Purpose: 18F Fluorodeoxyglucose Positron Emission Tomography (FDG-PET) is a noninvasive imaging modality that can identify nodal metastases in women with primary breast cancer. The aim of this study was to compare the accuracy of FDG-PET with MRI and sonography in determining axillary lymph node status in patients with breast cancer undergoing sentinel lymph node biopsy or axillary lymph node dissection. Patients and Methods: Between January and December 2012, ninety-nine patients with breast cancer and clinically negative axillary nodes were evaluated. All patients underwent FDG-PET, MRI, and ultrasound, followed by sentinel lymph node biopsy (SLNB) or axillary lymph node dissection (ALND). Results: Using axillary lymph node assessment as the gold standard, the sensitivity and specificity of FDG-PET were 51.4% (95% CI, 41.3% to 65.6%) and 92.2% (95% CI, 82.7% to 97.4%), respectively. The sensitivity and specificity were 57.1% (95% CI, 39.4% to 73.7%) and 67.2% (95% CI, 54.3% to 78.4%) for MRI, and 42.86% (95% CI, 26.3% to 60.7%) and 92.2% (95% CI, 82.7% to 97.4%) for ultrasound. Stratification according to hormone receptor status showed an increase in specificity when negative (FDG-PET: 42.3% to 77.8%, MRI: 50% to 77.8%, ultrasound: 34.6% to 66.7%). Also, positive HER2 status was associated with an increase in specificity (FDG-PET: 42.9% to 85.7%, MRI: 50% to 85.7%, ultrasound: 35.7% to 71.4%). Conclusions: The specificity of FDG-PET compared with MRI and ultrasound was high. However, FDG-PET is not sufficiently accurate to appropriately identify lymph node metastases. This study suggests that FDG-PET scanning cannot replace histologic staging in early-stage breast cancer, but might have a role in evaluating axillary lymph node status in hormone receptor-negative or HER2-overexpressing subtypes.
Keywords: axillary lymph node metastasis, FDG-PET, MRI, ultrasound
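As a reminder of how such figures are derived, a minimal sketch computing sensitivity and specificity (with normal-approximation confidence intervals) from hypothetical counts against the axillary histology gold standard:

```python
# Sensitivity/specificity from a 2x2 table, with Wald-style 95% CIs.
# Counts below are hypothetical, not the study's data.
import math

tp, fn = 18, 17   # node-positive patients: detected / missed
tn, fp = 59, 5    # node-negative patients: correctly negative / false alarms

def proportion_ci(k, n, z=1.96):
    p = k / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half), min(1.0, p + half)

sens, s_lo, s_hi = proportion_ci(tp, tp + fn)
spec, p_lo, p_hi = proportion_ci(tn, tn + fp)
print(f"sensitivity {sens:.1%} (95% CI {s_lo:.1%}-{s_hi:.1%})")
print(f"specificity {spec:.1%} (95% CI {p_lo:.1%}-{p_hi:.1%})")
```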
1327 Comparative Study Using WEKA for Red Blood Cells Classification
Authors: Jameela Ali, Hamid A. Jalab, Loay E. George, Abdul Rahim Ahmad, Azizah Suliman, Karim Al-Jashamy
Abstract:
Red blood cells (RBCs) are the most common type of blood cell and among the most intensively studied in cell biology. A lack of RBCs is a condition in which the hemoglobin level is lower than normal, referred to as “anemia”, and abnormalities in RBCs affect the exchange of oxygen. This paper presents a comparative study of various techniques for classifying RBCs as normal or abnormal (anemic) using WEKA. WEKA is an open-source suite of machine learning algorithms for data mining applications. The algorithms tested are the radial basis function neural network, the support vector machine, and the k-nearest neighbors algorithm. Two sets of combined features were used for the classification of blood cell images. The first set, consisting exclusively of geometrical features, was used to identify whether the tested blood cell has a spherical or non-spherical shape, while the second set, consisting mainly of textural features, was used to recognize the types of the spherical cells. We provide an evaluation based on applying these classification methods to our RBC image dataset, obtained from Serdang Hospital, Malaysia, and measuring the accuracy of the test results. The best achieved classification rates are 97%, 98%, and 79% for the support vector machine, radial basis function neural network, and k-nearest neighbors algorithm, respectively.
Keywords: k-nearest neighbors algorithm, radial basis function neural network, red blood cells, support vector machine
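A minimal Python analogue of such a classifier comparison, using scikit-learn stand-ins (an RBF-kernel SVM substitutes for WEKA's RBF network, which is a different model) on synthetic feature vectors; the dataset and scores here are hypothetical:

```python
# Compare three classifiers on synthetic "geometrical + textural" features.
# SVC(kernel="rbf") is only a stand-in for an RBF network; data are synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=12, n_informative=6,
                           random_state=0)  # stand-in for RBC feature vectors

models = {
    "SVM (linear)": make_pipeline(StandardScaler(), SVC(kernel="linear")),
    "RBF-kernel SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "k-NN (k=5)": make_pipeline(StandardScaler(), KNeighborsClassifier(5)),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {acc:.2%} cross-validated accuracy")
```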
1326 Embedded System of Signal Processing on FPGA: Underwater Application Architecture
Authors: Abdelkader Elhanaoui, Mhamed Hadji, Rachid Skouri, Said Agounad
Abstract:
The purpose of this paper is to study the phenomenon of acoustic scattering by using a new method. Signal processing (fast Fourier transform (FFT), inverse fast Fourier transform (iFFT), and Bessel functions) is widely applied to obtain information with high precision. Signal processing is most commonly implemented on general-purpose processors, but general-purpose processors are not efficient for signal processing. Our interest therefore focused on the use of FPGAs (Field-Programmable Gate Arrays) in order to minimize the computational complexity of the single-processor architecture, accelerate the processing on the FPGA, and meet real-time and energy-efficiency requirements. We implemented the acoustic backscattered signal processing model on the Altera DE-SoC board and compared it to the Odroid XU4. By comparison, the computing latency of the Odroid XU4 and the FPGA is 60 seconds and 3 seconds, respectively; the detailed SoC FPGA-based system computes acoustic spectra up to 20 times faster than the Odroid XU4 implementation. The FPGA-based implementation of the processing algorithms achieves an absolute error of about 10⁻³. This study underlines the increasing importance of embedded systems in underwater acoustics, especially in non-destructive testing, where it is possible to obtain information related to the detection and characterization of submerged cells. We thus achieved good experimental results in terms of real-time performance and energy efficiency.
Keywords: DE1 FPGA, acoustic scattering, form function, signal processing, non-destructive testing
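A minimal NumPy sketch of the FFT stage of such a backscatter pipeline, applied to a hypothetical echo signal; the FPGA implementation accelerates exactly this kind of transform:

```python
# Spectrum of a hypothetical backscattered echo: a resonance tone in noise.
import numpy as np

fs = 1_000_000                       # sampling rate in Hz (hypothetical)
t = np.arange(4096) / fs
echo = (np.sin(2 * np.pi * 120e3 * t)
        + 0.2 * np.random.default_rng(0).normal(size=t.size))

spectrum = np.fft.rfft(echo * np.hanning(t.size))   # windowed forward FFT
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
peak = freqs[np.argmax(np.abs(spectrum))]
print(f"strongest spectral component near {peak / 1e3:.1f} kHz")
```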
1325 Crashworthiness Optimization of an Automotive Front Bumper in Composite Material
Authors: S. Boria
Abstract:
In recent years, the crashworthiness of an automotive body structure has become possible to improve from the very beginning of the design stage, thanks to the development of specific optimization tools. It is well known that finite element codes can help the designer investigate the crash performance of structures under dynamic impact. Therefore, by coupling nonlinear mathematical programming procedures and statistical techniques with FE simulations, it is possible to optimize the design with a reduced number of analytical evaluations. In engineering applications, many optimization methods based on statistical techniques and estimated models, called meta-models, are quickly spreading. A meta-model is an approximation of a detailed simulation model based on a dataset of inputs identified by the design of experiments (DOE); the number of simulations needed to build it depends on the number of variables. Among the various meta-modeling techniques, the Kriging method appears excellent in accuracy, robustness, and efficiency compared to the others when applied to crashworthiness optimization. Such a meta-model was therefore applied in this work in order to improve the structural optimization of a bumper for a racing car, made of composite material and subjected to frontal impact. The specific energy absorption represents the objective function to maximize, and the geometrical parameters, subject to some design constraints, are the design variables. The LS-DYNA code was interfaced with the LS-OPT tool in order to find the optimized solution through a domain-reduction strategy. With the use of the Kriging meta-model, the crashworthiness characteristics of the composite bumper were improved.
Keywords: composite material, crashworthiness, finite element analysis, optimization
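A minimal sketch of Kriging-based design optimization of the kind described above, with a Gaussian process regressor standing in for the Kriging meta-model and a toy one-variable "specific energy absorption" function replacing the LS-DYNA evaluations:

```python
# Kriging (Gaussian process) meta-model over a toy design variable, then
# pick the design maximizing predicted specific energy absorption (SEA).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def sea(thickness_mm):  # toy stand-in for an expensive crash simulation
    return np.sin(3 * thickness_mm) / thickness_mm + 0.1 * thickness_mm

doe = np.linspace(0.5, 3.0, 8).reshape(-1, 1)     # design of experiments
observations = sea(doe).ravel()

kriging = GaussianProcessRegressor(
    kernel=ConstantKernel() * RBF(length_scale=0.5), normalize_y=True)
kriging.fit(doe, observations)

candidates = np.linspace(0.5, 3.0, 500).reshape(-1, 1)
pred, std = kriging.predict(candidates, return_std=True)
best = candidates[np.argmax(pred)][0]
print(f"predicted optimum near thickness = {best:.2f} mm")
```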
1324 Four-Electron Auger Process for Hollow Ions
Authors: Shahin A. Abdel-Naby, James P. Colgan, Michael S. Pindzola
Abstract:
A time-dependent close-coupling method is developed to calculate total, double, and triple autoionization rates for hollow atomic ions of four-electron systems. This work was motivated by recent observations of the four-electron Auger process in near-K-edge photoionization of C+ ions. The time-dependent close-coupled equations are solved using lattice techniques to obtain a discrete representation of the radial wave functions and all operators on a four-dimensional grid with uniform spacing. Initial excited states are obtained by relaxation of the Schrodinger equation in imaginary time using a Schmidt orthogonalization method involving interior subshells. The radial wave function grids are partitioned over the cores of a massively parallel computer, which is essential due to the large memory required to store the coupled wave functions and the long run times needed to reach convergence of the ionization process. Total, double, and triple autoionization rates are obtained by propagating the time-dependent close-coupled equations in real time, using integration over bound and continuum single-particle states generated by matrix diagonalization of one-electron Hamiltonians. The total autoionization rate for each L excited state is found to be slightly above the single autoionization rate for the excited configuration given by configuration-average distorted-wave theory. As expected, we find the double and triple autoionization rates to be much smaller than the total autoionization rates. Future work can extend this approach to study electron-impact triple ionization of atoms or ions. The work was supported in part by grants from the American University of Sharjah and the US Department of Energy. Computational work was carried out at the National Energy Research Scientific Computing Center (NERSC) in Berkeley, California, USA.
Keywords: hollow atoms, autoionization, Auger rates, time-dependent close-coupling method
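A minimal one-particle analogue of the imaginary-time relaxation step mentioned above: propagating a 1D Schrödinger equation in imaginary time on a uniform grid drives any trial state toward the ground state. The harmonic-oscillator example below is illustrative only; the actual method relaxes coupled multi-electron radial functions with Schmidt orthogonalization:

```python
# Imaginary-time relaxation to the 1D harmonic-oscillator ground state on a
# uniform grid (atomic units); exact ground-state energy is 0.5.
import numpy as np

n, dx, dtau = 512, 0.05, 1e-4
x = (np.arange(n) - n // 2) * dx
potential = 0.5 * x**2

def hamiltonian(psi):
    lap = (np.roll(psi, 1) - 2 * psi + np.roll(psi, -1)) / dx**2
    return -0.5 * lap + potential * psi

psi = np.exp(-((x - 1.0) ** 2))                 # displaced trial state
psi /= np.sqrt(np.sum(psi**2) * dx)

for _ in range(100_000):                        # evolve in imaginary time
    psi = psi - dtau * hamiltonian(psi)         # explicit Euler step
    psi /= np.sqrt(np.sum(psi**2) * dx)         # renormalize each step

energy = np.sum(psi * hamiltonian(psi)) * dx
print(f"relaxed ground-state energy = {energy:.4f} (exact: 0.5)")
```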
1323 Study of Morning-Glory Spillway Structure in Hydraulic Characteristics by CFD Model
Authors: Mostafa Zandi, Ramin Mansouri
Abstract:
Spillways are among the most important hydraulic structures of dams, providing stability for the dam and downstream areas at the time of flood. The morning-glory spillway is a common type for discharging overflow water from behind dams; spillways of this kind are constructed in dams with small reservoirs. In this research, the hydraulic flow characteristics of a morning-glory spillway are investigated with a CFD model. Two-dimensional unsteady RANS equations were solved numerically using the finite volume method, with the PISO scheme applied for velocity-pressure coupling. The most widely used two-equation turbulence models, k-ε and k-ω, were chosen to model the Reynolds shear stress term. The power law scheme was used for the discretization of the momentum and turbulence transport equations, and the VOF method (geometric reconstruction algorithm) was adopted for interface simulation. The results show that a fine computational grid, a velocity condition at the flow inlet boundary, and a pressure condition at the boundaries in contact with air provide the best possible results. The standard wall function was chosen for the near-wall treatment, and the standard k-ε turbulence model gave the results most consistent with the experiments. As the jet approaches the end of the basin, the differences between the computational and experimental results increase, and the lower profile of the water jet shows less sensitivity than the upper jet profile. The pressure tests also showed that the numerical pressure values at the lower landing point differ greatly from the experimental results. In summary, the characteristics of the complex flows over a morning-glory spillway were studied numerically using a RANS solver. A grid study showed that the numerical results of a 57512-node grid had the best agreement with the experimental values, a downstream channel length of 1.5 m was preferred, and the standard k-ε turbulence model produced the best results for the morning-glory spillway. The numerical free-surface profiles followed the theoretical equations very well.
Keywords: morning-glory spillway, CFD model, hydraulic characteristics, wall function
1322 Comparison of High Speed Railway Bridge Foundation Design
Authors: Hussein Yousif Aziz
Abstract:
This paper discusses the design and analysis of a bridge foundation subjected to train loads according to three codes, namely the AASHTO code, the British Standard BS 8004 (1986), and the Chinese code (TB10002.5-2005). The study focused on designing and analyzing the bridge foundation manually with the three codes to find which code is better for design and best controls the problem of high settlement due to the applied loads. The results showed that the Chinese code is costly, in that the number of reinforcement bars in the pile cap and piles is greater than with the AASHTO code and BS code for the same dimensions. Settlement of the bridge was calculated from data collected at the project site. The vertical ultimate bearing capacity of a single pile for the three codes is also discussed. In further analyses using the two-dimensional Plaxis program and other programs such as SAP2000 14 and PROKON, many parameters were calculated; the maximum values of the vertical displacement are close to the calculated ones. The results indicate that the AASHTO code is economical and safer in terms of the bearing capacity of a single pile. The purpose of this project is to study the pier on the basis of the design of the pile foundation. There is a 32 m simply supported box-section beam on top of the structure, and the bridge pier is round. The main component of the design is to calculate the pile foundation and the settlement. According to the related data, we chose a bored pile 1.0 m in diameter and 48 m long, laid out in a rectangular pile cap of dimensions 12 m × 9 m. Because of the interaction factors of pile groups, the load-bearing capacity of the single pile must be checked; the punching resistance of the pile cap, the shearing strength of the pile cap, and the bending of the pile cap are all very important to the stability of the structure. Checking the soft sub-bearing capacity under the pile foundation is also necessary. This project provides a deeper analysis and comparison of pile foundation design schemes. First, brief instructions are given on the construction situation of the bridge. With the actual geological features of the site and the upper load on the bridge, this paper analyzes the bearing capacity and settlement of the single pile; the Equivalent Pier Method is used to calculate and analyze the settlements of the piles.
Keywords: pile foundation, settlement, bearing capacity, civil engineering
1321 Design and Testing of Electrical Capacitance Tomography Sensors for Oil Pipeline Monitoring
Authors: Sidi M. A. Ghaly, Mohammad O. Khan, Mohammed Shalaby, Khaled A. Al-Snaie
Abstract:
Electrical capacitance tomography (ECT) is a valuable, non-invasive technique used to monitor multiphase flow processes, especially within industrial pipelines. This study focuses on the design, testing, and performance comparison of ECT sensors configured with 8, 12, and 16 electrodes, aiming to evaluate their effectiveness in imaging accuracy, resolution, and sensitivity. Each sensor configuration was designed to capture the spatial permittivity distribution within a pipeline cross-section, enabling visualization of phase distribution and flow characteristics such as oil and water interactions. The sensor designs were implemented and tested on closed pipes to assess their response to varying flow regimes. Capacitance data collected from each electrode configuration were reconstructed into cross-sectional images, enabling a comparison of image resolution, noise levels, and computational demands. Results indicate that the 16-electrode configuration yields higher image resolution and sensitivity to phase boundaries than the 8- and 12-electrode setups, making it more suitable for complex flow visualization. However, the 8- and 12-electrode sensors demonstrated advantages in processing speed and lower computational requirements. This comparative analysis provides critical insights for optimizing ECT sensor design based on specific industrial requirements, from high-resolution imaging to real-time monitoring needs.
Keywords: capacitance tomography, modeling, simulation, electrode, permittivity, fluid dynamics, imaging sensitivity measurement
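A minimal sketch of the reconstruction step such sensors feed: linear back-projection, the simplest ECT algorithm, maps the vector of normalized inter-electrode capacitances through a sensitivity matrix to a permittivity image. An n-electrode sensor yields n(n-1)/2 independent electrode pairs (120 for 16 electrodes); the matrix and measurements below are random placeholders:

```python
# Linear back-projection (LBP) for ECT: g = S^T c / S^T 1, with S the
# sensitivity matrix (pairs x pixels) and c the normalized capacitances.
import numpy as np

rng = np.random.default_rng(3)
n_electrodes = 16
n_pairs = n_electrodes * (n_electrodes - 1) // 2   # 120 measurements
n_pixels = 32 * 32

S = np.abs(rng.normal(size=(n_pairs, n_pixels)))   # placeholder sensitivities
c = rng.uniform(0, 1, size=n_pairs)                # normalized capacitances

image = (S.T @ c) / (S.T @ np.ones(n_pairs))       # LBP estimate per pixel
print("reconstructed grid:", image.reshape(32, 32).shape)
```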
1320 Spatial Interpolation of Aerosol Optical Depth Pollution: Comparison of Methods for the Development of Aerosol Distribution
Authors: Sahabeh Safarpour, Khiruddin Abdullah, Hwee San Lim, Mohsen Dadras
Abstract:
Air pollution is a growing problem arising from domestic heating, high-density vehicle traffic, electricity production, and expanding commercial and industrial activities, all increasing in parallel with the urban population. Monitoring and forecasting of air quality parameters are important because of their health impact. One widely available metric of aerosol abundance is the aerosol optical depth (AOD). The AOD is the integrated light extinction coefficient over a vertical atmospheric column of unit cross section; it represents the extent to which the aerosols in that vertical profile prevent the transmission of light by absorption or scattering. Seasonal AOD values at 550 nm derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor onboard NASA's Terra satellite for the 10-year period 2000-2010 were used to test 7 different spatial interpolation methods in the present study. The accuracy of the estimations was assessed through visual analysis as well as independent validation based on basic statistics, such as the root mean square error (RMSE) and the correlation coefficient. Based on the RMSE and R values of predictions made using measured values from 2000 to 2010, radial basis functions (RBFs) yielded the best results for spring, summer, and winter, and ordinary kriging yielded the best results for fall.
Keywords: aerosol optical depth, MODIS, spatial interpolation techniques, radial basis functions
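A minimal sketch of the RBF-plus-validation workflow above, using SciPy on synthetic scattered AOD samples: fit the interpolator on part of the points, predict the rest, and score by RMSE and correlation:

```python
# Interpolate scattered AOD observations with radial basis functions and
# validate on held-out stations. Locations and values are synthetic.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(5)
xy = rng.uniform(0, 100, size=(80, 2))          # station coordinates (km)
aod = (0.3 + 0.002 * xy[:, 0] + 0.05 * np.sin(xy[:, 1] / 10)
       + 0.01 * rng.normal(size=80))

train, test = xy[:60], xy[60:]
rbf = RBFInterpolator(train, aod[:60], kernel="thin_plate_spline")
pred = rbf(test)

rmse = np.sqrt(np.mean((pred - aod[60:]) ** 2))
r = np.corrcoef(pred, aod[60:])[0, 1]
print(f"hold-out RMSE = {rmse:.4f}, R = {r:.3f}")
```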
1319 Hardy Type Inequalities of Two-Dimensional on Time Scales via Steklov Operator
Authors: Wedad Albalawi
Abstract:
Mathematical inequalities have been at the core of mathematical study and are used in almost all branches of mathematics, as well as in various areas of science and engineering. The inequalities of Hardy, Littlewood, and Polya were the first significant systematic treatment of the subject; that work presented fundamental ideas, results, and techniques, and it has had much influence on research in various branches of analysis. Since 1934, various inequalities have been produced and studied in the literature. Furthermore, some inequalities have been formulated through operators: in 1989, weighted Hardy inequalities were obtained for integration operators, and weighted estimates were then obtained for Steklov operators, which were used in the solution of the Cauchy problem for the wave equation. These were improved in 2011 to include the boundedness of integral operators from the weighted Sobolev space to the weighted Lebesgue space, and some inequalities have been demonstrated and improved using the Hardy-Steklov operator. Recently, many integral inequalities have been improved by differential operators. The Hardy inequality has been one of the tools used to study the integrability of solutions of differential equations. Dynamic inequalities of Hardy and Copson type have been extended and improved by various integral operators; these inequalities are interesting to apply in different fields of mathematics (function spaces, partial differential equations, mathematical modeling). Some inequalities involving the Copson and Hardy inequalities have appeared on time scales, yielding new special versions of them. A time scale is defined as a closed subset of the real numbers. Inequalities in their time-scale versions have received a lot of attention and form a major field in both pure and applied mathematics; there are many applications of dynamic equations on time scales to quantum mechanics, electrical engineering, neural networks, heat transfer, combinatorics, and population dynamics. This study focuses on double integrals to obtain new time-scale inequalities of Copson type driven by the Steklov operator. They will be applied in the solution of the Cauchy problem for the wave equation. The proofs are carried out by introducing restrictions on the operator in several cases, using concepts from the time-scale calculus such as Fubini's theorem and Hölder's inequality.
Keywords: time scales, inequality of Hardy, inequality of Copson, Steklov operator
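For reference, the classical integral Hardy inequality that this line of work generalizes (stated here in its standard continuous form, not the time-scale version developed in the paper) reads:

```latex
% Classical Hardy inequality (p > 1, f >= 0 measurable on (0, infinity)):
\int_0^{\infty} \left( \frac{1}{x} \int_0^{x} f(t)\, dt \right)^{p} dx
\;\le\; \left( \frac{p}{p-1} \right)^{p} \int_0^{\infty} f(x)^{p}\, dx
```

with the constant (p/(p-1))^p being sharp.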
1318 Inviscid Steady Flow Simulation Around a Wing Configuration Using MB_CNS
Authors: Muhammad Umar Kiani, Muhammad Shahbaz, Hassan Akbar
Abstract:
Simulation of a high-speed, inviscid, steady, ideal air flow around a 2D/axisymmetric body was carried out with the mb_cns code. mb_cns is a program for the time-integration of the Navier-Stokes equations for two-dimensional compressible flows on a multiple-block structured mesh. The flow geometry may be either planar or axisymmetric, and multiply-connected domains can be modeled by patching together several blocks. The main simulation code is accompanied by a set of pre- and post-processing programs. The pre-processing programs scriptit and mb_prep start from a short script describing the geometry, initial flow state, and boundary conditions and produce a discretized version of the initial flow state. The main flow simulation program (or solver, as it is sometimes called) is mb_cns: it takes the files prepared by scriptit and mb_prep, integrates the discrete form of the gas flow equations in time, and writes the evolved flow data to a set of output files. This output data may consist of the flow state (over the whole domain) at a number of instants in time. After integration in time, the post-processing programs mb_post and mb_cont can be used to reformat the flow state data and produce GIF or PostScript plots of flow quantities such as pressure, temperature, and Mach number. The current problem is an example of supersonic inviscid flow. The flow domain for the current problem (a strake-configuration wing) is discretized by a structured grid, and a finite-volume approach is used to discretize the conservation equations. The flow field is recorded as cell-average values at cell centers, and explicit time stepping is used to update conserved quantities. MUSCL-type interpolation and one of three flux calculation methods (a Riemann solver, AUSMDV flux splitting, or the Equilibrium Flux Method, EFM) are used to calculate inviscid fluxes across cell faces.
Keywords: steady flow simulation, processing programs, simulation code, inviscid flux
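To make the finite-volume update concrete, here is a minimal 1D sketch (not mb_cns itself): cell averages advance by the difference of face fluxes, shown for linear advection with simple first-order upwind fluxes standing in for the MUSCL/Riemann machinery:

```python
# First-order finite-volume update u_i^{n+1} = u_i^n - (dt/dx)(F_{i+1/2} - F_{i-1/2})
# for linear advection u_t + a u_x = 0 with upwind face fluxes (a > 0),
# on a periodic 1D grid.
import numpy as np

nx, a, cfl = 200, 1.0, 0.8
dx = 1.0 / nx
dt = cfl * dx / a
u = np.where(np.abs(np.linspace(0, 1, nx) - 0.3) < 0.1, 1.0, 0.0)  # square pulse

for _ in range(100):
    flux = a * np.roll(u, 1)                      # F_{i-1/2} = a * u_{i-1}
    u = u - dt / dx * (np.roll(flux, -1) - flux)  # F_{i+1/2} - F_{i-1/2}

print(f"total 'mass' after 100 steps (conserved): {u.sum() * dx:.4f}")
```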
1317 iCount: An Automated Swine Detection and Production Monitoring System Based on Sobel Filter and Ellipse Fitting Model
Authors: Jocelyn B. Barbosa, Angeli L. Magbaril, Mariel T. Sabanal, John Paul T. Galario, Mikka P. Baldovino
Abstract:
The use of technology has become ubiquitous in different areas of business today. With the advent of digital imaging and database technology, business owners have been motivated to integrate technology into their business operations, from small and medium to large enterprises. Technology has been found to bring many benefits that can make a business grow. Hog or swine raising, for example, is a very popular enterprise in the Philippines, whose production-monitoring challenges can be addressed through technology integration. Swine production monitoring can become a tedious task as the enterprise grows larger; specifically, problems like delayed and inconsistent reports are likely to happen if the counting of swine per pen in each building is done manually. In this study, we present iCount, which aims to ensure efficient swine detection and counting and so hasten the swine production monitoring task. We develop a system that automatically detects and counts swine based on a Sobel filter and an ellipse fitting model, given still photos of the group of swine captured in a pen. We improve the Sobel filter detection result through an 8-neighborhood rule implementation, and the ellipse fitting technique is then employed for proper swine detection. Furthermore, the system can generate periodic production reports and can identify the specific consumables to be served to the swine according to schedules. Experiments reveal that our algorithm provides an efficient way of detecting swine, thereby providing a significant amount of accuracy in production monitoring.
Keywords: automatic swine counting, swine detection, swine production monitoring, ellipse fitting model, Sobel filter
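A minimal OpenCV sketch of the detection chain named above (Sobel edges, then ellipse fitting on the detected contours); the synthetic test image and threshold values are placeholders, not the study's data:

```python
# Sobel edge detection followed by ellipse fitting on the resulting contours.
import cv2
import numpy as np

# Synthetic stand-in for a pen photo: two bright ellipses on a gray background.
gray = np.full((240, 320), 60, np.uint8)
cv2.ellipse(gray, (100, 120), (50, 25), 30, 0, 360, 200, -1)
cv2.ellipse(gray, (220, 100), (45, 22), -20, 0, 360, 200, -1)

gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
magnitude = cv2.convertScaleAbs(np.sqrt(gx**2 + gy**2))

_, edges = cv2.threshold(magnitude, 60, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

count = 0
for contour in contours:
    if len(contour) >= 5 and cv2.contourArea(contour) > 300:  # fitEllipse needs >= 5 pts
        (cx, cy), axes, angle = cv2.fitEllipse(contour)
        count += 1
print(f"ellipses fitted (candidate animals): {count}")
```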
1316 Study of Aging Behavior of Parallel-Series Connection Batteries
Authors: David Chao, John Lai, Alvin Wu, Carl Wang
Abstract:
For lithium-ion batteries with multiple-cell configurations, some use scenarios can cause uneven aging of the cells within the battery because of uneven current distribution. Hence, the focus of this study is to explore the aging effects on batteries with different construction designs. In order to systematically study the influence of various factors in key battery configurations, a detailed analysis of three key construction factors is conducted: (1) terminal position, (2) cell alignment matrix, and (3) interconnect resistance between cells. In this study, the 2S2P circuit has been set as a model multi-cell battery for setting up different battery samples, and the aging behavior is studied in a cycling test analyzing the current distribution and recoverable capacity. According to the outcomes of the aging tests, the key findings are: (I) different cell alignment matrices can affect the cycle life of the battery; (II) a symmetrical structure has been identified as a critical factor influencing battery cycle life, and unbalanced resistance can lead to inconsistent cell aging; (III) the terminal position has been found to contribute to uneven current distribution, which can cause an accelerated battery aging effect; and (IV) an increase in internal connection resistance can actually increase cycle life; however, this increase in cycle life is accompanied by a decline in battery performance. In summary, the key findings from the study help to identify the key aging factors of multi-cell batteries and can be useful for effectively improving the accuracy of battery capacity predictions.
Keywords: multiple cells battery, current distribution, battery aging, cell connection
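A minimal sketch of why interconnect resistance skews aging in a parallel pair: two cells with equal internal resistance but unequal interconnect resistance share the pack current unevenly. All values below are illustrative:

```python
# Current split between two parallel cells with unequal interconnect
# resistance. Equal voltage across branches => I proportional to 1/R_branch.
r_cell = 0.030            # internal resistance of each cell (ohm), illustrative
r_link = (0.002, 0.010)   # interconnect resistance per branch (ohm)
pack_current = 10.0       # total pack current (A)

branch_r = [r_cell + r for r in r_link]
conductance = [1 / r for r in branch_r]
currents = [pack_current * g / sum(conductance) for g in conductance]

for i, amps in enumerate(currents, 1):
    print(f"cell {i}: {amps:.2f} A")   # the low-resistance branch cycles harder
```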
1315 Performance Analysis of Vision-Based Transparent Obstacle Avoidance for Construction Robots
Authors: Siwei Chang, Heng Li, Haitao Wu, Xin Fang
Abstract:
Construction robots are receiving more and more attention as a promising solution to the manpower shortage in the construction industry. The development of intelligent control techniques that help robots avoid transparent and reflective building obstacles is crucial for guaranteeing the adaptability and flexibility of mobile construction robots in complex construction environments. With the boom in computer vision techniques, a number of studies have proposed vision-based methods for transparent obstacle avoidance to improve operation accuracy. However, vision-based methods are also associated with disadvantages such as high computational costs. To provide better perception and value evaluation, this study aims to analyze the performance of vision-based techniques for avoiding transparent building obstacles. To achieve this, commonly used sensors, including a lidar, an ultrasonic sensor, and a USB camera, are mounted on the robotic platform to detect obstacles. A Raspberry Pi 3 computer board is employed to run the data collection and control algorithms, and the TurtleBot3 Burger is employed to test the programs. On-site experiments are carried out to observe the performance in terms of success rate and detection distance; control variables include obstacle shapes and environmental conditions. The findings demonstrate how effective vision-based strategies are for transparent building obstacle avoidance and provide insights and informed knowledge for introducing computer vision techniques in this domain.
Keywords: construction robot, obstacle avoidance, computer vision, transparent obstacle
1314 Optimizing Emergency Rescue Center Layouts: A Backpropagation Neural Networks-Genetic Algorithms Method
Authors: Xiyang Li, Qi Yu, Lun Zhang
Abstract:
In the face of natural disasters and other emergencies, determining the optimal location of rescue centers is crucial for improving rescue efficiency and minimizing the impact on affected populations. This paper proposes a method that integrates genetic algorithms (GA) and backpropagation neural networks (BPNN) to address the site-selection optimization problem for emergency rescue centers. We utilize a BPNN to accurately estimate the cost of delivering supplies from rescue centers to each temporary camp. Moreover, a genetic algorithm with a special partially matched crossover (PMX) strategy is employed to ensure that the number of temporary camps assigned to each rescue center adheres to predetermined limits. Using population distribution data from the 2022 epidemic in Jiading District, Shanghai, as an experimental case, this paper verifies the effectiveness of the proposed method. The experimental results demonstrate that the BPNN-GA method proposed in this study outperforms existing algorithms in terms of computational efficiency and optimization performance. Especially considering the requirements for computational resources and response time in emergencies, the proposed method achieves rapid convergence and strong performance in the early and middle stages. Future research could incorporate more real-world conditions and variables into the model to further improve its accuracy and applicability.
Keywords: emergency rescue centers, genetic algorithms, back-propagation neural networks, site selection optimization
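A minimal sketch of the PMX operator named above, which recombines two permutation-encoded parents (e.g., camp-to-center orderings) while keeping every offspring a valid permutation; the segment positions are chosen at random:

```python
# Partially matched crossover (PMX) for permutation chromosomes.
import random

def pmx(parent1, parent2, rng=random):
    n = len(parent1)
    a, b = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[a:b + 1] = parent1[a:b + 1]           # copy the matching segment
    for i in range(a, b + 1):
        gene = parent2[i]
        if gene in child[a:b + 1]:
            continue                            # already placed by the segment
        pos = i
        while a <= pos <= b:                    # follow the mapping chain
            pos = parent2.index(parent1[pos])
        child[pos] = gene
    for i in range(n):                          # fill the rest from parent2
        if child[i] is None:
            child[i] = parent2[i]
    return child

random.seed(0)
p1, p2 = [0, 1, 2, 3, 4, 5, 6, 7], [3, 7, 5, 1, 6, 0, 2, 4]
child = pmx(p1, p2)
assert sorted(child) == list(range(8))          # still a valid permutation
print(child)
```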
1313 Teachers’ Protective Factors of Resilience Scale: Factorial Structure, Validity and Reliability Issues
Authors: Athena Daniilidou, Maria Platsidou
Abstract:
Recently developed scales have addressed teachers’ resilience specifically. Although they have profited the field, they do not include some of the critical protective factors of teachers’ resilience identified in the literature. To address this limitation, we aimed to design a more comprehensive scale for measuring teachers' resilience that encompasses various personal and environmental protective factors. To this end, two studies were carried out. In Study 1, 407 primary school teachers were tested with the new scale, the Teachers’ Protective Factors of Resilience Scale (TPFRS). Similar scales, such as the Multidimensional Teachers’ Resilience Scale and the Teachers’ Resilience Scale, were used to test the convergent validity, while the Maslach Burnout Inventory and the Teachers’ Sense of Efficacy Scale were used to assess the discriminant validity of the new scale. The factorial structure of the TPFRS was checked with confirmatory factor analysis, and a good fit of the model to the data was found. Next, item response theory analysis using a two-parameter logistic model (2PL) was applied to check the items within each factor; it revealed that 9 items did not fit their corresponding factors well, and they were removed. The final version of the TPFRS includes 29 items, which assess six protective factors of teachers’ resilience: values and beliefs (5 items, α=.88), emotional and behavioral adequacy (6 items, α=.74), physical well-being (3 items, α=.68), relationships within the school environment (6 items, α=.73), relationships outside the school environment (5 items, α=.84), and the legislative framework of education (4 items, α=.83). The results show that it presents satisfactory convergent and discriminant validity. Study 2, in which 964 primary and secondary school teachers were tested, confirmed the factorial structure of the TPFRS as well as its discriminant validity, which was tested with the Schutte Emotional Intelligence Scale-Short Form. In conclusion, our results show that the TPFRS is a valid multidimensional instrument for assessing teachers' protective factors of resilience, and it can be safely used in future research and interventions in the teaching profession.
Keywords: resilience, protective factors, teachers, item response theory
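For reference, the two-parameter logistic (2PL) model used in that item analysis gives the probability that a respondent with latent trait θ endorses item i, with discrimination a_i and difficulty b_i:

```latex
% Two-parameter logistic (2PL) item response model:
P_i(\theta) = \frac{1}{1 + e^{-a_i(\theta - b_i)}}
```

Items whose estimated discrimination a_i is too low, or whose fit statistics are poor, separate trait levels weakly, which is the usual basis for dropping them.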
1312 Limits of the Dot Counting Test: A Culturally Responsive Approach to Neuropsychological Evaluations and Treatment
Authors: Erin Curtis, Avraham Schwiger
Abstract:
Neuropsychological testing and evaluation is a crucial step in providing patients with effective diagnoses and treatment while in clinical care. The variety of batteries used in these evaluations can help clinicians better understand the nuanced declines in a patient’s cognitive, behavioral, or emotional functioning, consequently equipping clinicians with the insight to make intentional choices about a patient’s care. Despite the knowledge these batteries can yield, some aspects of neuropsychological testing remain largely inaccessible to certain patient groups as a result of fundamental cultural, educational, or social differences. One such battery is the Dot Counting Test (DCT), during which patients are required to count a series of dots on a page as rapidly and accurately as possible. As the battery progresses, the dots appear in clusters that are designed to be easily multiplied. This task evaluates a patient’s cognitive functioning, attention, and level of effort exerted on the evaluation as a whole. However, there is evidence to suggest that certain social groups, particularly Latinx groups, may perform worse on this task as a result of cultural or educational differences, not reduced cognitive functioning or effort. As such, this battery fails to account for baseline differences among patient groups, creating questions about the accuracy, generalizability, and value of its results. Accessibility and cultural sensitivity are critical considerations in the testing and treatment of marginalized groups, yet they have been largely ignored in the literature and in clinical settings to date. Implications and improvements to applications are discussed.
Keywords: culture, Latino, neuropsychological assessment, neuropsychology, accessibility
1311 Exploring the Development of Inter-State Relations under the Mechanism of the Hirschman Effect: A Case Study of Malaysia-China Relations in a Political Crisis (2020-2022)
Authors: Zhao Xinlei
Abstract:
In general, inter-state relations are diverse, spanning the economic, political, military, and diplomatic. Therefore, by analyzing the development of the relationship between Malaysia and China, we can better verify how the Hirschman effect works between small countries and great powers. This paper mainly adopts qualitative research methods and uses Malaysia's 2020-2022 political crisis as a specific case to examine the practice of the Hirschman effect between small and large countries. In particular, although the trade-dependent interest groups in small countries, as the primary beneficiaries of trade between the two countries, may use their resources to some extent to influence the small country's decisions toward the great power, they do not fundamentally determine the small country's response to the large country. In this process, the relative power asymmetry between states plays the dominant role, as small states harbor mistrust and suspicion in their political diplomacy toward large states, based on the perception of threat arising from that asymmetry. When developing bilateral relations with large countries, small states seek practical cooperation to promote economic and trade development but become more cautious in their political ties, to avoid being caught in power struggles between large states or being controlled by them. The case of Malaysia-China relations also illustrates that despite the ongoing political crisis in Malaysia, which saw the country transition from Perikatan Nasional (PN) to Barisan Nasional (BN), different governments have maintained a pragmatic and proactive economic policy toward China to reduce suspicion and mistrust between the two countries in political and diplomatic affairs, thereby enhancing cooperation and interaction between them. At the same time, the Malaysian government is developing multidimensional foreign relations and actively participating in multilateral and regional organizations and platforms, such as those organized by the United States, to maintain a relative balance between US and Chinese influence on Malaysia.
Keywords: Hirschman effect, interest groups, Malaysia, China, bilateral relations
1310 Theoretical Approach for Estimating Transfer Length of Prestressing Strand in Pretensioned Concrete Members
Authors: Sun-Jin Han, Deuck Hang Lee, Hyo-Eun Joo, Hyun Kang, Kang Su Kim
Abstract:
In pretensioned concrete members, there exists a transfer length region, in which the stress in the prestressing strand is developed through the bond mechanism with the surrounding concrete. The stress of the strands in the transfer length zone is smaller than that in the strain plateau zone (the so-called effective prestress); therefore, the web-shear strength in the transfer length region is smaller than that in the strain plateau zone. Although the transfer length is a main key factor in shear design, few analytical studies have investigated it. Therefore, in this study, a theoretical approach was used to estimate the transfer length. The bond stress developed between the strands and the surrounding concrete was quantitatively calculated using the Thick-Walled Cylinder Model (TWCM), and on this basis, the transfer length of the strands was calculated. To verify the proposed model, a total of 209 test results were collected from previous studies. The analysis results showed that the main factors influencing the transfer length are the compressive strength of the concrete, the cover thickness of the concrete, the diameter of the prestressing strand, and the magnitude of the initial prestress. In addition, the proposed model predicted the transfer length of the collected test specimens with high accuracy. Acknowledgement: This research was supported by a grant (17TBIP-C125047-01) from the Technology Business Innovation Program funded by the Ministry of Land, Infrastructure and Transport of the Korean government.
Keywords: bond, Hoyer effect, prestressed concrete, prestressing strand, transfer length
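As a simplified illustration of the governing balance (a classical force-equilibrium estimate with uniform bond stress, not the paper's TWCM derivation): over the transfer length L_t, the bond force on the strand perimeter must develop the effective prestress force, so with average bond stress u_avg, strand area A_ps, and nominal diameter d_b,

```latex
% Simplified force equilibrium over the transfer length (uniform bond):
f_{pe} \, A_{ps} = u_{avg} \, (\pi d_b) \, L_t
\quad \Longrightarrow \quad
L_t = \frac{f_{pe} \, A_{ps}}{\pi \, d_b \, u_{avg}}
```

In the TWCM approach, u_avg itself follows from the radial pressure the strand exerts on the surrounding concrete cylinder (the Hoyer effect), which is how the cover thickness and concrete strength enter the prediction.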
1309 Artificial Neural Network Approach for Modeling Very Short-Term Wind Speed Prediction
Authors: Joselito Medina-Marin, Maria G. Serna-Diaz, Juan C. Seck-Tuoh-Mora, Norberto Hernandez-Romero, Irving Barragán-Vite
Abstract:
Wind speed forecasting is an important issue for planning wind power generation facilities; accuracy in the wind speed prediction allows good performance of wind turbines for electricity generation. A model based on artificial neural networks is presented in this work. A dataset with atmospheric information about air temperature, atmospheric pressure, wind direction, and wind speed in Pachuca, Hidalgo, México, was used to train the artificial neural network. The data were downloaded from the web page of the National Meteorological Service of the Mexican government; the records were gathered over three months at ten-minute intervals. This dataset was used in an iterative algorithm to create 1,110 ANNs with different configurations, from one to three hidden layers and from 1 to 10 neurons per hidden layer. Each ANN was trained with the Levenberg-Marquardt backpropagation algorithm, which is used to learn the relationship between input and output values. The model with the best performance contains three hidden layers with 9, 6, and 5 neurons, respectively; the coefficient of determination obtained was r²=0.9414, and the root mean squared error is 1.0559. In summary, the ANN approach is suitable for predicting the wind speed in Pachuca City because the r² value denotes a good fit to the gathered records, and the obtained ANN model can be used in the planning of wind power generation grids.
Keywords: wind power generation, artificial neural networks, wind speed, coefficient of determination
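A minimal scikit-learn sketch of this kind of architecture search (note that scikit-learn's MLPRegressor trains with Adam or L-BFGS rather than Levenberg-Marquardt, and the weather data below are synthetic stand-ins):

```python
# Grid over hidden-layer configurations for wind-speed regression.
# Synthetic stand-in for the (temperature, pressure, direction) -> speed data.
import numpy as np
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 3))                      # temp, pressure, direction
y = 2.0 + X[:, 0] - 0.5 * X[:, 1] + np.sin(X[:, 2]) + 0.1 * rng.normal(size=2000)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

best = (None, -np.inf)
for layers in [(5,), (9, 6), (9, 6, 5)]:            # a few candidate shapes
    model = MLPRegressor(hidden_layer_sizes=layers, max_iter=2000,
                         random_state=0).fit(X_tr, y_tr)
    score = r2_score(y_te, model.predict(X_te))
    if score > best[1]:
        best = (layers, score)
print(f"best architecture {best[0]} with r^2 = {best[1]:.4f}")
```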
1308 Numerical Solution of Steady Magnetohydrodynamic Boundary Layer Flow Due to Gyrotactic Microorganism for Williamson Nanofluid over Stretched Surface in the Presence of Exponential Internal Heat Generation
Authors: M. A. Talha, M. Osman Gani, M. Ferdows
Abstract:
This paper focuses on the study of a two-dimensional, steady, incompressible, viscous, magnetohydrodynamic (MHD) Williamson nanofluid with exponential internal heat generation, containing gyrotactic microorganisms, over a stretching sheet. The governing equations and auxiliary conditions are reduced to a set of nonlinear coupled differential equations with the appropriate boundary conditions using a similarity transformation. The transformed equations are solved numerically through the spectral relaxation method. The influences of various parameters, such as the Williamson parameter γ, power constant λ, Prandtl number Pr, magnetic field parameter M, Peclet number Pe, Lewis number Le, bioconvection Lewis number Lb, Brownian motion parameter Nb, thermophoresis parameter Nt, and bioconvection constant σ, are studied to obtain the momentum, heat, mass, and microorganism distributions. The momentum, heat, mass, and gyrotactic microorganism profiles are explored through graphs and tables. We computed the heat transfer rate, the mass flux rate, and the density number of motile microorganisms near the surface. Our numerical results are in good agreement with existing calculations. The residual error of the obtained solutions is determined in order to examine the convergence rate against iteration; faster convergence is achieved when internal heat generation is absent. The magnetic parameter M decreases the momentum boundary layer thickness but increases the thermal boundary layer thickness. It is apparent that the bioconvection Lewis number and the bioconvection parameter have a pronounced effect on the microorganism boundary layer. Increasing the Brownian motion parameter and the Lewis number decreases the thermal boundary layer. Furthermore, the magnetic field parameter and the thermophoresis parameter have an induced effect on the concentration profiles.
Keywords: convection flow, similarity, numerical analysis, spectral method, Williamson nanofluid, internal heat generation