Search results for: time series models
22515 Modern Work Modules in Construction Practice
Authors: Robin Becker, Nane Roetmann, Manfred Helmus
Abstract:
Construction companies lack junior staff for construction management, yet according to a nationwide survey of students, the profession lacks attractiveness. The conflict between the traditional job profile and junior staff's current desire for contemporary, flexible working models must be resolved. Increasing flexibility is essential for the future viability of small and medium-sized enterprises, and the implementation of modern work modules can help here. The following report presents the validation results of the developed work modules in construction practice.
Keywords: modern construction management, construction industry, work modules, shortage of junior staff, sustainable personnel management, making construction management more attractive, working time model
Procedia PDF Downloads 86
22514 Economic Evaluation of Degradation by Corrosion of an On-Grid Battery Energy Storage System: A Case Study in Algeria Territory
Authors: Fouzia Brihmat
Abstract:
Economic planning models, which are used to build microgrids and size distributed energy resources (DER), are the current norm for expressing such confidence. These models often decide both short-term DER dispatch and long-term DER investments. This research investigates the most cost-effective hybrid (photovoltaic-diesel) renewable energy system (HRES), based on Total Net Present Cost (TNPC), in an Algerian Saharan area that has high solar-irradiation potential and a production capacity of 1 GWh. Lead-acid batteries have been around much longer and are easier to understand, but have limited storage capacity. Lithium-ion batteries last longer and are lighter, but are generally more expensive. By combining the advantages of each chemistry, cost-effective high-capacity battery banks can be produced that operate solely on AC coupling. The financial analysis in this research accounts for the corrosion process that occurs at the interface between the active material and the grid material of the positive plate of a lead-acid battery. The cost study for the HRES is completed with the assistance of the HOMER Pro MATLAB Link, and the system is simulated at each time step over the project's 20-year lifetime. The model takes into consideration the decline in solar efficiency, changes in battery storage levels over time, and rises in fuel prices above the rate of inflation; the trade-off is that the model is more precise but takes longer to compute. We initially utilized the Optimizer to run the model without MultiYear in order to discover the best system architecture. The optimal system for the single-year scenario is the Danvest generator with 760 kW, 200 kWh of lead-acid storage, and a somewhat lower COE of $0.309/kWh.
Different scenarios that account for fluctuations in the gasified biomass generator's electricity production have been simulated, and various strategies to guarantee the balance between generation and consumption have been investigated. The technological optimization of the same system has been completed and is reviewed in a recent paper.
Keywords: battery, corrosion, diesel, economic planning optimization, hybrid energy system, lead-acid battery, multi-year planning, microgrid, price forecast, PV, total net present cost
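As a minimal sketch of the economics the abstract relies on, the snippet below computes a total net present cost from a stream of annual costs and derives a cost of energy (COE) via a capital recovery factor. This is the generic textbook calculation, not HOMER's implementation, and all the numbers (discount rate, cost stream, energy served) are hypothetical.

```python
# Illustrative sketch of TNPC and COE; all figures are hypothetical,
# not the study's inputs.

def total_net_present_cost(annual_costs, discount_rate):
    """Discount a stream of annual costs (years 1..N) to present value."""
    return sum(c / (1.0 + discount_rate) ** (t + 1)
               for t, c in enumerate(annual_costs))

def cost_of_energy(tnpc, annual_energy_kwh, discount_rate, years):
    """COE = annualized TNPC divided by annual energy served."""
    # Capital recovery factor converts a present cost to a uniform annuity.
    crf = (discount_rate * (1 + discount_rate) ** years) / \
          ((1 + discount_rate) ** years - 1)
    return tnpc * crf / annual_energy_kwh

costs = [120_000.0] * 20           # hypothetical annual O&M + replacement cost
tnpc = total_net_present_cost(costs, 0.06)
coe = cost_of_energy(tnpc, 4_500_000.0, 0.06, 20)
```

Because the capital recovery factor is the reciprocal of the annuity factor, a flat cost stream round-trips exactly: the annualized TNPC equals the original annual cost.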
Procedia PDF Downloads 88
22513 Kinetic Modelling of Fermented Probiotic Beverage from Enzymatically Extracted Annona Muricata Fruit
Authors: Calister Wingang Makebe, Wilson Ambindei Agwanande, Emmanuel Jong Nso, P. Nisha
Abstract:
Traditional liquid-state fermentation processes of Annona muricata L. juice can result in fluctuating product quality and quantity due to difficulties in control and scale-up. This work describes a laboratory-scale batch fermentation process to produce a probiotic Annona muricata L. enzymatically extracted juice, modeled using the Doehlert design with the independent extraction factors being incubation time, temperature, and enzyme concentration. It aimed at a better understanding of the traditional process as an initial step for future optimization. Annona muricata L. juice was fermented with L. acidophilus (NCDC 291) (LA), L. casei (NCDC 17) (LC), and a blend of LA and LC (LCA) for 72 h at 37 °C. Experimental data were fitted to mathematical models (Monod, logistic, and Luedeking-Piret models) using MATLAB software to describe biomass growth, sugar utilization, and organic acid production. The optimal fermentation time was obtained based on cell viability, which was 24 h for LC and 36 h for LA and LCA. The model was particularly effective in estimating biomass growth, reducing sugar consumption, and lactic acid production. The values of the determination coefficient, R2, were 0.9946, 0.9913 and 0.9946, while the residual sum of square error, SSE, was 0.2876, 0.1738 and 0.1589 for LC, LA and LCA, respectively. The growth kinetic parameters included the maximum specific growth rate, µm, which was 0.2876 h-1, 0.1738 h-1 and 0.1589 h-1, as well as the substrate saturation constant, Ks, with 9.0680 g/L, 9.9337 g/L and 9.0709 g/L, respectively, for LC, LA and LCA. For the stoichiometric parameters, the yield of biomass based on utilized substrate (YXS) was 50.7932, 3.3940 and 61.0202, and the yield of product based on utilized substrate (YPS) was 2.4524, 0.2307 and 0.7415 for LC, LA, and LCA, respectively. In addition, the maintenance energy parameter (ms) was 0.0128, 0.0001 and 0.0004 for LC, LA and LCA, respectively.
With the kinetic model proposed by Luedeking and Piret for lactic acid production rate, the growth-associated and non-growth-associated coefficients were determined as 1.0028 and 0.0109, respectively. The model was demonstrated for batch growth of LA, LC, and LCA in Annona muricata L. juice. The present investigation validates the potential of Annona muricata L. based medium for heightened economical production of a probiotic medium.
Keywords: L. acidophilus, L. casei, fermentation, modelling, kinetics
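The coupled kinetics in the abstract can be sketched numerically: logistic biomass growth feeding the Luedeking-Piret production law dP/dt = alpha*dX/dt + beta*X. The coefficients 1.0028 and 0.0109 are the growth-associated and non-growth-associated values reported above; the initial conditions and carrying capacity are hypothetical placeholders, so this is an illustration of the model form, not a reproduction of the paper's fits.

```python
# Euler integration of logistic growth coupled to the Luedeking-Piret
# production law. alpha and beta come from the abstract; x0, x_max, and
# the time grid are hypothetical.

def simulate(mu_max=0.2876, x0=0.1, x_max=5.0, alpha=1.0028, beta=0.0109,
             p0=0.0, t_end=24.0, dt=0.01):
    x, p, t = x0, p0, 0.0
    while t < t_end:
        dxdt = mu_max * x * (1.0 - x / x_max)   # logistic biomass growth
        dpdt = alpha * dxdt + beta * x          # Luedeking-Piret production
        x += dxdt * dt
        p += dpdt * dt
        t += dt
    return x, p

x24, p24 = simulate()   # biomass and product after 24 h
```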
Procedia PDF Downloads 81
22512 Classification on Statistical Distributions of a Complex N-Body System
Authors: David C. Ni
Abstract:
Contemporary models for N-body systems are based on temporal, two-body, mass-point representations of Newtonian mechanics. Other mainstream models include 2D and 3D Ising models based on local neighborhoods of lattice structures. In quantum mechanics, theories of collective modes address superconductivity and long-range quantum entanglement. However, these models mainly target specific phenomena with a set of designated parameters. We are therefore motivated to develop a new construction directly from complex-variable N-body systems based on the extended Blaschke functions (EBF), which represent a non-temporal, nonlinear extension of the Lorentz transformation on the complex plane, the normalized momentum space. A point on the complex plane represents a normalized state of particle momenta observed from a reference frame in the theory of special relativity. There are only two key parameters for modelling: normalized momentum and nonlinearity. An algorithm similar to the Jenkins-Traub method is adopted for solving the EBF iteratively. Through iteration, the solution sets take the form σ + i[-t, t], where σ and t are real numbers, and [-t, t] shows various distributions, such as 1-peak, 2-peak, and 3-peak distributions, some of which are analogous to the canonical distributions. The results of the numerical analysis demonstrate continuum-to-discreteness transitions, evolutional invariance of distributions, phase transitions with conjugate symmetry, etc., which suggest the construction as a potential candidate for a unification of statistics. We hereby classify the observed distributions on the finite convergent domains. Continuous and discrete distributions both exist and are predictable for given partitions in different regions of the parameter pair.
We further compare these distributions with canonical distributions and address the impacts on existing applications.
Keywords: Blaschke functions, Lorentz transformation, complex variables, continuous, discrete, canonical, classification
Procedia PDF Downloads 309
22511 In Silico Modeling of Drugs Milk/Plasma Ratio in Human Breast Milk Using Structural Descriptors
Authors: Navid Kaboudi, Ali Shayanfar
Abstract:
Introduction: Feeding infants with safe milk from the beginning of their lives is an important issue. Drugs used by mothers can affect the composition of milk in a way that is not only unsuitable but also toxic for infants, and consuming permeable drugs during that sensitive period could lead to serious side effects in the infant. Due to the ethical restrictions of drug testing on humans, especially women during their lactation period, computational approaches based on structural parameters could be useful. The aim of this study is to develop mechanistic models to predict the milk/plasma (M/P) ratio of drugs during the breastfeeding period based on their structural descriptors. Methods: Two hundred and nine different chemicals with known M/P ratios were used in this study. All drugs were categorized into two groups based on their M/P value, following the Malone classification: (1) drugs with M/P>1, which are considered high risk, and (2) drugs with M/P<1, which are considered low risk. Thirty-eight chemical descriptors were calculated with ACD/Labs 6.00 and DataWarrior software in order to assess penetration during the breastfeeding period. Four specific models, based on the number of hydrogen bond acceptors, polar surface area, total surface area, and number of acidic oxygens, were then established for the prediction; these descriptors can predict penetration with acceptable accuracy. For the remaining compounds of each model (N = 147, 158, 160, and 174 for models 1 to 4, respectively), binary logistic regression with SPSS 21 was performed to obtain a model predicting the penetration ratio of compounds. Only structural descriptors with p-value<0.1 remained in the final model.
Results and discussion: Four different models based on the number of hydrogen bond acceptors, polar surface area, and total surface area were obtained to predict the penetration of drugs into human milk during the breastfeeding period. About 3-4% of milk consists of lipids, and the amount of lipid increases after parturition. Lipid-soluble drugs diffuse alongside fats from plasma to the mammary glands, so lipophilicity plays a vital role in predicting the penetration class of drugs during the lactation period. The logistic regression models showed that compounds with a number of hydrogen bond acceptors, PSA, and TSA above 5, 90, and 25, respectively, are less permeable to milk because they are less soluble in milk fat. The pH of milk is acidic; consequently, basic compounds tend to be more concentrated in milk than in plasma, while acidic compounds may have lower concentrations in milk than in plasma. Conclusion: In this study, we developed four regression-based models to predict the penetration class of drugs during the lactation period. The obtained models can speed up the drug development process, saving energy and costs. Milk/plasma ratio assessment of drugs requires multiple steps of animal testing, which has its own ethical issues; QSAR modeling could help scientists reduce the amount of animal testing, and our models are eligible for that purpose.
Keywords: logistic regression, breastfeeding, descriptors, penetration
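The classification step described above can be sketched as a binary logistic regression over structural descriptors. The sketch below trains on a tiny synthetic set (hydrogen bond acceptor count and polar surface area as the two descriptors), not the 209-compound data set from the study, and uses plain stochastic gradient descent rather than SPSS.

```python
# Hedged sketch of a descriptor-based logistic-regression classifier.
# The data are synthetic: low HBA count and low PSA labelled permeable (1).
import math

def train_logistic(X, y, lr=0.1, epochs=2000):
    w = [0.0] * (len(X[0]) + 1)            # intercept + one weight per descriptor
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))  # sigmoid
            err = yi - p
            w[0] += lr * err
            for j, xj in enumerate(xi):
                w[j + 1] += lr * err * xj
    return w

def predict(w, xi):
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
    return 1 if z >= 0.0 else 0

# Hypothetical compounds: (HBA count / 10, PSA / 100) after simple scaling.
X = [[0.2, 0.3], [0.3, 0.4], [0.8, 1.2], [0.9, 1.4], [0.1, 0.25], [0.7, 1.1]]
y = [1, 1, 0, 0, 1, 0]
w = train_logistic(X, y)
```

A real workflow would add descriptor selection by p-value, as the abstract describes, before fitting the final model.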
Procedia PDF Downloads 72
22510 Prediction of Malawi Rainfall from Global Sea Surface Temperature Using a Simple Multiple Regression Model
Authors: Chisomo Patrick Kumbuyo, Katsuyuki Shimizu, Hiroshi Yasuda, Yoshinobu Kitamura
Abstract:
This study deals with predicting Malawi rainfall from global sea surface temperature (SST) using a simple multiple regression model. Monthly rainfall data from nine stations in Malawi, grouped into two zones on the basis of inter-station rainfall correlations, were used in the study. Zone 1 consisted of the Karonga and Nkhatabay stations, located in northern Malawi; Zone 2 consisted of Bolero, located in northern Malawi; Kasungu, Dedza, and Salima, located in central Malawi; and the Mangochi, Makoka, and Ngabu stations, located in southern Malawi. Links between Malawi rainfall and SST based on statistical correlations were evaluated, and significant results were selected as predictors for the regression models. The predictors for the Zone 1 model were identified from the Atlantic, Indian, and Pacific oceans, while those for Zone 2 were identified from the Pacific Ocean. The correlation between predicted and observed rainfall values was satisfactory, with r=0.81 and 0.54 for Zones 1 and 2, respectively (significant at less than 99.99%). The results of the models are in agreement with other findings suggesting that SST anomalies in the Atlantic, Indian, and Pacific oceans influence the rainfall patterns of Southern Africa.
Keywords: Malawi rainfall, forecast model, predictors, SST
Procedia PDF Downloads 389
22509 Verification of Satellite and Observation Measurements to Build Solar Energy Projects in North Africa
Authors: Samy A. Khalil, U. Ali Rahoma
Abstract:
For measurements of solar radiation, satellite data have been routinely utilized to estimate solar energy. However, the temporal coverage of satellite data has some limits. Reanalysis, also known as "retrospective analysis" of the atmosphere's parameters, is produced by fusing the output of NWP (Numerical Weather Prediction) models with observation data from a variety of sources, including ground, satellite, ship, and aircraft observations. The result is a comprehensive record of the parameters affecting weather and climate. The effectiveness of the reanalysis dataset (ERA-5) for North Africa was evaluated against high-quality surface measurements using statistical analysis, estimating the distribution of global solar radiation (GSR) over five chosen areas in North Africa during the ten-year period from 2011 to 2020. To investigate seasonal changes in dataset performance, a seasonal statistical analysis was conducted, which showed a considerable difference in errors throughout the year. Altering the temporal resolution of the data used for comparison alters the performance of the dataset: monthly mean values indicate better performance, but data accuracy is degraded. Solar resource assessment and power estimation using the ERA-5 solar radiation data are discussed. The average values of the mean bias error (MBE), root mean square error (RMSE), and mean absolute error (MAE) of the reanalysis solar radiation data vary from 0.079 to 0.222, 0.055 to 0.178, and 0.0145 to 0.198, respectively, over the study period. The correlation coefficient (R2) varies from 0.93 to 0.99. This research's objective is to provide a reliable representation of solar radiation to aid the use of solar energy in all sectors.
Keywords: solar energy, ERA-5 analysis data, global solar radiation, North Africa
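The validation statistics named above (MBE, RMSE, MAE, and the coefficient of determination) are standard formulas and can be sketched directly. The sample values below are hypothetical daily irradiation figures, not the study's data.

```python
# Sketch of reanalysis-vs-observation validation statistics.
import math

def validation_stats(pred, obs):
    n = len(obs)
    mbe = sum(p - o for p, o in zip(pred, obs)) / n            # mean bias error
    rmse = math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / n)
    mae = sum(abs(p - o) for p, o in zip(pred, obs)) / n       # mean abs. error
    mean_obs = sum(obs) / n
    ss_res = sum((o - p) ** 2 for p, o in zip(pred, obs))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    r2 = 1.0 - ss_res / ss_tot                                  # determination
    return mbe, rmse, mae, r2

obs = [5.1, 6.3, 7.0, 6.8, 5.9]    # hypothetical observed kWh/m^2/day
pred = [5.0, 6.4, 6.9, 7.0, 5.8]   # hypothetical reanalysis values
mbe, rmse, mae, r2 = validation_stats(pred, obs)
```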
Procedia PDF Downloads 98
22508 Speaker Identification by Atomic Decomposition of Learned Features Using Computational Auditory Scene Analysis Principles in Noisy Environments
Authors: Thomas Bryan, Veton Kepuska, Ivica Kostanic
Abstract:
Speaker recognition is performed in high Additive White Gaussian Noise (AWGN) environments using principles of Computational Auditory Scene Analysis (CASA). CASA methods often classify sounds from images in the time-frequency (T-F) plane, using spectrograms or cochleagrams as the image. In this paper, atomic decomposition implemented by matching pursuit performs a transform from time-series speech signals to the T-F plane. The atomic decomposition creates a sparsely populated T-F vector in "weight space", where each populated T-F position contains an amplitude weight. The weight-space vector, along with the atomic dictionary, represents a denoised, compressed version of the original signal. The arrangement of the atomic indices in the T-F vector is used for classification. Unsupervised feature learning implemented by a sparse autoencoder learns a single dictionary of basis features from a collection of envelope samples from all speakers. The approach is demonstrated using pairs of speakers from the TIMIT data set, selected randomly from a single district. Each speaker has 10 sentences: two are used for training and 8 for testing. Atomic index probabilities are created for each training sentence and also for each test sentence. Classification is performed by finding the lowest Euclidean distance between the probabilities from the training sentences and the test sentences. Training is done at a 30 dB Signal-to-Noise Ratio (SNR); testing is performed at SNRs of 0 dB, 5 dB, 10 dB, and 30 dB. The algorithm has a baseline classification accuracy of ~93% averaged over 10 pairs of speakers from the TIMIT data set. The baseline accuracy is attributable to short sequences of training and test data as well as the overall simplicity of the classification algorithm.
The accuracy is not affected by AWGN and remains ~93% at 0 dB SNR.
Keywords: time-frequency plane, atomic decomposition, envelope sampling, Gabor atoms, matching pursuit, sparse dictionary learning, sparse autoencoder
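The matching-pursuit step above can be sketched in a few lines: greedily pick the dictionary atom with the largest inner product with the residual, record its index and weight, subtract, and repeat. The sparse (index, weight) list is the analogue of the T-F weight vector the abstract describes. The dictionary here is a toy pair of orthonormal vectors, not learned Gabor atoms.

```python
# Minimal matching-pursuit sketch over a toy normalized dictionary.

def matching_pursuit(signal, dictionary, n_atoms):
    residual = list(signal)
    atoms = []                                   # sparse (index, weight) pairs
    for _ in range(n_atoms):
        # choose the atom most correlated with the current residual
        best_i = max(range(len(dictionary)),
                     key=lambda i: abs(sum(r * d for r, d in
                                           zip(residual, dictionary[i]))))
        w = sum(r * d for r, d in zip(residual, dictionary[best_i]))
        residual = [r - w * d for r, d in zip(residual, dictionary[best_i])]
        atoms.append((best_i, w))
    return atoms, residual

# Toy dictionary: two orthonormal atoms in R^4.
d0 = [0.5, 0.5, 0.5, 0.5]
d1 = [0.5, -0.5, 0.5, -0.5]
signal = [3 * a + 2 * b for a, b in zip(d0, d1)]  # exactly 3*d0 + 2*d1
atoms, residual = matching_pursuit(signal, [d0, d1], 2)
```

With an orthonormal dictionary the pursuit recovers the exact weights and leaves a zero residual; with the overcomplete learned dictionaries described above, it yields the sparse approximation instead.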
Procedia PDF Downloads 289
22507 The Effect of Students’ Social and Scholastic Background and Environmental Impact on Shaping Their Pattern of Digital Learning in Academia: A Pre- and Post-COVID Comparative View
Authors: Nitza Davidovitch, Yael Yossel-Eisenbach
Abstract:
The purpose of the study was to inquire whether there was a change in the shaping of undergraduate students’ digitally oriented study pattern in the pre-COVID (2016-2017) versus post-COVID (2022-2023) period, as affected by three factors: social background characteristics, high school background, and academic background characteristics. These two time points were characterized by dramatic changes in teaching and learning at institutions of higher education. The data were collected via cross-sectional surveys at two time points, in the 2016-2017 academic year (N=443) and in the 2022-2023 academic year (N=326). The questionnaire was distributed on social media and includes questions on demographic background characteristics, previous studies in high school, present academic studies, and learning and reading habits. Method of analysis: A. Descriptive statistical analysis. B. Mean comparison tests were conducted to analyze variations in the mean score for the digitally oriented learning pattern variable at the two time points (pre- and post-COVID) in relation to each of the independent variables. C. Analysis of variance was performed to test the main effects and the interactions. D. Applying linear regression, the research examined the combined effect of the independent variables on shaping students' digitally oriented learning habits. The analysis includes four models; in all four, the dependent variable is students’ perception of digitally oriented learning. The first model included social background variables; the second added scholastic background; the third added the academic background variables; and the fourth included all the independent variables together with the period variable (pre- and post-COVID). E.
Factor analysis was performed using the principal component method with varimax rotation; the variables were constructed as a weighted mean of all the relevant statements, merged to form a single variable denoting a shared content world. The research findings indicate a significant rise in students’ perceptions of digitally oriented learning in the post-COVID period. From a gender perspective, the impact of COVID on shaping a digital learning pattern was much more significant for female students. The socioeconomic status effect is eliminated when controlling for the period, and the student’s job has the strongest effect of all the variables. It may be assumed that the student’s work pattern mediates effects related to the convenience offered by digital learning regarding distance and time. The significant effect of scholastic background on shaping students’ digital learning patterns remained stable, even when controlling for all explanatory variables. The advantage that universities had over colleges in shaping a digital learning pattern in the pre-COVID period dissipated; after COVID, there was a change in how colleges shape students’ digital learning patterns such that no institutional differences are evident. The study shows that the period has a significant independent effect on shaping students’ digital learning patterns when controlling for the explanatory variables.
Keywords: learning pattern, COVID, socioeconomic status, digital learning
Procedia PDF Downloads 62
22506 Comparison of Different in vitro Models of the Blood-Brain Barrier for Study of Toxic Effects of Engineered Nanoparticles
Authors: Samir Dekali, David Crouzier
Abstract:
Due to their new physico-chemical properties, engineered nanoparticles (ENPs) are increasingly employed in numerous industrial sectors (such as electronics, textiles, aerospace, cosmetics, pharmaceuticals, and the food industry). These new physico-chemical properties can also represent a threat to human health: consumers can be exposed involuntarily by different routes such as inhalation, ingestion, or through the skin. Several studies recently reported a possible biodistribution of these ENPs to the blood-brain barrier (BBB). Consequently, there is a great need for developing BBB in vitro models representative of the in vivo situation and capable of rapidly and accurately assessing ENPs' toxic effects and their potential translocation through this barrier. In this study, several in vitro models established with micro-endothelial brain cell lines of different origins (the bEnd.3 mouse cell line or a new human cell line), co-cultivated or not with astrocytic cells (C6 rat or C8-B4 mouse cell lines) on Transwells®, were compared using different endpoints: trans-endothelial resistance, Lucifer yellow permeability, and junction protein labeling. The impact of NIST diesel exhaust particles on BBB cell viability is also discussed.
Keywords: nanoparticles, blood-brain barrier, diesel exhaust particles, toxicology
Procedia PDF Downloads 440
22505 Policy Compliance in Information Security
Authors: R. Manjula, Kaustav Bagchi, Sushant Ramesh, Anush Baskaran
Abstract:
In the past century, the emergence of information technology has had a significant positive impact on human life. While companies tend to be more involved in the completion of projects, the turn of the century has seen importance being given to investment in information security policies. These policies are essential to protect important data from adversaries, and thus following them has become one of the most important attributes of information security models. In this research, we have focused on the factors affecting information security policy compliance in two models: the theory of planned behaviour, and the integration of the social bond theory and the involvement theory into a single model. Finally, we propose where these theories would be successful.
Keywords: information technology, information security, involvement theory, policies, social bond theory
Procedia PDF Downloads 371
22504 PitMod: The Lorax Pit Lake Hydrodynamic and Water Quality Model
Authors: Silvano Salvador, Maryam Zarrinderakht, Alan Martin
Abstract:
Open pits, which result from mining, fill with water over time until the water reaches the elevation of the local water table, generating mine pit lakes. There are specific regulations about the water quality of pit lakes, and mining operations should keep the quality of groundwater above pre-defined standards. Therefore, an accurate, accepted numerical model predicting pit lakes' water balance and water quality is needed in advance of mine excavation. We continue analyzing and developing the model introduced by Crusius, Dunbar, et al. (2002) for pit lakes. This model, called "PitMod", simulates the physical and geochemical evolution of pit lakes over time scales ranging from a few months up to a century or more. A lake is approximated as one-dimensional, horizontally averaged vertical layers. PitMod calculates the time-dependent vertical distribution of physical and geochemical pit lake properties, such as temperature, salinity, conductivity, pH, trace metals, and dissolved oxygen, within each model layer. The model considers the effects of pit morphology, climate data, multiple surface and subsurface (groundwater) inflows/outflows, precipitation/evaporation, surface ice formation/melting, vertical mixing due to surface wind stress, convection, and background turbulence, and handles equilibrium geochemistry using PHREEQC, linking it to the geochemical reactions. PitMod, which has been used and validated in over 50 mine projects since 2002, incorporates physical processes like those found in other lake models such as DYRESM (Imerito 2007). Unlike DYRESM, however, PitMod also includes geochemical processes, pit wall runoff, and other effects. In addition, PitMod is actively under development and can be customized as required for a particular site.
Keywords: pit lakes, mining, modeling, hydrology
Procedia PDF Downloads 158
22503 Conductivity-Depth Inversion of Large Loop Transient Electromagnetic Sounding Data over Layered Earth Models
Authors: Ravi Ande, Mousumi Hazari
Abstract:
Time-domain electromagnetic (TDEM)/transient electromagnetic (TEM) soundings are one of the common geophysical techniques for mapping subsurface geo-electrical structures, and are widely applied in hydro-geological research and in engineering and environmental geophysics. A large loop TEM system consists of a large transmitter loop for energising the ground and a small receiver loop or magnetometer for recording the transient voltage or magnetic field in the air or on the surface of the earth, with the receiver at the center of the loop or at any arbitrary point inside or outside the source loop. In general, data can be acquired with a large loop source in one of several configurations: with the receiver at the center point of the loop (central-loop method), at an arbitrary in-loop point (in-loop method), coincident with the transmitter loop (coincident-loop method), or at an arbitrary offset point (offset-loop method). Because of the mathematical simplicity of the associated EM field expressions, compared to the in-loop and offset-loop systems, the central-loop system (for ground surveys) and the coincident-loop system (for ground as well as airborne surveys) have been developed and used extensively for the exploration of mineral and geothermal resources, for mapping contaminated groundwater caused by hazardous waste, and for estimating the thickness of permafrost layers. Because no proper analytical expression exists for the TEM response over a layered earth model for the large loop TEM system, the forward problem used in this inversion scheme is first formulated in the frequency domain and then transformed into the time domain using Fourier cosine or sine transforms. Using the EMLCLLER algorithm, the forward computation is initially carried out in the frequency domain.
The forward calculation scheme in NLSTCI was accordingly modified to compute frequency-domain responses before converting them to the time domain using Fourier cosine and/or sine transforms.
Keywords: time domain electromagnetic (TDEM), TEM system, geoelectrical sounding structure, Fourier cosine
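The frequency-to-time conversion step above can be sketched as a numerical Fourier cosine transform, f(t) = (2/pi) * integral from 0 to infinity of F(w) cos(wt) dw, evaluated by the trapezoidal rule. The test spectrum F(w) = a/(a^2 + w^2), whose cosine transform is exp(-a*t), is a textbook identity standing in for the EM kernel, which is not given in the abstract; the truncation limit and step count are arbitrary choices.

```python
# Numerical Fourier cosine transform by the trapezoidal rule (sketch).
import math

def cosine_transform(F, t, w_max=200.0, n=20000):
    """Approximate (2/pi) * Integral_0^w_max F(w) cos(w t) dw."""
    dw = w_max / n
    total = 0.5 * (F(0.0) + F(w_max) * math.cos(w_max * t))  # endpoints
    for k in range(1, n):
        w = k * dw
        total += F(w) * math.cos(w * t)
    return (2.0 / math.pi) * total * dw

a = 2.0
F = lambda w: a / (a * a + w * w)   # known pair: transform is exp(-a*t)
f1 = cosine_transform(F, 1.0)       # should approximate exp(-2.0)
```

In practice, TEM codes use specialized digital filters rather than brute-force quadrature, because the kernels oscillate and decay slowly; this sketch only shows the mathematical operation.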
Procedia PDF Downloads 92
22502 Improving Student Programming Skills in Introductory Computer and Data Science Courses Using Generative AI
Authors: Genady Grabarnik, Serge Yaskolko
Abstract:
Generative Artificial Intelligence (AI) has significantly expanded its applicability with the incorporation of Large Language Models (LLMs) and has become a technology that promises to automate some areas that were very difficult to automate before. The paper describes the introduction of generative AI into Introductory Computer and Data Science courses and an analysis of the effect of this introduction. Generative AI is incorporated into the educational process in two ways. For the instructors, we create templates of prompts for the generation of tasks and for the grading of students' work, including feedback on submitted assignments. For the students, we introduce basic prompt engineering, which is then used for generating test cases from problem descriptions, generating code snippets for single-block programming tasks, and partitioning average-complexity programs into such blocks. The above-mentioned classes are run using Large Language Models, and feedback from instructors and students and course outcomes are collected. The analysis shows a statistically significant positive effect and preference by both stakeholders.
Keywords: introductory computer and data science education, generative AI, large language models, application of LLMs to computer and data science education
Procedia PDF Downloads 58
22501 Dissolution Kinetics of Chevreul’s Salt in Ammonium Chloride Solutions
Authors: Mustafa Sertçelik, Turan Çalban, Hacali Necefoğlu, Sabri Çolak
Abstract:
In this study, the solubility of Chevreul’s salt and its dissolution kinetics in ammonium chloride solutions were investigated. The Chevreul’s salt used in the studies was obtained under the optimum conditions determined by T. Çalban et al. (ammonium sulphide concentration 0.4 M, copper sulphate concentration 0.25 M, temperature 60°C, stirring speed 600 rev/min, pH 4, and reaction time 15 min). The selected parameters affecting solubility were reaction temperature, concentration of ammonium chloride, stirring speed, and solid/liquid ratio. Correlation of the experimental results was achieved using linear regression implemented in the statistical package Statistica. The effect of the parameters on Chevreul’s salt solubility was examined, and the integrated rate expression was found using kinetic models for solid-liquid heterogeneous reactions. The results revealed that the dissolution rate of Chevreul’s salt increases with increasing temperature, ammonium chloride concentration, and stirring speed, and decreases with increasing solid/liquid ratio. Application of the experimental results to the kinetic models shows that the dissolution rate of Chevreul’s salt is controlled by diffusion through the ash (or product) layer. The activation energy of the dissolution reaction was found to be 74.83 kJ/mol. The integrated rate expression, including the effects of the parameters, is: 1 - 3(1-X)^(2/3) + 2(1-X) = [2.96x10^13 (C_A)^3.08 (S/L)^-0.38 (W)^1.23 e^(-9001.2/T)] t
Keywords: Chevreul's salt, copper, ammonium chloride, ammonium sulphide, dissolution kinetics
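The reported rate law is the classic ash-layer-diffusion (shrinking-core) form, g(X) = 1 - 3(1-X)^(2/3) + 2(1-X) = k*t, with an Arrhenius-type rate constant; note that the exponent -9001.2/T corresponds to Ea = 9001.2 K x 8.314 J/(mol K) = 74.8 kJ/mol, matching the abstract. The sketch below evaluates the rate constant and inverts g(X) for the conversion X at a given time by bisection. Parameter values fed to it are illustrative, not the paper's data.

```python
# Shrinking-core (ash-layer diffusion) rate law from the abstract, with
# conversion recovered from g(X) = k*t by bisection. Inputs are illustrative.
import math

def rate_constant(C_A, s_l, w, T):
    """k from the fitted expression: concentration, solid/liquid ratio,
    stirring speed, absolute temperature."""
    return 2.96e13 * C_A**3.08 * s_l**-0.38 * w**1.23 * math.exp(-9001.2 / T)

def g(X):
    """Ash-layer diffusion conversion function, monotone on [0, 1)."""
    return 1.0 - 3.0 * (1.0 - X)**(2.0 / 3.0) + 2.0 * (1.0 - X)

def conversion(k, t, tol=1e-10):
    """Invert g(X) = k*t for X by bisection."""
    target = min(k * t, g(1.0 - 1e-12))   # clamp to the attainable range
    lo, hi = 0.0, 1.0 - 1e-12
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```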
Procedia PDF Downloads 308
22500 Characterization of 3D-MRP for Analyzing Brain Balancing Index (BBI) Pattern
Authors: N. Fuad, M. N. Taib, R. Jailani, M. E. Marwan
Abstract:
This paper discusses the power spectral density (PSD) characteristics extracted from three-dimensional (3D) electroencephalogram (EEG) models. The EEG signal recording was conducted on 150 healthy subjects. Development of the 3D EEG models involves pre-processing of raw EEG signals and construction of spectrogram images. Then, the maximum PSD values were extracted as features from the model. These features are analysed using the mean relative power (MRP) and different mean relative power (DMRP) techniques to observe the pattern among different brain balancing indexes. The results showed that by implementing these techniques, the pattern of brain balancing indexes can be clearly observed. Some patterns are indicated between index 1 and index 5 for the left frontal (LF) and right frontal (RF) regions. Keywords: power spectral density, 3D EEG model, brain balancing, mean relative power, different mean relative power
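As an illustrative sketch (function names and the band-fraction definition are assumptions, not taken from the abstract), mean relative power can be computed as the power in selected frequency bins divided by total power, averaged across epochs, with DMRP as the difference between two regions:

```python
import numpy as np

def mean_relative_power(psd, band_idx):
    """Mean relative power: power in the selected frequency bins as a
    fraction of total power, averaged over epochs.
    psd: array of shape (n_epochs, n_freqs)."""
    psd = np.asarray(psd, dtype=float)
    band = psd[:, band_idx].sum(axis=1)
    total = psd.sum(axis=1)
    return float(np.mean(band / total))

def dmrp(psd_left, psd_right, band_idx):
    # Difference of MRP between two channels/regions (e.g. LF vs RF)
    return mean_relative_power(psd_left, band_idx) - mean_relative_power(psd_right, band_idx)
```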
Procedia PDF Downloads 474
22499 An A-Star Approach for the Quickest Path Problem with Time Windows
Authors: Christofas Stergianos, Jason Atkin, Herve Morvan
Abstract:
As air traffic increases, more airports are interested in utilizing optimization methods. Many processes happen in parallel at an airport, and complex models are needed in order to have a reliable solution that can be implemented for ground movement operations. The ground movement for aircraft in an airport, allocating a path to each aircraft to follow in order to reach their destination (e.g. runway or gate), is one process that could be optimized. The Quickest Path Problem with Time Windows (QPPTW) algorithm has been developed to provide a conflict-free routing of vehicles and has been applied to routing aircraft around an airport. It was subsequently modified to increase the accuracy for airport applications. These modifications take into consideration specific characteristics of the problem, such as: the pushback process, which considers the extra time that is needed for pushing back an aircraft and turning its engines on; stand holding, where any waiting should be allocated to the stand; and runway sequencing, where the sequence of the aircraft that take off is optimized and has to be respected. QPPTW involves searching for the quickest path by expanding the search in all directions, similarly to Dijkstra’s algorithm. Finding a way to direct the expansion can potentially assist the search and achieve better performance. We have further modified the QPPTW algorithm to use a heuristic approach in order to guide the search. This new algorithm is based on the A-star search method but estimates the remaining time (instead of distance) in order to assess how far the target is. It is important to consider the remaining time that is needed to reach the target, so that delays that are caused by other aircraft can be part of the optimization method. All of the other characteristics are still considered and time windows are still used in order to route multiple aircraft rather than a single aircraft.
In this way, the quickest path is found for each aircraft while taking into account the movements of the previously routed aircraft. After running experiments using a week of real aircraft data from Zurich Airport, the new algorithm (A-star QPPTW) was found to route aircraft much more quickly, being especially fast in routing the departing aircraft where pushback delays are significant. On average A-star QPPTW could route a full day (755 to 837 aircraft movements) 56% faster than the original algorithm. In total the routing of a full week of aircraft took only 12 seconds with the new algorithm, 15 seconds faster than the original algorithm. For real-time application, the algorithm needs to be very fast, and this speed increase will allow us to add additional features and complexity, allowing further integration with other processes in airports and leading to more optimized and environmentally friendly airports. Keywords: a-star search, airport operations, ground movement optimization, routing and scheduling
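A minimal sketch of the core idea (not the authors' implementation, and omitting time windows, pushback handling and conflict avoidance): A* over a graph of taxiway segments whose edge weights are traversal times, with an admissible estimate of remaining travel time playing the role the distance heuristic plays in classical A*:

```python
import heapq

def astar_quickest(graph, h, start, goal, t0=0.0):
    """A* search over a graph whose edge weights are traversal times.
    graph: node -> list of (successor, traversal_time).
    h: node -> admissible estimate of remaining travel time to goal."""
    open_heap = [(t0 + h[start], t0, start, [start])]
    best = {start: t0}                     # earliest known arrival per node
    while open_heap:
        f, t, node, path = heapq.heappop(open_heap)
        if node == goal:
            return t, path
        for nxt, dt in graph.get(node, []):
            arrival = t + dt
            if arrival < best.get(nxt, float("inf")):
                best[nxt] = arrival
                heapq.heappush(open_heap, (arrival + h[nxt], arrival, nxt, path + [nxt]))
    return None
```

With an admissible (never over-estimating) time heuristic, the first time the goal is popped its arrival time is optimal, exactly as in distance-based A*.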
Procedia PDF Downloads 231
22498 New Hybrid Method to Model Extreme Rainfalls
Authors: Youness Laaroussi, Zine Elabidine Guennoun, Amine Amar
Abstract:
Modeling and forecasting the dynamics of rainfall occurrences constitute one of the major topics that have been treated extensively by statisticians, hydrologists, climatologists and many other groups of scientists. On the same issue, we propose in the present paper a new hybrid method which combines Extreme Value and fractal theories. We illustrate the use of our methodology for a transformed Emberger Index series, constructed based on data recorded in Oujda (Morocco). The index is treated first by the Peaks Over Threshold (POT) approach to identify excess observations over an optimal threshold u. In the second step, we consider the resulting excesses as a fractal object embedded in the one-dimensional space of time. We identify the fractal dimension by box counting. We discuss the description of the rainfall data sets under the Generalized Pareto Distribution, as assured by Extreme Value Theory (EVT). We show that, despite the appropriateness of the return periods given by the POT approach, the introduction of the fractal dimension provides accurate interpretation results, which can improve the understanding of rainfall occurrences. Keywords: extreme values theory, fractal dimensions, peaks over threshold, rainfall occurrences
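As an illustrative sketch (a simplified box-counting estimator on the unit interval; the function name and chosen scales are assumptions), the fractal dimension of the set of excess-event times can be estimated from the slope of log N(s) against log(1/s):

```python
import numpy as np

def box_counting_dimension(times, scales):
    """Estimate the box-counting dimension of a set of event times
    normalized to [0, 1): count occupied boxes N(s) for each box size
    s, then fit log N(s) ~ D * log(1/s)."""
    times = np.asarray(times, dtype=float)
    counts = []
    for s in scales:
        # index of the box each event falls into; unique -> occupied boxes
        boxes = np.unique((times / s).astype(int))
        counts.append(len(boxes))
    D, _ = np.polyfit(np.log(1.0 / np.asarray(scales)), np.log(counts), 1)
    return float(D)
```

A set that fills the interval densely should give D ≈ 1, while sparse, clustered excesses give D < 1.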
Procedia PDF Downloads 361
22497 Impact of Climate Variation on Natural Vegetations and Human Lives in Thar Desert, Pakistan
Authors: Sujo Meghwar, Zulfqar Ali laghari, Kanji Harijan, Muhib Ali Lagari, G. M. Mastoi, Ali Mohammad Rind
Abstract:
The Thar Desert is the most populous desert in the world. Climate variation in the Thar Desert has induced an increase in the magnitude of drought. This climate variation has caused a decrease in natural vegetation, and some plant species have been eliminated forever. We applied the SPI (standardized precipitation index) climate model to investigate the drought induced by climate change, and gathered the anthropogenic response through a developed questionnaire. The data were analyzed in SPSS version 18. The met-data of two meteorological stations, elaborated by time series analysis, suggest an increase in temperature of 1-2.5 centigrade, a decrease in rainfall of 5-25% and a reduction in humidity of 5-12 mm over the 20th century. The anthropogenic responses indicate a high impact of climate change on human life and vegetation. The triangulated data we have collected give a new insight into the understanding of the association between climate change, drought and human activities. Keywords: Thar desert, human impact, vegetations, temperature, rainfall, humidity
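A hedged sketch of the index idea (the full SPI fits a gamma distribution to the precipitation record and maps it to standard-normal quantiles; this simplified version uses plain standardization, so the name and method here are illustrative only):

```python
import math

def spi_simplified(precip):
    """Simplified standardized-precipitation values: z-scores of a
    precipitation series. Values <= -1 are commonly read as moderate
    drought, <= -2 as extreme drought."""
    n = len(precip)
    mean = sum(precip) / n
    sd = math.sqrt(sum((p - mean) ** 2 for p in precip) / n)
    return [(p - mean) / sd for p in precip]
```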
Procedia PDF Downloads 404
22496 Service Life Modelling of Concrete Deterioration Due to Biogenic Sulphuric Acid (BSA) Attack-State-of-an-Art-Review
Authors: Ankur Bansal, Shashank Bishnoi
Abstract:
Degradation of sewage pipes, sewage pumping stations and sewage treatment plants (STPs) is of major concern due to the difficulty of their maintenance and the high cost of replacement. Most of these systems undergo degradation due to biogenic sulphuric acid (BSA) attack. Since most wastewater treatment systems are underground, this deterioration remains hidden. This paper presents a literature review outlining the mechanism of this attack, focusing on the critical parameters of BSA attack, along with available models and software to predict the deterioration due to this attack. The paper critically examines the various steps and equations in the available models of BSA degradation; details on the assumptions and workings of the different software packages are also highlighted. The paper also focuses on the service life design techniques available through various codes and on methods to integrate service life design with BSA degradation of concrete. In the end, various methods of enhancing the resistance of concrete against biogenic sulphuric acid attack are highlighted. It may be concluded that effective modelling of degradation phenomena may bring positive economic and environmental impacts. With current computing capabilities, integrated degradation models combining the various durability aspects can bring positive change for a sustainable society. Keywords: concrete degradation, modelling, service life, sulphuric acid attack
Procedia PDF Downloads 314
22495 Kýklos Dimensional Geometry: Entity Specific Core Measurement System
Authors: Steven D. P Moore
Abstract:
A novel method referred to as Kýklos (Ky) dimensional geometry is proposed as an entity-specific core geometric dimensional measurement system. Ky geometric measures can construct scaled multi-dimensional models using regular and irregular sets in IRn. This entity-specific derived geometric measurement system shares similar fractal methods in which a ‘fractal transformation operator’ is applied to a set S to produce a union of N copies. The Kýklos inputs use 1D geometry as a core measure. One-dimensional inputs include the radius interval of a circle/sphere or the semiminor/semimajor axis intervals of an ellipse or spheroid. These geometric inputs have finite values that can be measured in SI distance units. The outputs for each interval are divided and subdivided 1D subcomponents with a union equal to the interval geometry/length. Setting a limit on subdivision iterations creates a finite value for each 1D subcomponent. The uniqueness of this method is captured by allowing the simplest 1D inputs to define entity-specific subclass geometric core measurements that can also be used to derive length measures. Current methodologies for celestial-based measurement of time, as defined within SI units, fit within this methodology, thus combining spatial and temporal features into geometric core measures. The novel Ky method discussed here offers geometric measures to construct scaled multi-dimensional structures, even models. Ky classes proposed for consideration include celestial and even subatomic. The application of this offers incredible possibilities, for example, geometric architecture that can represent scaled celestial models incorporating planets (spheroids) and celestial motion (elliptical orbits). Keywords: Kyklos, geometry, measurement, celestial, dimension
Procedia PDF Downloads 166
22494 Remodeling of Gut Microbiome of Pakistani Expats in China After Intermittent Fasting/Ramadan Fasting
Authors: Hafiz Arbab Sakandar
Abstract:
Time-restricted intermittent fasting (TRIF) impacts the host’s physiology and health. Plenty of health benefits have been reported for TRIF in animal models. However, limited studies have been conducted on humans, especially in underdeveloped economies. Here, we designed a study to investigate the impact of TRIF/Ramadan fasting (16:8) on the modulation of gut-microbiome structure, metabolic pathways, and predicted metabolites, and explored the correlations among them at different time points (during and after the month of Ramadan) in Pakistani expats living in China. We observed different trends of the Shannon-Wiener index in different subjects; however, all subjects showed substantial change in bacterial diversity with the progression of TRIF. Moreover, the changes in gut microbial structure by the end of TRIF were higher than at the beginning, and a significant difference was observed among individuals. Additionally, metabolic pathway analysis revealed that amino acid, carbohydrate and energy metabolism, glycan biosynthesis, and metabolism of cofactors and vitamins were significantly affected by TRIF. Pyridoxamine, glutamate, citrulline, arachidonic acid, and short chain fatty acids showed substantial differences at different time points based on the predicted metabolism. In conclusion, these results further our understanding of the key relationships among dietary intervention (TRIF) and gut microbiome structure and function. The preliminary results from the study demonstrate significant potential for elucidating the mechanisms underlying gut microbiome stability and enhancing the effectiveness of microbiome-tailored interventions among the Pakistani populace. Nonetheless, extensive and rigorous large-scale research on the Pakistani population is necessary to expound on the association between diet, gut microbiome, and overall health. Keywords: gut microbiome, health, fasting, functionality
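The Shannon-Wiener diversity index used above can be sketched as follows (the taxon counts in the example are illustrative, not from the study):

```python
import math

def shannon_wiener(counts):
    """Shannon-Wiener diversity index H' = -sum(p_i * ln p_i), where
    p_i is the relative abundance of taxon i. Higher H' means a more
    diverse (and more evenly distributed) community."""
    total = sum(counts)
    ps = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in ps)
```

For a community of k equally abundant taxa, H' = ln(k); a community dominated by a single taxon gives H' = 0.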
Procedia PDF Downloads 75
22493 Sensitivity, Specificity and Efficiency Real-Time PCR Using SYBR Green Method to Determine Porcine and Bovine DNA Using Specific Primer Cytochrome B Gene
Authors: Ahlam Inayatullah Badrul Munir, M. Husaini A. Rahman, Mohd Sukri Hassan
Abstract:
Real-time PCR is a molecular biology technique that is currently widely used in halal authentication services to differentiate between porcine and bovine DNA. The usefulness of the technique makes it very important for students or laboratory workers to learn how it can be run smoothly without failure. As with conventional PCR, real-time PCR also needs a DNA template, primers, polymerase enzyme, dNTPs, and buffer. The difference is that real-time PCR has an additional component, namely a fluorescent dye. The most commonly used fluorescent dye in real-time PCR is SYBR green. The purpose of this study was to find out how sensitive, specific and efficient the real-time PCR technique is when combined with the SYBR green method and specific primers for the CYT b gene. The results showed that the real-time PCR technique using SYBR green is capable of detecting porcine and bovine DNA at concentrations down to 0.0001 µl/ng. The level of efficiency for both types of DNA was 91% (90-110). Moreover, with the specific CYT b primers, the bovine primer detected only bovine DNA and the porcine primer detected only porcine DNA. So, from the study it could be concluded that the real-time PCR technique combined with the specific CYT b primers and the SYBR green method was sensitive, specific and efficient for detecting porcine and bovine DNA. Keywords: sensitivity, specificity, efficiency, real-time PCR, SYBR green, Cytochrome b, porcine DNA, bovine DNA
Procedia PDF Downloads 315
22492 Knowledge Creation Environment in the Iranian Universities: A Case Study
Authors: Mahdi Shaghaghi, Amir Ghaebi, Fariba Ahmadi
Abstract:
Purpose: The main purpose of the present research is to analyze the knowledge creation environment at an Iranian university (Alzahra University, a typical university in Iran) using a combination of the i-System and Ba models. This study is necessary for understanding the determinants of knowledge creation at such a university. Methodology: To carry out the present research, which is an applied study in terms of purpose, a descriptive survey method was used. In this study, a combination of the i-System and Ba models has been used to analyze the knowledge creation environment at Alzahra University. The i-System consists of 5 constructs: intervention (input), intelligence (process), involvement (process), imagination (process), and integration (output). The Ba environment has three pillars, namely the infrastructure, the agent, and the information. The integration of these two models resulted in 11 constructs, as follows: intervention (input); infrastructure-intelligence, agent-intelligence, information-intelligence (process); infrastructure-involvement, agent-involvement, information-involvement (process); infrastructure-imagination, agent-imagination, information-imagination (process); and integration (output). These 11 constructs were incorporated into a 52-statement questionnaire, and the validity and reliability of the questionnaire were examined and confirmed. The statistical population comprised the faculty members of Alzahra University (344 people), from which 181 participants were selected through stratified random sampling. Descriptive statistics, the binomial test, regression analysis, and structural equation modeling (SEM) were utilized to analyze the data. Findings: The research findings indicated that among the 11 research constructs, the levels of the intervention, information-intelligence, infrastructure-involvement, and agent-imagination constructs were average and not acceptable.
The levels of the infrastructure-intelligence and information-imagination constructs ranged from average to low. The levels of the agent-intelligence and information-involvement constructs were completely average. The level of the infrastructure-imagination construct was average to high and thus was considered acceptable. The levels of the agent-involvement and integration constructs were above average and were in a highly acceptable condition. Furthermore, the regression analysis results indicated that only two constructs, viz. the information-imagination and agent-involvement constructs, positively and significantly correlate with the integration construct. The results of the structural equation modeling also revealed that the intervention, intelligence, and involvement constructs are related to the integration construct with the complete mediation of imagination. Discussion and conclusion: The present research suggests that knowledge creation at Alzahra University relatively complies with the combination of the i-System and Ba models. Contrary to this combined model, the intervention, intelligence, and involvement constructs are not directly related to the integration construct, and this seems to have three implications: 1) the information sources are not frequently used to assess and identify the research biases; 2) problem finding is probably of less concern at the end of studies and at the time of assessment and validation; 3) the involvement of others has a smaller role in the summarization, assessment, and validation of the research. Keywords: i-System, Ba model, knowledge creation, knowledge management, knowledge creation environment, Iranian Universities
Procedia PDF Downloads 101
22491 Decomposition of the Discount Function Into Impatience and Uncertainty Aversion. How Neurofinance Can Help to Understand Behavioral Anomalies
Authors: Roberta Martino, Viviana Ventre
Abstract:
Intertemporal choices are choices under conditions of uncertainty in which the consequences are distributed over time. The Discounted Utility Model is the essential reference for describing the individual in the context of intertemporal choice. The model is based on the idea that the individual selects the alternative with the highest utility, which is calculated by multiplying the cardinal utility of the outcome, as if its reception were instantaneous, by the discount function, which determines a decrease in the utility value according to how far the actual reception of the outcome is from the moment the choice is made. Initially, the discount function was assumed to have an exponential trend, whose rate of decrease over time is constant, in line with the profile of a rational investor described by classical economics. Instead, empirical evidence called for the formulation of alternative, hyperbolic models that better represented the actual actions of the investor. Attitudes that do not comply with the principles of classical rationality are termed anomalous, i.e., difficult to rationalize and describe through normative models. The development of behavioral finance, which describes investor behavior through cognitive psychology, has shown that deviations from rationality are due to the bounded rationality of human beings. What this means is that when a choice is made in a very difficult and information-rich environment, the brain compromises between the cognitive effort required and the selection of an alternative. Moreover, the evaluation and selection of the alternative, and the collection and processing of information, are dynamics conditioned by systematic distortions of the decision-making process, namely the behavioral biases involving the individual's emotional and cognitive systems. In this paper we present an original decomposition of the discount function to investigate the psychological principles of hyperbolic discounting.
It is possible to decompose the curve into two components: the first component is responsible for the smaller decrease in the outcome's value as time increases and is related to the individual's impatience; the second component relates to the change in the direction of the tangent vector to the curve and indicates how much the individual perceives the indeterminacy of the future, indicating his or her aversion to uncertainty. This decomposition allows interesting conclusions to be drawn with respect to the concept of impatience and the emotional drives involved in decision-making. The contribution that neuroscience can make to decision theory and intertemporal choice theory is vast, as it would allow the description of the decision-making process as the relationship between the individual's emotional and cognitive factors. Neurofinance is a discipline that uses a multidisciplinary approach to investigate how the brain influences decision-making. Indeed, considering that the decision-making process is linked to the activity of the prefrontal cortex and amygdala, neurofinance can help determine the extent to which anomalous attitudes respect the principles of rationality. Keywords: impatience, intertemporal choice, neurofinance, rationality, uncertainty
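To make the contrast concrete (a standard textbook sketch, not the paper's original decomposition): the exponential discount function has a constant instantaneous discount rate, while the hyperbolic one has a rate that declines with delay, which is exactly the pattern behavioral evidence supports:

```python
import math

def exponential_discount(t, r):
    # Classical exponential discounting: constant instantaneous rate r
    return math.exp(-r * t)

def hyperbolic_discount(t, k):
    # Hyperbolic discounting: value declines fast at first, then slowly
    return 1.0 / (1.0 + k * t)

def instantaneous_rate_hyperbolic(t, k):
    # -D'(t)/D(t) = k / (1 + k*t): impatience is strong for immediate
    # delays and fades for remote ones, unlike the exponential case
    return k / (1.0 + k * t)
```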
Procedia PDF Downloads 129
22490 Feasibility of an Extreme Wind Risk Assessment Software for Industrial Applications
Authors: Francesco Pandolfi, Georgios Baltzopoulos, Iunio Iervolino
Abstract:
The impact of extreme winds on industrial assets and the built environment is gaining increasing attention from stakeholders, including the corporate insurance industry. This has led to a progressively more in-depth study of building vulnerability and fragility to wind. Wind vulnerability models are used in probabilistic risk assessment to relate a loss metric to an intensity measure of the natural event, usually a gust or a mean wind speed. In fact, vulnerability models can be integrated with the wind hazard, which consists of associating a probability to each intensity level in a time interval (e.g., by means of return periods) to provide an assessment of future losses due to extreme wind. This has also given impulse to world- and regional-scale wind hazard studies. Another approach often adopted for the probabilistic description of building vulnerability to wind is the use of fragility functions, which provide the conditional probability that selected building components will exceed certain damage states, given wind intensity. In fact, in wind engineering literature, it is more common to find structural system- or component-level fragility functions rather than wind vulnerability models for an entire building. Loss assessment based on component fragilities requires some logical combination rules that define the building’s damage state given the damage state of each component and the availability of a consequence model that provides the losses associated with each damage state. When risk calculations are based on numerical simulation of a structure’s behavior during extreme wind scenarios, the interaction of component fragilities is intertwined with the computational procedure. However, simulation-based approaches are usually computationally demanding and case-specific. In this context, the present work introduces the ExtReMe wind risk assESsment prototype Software, ERMESS, which is being developed at the University of Naples Federico II.
ERMESS is a wind risk assessment tool for insurance applications to industrial facilities, collecting a wide assortment of available wind vulnerability models and fragility functions to facilitate their incorporation into risk calculations based on in-built or user-defined wind hazard data. This software implements an alternative method for building-specific risk assessment based on existing component-level fragility functions and on a number of simplifying assumptions for their interactions. The applicability of this alternative procedure is explored by means of an illustrative proof-of-concept example, which considers four main building components, namely: the roof covering, roof structure, envelope wall and envelope openings. The application shows that, despite the simplifying assumptions, the procedure can yield risk evaluations that are comparable to those obtained via more rigorous building-level simulation-based methods, at least in the considered example. The advantage of this approach is shown to lie in the fact that a database of building component fragility curves can be put to use for the development of new wind vulnerability models to cover building typologies not yet adequately covered by existing works and whose rigorous development is usually beyond the budget of portfolio-related industrial applications. Keywords: component wind fragility, probabilistic risk assessment, vulnerability model, wind-induced losses
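One simple combination rule of the kind described above can be sketched as follows (this assumes independent components, which is an illustrative simplification and not necessarily the rule implemented in ERMESS):

```python
def building_damage_probability(component_fragilities, wind_speed):
    """Probability that at least one building component reaches its
    damage state at the given wind speed, assuming independent
    components: P = 1 - prod_i (1 - P_i(v)). Each fragility is a
    callable P_i(v) returning an exceedance probability in [0, 1]."""
    p_all_survive = 1.0
    for fragility in component_fragilities:
        p_all_survive *= 1.0 - fragility(wind_speed)
    return 1.0 - p_all_survive
```

With four components (roof covering, roof structure, envelope wall, envelope openings), the list would hold four fragility callables, and a consequence model would then map the resulting damage state to a loss.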
Procedia PDF Downloads 181
22489 Further Investigation of α+12C and α+16O Elastic Scattering
Authors: Sh. Hamada
Abstract:
The current work aims to study the rainbow-like structure observed in the elastic scattering of alpha particles on both 12C and 16O nuclei. We reanalyzed the experimental elastic scattering angular distribution data for the α+12C and α+16O nuclear systems at different energies using both the optical model and double folding potentials of different interaction models, such as CDM3Y1, DDM3Y1, CDM3Y6 and BDM3Y1. The potential created by the BDM3Y1 interaction model has the shallowest depth, which reflects the necessity of using a higher renormalization factor (Nr). Both the optical model and the double folding potentials of the different interaction models fairly reproduce the experimental data. Keywords: density distribution, double folding, elastic scattering, nuclear rainbow, optical model
Procedia PDF Downloads 237
22488 Surface Elevation Dynamics Assessment Using Digital Elevation Models, Light Detection and Ranging, GPS and Geospatial Information Science Analysis: Ecosystem Modelling Approach
Authors: Ali K. M. Al-Nasrawi, Uday A. Al-Hamdany, Sarah M. Hamylton, Brian G. Jones, Yasir M. Alyazichi
Abstract:
Surface elevation dynamics have always responded to disturbance regimes. Creating Digital Elevation Models (DEMs) to detect surface dynamics has led to the development of several methods, devices and data clouds. DEMs can provide accurate and quick results cost-efficiently, in comparison to conventional geomatics survey techniques. Nowadays, remote sensing datasets have become a primary source for creating DEMs, including LiDAR point clouds with GIS analytic tools. However, these data need to be tested for error detection and correction. This paper evaluates various DEMs from different data sources over time for Apple Orchard Island, a coastal site in southeastern Australia, in order to detect surface dynamics. Subsequently, 30 chosen locations were examined in the field to test the error of the DEM surface detection using high resolution global positioning systems (GPSs). Results show significant surface elevation changes on Apple Orchard Island. Accretion occurred on most of the island, while surface elevation loss due to erosion is limited to the northern and southern parts. Concurrently, the projected differential correction and validation method aimed to identify errors in the dataset. The resultant DEMs demonstrated a small error ratio (≤ 3%) for the gathered datasets when compared with the fieldwork survey using RTK-GPS. As modern modelling approaches need to become more effective and accurate, applying several tools to create different DEMs on a multi-temporal scale would allow easy predictions in time-cost frames with more comprehensive coverage and greater accuracy. Within the eco-geomorphic context, such insights into ecosystem dynamics detection at a coastal intertidal system would be valuable for assessing the accuracy of the predicted eco-geomorphic risk for conservation management sustainability.
This framework for evaluating the historical and current anthropogenic and environmental stressors on coastal surface elevation dynamics could be profitably applied worldwide. Keywords: DEMs, eco-geomorphic-dynamic processes, geospatial information science, remote sensing, surface elevation changes
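The surface-change detection described above amounts to differencing two co-registered DEMs, masking changes below the detection threshold. A minimal sketch (the function name and threshold handling are illustrative, not the study's exact procedure):

```python
import numpy as np

def dem_of_difference(dem_new, dem_old, error_threshold=0.0):
    """DEM of Difference (DoD): elevation change between two
    co-registered DEMs. Positive cells indicate accretion, negative
    cells erosion; changes smaller than the detection threshold
    (e.g. the ~3% error found against RTK-GPS) are masked to zero."""
    dod = np.asarray(dem_new, dtype=float) - np.asarray(dem_old, dtype=float)
    dod[np.abs(dod) < error_threshold] = 0.0
    return dod
```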
Procedia PDF Downloads 267
22487 Comparison of Mean Monthly Soil Temperature at (5 and 30 cm) Depths at Compton Experimental Site, West Midlands (UK), between 1976-2008
Authors: Aminu Mansur
Abstract:
A comparison of soil temperatures at 5 and 30 cm depths at a research site over the period 1976-2008 was carried out. Based on the statistical analysis of a database of 12,045 days of individual soil temperature measurements in the sandy-loam (Salwick series) soils, the mean soil temperature revealed a statistically significant increase of about -1.1 to 10.9°C at 5 cm depth in 2008 compared to 1976. Similarly, soil temperature at 30 cm depth increased by -0.1 to 2.1°C in 2008 compared to 1976. Although a rapid increase in soil temperature at all depths was observed during that period, a thorough assessment of these conditions suggested that the soil temperature at 5 cm depth is progressively increasing over time. A typical example of those increases in soil temperature was provided for agriculture, where the Miscanthus (elephant grass) plants that grow within the study area are adversely affected by the mean soil temperature increase. The study concluded that these observations contribute to the growing mass of evidence of global warming and knowledge of secular trends. Therefore, there was a statistically significant increase in soil temperature at the Compton Experimental Site between 1976 and 2008. Keywords: soil temperature, warming trend, environment science, climate and atmospheric sciences
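As a minimal sketch (not the study's statistical method; the years and values below are synthetic), a warming trend of this kind can be quantified as the ordinary least-squares slope of annual mean soil temperature against year:

```python
def linear_trend(years, values):
    """Ordinary least-squares slope (degrees C per year) of a
    temperature series; a simple way to quantify a secular warming
    trend such as the one reported for 1976-2008."""
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(values) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, values))
    sxx = sum((x - mean_x) ** 2 for x in years)
    return sxy / sxx
```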
Procedia PDF Downloads 298
22486 A Machine Learning-based Study on the Estimation of the Threat Posed by Orbital Debris
Authors: Suhani Srivastava
Abstract:
This research delves into the classification of orbital debris through machine learning (ML): it categorizes the intensity of the threat orbital debris poses through multiple ML models to gain insight into effectively estimating the danger specific orbital debris can pose to future space missions. As the space industry expands, orbital debris becomes a growing concern in Low Earth Orbit (LEO) because it can potentially endanger space missions due to increased orbital debris pollution. Moreover, detecting orbital debris and identifying its characteristics has become a major concern in Space Situational Awareness (SSA), and prior methods relying solely on physics can become impractical in the face of the growing issue. Thus, this research focuses on approaching orbital debris concerns through machine learning, an efficient and more convenient alternative, to detect the potential threat certain orbital debris poses. We found that the logistic regression model performed best, with 98% accuracy, and this research provides insight into the accuracies of specific machine learning models when classifying orbital debris. Our work would help provide space shuttle manufacturers with guidelines for mitigating risks, and it would help aerospace engineers identify the kinds of protection that should be incorporated into objects traveling in LEO through the predictions our models provide. Keywords: aerospace, orbital debris, machine learning, space, space situational awareness, NASA
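A minimal logistic-regression sketch of the kind of binary threat classifier described above (trained by gradient descent; the feature layout and the tiny synthetic data are illustrative, not the study's actual pipeline or features):

```python
import numpy as np

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Minimal logistic-regression classifier trained by gradient
    descent on the log-loss, e.g. to label debris objects as high/low
    threat from numeric features (size, altitude, velocity --
    illustrative names only)."""
    X = np.c_[np.ones(len(X)), X]          # prepend a bias column
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)   # average log-loss gradient
    return w

def predict(w, X):
    X = np.c_[np.ones(len(X)), X]
    return (1.0 / (1.0 + np.exp(-X @ w)) >= 0.5).astype(int)
```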
Procedia PDF Downloads 21