Search results for: circular metrics
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1292

392 A Comparative Study of Various Control Methods for Rendezvous of a Satellite Couple

Authors: Hasan Basaran, Emre Unal

Abstract:

Formation flying of satellites is a mission that involves relative position keeping among the different satellites of a constellation. In this study, different control algorithms are compared with one another in terms of velocity increment (ΔV) and tracking error. Various control methods, covering both continuous and impulsive approaches, are implemented and tested for satellites flying in low Earth orbit. Feedback linearization, sliding mode control, and model predictive control are designed and compared with an impulsive feedback law based on mean orbital elements. The feedback linearization and sliding mode control approaches share an identical mathematical model that includes second-order Earth oblateness effects. The model predictive control, on the other hand, does not include any perturbations and assumes a circular chief orbit. The comparison is carried out for four different initial errors and evaluated using velocity increment, root-mean-square error, maximum steady-state error, and settling time. It was observed that the impulsive law consumed the least ΔV while producing the highest maximum steady-state error. The continuous control laws, however, consumed higher velocity increments and produced lower tracking errors. Finally, an inversely proportional relationship between tracking error and velocity increment was established.
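The comparison metrics above can be sketched as two small helpers; this is an illustrative sketch only (the impulse and error values used in any test are hypothetical, not from the study):

```python
import math

def total_delta_v(impulses):
    """Sum the magnitudes of a list of 3D velocity impulses (m/s)."""
    return sum(math.sqrt(dx * dx + dy * dy + dz * dz) for dx, dy, dz in impulses)

def rms_tracking_error(errors):
    """Root-mean-square of scalar position tracking errors (m)."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))
```

A continuous law would feed many small impulses (one per control step) into `total_delta_v`, while an impulsive law feeds only a few large ones.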

Keywords: chief-deputy satellites, feedback linearization, follower-leader satellites, formation flight, fuel consumption, model predictive control, rendezvous, sliding mode

Procedia PDF Downloads 89
391 Emptiness Downlink and Uplink Proposal Using Space-Time Equation Interpretation

Authors: Preecha Yupapin and Somnath

Abstract:

From the emptiness, vibration induces the fractal, and the strings are formed, from which the first elementary particle groups, known as quarks, were established. The neutrino and the electron are created by them. More elementary particles and life are formed by organic and inorganic substances. The universe is constructed, from which the multi-universe has formed in the same way. The model assumes that intense energy has escaped through the singularity cone from the multi-universes. Initially, the single mass energy is confined, after which it is disturbed by the space-time distortion. It splits into an entangled pair, where circular motion is established. One side of the entangled pair is considered, where the fusion energy of the strong coupling force has formed. The growth of the fusion energy exhibits quantum physics phenomena, in which the particle moves along the circumference at a speed faster than light. This introduces the wave-particle duality aspect, which saturates at the stopping point. It re-runs again and again without limitation, which is to say that the universe has been created and is expanding. The Bose-Einstein condensate (BEC) is released through the singularity by the wormhole and is condensed to become a mass comparable to the Sun's size, which will circulate (orbit) around the Sun. The uncertainty principle is then applied, in which breath control follows the uncertainty condition ∆p∆x=∆E∆t~ℏ. The flow of air in and out of the body via the nose applies momentum and energy control with respect to movement and time, the target being that the distortion of space-time will vanish. Finally, the body is clean and can go to the next procedure, where the mind can escape from the body at the speed of light. However, the borderline between contemplation and being an Arahant is a vacuum, which will be explained.

Keywords: space-time, relativity, enlightenment, emptiness

Procedia PDF Downloads 59
390 Sequence Polymorphism and Haplogroup Distribution of Mitochondrial DNA Control Regions HVS1 and HVS2 in a Southwestern Nigerian Population

Authors: Ogbonnaya O. Iroanya, Samson T. Fakorede, Osamudiamen J. Edosa, Hadiat A. Azeez

Abstract:

The human mitochondrial DNA (mtDNA) is a circular DNA molecule of about 17 kbp found within the mitochondria, containing a smaller segment of about 1,200 bp known as the control region. Knowledge of mtDNA variation within populations has been employed in forensic and molecular anthropology studies. The study was aimed at investigating the polymorphic nature of the two hypervariable segments (HVS) of the mtDNA control region, HVS1 and HVS2, and at determining the haplogroup distribution among individuals resident in Lagos, Southwestern Nigeria. Peripheral blood samples were obtained from sixty maternally unrelated individuals, followed by DNA extraction and amplification of the extracted DNA using primers specific for the regions under investigation. DNA amplicons were sequenced, and the sequence data were aligned and compared to the revised Cambridge Reference Sequence (rCRS; GenBank accession number NC_012920.1) using BioEdit software. The results showed 61 and 52 polymorphic nucleotide positions for HVS1 and HVS2, respectively. While a total of three indel mutations were recorded for HVS1, there were seven for HVS2. Transition mutations predominated among the nucleotide changes observed in the study. Genetic diversity (GD) values for HVS1 and HVS2 were estimated to be 84.21% and 90.4%, respectively, while the random match probability was 0.17% for HVS1 and 0.89% for HVS2. The study also revealed mixed haplogroups specific to the African (L1-L3) and Eurasian (U and H) lineages. The new polymorphic sites obtained from the study are promising for human identification purposes.
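The genetic diversity and random match probability figures quoted above follow standard forensic formulas (Nei's unbiased haplotype diversity and the sum of squared haplotype frequencies); a minimal sketch with hypothetical haplotype counts:

```python
def random_match_probability(counts):
    """RMP: probability that two random individuals share a haplotype,
    computed as the sum of squared haplotype frequencies."""
    n = sum(counts)
    return sum((c / n) ** 2 for c in counts)

def genetic_diversity(counts):
    """Nei's unbiased haplotype diversity: (n/(n-1)) * (1 - sum p_i^2)."""
    n = sum(counts)
    return (n / (n - 1)) * (1 - random_match_probability(counts))
```

With the six hypothetical samples split 3/2/1 across three haplotypes, RMP is about 0.39 and GD about 0.73; a highly polymorphic region like HVS2 drives RMP down and GD up, as in the study's 0.89% and 90.4% figures.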

Keywords: hypervariable region, indels, mitochondrial DNA, polymorphism, random match probability

Procedia PDF Downloads 107
389 Measuring the Influence of Functional Proximity on Environmental Urban Performance via IMM: Four Study Cases in Milan

Authors: Massimo Tadi, M. Hadi Mohammad Zadeh, Ozge Ogut

Abstract:

Although the way cities' forms are structured has been studied, more effort is needed on systemic comprehension and evaluation of urban morphology through quantitative metrics able to describe the performance of a city in relation to its formal properties. More research is required in this direction in order to better describe the characteristics of urban form and their impact on the environmental performance of cities, and to increase their sustainability stewardship. With the aim of developing a better understanding of the built environment's systemic structure, this paper presents a holistic methodology for studying the behavior of the built environment and investigates methods for measuring the effect of urban structure on environmental performance. This goal is pursued through an inquiry into the morphological components of urban systems and the complex relationships between them. In particular, this paper focuses on proximity, understood as the proximity between different land uses, a concept with which the Integrated Modification Methodology (IMM) explains how land-use allocation might affect the choice of mobility in neighborhoods and, especially, encourage or discourage non-motorized mobility. This paper uses proximity to demonstrate that structural attributes can be quantifiably related to performance behavior in the city. The target is to derive a mathematical pattern from the structural elements and correlate it directly with urban performance indicators concerned with environmental sustainability. The paper presents some results of this rigorous investigation of urban proximity and its correlation with performance indicators in four different areas of the city of Milan, each characterized by different morphological features.
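The abstract does not state which correlation statistic is used; as an illustration of correlating a proximity metric with a performance indicator across study areas, a plain Pearson correlation could look like this (the data in any test are hypothetical):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between a structural metric (e.g. land-use
    proximity) and a performance indicator, sampled over study areas."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

A value near +1 or -1 would support the claim that the structural attribute relates quantifiably to the performance behavior.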

Keywords: built environment, ecology, sustainable indicators, sustainability, urban morphology

Procedia PDF Downloads 154
388 Role of Biomaterial Surface Nanotopography on Protein Unfolding and Immune Response

Authors: Rahul Madathiparambil Visalakshan, Alex Cavallaro, John Hayball, Krasimir Vasilev

Abstract:

The role of biomaterial surface nanotopography on fibrinogen adsorption and unfolding, and the subsequent immune response, was studied. Inconsistent topography and varying chemical functionalities, along with a lack of reproducibility, pose a challenge in determining the specific effects of nanotopography or chemistry on proteins and cells. It is important to have a well-defined nanotopography with a homogeneous chemistry in order to study the real effect of nanotopography on biological systems. Therefore, we developed a technique that can produce well-defined and highly reproducible topography to identify the role of specific roughness, size, height, and density in the presence of homogeneous chemical functionality. Using plasma polymerisation of oxazoline monomers and immobilized gold nanoparticles, we created surfaces with an equal number density of nanoparticles of different sizes. These surfaces were used to study the role of surface nanotopography and the interplay of surface chemistry on proteins and immune cells. The effect of nanotopography on fibrinogen adsorption was investigated using Quartz Crystal Microbalance with Dissipation (QCM-D) and micro BCA assays. The mass of fibrinogen adsorbed on the surface increased with increasing size of the nanotopography. Protein structural changes upon adsorption to the nano-rough surfaces were studied using circular dichroism spectroscopy. Fibrinogen unfolding varied depending on the specific nanotopography of the surfaces. It was revealed that the in vitro immune response to the nanotopography surfaces changed due to this protein unfolding.
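QCM frequency shifts are commonly converted to adsorbed mass with the Sauerbrey relation for thin, rigid films; whether this study used that exact conversion is an assumption, and the constant 17.7 ng cm⁻² Hz⁻¹ applies to a standard 5 MHz crystal:

```python
def sauerbrey_mass(delta_f_hz, overtone_n, c_ng_cm2_hz=17.7):
    """Areal mass (ng/cm^2) from a QCM frequency shift via the Sauerbrey
    equation, delta_m = -C * delta_f / n, valid for thin, rigid films.
    delta_f_hz is negative for adsorption; n is the overtone number."""
    return -c_ng_cm2_hz * delta_f_hz / overtone_n
```

Note that for soft, dissipative protein layers (high dissipation shift in QCM-D) the Sauerbrey estimate underestimates the true mass, which is why it is hedged here as a first approximation.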

Keywords: biomaterial inflammation, protein and cell responses, protein unfolding, surface nanotopography

Procedia PDF Downloads 166
387 Prototype Development of Knitted Buoyant Swimming Vest for Children

Authors: Nga-Wun Li, Chu-Po Ho, Kit-Lun Yick, Jin-Yun Zhou

Abstract:

The use of buoyant vests worn with swimsuits can develop children's confidence in the water, particularly for novice swimmers. Consequently, parents often purchase buoyant swimming vests for their children to reduce their anxiety in the water. Although conventional buoyant swimming vests provide buoyancy to the wearer, their bulkiness and hardness make children feel uncomfortable and unwilling to wear them. This study aimed to apply inlay knitting technology to design a new functional buoyant swimming vest for children. The prototype involves a shell and a buoyant knitted layer, the main medium providing buoyancy. Polypropylene yarn and 6.4 mm expandable polyethylene (EPE) foam were fabricated in full needle stitch with inlay knitting technology and then linked by sewing to form the buoyant layer. The shell of the knitted buoyant vest was made of polypropylene circular knitted fabric. The structure of the knitted fabrics makes the buoyant swimsuit inherently stretchable, and the arrangement of the inlaid material was designed around body movement to improve the ease with which the swimmer moves. Further, the shoulder seam is placed at the back to minimize irritation of the wearer. Apart from maintaining the buoyant function, the prototype reduces bulkiness and improves softness relative to the conventional buoyant swimming vest by taking advantage of a knitted garment. The results of this study are significant for the development of buoyant swimming vests in both the textile and the fast-growing sportswear industry.

Keywords: knitting technology, buoyancy, inlay, swimming vest, functional garment

Procedia PDF Downloads 104
386 A Survey of Skin Cancer Detection and Classification from Skin Lesion Images Using Deep Learning

Authors: Joseph George, Anne Kotteswara Roa

Abstract:

Skin disease is one of the most common kinds of health issues faced by people nowadays. Skin cancer (SC) is one of them, and its detection relies on skin biopsy outputs and the expertise of doctors, which consumes time and can produce inaccurate results. Detecting skin cancer at an early stage is a challenging task, yet the disease easily spreads to the whole body and increases the mortality rate, while it is curable when detected early. In order to classify skin cancer correctly and accurately, the critical task is skin cancer identification and classification, which is largely based on disease features such as shape, size, color, and symmetry. Many skin diseases share similar characteristics, which makes it challenging to select important features from skin cancer dataset images. Hence, an automated skin cancer detection and classification framework is required to improve diagnostic accuracy and to cope with the scarcity of human experts. Recently, deep learning techniques such as the convolutional neural network (CNN), deep belief network (DBN), artificial neural network (ANN), recurrent neural network (RNN), and long short-term memory (LSTM) have been widely used for the identification and classification of skin cancers. This survey reviews different DL techniques for skin cancer identification and classification. Performance metrics such as precision, recall, accuracy, sensitivity, specificity, and F-measure are used to evaluate the effectiveness of SC identification using DL techniques. By using these DL techniques, classification accuracy increases along with a mitigation of computational complexity and time consumption.
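The performance measures listed above follow directly from binary confusion-matrix counts; a minimal sketch (the counts in any test are hypothetical, not from a surveyed paper):

```python
def classification_metrics(tp, fp, fn, tn):
    """Standard evaluation metrics from binary confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                 # identical to sensitivity
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    f_measure = 2 * precision * recall / (precision + recall)
    return {"precision": precision, "recall": recall,
            "specificity": specificity, "accuracy": accuracy,
            "f1": f_measure}
```

For multi-class skin lesion datasets these are typically computed per class and then macro- or micro-averaged.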

Keywords: skin cancer, deep learning, performance measures, accuracy, datasets

Procedia PDF Downloads 118
385 Influence of Travel Time Reliability on Elderly Drivers Crash Severity

Authors: Ren Moses, Emmanuel Kidando, Eren Ozguven, Yassir Abdelrazig

Abstract:

Although older drivers (defined as those aged 65 and above) are less involved in speeding, alcohol use, and night driving, they are more vulnerable to severe crashes. The major factors contributing to these severe crashes include frailty and medical complications. Several studies have evaluated the factors contributing to crash severity. However, few studies have established the impact of travel time reliability (TTR) on road safety. In particular, the impact of TTR on senior adults, who face several challenges in driving, including hearing difficulties, declining processing skills, and cognitive problems, is not well established. Therefore, this study focuses on determining the possible impacts of TTR on traffic safety with a focus on elderly drivers. Historical travel speed data from freeway links in the study area were used to calculate travel time and the associated TTR metrics, namely the planning time index, the buffer index, the standard deviation of the travel time, and the probability of congestion. Four years of information on crashes occurring on these freeway links was acquired. A binary logit model estimated using the Markov Chain Monte Carlo (MCMC) sampling technique was used to evaluate variables that could influence elderly crash severity. Preliminary results of the analysis suggest that TTR is statistically significant in affecting the severity of a crash involving an elderly driver. The results suggest that a one-unit increase in the probability of congestion reduces the likelihood of a severe elderly crash by nearly 22%. These findings will enhance the understanding of TTR and its impact on elderly crash severity.
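The planning time index and buffer index named above have widely used definitions based on the 95th-percentile travel time; a minimal sketch under those standard definitions, with hypothetical travel times:

```python
def percentile(values, q):
    """Linear-interpolation percentile (q in [0, 100]) of a list of numbers."""
    s = sorted(values)
    k = (len(s) - 1) * q / 100.0
    lo = int(k)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (k - lo)

def planning_time_index(travel_times, free_flow_time):
    """95th-percentile travel time relative to the free-flow travel time."""
    return percentile(travel_times, 95) / free_flow_time

def buffer_index(travel_times):
    """Extra (buffer) time a traveller must budget, relative to the mean."""
    mean = sum(travel_times) / len(travel_times)
    return (percentile(travel_times, 95) - mean) / mean
```

Both indices grow as travel times become less reliable, which is the variability the crash-severity model tests against.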

Keywords: highway safety, travel time reliability, elderly drivers, traffic modeling

Procedia PDF Downloads 480
384 Tractography Analysis of the Evolutionary Origin of Schizophrenia

Authors: Asmaa Tahiri, Mouktafi Amine

Abstract:

A substantial amount of traditional medical research has been devoted to managing and treating mental disorders. At present, to our best knowledge, a fundamental understanding of the underlying causes of the majority of psychological disorders still needs to be explored to inform early diagnosis, symptom management, and treatment. The emerging field of evolutionary psychology is a promising prospect for addressing the origin of mental disorders, potentially leading to more effective treatments. Schizophrenia, as a topical mental disorder, has been linked to the evolutionary adaptation of the human brain, represented in the brain connectivity and asymmetry directly linked to humans' higher cognition, in contrast to other primates, which are our closest living representation of the structure and connectivity of our earliest common African ancestors. As proposed in the evolutionary psychology literature, the pathophysiology of schizophrenia is expressed in, and directly linked to, altered connectivity between the Hippocampal Formation (HF) and the Dorsolateral Prefrontal Cortex (DLPFC). This paper presents the results of tractography analysis using multiple open-access Diffusion Weighted Imaging (DWI) datasets of healthy subjects, subjects affected by schizophrenia, and primates to illustrate the relevance of the connectivity between the aforementioned brain regions and the underlying evolutionary changes in the human brain. Deterministic fiber tracking and streamline analysis were used to generate connectivity matrices from the DWI datasets, which were overlaid to compute distances and highlight disconnectivity patterns in conjunction with other fiber tracking metrics: Fractional Anisotropy (FA), Mean Diffusivity (MD), and Radial Diffusivity (RD).
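FA, MD, and RD are standard scalar summaries of the three diffusion-tensor eigenvalues; a minimal sketch of the textbook formulas (the eigenvalues in any test are hypothetical):

```python
import math

def dti_metrics(l1, l2, l3):
    """Scalar diffusion metrics from the diffusion-tensor eigenvalues
    (l1 >= l2 >= l3), in the eigenvalues' units (e.g. mm^2/s)."""
    md = (l1 + l2 + l3) / 3.0                      # mean diffusivity
    rd = (l2 + l3) / 2.0                           # radial diffusivity
    num = (l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    fa = math.sqrt(1.5 * num / den)                # fractional anisotropy, 0..1
    return md, rd, fa
```

FA approaches 1 for strongly directional diffusion (coherent white-matter tracts) and 0 for isotropic diffusion, which is why it serves as a connectivity-quality metric alongside streamline counts.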

Keywords: tractography, evolutionary psychology, schizophrenia, brain connectivity

Procedia PDF Downloads 54
383 Design of New Sustainable Pavement Concrete: An Experimental Road

Authors: Manuel Rosales, Francisco Agrela, Julia Rosales

Abstract:

The development of concrete pavements that include recycled waste with active and predictive safety features is a possible approach to mitigating the harmful impacts of the construction industry, such as CO2 emissions and the consumption of energy and natural resources during the construction and maintenance of road infrastructure. This study establishes the basis for formulating new smart materials for concrete pavements and for carrying out the in-situ implementation of an experimental road section. To this end, a comprehensive recycled pavement solution is developed that combines an eco-hybrid cement made with 25% mixed recycled aggregate powder (pMRA) and biomass bottom ash powder (pBBA) with a 30% substitution of natural aggregate by MRA and BBA. This work is grouped into three lines: 1) construction materials with high rates of use of recycled material, 2) production processes with efficient consumption of natural resources and use of cleaner energies, and 3) implementation and monitoring of a road section with sustainable concrete made from waste. The objective of this study is to ensure satisfactory rheology, mechanical strength, durability, and CO2 capture of pavement concrete manufactured from waste, as well as its subsequent application in a real road section and its monitoring to establish the optimal range of recycled material. The concretes developed during this study are aimed at the reuse of waste, promoting the circular economy. For this purpose, and after carrying out different tests in the laboratory, three mixtures were established to be applied on the experimental road.

Keywords: biomass bottom ash, construction and demolition waste, recycled concrete pavements, full-scale experimental road, monitoring

Procedia PDF Downloads 60
382 Ultrasound-Assisted Extraction of Bioactive Compounds from Cocoa Shell and Their Encapsulation in Gum Arabic and Maltodextrin: A Technology to Produce Functional Food Ingredients

Authors: Saeid Jafari, Khursheed Ahmad Sheikh, Randy W. Worobo, Kitipong Assatarakul

Abstract:

In this study, the extraction of cocoa shell powder (CSP) was optimized, and the optimized extracts were spray-dried for encapsulation purposes. Temperature (45-65 °C), extraction time (30-60 min), and ethanol concentration (60-100%) were the extraction parameters. The response surface methodology analysis revealed that the model was significant (p ≤ 0.05) in the interactions between all variables (total phenolic compound, total flavonoid content, and antioxidant activity as measured by the 2,2-diphenyl-1-picrylhydrazyl (DPPH) and ferric reducing antioxidant power (FRAP) assays), with the lack-of-fit test for the model being insignificant (p > 0.05). Temperature (55 °C), time (45 min), and ethanol concentration (60%) were found to be the optimal extraction conditions. For spray-drying encapsulation, some quality metrics (e.g., water solubility, water activity) were insignificant (p > 0.05). The microcapsules were found to be spherical in shape under a scanning electron microscope. Thermogravimetric and differential thermogravimetric measurements of the microcapsules revealed nearly identical results. The gum arabic + maltodextrin microcapsule (GMM) showed potential antibacterial (zone of inhibition: 11.50 mm; lower minimum inhibitory concentration: 1.50 mg/mL) and antioxidant (DPPH: 1063 mM trolox/100 g dry wt.) activities (p ≤ 0.05). In conclusion, the microcapsules in this study, particularly GMM, are promising antioxidant and antibacterial agents to be fortified as functional food ingredients for the production of nutraceutical foods with health-promoting properties.

Keywords: functional foods, cocoa shell powder, antioxidant activity, encapsulation, extraction

Procedia PDF Downloads 49
381 Leveraging Mobile Apps for Citizen-Centric Urban Planning: Insights from Tajawob Implementation

Authors: Alae El Fahsi

Abstract:

This study explores the ‘Tajawob’ app's role in urban development, demonstrating how mobile applications can empower citizens and facilitate urban planning. Tajawob serves as a digital platform for community feedback, engagement, and participatory governance, addressing urban challenges through innovative tech solutions. This research synthesizes data from a variety of sources, including user feedback, engagement metrics, and interviews with city officials, to assess the app’s impact on citizen participation in urban development in Morocco. By integrating advanced data analytics and user experience design, Tajawob has bridged the communication gap between citizens and government officials, fostering a more collaborative and transparent urban planning process. The findings reveal a significant increase in civic engagement, with users actively contributing to urban management decisions, thereby enhancing the responsiveness and inclusivity of urban governance. Challenges such as digital literacy, infrastructure limitations, and privacy concerns are also discussed, providing a comprehensive overview of the obstacles and opportunities presented by mobile app-based citizen engagement platforms. The study concludes with strategic recommendations for scaling the Tajawob model to other contexts, emphasizing the importance of adaptive technology solutions in meeting the evolving needs of urban populations. This research contributes to the burgeoning field of smart city innovations, offering key insights into the role of digital tools in facilitating more democratic and participatory urban environments.

Keywords: smart cities, digital governance, urban planning, strategic design

Procedia PDF Downloads 50
380 Buffer Allocation and Traffic Shaping Policies Implemented in Routers Based on a New Adaptive Intelligent Multi Agent Approach

Authors: M. Taheri Tehrani, H. Ajorloo

Abstract:

In this paper, an intelligent multi-agent framework is developed for each router, in which agents have two vital functionalities, traffic shaping and buffer allocation, and are positioned at the ports of the routers. With the traffic-shaping functionality, agents shape the forwarded traffic by dynamic, real-time allocation of the token generation rate in a Token Bucket algorithm, and with the buffer-allocation functionality, agents share their buffer capacity with each other based on their needs and the conditions of the network. This dynamic and intelligent framework gives some ports the opportunity to work better under bursty and busier conditions. The agents act intelligently based on a Reinforcement Learning (RL) algorithm and consider the effective parameters in their decision process. As RL is limited in how many parameters it can consider in its decision process due to the volume of calculations, we utilize our novel method, which applies Principal Component Analysis (PCA) to the RL inputs and gives the algorithm the high-dimensional ability to consider as many parameters as needed in its decision process. When this implementation is compared to our previous work, where traffic shaping was done without any sharing or dynamic allocation of buffer size for each port, lower packet drop across the whole network, and specifically in the source routers, can be seen. These methods are implemented in our previously proposed intelligent simulation environment to allow a better comparison of the performance metrics. The results obtained from this simulation environment show an efficient and dynamic utilization of resources in terms of bandwidth and the buffer capacities preallocated to each port.
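The Token Bucket algorithm at the core of the shaping agents can be sketched as follows; the `rate` and `capacity` values are hypothetical, and in the paper's framework the agents would adjust `rate` at run time via RL rather than fix it:

```python
class TokenBucket:
    """Minimal token-bucket traffic shaper. Timestamps are passed in
    explicitly so the behaviour is deterministic; a real shaper would
    read a clock."""

    def __init__(self, rate, capacity):
        self.rate = rate          # tokens generated per second
        self.capacity = capacity  # maximum burst size, in tokens
        self.tokens = capacity
        self.last = 0.0

    def allow(self, size, now):
        """Return True (and spend tokens) if a packet of `size` tokens
        may be forwarded at time `now`; otherwise the packet waits."""
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False
```

Raising `rate` lets a busy port drain its queue faster, which is exactly the knob the RL agents tune per port.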

Keywords: principal component analysis, reinforcement learning, buffer allocation, multi-agent systems

Procedia PDF Downloads 504
379 Cooperative Cross Layer Topology for Concurrent Transmission Scheduling Scheme in Broadband Wireless Networks

Authors: Gunasekaran Raja, Ramkumar Jayaraman

Abstract:

In this paper, we consider a CCL-N (Cooperative Cross Layer Network) topology based on the cross-layer (both centralized and distributed) environment to form network communities. Various performance metrics related to IEEE 802.16 networks are discussed in designing the CCL-N topology. In the CCL-N topology, nodes are classified as master nodes (Master Base Station [MBS]) and serving nodes (Relay Station [RS]). Node communities are organized based on networking terminology. Based on the CCL-N topology, various simulation analyses for both transparent and non-transparent relays are tabulated, and throughput efficiency is calculated. The weighted load-balancing problem plays a challenging role in IEEE 802.16 networks. The CoTS (Concurrent Transmission Scheduling) scheme is formulated in terms of three aspects: transmission mechanisms based on identical communities, different communities, and identical node communities. The CoTS scheme helps in identifying the weighted load-balancing problem. Based on the analytical results, the modularity value is inversely proportional to the error value. The modularity value plays a key role in solving the CoTS problem based on hop count. The transmission mechanism for identical node communities has no impact, since the modularity value is the same for all network groups. In this paper, these three community aspects based on the modularity value, which help in solving the weighted load-balancing and CoTS problems, are discussed.
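The abstract does not give its modularity formula; the widely used Newman-Girvan definition, Q = sum over communities of (L_c/m - (d_c/2m)^2), is sketched here as a plausible stand-in for scoring node communities:

```python
def modularity(edges, community):
    """Newman-Girvan modularity of an undirected graph, given an edge
    list and a node -> community mapping. L_c counts intra-community
    edges; d_c sums the degrees of a community's nodes; m = |edges|."""
    m = len(edges)
    intra = {}   # L_c per community
    degree = {}  # d_c per community
    for u, v in edges:
        cu, cv = community[u], community[v]
        degree[cu] = degree.get(cu, 0) + 1
        degree[cv] = degree.get(cv, 0) + 1
        if cu == cv:
            intra[cu] = intra.get(cu, 0) + 1
    return sum(intra.get(c, 0) / m - (d / (2 * m)) ** 2
               for c, d in degree.items())
```

Two disconnected triangles partitioned into their natural communities score Q = 0.5; a higher Q means denser intra-community links, consistent with the paper's claim that higher modularity tracks lower error.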

Keywords: cross layer network topology, concurrent scheduling, modularity value, network communities and weighted load balancing

Procedia PDF Downloads 254
378 Comparison of Different Machine Learning Algorithms for Solubility Prediction

Authors: Muhammet Baldan, Emel Timuçin

Abstract:

Molecular solubility prediction plays a crucial role in various fields, such as drug discovery, environmental science, and material science. In this study, we compare the performance of five machine learning algorithms for predicting molecular solubility using the AqSolDB dataset: linear regression, support vector machines (SVM), random forests, gradient boosting machines (GBM), and neural networks. The dataset consists of 9,981 data points with their corresponding solubility values. MACCS keys (166 bits), RDKit properties (20 properties), and structural properties (3) are extracted as features for every SMILES representation in the dataset, giving a total of 189 features for training and testing for every molecule. Each algorithm is trained on a subset of the dataset and evaluated using accuracy scores. Additionally, the computational time for training and testing is recorded to assess the efficiency of each algorithm. Our results demonstrate that the random forest model outperformed the other algorithms in terms of predictive accuracy, achieving a 0.93 accuracy score. Gradient boosting machines and neural networks also exhibit strong performance, closely followed by support vector machines. Linear regression, while simpler in nature, demonstrates competitive performance but with slightly higher errors compared to the ensemble methods. Overall, this study provides valuable insights into the performance of machine learning algorithms for molecular solubility prediction, highlighting the importance of algorithm selection in achieving accurate and efficient predictions in practical applications.
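The evaluation loop (train each model on a shared split, record accuracy and wall-clock time) can be sketched with toy stand-in models; the study itself used the named algorithms on 189 molecular features, and the classifiers below are hypothetical placeholders:

```python
import time

def evaluate(models, train, test):
    """Time and score each model on a shared train/test split.
    `models` maps a name to a factory fit(train) -> predict(x)."""
    results = {}
    for name, fit in models.items():
        start = time.perf_counter()
        predict = fit(train)
        correct = sum(predict(x) == y for x, y in test)
        results[name] = (correct / len(test), time.perf_counter() - start)
    return results

# Two toy classifiers standing in for the real models in the comparison.
def majority_class(train):
    labels = [y for _, y in train]
    top = max(set(labels), key=labels.count)
    return lambda x: top

def nearest_neighbour(train):
    return lambda x: min(train, key=lambda t: abs(t[0] - x))[1]
```

The same harness shape lets the accuracy/runtime trade-off between, say, linear regression and a random forest be tabulated side by side.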

Keywords: random forest, machine learning, comparison, feature extraction

Procedia PDF Downloads 29
377 A Comparative Time-Series Analysis and Deep Learning Projection of Innate Radon Gas Risk in Canadian and Swedish Residential Buildings

Authors: Selim M. Khan, Dustin D. Pearson, Tryggve Rönnqvist, Markus E. Nielsen, Joshua M. Taron, Aaron A. Goodarzi

Abstract:

Accumulation of radioactive radon gas in indoor air poses a serious risk to human health by increasing the lifetime risk of lung cancer and is classified by IARC as a category one carcinogen. Radon exposure risks are a function of geologic, geographic, design, and human behavioural variables and can change over time. Using time series and deep machine learning modelling, we analyzed long-term radon test outcomes as a function of building metrics from 25,489 Canadian and 38,596 Swedish residential properties constructed between 1945 and 2020. While Canadian and Swedish properties built between 1970 and 1980 are comparable (96–103 Bq/m³), innate radon risks subsequently diverge, rising in Canada and falling in Sweden such that 21st Century Canadian houses show 467% greater average radon (131 Bq/m³) relative to Swedish equivalents (28 Bq/m³). These trends are consistent across housing types and regions within each country. The introduction of energy efficiency measures within Canadian and Swedish building codes coincided with opposing radon level trajectories in each nation. Deep machine learning modelling predicts that, without intervention, average Canadian residential radon levels will increase to 176 Bq/m³ by 2050, emphasizing the importance and urgency of future building code intervention to achieve systemic radon reduction in Canada.
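The study's 2050 forecast comes from deep learning models not reproduced here; as a simpler illustration of trend extrapolation on annual radon averages, an ordinary least-squares line fit (with hypothetical numbers) looks like:

```python
def linear_forecast(years, levels, target_year):
    """Fit an ordinary least-squares trend line to (year, level) pairs
    and extrapolate it to target_year."""
    n = len(years)
    my, ml = sum(years) / n, sum(levels) / n
    slope = (sum((y - my) * (l - ml) for y, l in zip(years, levels))
             / sum((y - my) ** 2 for y in years))
    intercept = ml - slope * my
    return slope * target_year + intercept
```

A linear trend is only a baseline; the paper's deep models can capture the nonlinear divergence between the Canadian and Swedish trajectories that a single slope cannot.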

Keywords: radon health risk, time-series, deep machine learning, lung cancer, Canada, Sweden

Procedia PDF Downloads 78
376 Predicting Emerging Agricultural Investment Opportunities: The Potential of Structural Evolution Index

Authors: Kwaku Damoah

Abstract:

The agricultural sector is characterized by continuous transformation, driven by factors such as demographic shifts, evolving consumer preferences, climate change, and migration trends. This dynamic environment presents complex challenges for key stakeholders including farmers, governments, and investors, who must navigate these changes to achieve optimal investment returns. To effectively predict market trends and uncover promising investment opportunities, a systematic, data-driven approach is essential. This paper introduces the Structural Evolution Index (SEI), a machine learning-based methodology. SEI is specifically designed to analyse long-term trends and forecast the potential of emerging agricultural products for investment. Versatile in application, it evaluates various agricultural metrics such as production, yield, trade, land use, and consumption, providing a comprehensive view of the evolution within agricultural markets. By harnessing data from the UN Food and Agricultural Organisation (FAOSTAT), this study demonstrates the SEI's capabilities through Comparative Exploratory Analysis and evaluation of international trade in agricultural products, focusing on Malaysia and Singapore. The SEI methodology reveals intricate patterns and transitions within the agricultural sector, enabling stakeholders to strategically identify and capitalize on emerging markets. This predictive framework is a powerful tool for decision-makers, offering crucial insights that help anticipate market shifts and align investments with anticipated returns.

Keywords: agricultural investment, algorithm, comparative exploratory analytics, machine learning, market trends, predictive analytics, structural evolution index

Procedia PDF Downloads 54
375 Frictional Effects on the Dynamics of a Truncated Double-Cone Gravitational Motor

Authors: Barenten Suciu

Abstract:

In this work, the effects of friction and truncation on the dynamics of a double-cone gravitational motor, self-propelled on a straight V-shaped horizontal rail, are evaluated. Such a mechanism has a variable radius of contact and, on one hand, is similar to a pulley mechanism that changes potential energy into the kinetic energy of rotation; on the other hand, it is similar to a pendulum mechanism that converts the potential energy of the suspended body into the kinetic energy of translation along a circular path. Movies of the self-propelled double-cones, made of S45C carbon steel and wood, along rails made of aluminum alloy, were shot for various opening angles of the rails. Kinematical features of the double-cones were estimated through slow-motion processing of the recorded movies. Then, a kinematical model was derived under the assumption that the distance traveled by the contact points on the rectilinear rails is identical to the distance traveled by the contact points on the truncated conical surface. Additionally, a dynamic model for this particular contact problem was proposed and validated against the experimental results. Based on this model, the traction force and the traction torque acting on the double-cone were identified. It was proved that the rolling traction force is always smaller than the sliding friction force, i.e., the double-cone rolls without slipping. Results obtained in this work can be used to achieve the proper design of such gravitational motors.
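
The paper's dynamic model is not reproduced in the abstract. Purely as a generic illustration of the energy balance such a motor relies on (potential energy of the descending centre of mass converted into rolling kinetic energy), here is a minimal sketch assuming rolling without slipping and a fixed inertia ratio; on a real double-cone the effective contact radius, and hence the inertia ratio, varies along the rail:

```python
import math

def rolling_speed(delta_h: float, inertia_ratio: float, g: float = 9.81) -> float:
    """Translational speed after the centre of mass descends delta_h [m]
    for a body rolling without slipping; inertia_ratio = I / (m * r^2),
    where r is the (here assumed constant) effective rolling radius."""
    return math.sqrt(2.0 * g * delta_h / (1.0 + inertia_ratio))

# Illustrative numbers: 5 cm descent of the centre of mass, I/(m r^2) = 0.5.
v = rolling_speed(0.05, 0.5)
print(f"v = {v:.3f} m/s")
```

The denominator shows why a larger rotational inertia fraction yields a slower translation for the same descent, which is the trade-off the truncation and friction analysis quantifies.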

Keywords: truncated double-cone, friction, rolling and sliding, dynamic model, gravitational motor

Procedia PDF Downloads 265
374 Performance Analysis of Search Medical Imaging Service on Cloud Storage Using Decision Trees

Authors: González A. Julio, Ramírez L. Leonardo, Puerta A. Gabriel

Abstract:

Telemedicine services use a large amount of data, most of which are diagnostic images in Digital Imaging and Communications in Medicine (DICOM) and Health Level Seven (HL7) formats. Metadata is generated from each related image to support its identification. This study presents the use of decision trees for the optimization of information-search processes for diagnostic images hosted on a cloud server. To analyze the performance of the server, the following quality-of-service (QoS) metrics are evaluated: delay, bandwidth, jitter, latency, and throughput, in five test scenarios for a total of 26 experiments during the loading and downloading of DICOM images, hosted by the telemedicine group server of the Universidad Militar Nueva Granada, Bogotá, Colombia. By applying decision trees as a data-mining technique and comparing them with sequential search, it was possible to evaluate the search times of diagnostic images on the server. The results show that by using the metadata in decision trees, the search times are substantially improved, the computational resources are optimized, and the request management of the telemedicine image service is improved. Based on the experiments carried out, search efficiency increased by 45% relative to sequential search, given that, when downloading a diagnostic image, false positives are avoided in the management and acquisition processes of said information. It is concluded that, for diagnostic-image services in telemedicine, the decision-tree technique guarantees accessibility and robustness in the acquisition and manipulation of medical images, improving diagnoses and medical procedures for patients.
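
The abstract does not describe the actual tree used on the server, so the sketch below only illustrates the underlying idea: a tree of metadata decisions narrows the candidate set at each level instead of scanning every record, which is where the speedup over sequential search comes from. The field names and records are hypothetical:

```python
# Hypothetical DICOM metadata records; field names are illustrative only.
records = [
    {"id": 1, "modality": "CT", "body_part": "CHEST"},
    {"id": 2, "modality": "MR", "body_part": "HEAD"},
    {"id": 3, "modality": "CT", "body_part": "HEAD"},
    {"id": 4, "modality": "MR", "body_part": "CHEST"},
]

def sequential_search(recs, modality, body_part):
    """Baseline O(n) scan: every record is inspected."""
    return [r["id"] for r in recs
            if r["modality"] == modality and r["body_part"] == body_part]

def build_index(recs):
    """Tree of metadata decisions: split on modality, then on body part."""
    tree = {}
    for r in recs:
        tree.setdefault(r["modality"], {}) \
            .setdefault(r["body_part"], []).append(r["id"])
    return tree

index = build_index(records)
# Both strategies return the same hit; the tree never touches MR records.
assert sequential_search(records, "CT", "HEAD") == index["CT"]["HEAD"] == [3]
```

A learned decision tree generalises this by choosing the split attributes from the data rather than fixing them by hand.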

Keywords: cloud storage, decision trees, diagnostic image, search, telemedicine

Procedia PDF Downloads 195
373 Design of a Portable Shielding System for a Newly Installed NaI(Tl) Detector

Authors: Mayesha Tahsin, A.S. Mollah

Abstract:

Recently, a 1.5×1.5 inch NaI(Tl) detector-based gamma-ray spectroscopy system was installed in the laboratory of the Nuclear Science and Engineering Department of the Military Institute of Science and Technology for radioactivity detection purposes. The newly installed NaI(Tl) detector has a circular lead shield of 22 mm width. An important consideration in any gamma-ray spectroscopy is the minimization of natural background radiation not originating from the radioactive sample being measured. Natural background gamma-ray radiation comes from naturally occurring or man-made radionuclides in the environment or from cosmic sources. Moreover, the main problem with this system is that it is not suitable for measurements of radioactivity with a large sample container, such as a Petri dish or Marinelli beaker geometry. When any laboratory installs a new detector or/and new shield, it "must" first carry out quality and performance tests for the detector and shield. This paper describes a new portable lead shielding system that can reduce the background radiation. The intensity of gamma radiation after passing through the shielding will be calculated using the shielding equation I = I₀e^(−µx), where I₀ is the initial intensity of the gamma source, I is the intensity after passing through the shield, µ is the linear attenuation coefficient of the shielding material, and x is the thickness of the shielding material. The height and width of the shielding will be selected in order to accommodate the large sample container. The detector will be surrounded by a 4π-geometry low-activity lead shield. An additional 1.5 mm thick shield of tin and a 1 mm thick shield of copper covering the inner part of the lead shielding will be added in order to remove the characteristic X-rays from the lead shield.
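
The shielding equation lends itself to a quick numerical sketch. The attenuation coefficient below is an assumed, order-of-magnitude value for lead at the 662 keV line of Cs-137, not a value taken from the paper:

```python
import math

def transmitted_fraction(mu: float, x: float) -> float:
    """I/I0 = exp(-mu * x): fraction of gamma intensity passing the shield."""
    return math.exp(-mu * x)

# Assumed linear attenuation coefficient of lead for 662 keV gammas,
# roughly 1.2 per cm (illustrative value only).
mu_lead = 1.2  # 1/cm
for x_cm in (1.0, 2.2, 5.0):
    f = transmitted_fraction(mu_lead, x_cm)
    print(f"{x_cm:.1f} cm of lead -> {f:.1%} transmitted")
```

At this assumed µ, the existing 22 mm (2.2 cm) shield would transmit about exp(−2.64) ≈ 7% of the incident intensity, which motivates the thicker 4π shield for low-background measurements.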

Keywords: shield, NaI (Tl) detector, gamma radiation, intensity, linear attenuation coefficient

Procedia PDF Downloads 144
372 Neuropsychology of Social Awareness: A Research Study Applied to University Students in Greece

Authors: Argyris Karapetsas, Maria Bampou, Andriani Mitropoulou

Abstract:

The aim of the present work is to study the role of brain function in social awareness processing. The mind controls all psychosomatic functions; its functioning enables an individual not only to recognize one's own self and propositional attitudes, but also to assign such attitudes to other individuals and to consider such observed mental states in the elucidation of behavior. Participants and Methods: Twenty (n=20) undergraduate students (mean age 18 years) were involved in this study. Students participated in a clinical assessment conducted in the Laboratory of Neuropsychology at the University of Thessaly, in Volos, Greece. The assessment included both electrophysiological tests (i.e., Event-Related Potentials (ERPs), esp. the P300 waveform) and neuropsychological tests (Raven's Progressive Matrices (RPM) and the Sally-Anne test). Results: The initial assessment's results confirmed statistically significant differences between males and females, as well as in performance on the tests applied. Strong correlations emerged between prefrontal-lobe functioning, RPM, the Sally-Anne test, and P300 latencies. Also, significant dysfunction of the mind was found with regard to its three dimensions (straight, circular, and helical). At the end of the assessment, students received consultation and appropriate guidelines in order to improve their intrapersonal and interpersonal skills. Conclusions: Mind and social-awareness phenomena play a vital role in human development and may act as determinants of the quality of one's own life. Meanwhile, brain function is highly correlated with social awareness, and it seems that a distinct set of brain structures is involved in social behavior.

Keywords: brain activity, emotions, ERP's, social awareness

Procedia PDF Downloads 182
371 The Emergence of a Hexagonal Pattern in Shear-Thickening Suspension under Orbital Shaking

Authors: Li-Xin Shi, Meng-Fei Hu, Song-Chuan Zhao

Abstract:

Dense particle suspensions, composed of mixtures of particles and fluid, are omnipresent in natural phenomena and in industrial processes. A dense particle suspension under shear may lose its uniform state to large local density and stress fluctuations, which challenge the mean-field description of the suspension system. However, the internal mechanism remains largely debated and far from fully understood. Here, the dynamics of a non-Brownian suspension are explored under horizontal swirling excitations, where high-density patches appear when the excitation frequency is increased beyond a threshold. These density patches self-assemble into a hexagonal pattern across the system with further increases in frequency. This phenomenon is underlain by the spontaneous growth of density waves (instabilities) along the flow direction, and the motion of these density waves preserves the circular path and the frequency of the oscillation. To investigate the origin of the phenomena, a constitutive relationship calibrated by independent rheological measurements is implemented into a simplified two-phase flow model. The critical instability frequency calculated from the theory matches the experimental measurements quantitatively, with no free parameters. Further analysis of the model shows that the instability is closely related to the discontinuous shear-thickening transition of the suspension. In addition, the long-standing density waves degenerate into random fluctuations when the free surface is replaced with rigid confinement. This indicates that the shear-thickened state is intrinsically heterogeneous and that the boundary conditions are crucial for the development of local disturbances.
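
The calibrated constitutive relationship used by the authors is not given in the abstract. As a stand-in, the sketch below implements a Wyart-Cates-type model, a commonly used constitutive form for discontinuous shear thickening, only to illustrate how viscosity can jump by orders of magnitude as stress grows; all parameter values are illustrative assumptions, not the authors' calibration:

```python
import math

def wyart_cates_viscosity(sigma, phi, eta0=1e-3, sigma_star=1.0,
                          phi0=0.64, phi_m=0.56):
    """Suspension viscosity in a Wyart-Cates-type model.
    f, the fraction of frictional contacts, grows with stress sigma;
    the jamming fraction interpolates between phi0 (frictionless) and
    phi_m (frictional); viscosity diverges as phi approaches phi_J."""
    f = math.exp(-sigma_star / sigma)        # frictional-contact fraction
    phi_j = f * phi_m + (1.0 - f) * phi0     # stress-dependent jamming point
    return eta0 * (1.0 - phi / phi_j) ** -2.0

low = wyart_cates_viscosity(0.1, 0.55)    # well below the onset stress
high = wyart_cates_viscosity(100.0, 0.55)  # well above the onset stress
print(f"viscosity ratio high/low stress: {high / low:.0f}")
```

At a fixed packing fraction near the frictional jamming point, raising the stress shifts the jamming fraction downward and the viscosity up sharply, the discontinuous transition the density-wave instability is tied to.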

Keywords: dense suspension, instability, self-organization, density wave

Procedia PDF Downloads 78
370 Virtual Team Management in Companies and Organizations

Authors: Asghar Zamani, Mostafa Falahmorad

Abstract:

Virtualization is established to combine and use the unique capabilities of employees to increase productivity and agility in providing services regardless of location. Adapting to fast and continuous change and gaining maximum access to human resources are reasons why virtualization is happening. The distance problem is solved by information; flexibility is the most important feature of virtualization, and information will be the main focus of virtualized companies. In this research, we used the window of opportunity created by Covid-19 to compare the productivity of companies that had moved toward virtualized management before the pandemic with that of companies that only began planning and developing virtual-management infrastructure after the crisis occurred. The research process includes the assessment of financial (profitability and customer satisfaction) and behavioral (organizational culture and reluctance to change) metrics. In addition to financial and CRM KPIs, a questionnaire was devised to assess how managers' and employees' attitudes toward the migration to virtualization have been changing. The sample companies and questions were selected in consultation with experts in Iran's IT industry. In this article, the conclusion is that companies open to virtualization based on accurate strategic planning, or willing to pay to train their employees for virtualization before the pandemic, are more agile in adapting to change and moving forward in a recession. The prospective companies in this research could not only compensate for the short-term loss from the first shock of Covid-19 but could also foresee the new needs of their customers sooner than their competitors, resulting in the need to employ new staff to execute the emerging demands. The findings were aligned with the literature review. The results can be a wake-up call for business owners, especially in developing countries, to be more resilient toward modern management styles instead of continuing with traditional ones.

Keywords: virtual management, virtual organization, competitive advantage, KPI, profit

Procedia PDF Downloads 76
369 The Production of Collagen and Collagen Peptides from Nile Tilapia Skin Using Membrane Technology

Authors: M. Thuanthong, W. Youravong, N. Sirinupong

Abstract:

Nile tilapia (Oreochromis niloticus) is one of the fish species cultured in Thailand with a high production volume, and a lot of skin is generated during fish processing. In addition, many studies have reported that fish skin contains abundant collagen. Thus, the use of Nile tilapia skin as a collagen source can increase the value of this industrial waste. In this study, acid-soluble collagen (ASC) was extracted at 5, 15, or 25 ˚C with 0.5 M acetic acid; the acid was then removed and the collagen concentrated by ultrafiltration-diafiltration (UFDF). The triple-helix collagen from the UFDF process was used as a substrate to produce collagen peptides by alcalase hydrolysis in an enzymatic membrane reactor (EMR) coupled with a 1 kDa molecular weight cut-off (MWCO) polysulfone hollow-fiber membrane. The results showed that ASC extracted at the highest temperature (25 ˚C) with 0.5 M acetic acid for 5 h still preserved the triple-helix structure. In the UFDF process, acid removal was higher than 90% without any effect on ASC properties, particularly the triple-helix structure, as indicated by the circular dichroism spectrum. Moreover, collagen from UFDF was used to produce collagen peptides in the EMR, where the collagen was pre-hydrolyzed by alcalase for 60 min before being introduced to membrane separation. The EMR was operated for 10 h and provided good protein-conversion stability. The results suggest that UF was successfully applied for acid removal to produce ASC while desirably preserving its quality. In addition, the EMR proved to be an effective process to produce low-molecular-weight peptides with ACE-inhibitory activity.
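
The reported >90% acid removal is consistent with the standard constant-volume diafiltration relation C/C₀ = exp(−N·S), where N is the number of diavolumes and S the sieving coefficient of the solute. A minimal sketch, assuming the acetic acid passes the 1 kDa membrane freely (S = 1); the numbers are a generic illustration, not the study's operating data:

```python
import math

def remaining_fraction(diavolumes: float, sieving: float = 1.0) -> float:
    """Constant-volume diafiltration: C/C0 = exp(-N * S) for a solute
    with sieving coefficient S (S ~ 1 for a freely permeating acid)."""
    return math.exp(-diavolumes * sieving)

# Diavolumes needed to wash out 90 % of the acetic acid (S assumed = 1):
n_required = math.log(10.0)
print(f"{n_required:.2f} diavolumes leave "
      f"{remaining_fraction(n_required):.0%} of the acid")
```

So roughly 2.3 diavolumes of wash buffer suffice for 90% removal under these assumptions, while the retained collagen (above the MWCO) is concentrated rather than washed out.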

Keywords: acid soluble collagen, ultrafiltration-diafiltration, enzymatic membrane reactor, ACE-inhibitory activity

Procedia PDF Downloads 467
368 Tractography Analysis and the Evolutionary Origin of Schizophrenia

Authors: Mouktafi Amine, Tahiri Asmaa

Abstract:

A substantial body of traditional medical research has addressed the management and treatment of mental disorders. At present, to the best of our knowledge, a fundamental understanding of the underlying causes of the majority of psychological disorders still needs to be explored to inform early diagnosis, symptom management, and treatment. The emerging field of evolutionary psychology is a promising prospect for addressing the origin of mental disorders, potentially leading to more effective treatments. Schizophrenia, a topical mental disorder, has been linked to evolutionary adaptations of the human brain reflected in the brain connectivity and asymmetry that underlie humans' higher cognition; other primates, in contrast, are our closest living representation of the structure and connectivity of our earliest common African ancestors. As proposed in the evolutionary psychology literature, the pathophysiology of schizophrenia is expressed in, and directly linked to, altered connectivity between the Hippocampal Formation (HF) and the Dorsolateral Prefrontal Cortex (DLPFC). This paper presents the results of tractography analysis using multiple open-access Diffusion Weighted Imaging (DWI) datasets of healthy subjects, subjects affected by schizophrenia, and primates to illustrate the relevance of the aforementioned brain regions' connectivity and the underlying evolutionary changes in the human brain. Deterministic fiber tracking and streamline analysis were used to generate connectivity matrices from the DWI datasets, which were overlaid to compute distances and highlight disconnectivity patterns in conjunction with other fiber-tracking metrics: Fractional Anisotropy (FA), Mean Diffusivity (MD), and Radial Diffusivity (RD).
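
The fiber-tracking metrics named at the end have standard closed forms in terms of the diffusion-tensor eigenvalues; a minimal sketch (the voxel eigenvalues below are illustrative values in mm²/s, not data from the study):

```python
import numpy as np

def tensor_metrics(eigenvalues):
    """FA, MD, RD, AD from the three diffusion-tensor eigenvalues
    (sorted so that l1 >= l2 >= l3)."""
    l1, l2, l3 = sorted(eigenvalues, reverse=True)
    md = (l1 + l2 + l3) / 3.0                          # mean diffusivity
    num = (l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    fa = np.sqrt(1.5 * num / den) if den > 0 else 0.0  # fractional anisotropy
    rd = (l2 + l3) / 2.0                               # radial diffusivity
    return fa, md, rd, l1                              # l1 = axial diffusivity

# Strongly anisotropic (white-matter-like) voxel vs isotropic voxel:
fa_wm, *_ = tensor_metrics([1.7e-3, 0.3e-3, 0.3e-3])
fa_iso, *_ = tensor_metrics([1.0e-3, 1.0e-3, 1.0e-3])
print(f"FA white-matter-like ~ {fa_wm:.2f}, FA isotropic = {fa_iso:.2f}")
```

FA near 1 indicates coherently oriented fibers along which streamlines can be tracked, whereas FA near 0 indicates isotropic diffusion; comparing these maps between groups is what exposes the disconnectivity patterns.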

Keywords: tractography, diffusion weighted imaging, schizophrenia, evolutionary psychology

Procedia PDF Downloads 37
367 Simplified Stress Gradient Method for Stress-Intensity Factor Determination

Authors: Jeries J. Abou-Hanna

Abstract:

Several techniques exist for determining stress-intensity factors in linear elastic fracture mechanics analysis. These techniques are based on analytical, numerical, and empirical approaches that are well documented in the literature and in engineering handbooks. However, not all techniques share the same merit. In addition to producing overly conservative results, numerical methods that require extensive computational effort, and those requiring copious user parameters, hinder practicing engineers from efficiently evaluating stress-intensity factors. This paper investigates the prospects of reducing the complexity and the number of required variables in determining stress-intensity factors through the use of the stress gradient and a weighting function. The heart of this work resides in the understanding that fracture emanating from stress-concentration locations cannot be explained by a single-maximum-stress approach but requires the use of a critical volume in which the crack exists. To understand the effectiveness of this technique, this study investigated components of different notch geometries and varying levels of stress gradient. Two forms of weighting functions were employed to determine stress-intensity factors, and the results were compared to exact analytical methods. The results indicated that the "exponential" weighting function was superior to the "absolute" weighting function. An error band of ±10% was met for cases ranging from the steep stress gradient of a sharp V-notch to the less severe stress transitions of a large circular notch. The incorporation of the proposed method has been shown to be a worthwhile consideration.
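
The proposed weighting-function method is not specified in enough detail in the abstract to reproduce, but the exact analytical baselines it is compared against include textbook closed forms such as the Mode-I factor for a through crack in an infinite plate; a minimal sketch of that reference case:

```python
import math

def sif_center_crack(sigma: float, a: float) -> float:
    """Mode-I stress-intensity factor K = sigma * sqrt(pi * a) for a
    through crack of half-length a [m] in an infinite plate under
    remote uniform tension sigma [MPa] (textbook closed form)."""
    return sigma * math.sqrt(math.pi * a)

K = sif_center_crack(sigma=100.0, a=0.005)  # 100 MPa, 5 mm half-crack
print(f"K = {K:.2f} MPa*sqrt(m)")
```

Gradient-based corrections matter precisely where this uniform-stress assumption fails, i.e., where the stress decays steeply away from a notch root.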

Keywords: fracture mechanics, finite element method, stress intensity factor, stress gradient

Procedia PDF Downloads 130
366 Preventing Neurodegenerative Diseases by Stabilization of Superoxide Dismutase by Natural Polyphenolic Compounds

Authors: Danish Idrees, Vijay Kumar, Samudrala Gourinath

Abstract:

Amyotrophic lateral sclerosis (ALS) is a neurodegenerative disease caused by misfolding and aggregation of Cu, Zn superoxide dismutase (SOD1). The use of small molecules has been shown to stabilize the SOD1 dimer and prevent its dissociation and aggregation. In this study, we employed molecular docking, molecular dynamics (MD) simulation, and surface plasmon resonance (SPR) to study the interactions between SOD1 and natural polyphenolic compounds. To explore these noncovalent interactions, molecular docking and MD simulations were employed to gain insights into the binding modes and free energies of SOD1-polyphenol complexes, and MM/PBSA methods were used to calculate free energies from the obtained MD trajectories. The compounds hesperidin, ergosterol, and rutin showed excellent binding affinity to SOD1 in the micromolar range. Ergosterol and hesperidin showed the strongest binding affinity to SOD1 and were subjected to further characterization. Biophysical experiments using circular dichroism and thioflavin T fluorescence spectroscopy show that the binding of these two compounds can stabilize the SOD1 dimer and inhibit the aggregation of SOD1. Molecular simulation results also suggest that these compounds reduce the dissociation of SOD1 dimers through direct interaction with the dimer interface. This study will be helpful in developing other drug-like molecules that may reduce the aggregation of SOD1.
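
A micromolar affinity can be translated into a binding free energy with the standard relation ΔG = RT·ln(Kd); a minimal sketch (the 1 µM value is illustrative of the reported range, not a measured constant from the study):

```python
import math

R = 8.314  # gas constant, J / (mol K)

def delta_g_from_kd(kd_molar: float, temp_k: float = 298.15) -> float:
    """Binding free energy in kJ/mol from a dissociation constant:
    dG = R * T * ln(Kd); more negative means tighter binding."""
    return R * temp_k * math.log(kd_molar) / 1000.0

dg = delta_g_from_kd(1e-6)  # a micromolar-range binder
print(f"Kd = 1 uM  ->  dG = {dg:.1f} kJ/mol")
```

This back-conversion is useful for sanity-checking MM/PBSA estimates against SPR-derived constants, since the two methods report affinity on different scales.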

Keywords: amyotrophic lateral sclerosis, molecular dynamics simulation, surface plasmon resonance, superoxide dismutase

Procedia PDF Downloads 129
365 EcoMush: Mapping Sustainable Mushroom Production in Bangladesh

Authors: A. A. Sadia, A. Emdad, E. Hossain

Abstract:

The increasing importance of mushrooms as a source of nutrition and health benefits, and even their potential in cancer treatment, has raised awareness of the impact of climate-sensitive variables on their cultivation. Factors like temperature, relative humidity, air quality, and substrate composition play pivotal roles in shaping mushroom growth, especially in Bangladesh. Oyster mushrooms, a commonly cultivated variety in this region, are particularly vulnerable to climate fluctuations. This research explores the climatic dynamics affecting oyster mushroom cultivation, presents an approach to address these challenges, and provides tangible solutions to fortify the agro-economy, ensure food security, and promote the sustainability of this crucial food source. Using climate and production data, this study evaluates the performance of three clustering algorithms (KMeans, OPTICS, and BIRCH) based on various quality metrics. While each algorithm demonstrates specific strengths, the findings provide insights into their effectiveness for this specific dataset. The results yield essential information, pinpointing the optimal temperature range of 13°C-22°C, the unfavorable temperature threshold of 28°C and above, and the ideal relative humidity range of 75-85%, along with the suitable production regions in three different seasons: Kharif-1, Kharif-2, and Robi. Additionally, a user-friendly web application was developed to support mushroom farmers in making well-informed decisions about their cultivation practices. This platform offers valuable insights into the most advantageous periods for oyster mushroom farming, with the overarching goal of enhancing the efficiency and profitability of mushroom farming.
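
The abstract does not say how the clustering was implemented; as a dependency-light sketch of the KMeans step on synthetic (temperature, humidity) readings shaped like the reported favourable and unfavourable regimes (all numbers are synthetic, not the study's data):

```python
import numpy as np

def kmeans(points, k, iters=20):
    """Minimal Lloyd's k-means: alternate assignment and centroid update.
    For brevity, centroids are seeded from evenly spaced points;
    production code would use k-means++ initialisation."""
    centroids = points[np.linspace(0, len(points) - 1, k).astype(int)].astype(float)
    for _ in range(iters):
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        centroids = np.array([points[labels == j].mean(axis=0)
                              if np.any(labels == j) else centroids[j]
                              for j in range(k)])
    return labels, centroids

rng = np.random.default_rng(0)
# Synthetic (temperature C, relative humidity %) readings around the
# favourable (13-22 C, 75-85 %) and unfavourable (>28 C) regimes:
good = rng.normal([18.0, 80.0], [2.0, 3.0], size=(50, 2))
bad = rng.normal([31.0, 60.0], [2.0, 3.0], size=(50, 2))
labels, centroids = kmeans(np.vstack([good, bad]), k=2)
print(np.round(centroids, 1))
```

With two well-separated climate regimes the recovered centroids sit near the generating means, which is the kind of structure quality metrics such as silhouette then score for KMeans, OPTICS, and BIRCH alike.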

Keywords: climate variability, mushroom cultivation, clustering techniques, food security, sustainability, web-application

Procedia PDF Downloads 55
364 Diabetes Mellitus and Blood Glucose Variability Increases the 30-day Readmission Rate after Kidney Transplantation

Authors: Harini Chakkera

Abstract:

Background: Inpatient hyperglycemia is an established independent risk factor for hospital readmission in several patient cohorts, but it has not been studied after kidney transplantation. Nearly one-third of patients who have undergone a kidney transplant reportedly experience 30-day readmission. Methods: Data on first-time solitary kidney transplantations were retrieved for the period between September 2015 and December 2018. Information was linked to the electronic health record to determine a diagnosis of diabetes mellitus and to extract glucometric and insulin-therapy data. Univariate logistic regression analysis and the XGBoost algorithm were used to predict 30-day readmission. We report the average performance of the models on the testing set over five bootstrapped partitions of the data to ensure statistical significance. Results: The cohort included 1036 patients who received kidney transplantation, of whom 224 (22%) experienced 30-day readmission. The machine learning algorithm was able to predict 30-day readmission with an average AUC of 77.3% (95% CI 75.3-79.3%). We observed statistically significant differences in the presence of pretransplant diabetes, inpatient hyperglycemia, inpatient hypoglycemia, and minimum and maximum glucose values among those with higher 30-day readmission rates. The XGBoost model identified the index admission length of stay, the presence of hyper- and hypoglycemia, and recipient and donor BMI values as the most predictive risk factors for 30-day readmission. Additionally, significant variations in the therapeutic management of blood glucose by providers were observed. Conclusions: Suboptimal glucose metrics during hospitalization after kidney transplantation are associated with an increased risk of 30-day hospital readmission. Optimizing hospital blood-glucose management, a modifiable factor, after kidney transplantation may reduce the risk of 30-day readmission.
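
The reported AUC of 77.3% has a simple probabilistic reading: the chance that a randomly chosen readmitted patient receives a higher risk score than a randomly chosen non-readmitted one. A minimal sketch of that metric, not of the XGBoost model itself (the toy labels and scores are invented for illustration):

```python
def auc(labels, scores):
    """Probability a random positive outranks a random negative,
    equivalent to the area under the ROC curve; ties count 0.5."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: 1 = readmitted within 30 days, scores = predicted risk.
y = [0, 0, 1, 1]
risk = [0.10, 0.40, 0.35, 0.80]
print(f"AUC = {auc(y, risk):.2f}")
```

Averaging this statistic over bootstrapped test partitions, as the study does, gives both a point estimate and a confidence interval.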

Keywords: kidney, transplant, diabetes, insulin

Procedia PDF Downloads 74
363 Remote Sensing through Deep Neural Networks for Satellite Image Classification

Authors: Teja Sai Puligadda

Abstract:

Detailed satellite images can serve an important role in geographic study. The quantitative and qualitative information provided by satellite and remote-sensing images reduces the complexity and time of the work. Data/images are captured at regular intervals by satellite remote-sensing systems, and the amount of data collected is often enormous and expands rapidly as technology develops. Interpreting remote-sensing images, geographic data mining, and researching distinct vegetation types such as agricultural land and forests are all part of satellite image categorization. One of the biggest challenges data scientists face while classifying satellite images is finding, from the available options, the classification algorithm best able to classify images with the utmost accuracy. In order to categorize satellite images, which is difficult due to the sheer volume of data, many academics are turning to deep learning algorithms. As the CNN algorithm gives high accuracy in image-recognition problems and automatically detects important features without any human supervision, and the ANN algorithm stores information across the entire network (Abhishek Gupta, 2020), these two deep learning algorithms have been used for satellite image classification. This project focuses on remote sensing through deep neural networks, i.e., ANN and CNN, with the DeepSat (SAT-4) Airborne dataset for classifying images. Thus, in this project of classifying satellite images, the ANN and CNN algorithms are implemented, evaluated, and compared, and the performance is analyzed through evaluation metrics such as accuracy and loss. Additionally, the neural network algorithm that gives the lowest bias and lowest variance in solving multi-class satellite image classification is analyzed.
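
The two evaluation metrics named, accuracy and loss, have standard definitions worth making concrete. A minimal sketch over hypothetical softmax outputs for a 4-class problem of the SAT-4 kind; the probabilities and labels are made up for illustration, not model outputs from the project:

```python
import numpy as np

def cross_entropy(probs, labels):
    """Mean categorical cross-entropy: the 'loss' tracked during training."""
    return float(-np.mean(np.log(probs[np.arange(len(labels)), labels])))

def accuracy(probs, labels):
    """Fraction of samples whose argmax class matches the true label."""
    return float(np.mean(probs.argmax(axis=1) == labels))

# Hypothetical softmax outputs for 4 images over 4 land-cover classes:
probs = np.array([[0.7, 0.1, 0.1, 0.1],
                  [0.1, 0.8, 0.05, 0.05],
                  [0.2, 0.2, 0.5, 0.1],
                  [0.3, 0.4, 0.2, 0.1]])
labels = np.array([0, 1, 2, 3])  # the last sample is misclassified
print(f"accuracy = {accuracy(probs, labels):.2f}, "
      f"loss = {cross_entropy(probs, labels):.3f}")
```

Note that loss keeps falling as the model grows more confident in correct classes even after accuracy saturates, which is why both metrics are tracked when comparing the ANN and CNN.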

Keywords: artificial neural network, convolutional neural network, remote sensing, accuracy, loss

Procedia PDF Downloads 148