Search results for: raw complex data
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 28304

28064 Statistically Accurate Synthetic Data Generation for Enhanced Traffic Predictive Modeling Using Generative Adversarial Networks and Long Short-Term Memory

Authors: Srinivas Peri, Siva Abhishek Sirivella, Tejaswini Kallakuri, Uzair Ahmad

Abstract:

Effective traffic management and infrastructure planning are crucial for the development of smart cities and intelligent transportation systems. This study addresses the challenge of data scarcity by generating realistic synthetic traffic data using the PeMS-Bay dataset, improving the accuracy and reliability of predictive modeling. Advanced synthetic data generation techniques, including TimeGAN, GaussianCopula, and PAR Synthesizer, are employed to produce synthetic data that replicates the statistical and structural characteristics of real-world traffic. Future integration of Spatial-Temporal Generative Adversarial Networks (ST-GAN) is planned to capture both spatial and temporal correlations, further improving data quality and realism. The performance of each synthetic data generation model is evaluated against real-world data to identify the best models for accurately replicating traffic patterns. Long Short-Term Memory (LSTM) networks are utilized to model and predict complex temporal dependencies within traffic patterns. This comprehensive approach aims to pinpoint areas with low vehicle counts, uncover underlying traffic issues, and inform targeted infrastructure interventions. By combining GAN-based synthetic data generation with LSTM-based traffic modeling, this study supports data-driven decision-making that enhances urban mobility, safety, and the overall efficiency of city planning initiatives.
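
To make the LSTM stage concrete, the following is a minimal sketch of one-step-ahead speed forecasting in Python; the window length, layer sizes, and the synthetic series are illustrative assumptions, not details taken from the study.

```python
# Minimal sketch of LSTM traffic forecasting (illustrative assumptions only).
# The synthetic `speeds` series stands in for PeMS-Bay-style sensor readings.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
speeds = 60 + 10 * np.sin(np.linspace(0, 50, 2000)) + rng.normal(0, 1, 2000)

window = 12  # predict the next reading from the previous 12
X = np.stack([speeds[i:i + window] for i in range(len(speeds) - window)])
y = speeds[window:]
X = X[..., np.newaxis]  # shape (samples, timesteps, features)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(window, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

next_speed = model.predict(X[-1:])  # one-step-ahead forecast
```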

Keywords: GAN, long short-term memory, synthetic data generation, traffic management

Procedia PDF Downloads 13
28063 Biodistribution Study of 68Ga-PDTMP as a New Bone PET Imaging Agent

Authors: N. Tadayon, H. Yousefnia, S. Zolghadri, A. Ramazani, A. R. Jalilian

Abstract:

In this study, 68Ga-PDTMP was prepared as a new agent for bone imaging. 68Ga was obtained from a SnO2-based generator. A certain volume of the PDTMP solution was added to the vial containing 68GaCl3, and the pH of the mixture was adjusted to 4 using HEPES. The radiochemical purity of the radiolabelled complex was checked by thin layer chromatography. The biodistribution of this new agent was assessed in rats after intravenous injection of the complex. For this purpose, the rats were killed at specified times after injection, and the weight and activity of each organ were measured. Injected dose per gram was calculated by dividing the activity of each organ by the total injected activity and by the mass of that organ. As expected, most of the activity accumulated in bone tissue. The radiolabelled compound was cleared from the blood very rapidly. This new bone-seeking complex can be considered a good candidate PET radiopharmaceutical for imaging bone metastases.
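
Since the abstract spells out the injected-dose-per-gram arithmetic, a minimal sketch of that calculation (with made-up values, not data from the study) is:

```python
# Percent injected dose per gram (%ID/g): organ activity divided by the
# total injected activity and by the organ mass, as described above.
def percent_id_per_gram(organ_activity_mbq, injected_activity_mbq, organ_mass_g):
    return 100.0 * organ_activity_mbq / (injected_activity_mbq * organ_mass_g)

# Hypothetical example values (illustrative only):
print(percent_id_per_gram(organ_activity_mbq=1.8,
                          injected_activity_mbq=37.0,
                          organ_mass_g=0.9))  # ≈ 5.4 %ID/g
```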

Keywords: biodistribution, Ga-68, imaging, PDTMP

Procedia PDF Downloads 351
28062 Major Histocompatibility Complex (MHC) Polymorphism and Disease Resistance

Authors: Oya Bulut, Oguzhan Avci, Zafer Bulut, Atilla Simsek

Abstract:

Livestock breeders have focused on the improvement of production traits, with little or no attention to the improvement of disease resistance traits. To determine the association between the genetic structure of individual gene loci and the likelihood of the occurrence and development of disease, the MHC (major histocompatibility complex) is frequently used. Because of its importance in the immune system, the MHC locus is considered a candidate gene region for resistance or susceptibility to different diseases. MHC molecules play a critical role in both innate and adaptive immunity and have been considered candidate molecular markers for associations between polymorphisms and resistance or susceptibility to disease. The purpose of this study is to give some information about MHC genes, which have become an important area of study in recent years in terms of animal husbandry, and to examine the relation between MHC genes and resistance or susceptibility to disease.

Keywords: MHC, polymorphism, disease, resistance

Procedia PDF Downloads 627
28061 Aqueous Extract of Picrorrhiza kurroa Royle ex Benth: A Potent Inhibitor of Human Topoisomerases

Authors: Syed Asif Hassan, Ritu Barthwal

Abstract:

Topoisomerases I and IIα play a crucial role in DNA maintenance in all living cells, and for this reason, inhibitors of these enzymes have been much studied. In this paper, we describe the inhibitory effect of the aqueous extract of Picrorrhiza kurroa on human topoisomerases, measured by the relaxation of superhelical plasmid pBR322 DNA. The aqueous extract inhibited topoisomerases I and IIα in a concentration-dependent manner (inhibitory concentration (IC) ≈ 25 and 50 µg, respectively). From stabilization studies of the topoisomerase I-DNA complex and preincubation studies of topoisomerases I and IIα with the extract, we conclude that the inhibition likely proceeds by two mechanisms: 1) stabilization of the covalent topoisomerase I-DNA complex, and 2) direct inhibition of the topoisomerase enzymes. These findings might explain the antineoplastic activity of Picrorrhiza kurroa and encourage new studies to elucidate the usefulness of the extract as a potent antineoplastic agent.

Keywords: Picrorrhiza kurroa, topoisomerase I and II α, inhibition, antineoplastic agent

Procedia PDF Downloads 335
28060 Data-Driven Decision Making: A Reference Model for Organizational, Educational and Competency-Based Learning Systems

Authors: Emanuel Koseos

Abstract:

Data-Driven Decision Making (DDDM) refers to making decisions based on historical data in order to inform practice, develop strategies, and implement policies that benefit organizational settings. In educational technology, DDDM facilitates the implementation of differentiated educational learning approaches such as Educational Data Mining (EDM) and Competency-Based Education (CBE), which commonly target university classrooms. There is a current need for DDDM models applied to middle and secondary schools, arising from a concern for assessing the needs, progress, and performance of students and educators with respect to regional standards, policies, and the evolution of curriculums. To address these concerns, we propose a DDDM reference model that uses educational key process initiatives as inputs to a machine learning framework implemented with statistical software (SAS, R), providing a best-practice, automated approach, free of unnecessary complexity, for educators at the regional level. We assessed the efficiency of the model over a six-year period using data from 45 schools and grades K-12 in the Langley, BC, Canada regional school district. We concluded that the model has wider applicability, for example to business learning systems.
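
The study implemented its framework in SAS and R; as a rough sketch of the same idea (key process indicators feeding a learned predictor), a Python analogue with hypothetical column names might look like this:

```python
# Illustrative DDDM-style sketch: predict a school outcome indicator from
# key process inputs. Column names and values are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

df = pd.DataFrame({
    "attendance_rate":  [0.95, 0.80, 0.99, 0.70, 0.88, 0.92],
    "assessment_score": [78, 62, 91, 55, 70, 84],
    "support_hours":    [2, 5, 1, 8, 4, 2],
    "outcome_index":    [0.82, 0.61, 0.95, 0.48, 0.66, 0.88],
})

X, y = df.drop(columns="outcome_index"), df["outcome_index"]
model = RandomForestRegressor(n_estimators=200, random_state=0)
print(cross_val_score(model, X, y, cv=3, scoring="r2"))  # cross-validated fit
```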

Keywords: competency-based learning, data-driven decision making, machine learning, secondary schools

Procedia PDF Downloads 167
28059 Applications of Big Data in Education

Authors: Faisal Kalota

Abstract:

Big Data and analytics have gained huge momentum in recent years. Big Data feeds into the field of Learning Analytics (LA), which may allow academic institutions to better understand learners' needs and proactively address them. Hence, it is important to have an understanding of Big Data and its applications. The purpose of this descriptive paper is to provide an overview of Big Data, the technologies used in Big Data, and some of the applications of Big Data in education. Additionally, it discusses some of the concerns related to Big Data and current research trends. While Big Data can provide big benefits, it is important that institutions understand their own needs, infrastructure, resources, and limitations before jumping on the Big Data bandwagon.

Keywords: big data, learning analytics, analytics, big data in education, Hadoop

Procedia PDF Downloads 414
28058 Heterodimetallic Ferrocenyl Dithiophosphonate Complexes of Nickel(II), Zinc(II) and Cadmium(II) as High Efficiency Co-Sensitizers in Dye-Sensitized Solar Cells

Authors: Tomilola J. Ajayi, Moses Ollengo, Lukas le Roux, Michael N. Pillay, Richard J. Staples, Shannon M. Biros, Werner E. van Zyl

Abstract:

The formation, characterization, and dye-sensitized solar cell application of nickel(II), zinc(II) and cadmium(II) ferrocenyl dithiophosphonate complexes were investigated. The multidentate monoanionic ligand [S₂PFc(OH)]¯ (L1) was synthesized from the reaction between ferrocenyl Lawesson’s reagent, [FcP(=S)μ-S]₂ (FcLR), (Fc = ferrocenyl) and water. Ligand L1 could potentially coordinate to metal centers through the S, S’ and O donor atoms. The reaction between metal salt precursors and L1 produced a Ni(II) complex of the type [Ni{S₂P(Fc)(OH)}₂] (1) (molar ratio 1:2), a tetranickel(II) complex of the type [Ni₂{S₂OP(Fc)}₂]₂ (2) (molar ratio 1:1), as well as a Zn(II) complex [Zn{S₂P(Fc)(OH)}₂]₂ (3), and a Cd(II) complex [Cd{S₂P(Fc)(OH)}₂]₂ (4). Complexes 1-4 were characterized by 1H and 31P NMR and FT-IR, and complexes 1 and 2 were additionally analysed by X-ray crystallography. After co-sensitization, the DSSCs were characterized using UV-Vis, cyclic voltammetry, electrochemical impedance spectroscopy, and photovoltaic measurements (I-V curves). Overall, the findings show that co-sensitization of our compounds with the ruthenium dye N719 resulted in better overall solar conversion efficiency than pure N719 dye alone under the same experimental conditions. In conclusion, we report the first examples of dye-sensitized solar cells (DSSCs) co-sensitized with ferrocenyl dithiophosphonate complexes.

Keywords: dithiophosphonate, dye sensitized solar cell, co-sensitization, solar efficiency

Procedia PDF Downloads 145
28057 Modeling and Simulation of Ship Structures Using Finite Element Method

Authors: Javid Iqbal, Zhu Shifan

Abstract:

The development of unconventional ship construction and the implementation of lightweight materials have given a strong impetus to the finite element (FE) method, making it a general tool for ship design. This paper briefly presents the modeling and analysis techniques for ship structures using the FE method under complex boundary conditions, which are difficult to analyze by existing Ship Classification Societies rules. During operation, all ships experience complex loading conditions. These loads fall into general categories: thermal loads, linear static loads, dynamic loads, and non-linear loads. The general strength of the ship structure is analyzed using static FE analysis. The FE method is also suitable for considering the local loads generated by ballast tanks and cargo in addition to hydrostatic and hydrodynamic loads. Vibration analysis of a ship structure and its components can be performed using the FE method, which helps in obtaining the dynamic stability of the ship. The FE method has yielded better techniques for calculating the natural frequencies and different mode shapes of a ship structure, to avoid resonance both globally and locally. There has been much progress towards ideal design in the ship industry over the past few years, achieved by solving complex engineering problems with the data stored in the FE model. This paper provides an overview of ship modeling methodology for FE analysis and its general application. Historical background, the basic concept of FE, and the advantages and disadvantages of FE analysis are also reported, along with examples related to hull strength and structural components.
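
After FE discretization, the natural-frequency calculation mentioned above reduces to the generalized eigenvalue problem K φ = ω² M φ. A toy two-degree-of-freedom sketch (matrices invented for illustration, not a ship model):

```python
# Toy modal analysis: solve K @ phi = w^2 * M @ phi for natural frequencies.
import numpy as np
from scipy.linalg import eigh

K = np.array([[ 4.0e6, -2.0e6],
              [-2.0e6,  2.0e6]])  # stiffness matrix [N/m], illustrative
M = np.diag([1000.0, 1500.0])     # lumped mass matrix [kg], illustrative

w2, modes = eigh(K, M)            # generalized symmetric eigenproblem
freqs_hz = np.sqrt(w2) / (2 * np.pi)
print(freqs_hz)                   # natural frequencies, lowest first
```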

Keywords: dynamic analysis, finite element methods, ship structure, vibration analysis

Procedia PDF Downloads 128
28056 Novel Nickel Complex Compound Reactivates the Apoptotic Network, Cell Cycle Arrest and Cytoskeletal Rearrangement in Human Colon and Breast Cancer Cells

Authors: Nima Samie, Batoul Sadat Haerian, Sekaran Muniandy, M. S. Kanthimathi

Abstract:

Colon and breast cancers are categorized as the most prevalent types of cancer worldwide. Recently, the broad clinical application of metal complex compounds has led to the discovery of potential therapeutic drugs. The aim of this study was to evaluate the cytotoxic action of a selected nickel complex compound (NCC) against human colon and breast cancer cells. In this context, we determined the potency of the compound in the induction of apoptosis, cell cycle arrest, and cytoskeleton rearrangement. HT-29, WiDr, CCD-18Co, MCF-7 and Hs 190.T cell lines were used to determine the IC50 of the compound using the MTT assay. Analysis of apoptosis was carried out using immunofluorescence, acridine orange/propidium iodide double staining, the Annexin-V-FITC assay, evaluation of the translocation of NF-kB, oxygen radical antioxidant capacity, quenching of reactive oxygen species content, measurement of LDH release, caspase-3/-7, -8 and -9 assays, and western blotting. Cell cycle arrest was examined using flow cytometry, and gene expression was assessed using a qPCR array. Results showed that the nickel complex compound displayed a potent suppressive effect on HT-29, WiDr, MCF-7 and Hs 190.T cells after 24 h of treatment, with IC50 values of 2.02±0.54, 2.13±0.65, 3.76±0.15 and 3.14±0.45 µM, respectively. The cytotoxic effect on normal cells was insignificant. A drop in the mitochondrial membrane potential and increased release of cytochrome c from the mitochondria indicated induction of the intrinsic apoptosis pathway by the nickel complex compound. Activation of this pathway was further evidenced by significant activation of caspases 9 and 3/7. The NCC was also shown to activate the extrinsic pathway of apoptosis via activation of caspase-8, which is linked to the suppression of NF-kB translocation to the nucleus. Cell cycle arrest in the G1 phase and up-regulation of glutathione reductase, driven by excessive ROS production, were also observed. The results of this study suggest that the nickel complex compound is a potent anti-cancer agent inducing both the intrinsic and extrinsic pathways of apoptosis as well as cell cycle arrest in colon and breast cancer cells.

Keywords: nickel complex, apoptosis, cytoskeletal rearrangement, colon cancer, breast cancer

Procedia PDF Downloads 308
28055 The Use of Network Tools for Brain Signal Data Analysis: A Case Study with Blind and Sighted Individuals

Authors: Cleiton Pons Ferreira, Diana Francisca Adamatti

Abstract:

Advancements in computer technology have made it possible to obtain information for research in biology and neuroscience. To work with the data from these studies, networks have long been used to represent important biological processes, and the use of these tools has shifted from purely illustrative and didactic to more analytic, even including interaction analysis and hypothesis formulation. Many studies have involved this application, but not directly for the interpretation of data obtained from brain functions, which calls for new development perspectives in neuroinformatics using existing tool models already disseminated in bioinformatics. This study includes an analysis of neurological data through electroencephalogram (EEG) signals, using Cytoscape, an open source software tool for visualizing complex networks in biological databases. The data were obtained from a comparative case study developed in research at the University of Rio Grande (FURG), using EEG signals from a Brain Computer Interface (BCI) with 32 electrodes, recorded from a blind and a sighted individual during the execution of an activity that stimulated spatial ability. This study intends to present results that lead to better ways to use and adapt techniques supporting the treatment of brain signal data, in order to elevate understanding and learning in neuroscience.
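
A common recipe for turning multichannel EEG into a network that Cytoscape can visualize is to threshold a channel-correlation matrix; the sketch below uses synthetic signals and an arbitrary threshold, so it illustrates the workflow rather than the study's actual pipeline.

```python
# Build a channel-correlation network from EEG-like signals and write an
# edge list that Cytoscape can import. Signals here are synthetic.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n_channels, n_samples = 32, 5000           # 32 electrodes, as in the study
base = rng.normal(size=n_samples)          # shared component (synthetic)
signals = 0.8 * base + rng.normal(size=(n_channels, n_samples))

corr = np.corrcoef(signals)                # channel-by-channel correlations
G = nx.Graph()
for i in range(n_channels):
    for j in range(i + 1, n_channels):
        if abs(corr[i, j]) > 0.3:          # illustrative threshold
            G.add_edge(f"ch{i}", f"ch{j}", weight=float(corr[i, j]))

nx.write_weighted_edgelist(G, "eeg_network.txt")  # import into Cytoscape
```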

Keywords: neuroinformatics, bioinformatics, network tools, brain mapping

Procedia PDF Downloads 171
28054 Machine Learning and Internet of Things for Smart-Hydrology of the Mantaro River Basin

Authors: Julio Jesus Salazar, Julio Jesus De Lama

Abstract:

The fundamental objective of hydrological studies applied to the engineering field is to determine the statistically consistent volumes or water flows that, in each case, allow us to size or design a series of elements or structures to effectively manage and develop a river basin. To determine these values, there are several ways of working within the framework of traditional hydrology: (1) study each of the factors that influence the hydrological cycle, (2) study the historical behavior of the hydrology of the area, (3) study the historical behavior of hydrologically similar zones, and (4) other studies (rain simulators or experimental basins). Of course, this range of studies in a given basin is varied and complex, and it presents the difficulty of collecting the data in real time. In this complex space, the study of variables can only be mastered by collecting and transmitting data to decision centers through the Internet of Things and artificial intelligence. Thus, this research work implemented the learning project of the sub-basin of the Shullcas river in the Andean basin of the Mantaro river in Peru. The sensor firmware to collect and communicate hydrological parameter data was programmed and tested in similar basins of the European Union. The machine learning applications were programmed to choose the algorithms that lead to the best solution for determining the rainfall-runoff relationship captured in the different polygons of the sub-basin. Tests were carried out in the mountains of Europe and in the sub-basins of the Shullcas river (Huancayo) and the Yauli river (Jauja), at heights close to 5000 m a.s.l., giving the following conclusions: to guarantee correct communication, the distance between devices should not exceed 15 km. To minimize the energy consumption of the devices and avoid collisions between packets, distances should range between 5 and 10 km; in this way the transmission power can be reduced and a higher bitrate can be used. In case the communication elements of the network devices (Internet of Things) installed in the basin do not have good visibility between them, the distance should be reduced to the range of 1-3 km. The energy efficiency of the Atmel microcontrollers present in Arduino is not adequate to meet the requirements of system autonomy. To increase the autonomy of the system, it is recommended to use low-consumption systems, such as ARM ultra-low-power microcontrollers (e.g., the Cortex-M series), and high-performance direct current (DC) to direct current (DC) converters. The machine learning system has initiated the learning of the Shullcas system to generate the best hydrology of the sub-basin. This will improve continuously as the machine learning models and the data entering the big data store converge, second by second. This will provide services to each of the applications of the complex system to return the best data on the determined flows.

Keywords: hydrology, internet of things, machine learning, river basin

Procedia PDF Downloads 155
28053 Significance of Transient Data and Its Applications in Turbine Generators

Authors: Chandra Gupt Porwal, Preeti C. Porwal

Abstract:

Transient data reveals much about a machine's condition that steady-state data cannot. New technologies make this information much more available for evaluating the mechanical integrity of a machine train. Recent surveys at various stations indicate that simplicity is preferred over completeness in machine audits throughout the power generation industry. This is most clearly shown by the number of rotating machinery predictive maintenance programs in which only steady-state vibration amplitude is trended, while important transient vibration data is not even acquired. Efforts have been made to explain what transient data is, its importance, the types of plots used for its display, and its effective utilization for analysis. To demonstrate the value of measuring transient data and its practical application in rotating machinery for resolving complex and persistent issues with turbine generators, the author presents a few case studies experienced at Indian power plants: rotor instabilities due to the shaft moving towards the bearing centre in a 100 MW LMZ unit located in the Northern Capital Region (NCR); heavy misalignment, noticed especially above 2993 rpm and caused by loose coupling bolts, which prevented the machine from being synchronized for more than four months in a 250 MW KWU unit in the Western Region (WR); and heavy preload at the intermediate pressure turbine (IPT) bearing near the HP-IP coupling, caused by high points on the coupling faces, at a 500 MW KWU unit in the Northern Region (NR).

Keywords: transient data, steady-state data, intermediate-pressure turbine, high points

Procedia PDF Downloads 61
28052 Project-Based Learning in Engineering Education

Authors: M. Greeshma, V. Ashvini, P. Jayarekha

Abstract:

Project based learning (PBL) is a student-driven educational framework that offers the student an opportunity for in-depth investigation of courses. This paper presents the need for PBL in engineering education so that students graduate with the capacity to design and implement solutions to complex problems. The implementation strategy of PBL and its related challenges are presented. A case study that energizes the engineering curriculum with relevance to the real world of technology, along with its benefits to the students, is also included.

Keywords: PBL, engineering education, curriculum, implement complex

Procedia PDF Downloads 468
28051 Failure to Replicate the Unconscious Thought Advantages

Authors: Vladimíra Čavojová, Eva Ballová Mikušková

Abstract:

In this study, we tried to replicate the unconscious thought advantage (UTA), which holds that complex decisions are better handled by unconscious thinking. We designed an experiment in E-Prime using material similar to the original study (choosing between four different apartments, each described by 12 attributes). A total of 73 participants (52 women, 71.2%; aged 18 to 62: M=24.63, SD=8.7) took part in the experiment. We did not replicate the results predicted by unconscious thought theory (UTT). However, from the present study we cannot conclude whether this was due to flaws in the theory or flaws in our experiment, and we discuss several ways in which the issue of the UTA could be examined further.

Keywords: decision making, unconscious thoughts, UTT, complex decisions

Procedia PDF Downloads 304
28050 Different Tools and Complex Approach for Improving Phytoremediation Technology

Authors: T. Varazi, M. Pruidze, M. Kurashvili, N. Gagelidze, M. Sutton

Abstract:

The complex phytoremediation approach presented in this work implies the joint application of natural sorbents, microorganisms, natural biosurfactants, and plants. The approach is based on using natural mineral composites, microorganism strains with high detoxification abilities, phytoremediator plants, and natural biosurfactants for enhancing the uptake of pollutant intermediates by plant roots. In this complex phytoremediation strategy, the sorbent serves to take up and trap the pollutants and thus restrain their emission into the environment. The role of microorganisms is to accomplish the first-stage biodegradation of organic contaminants. This is followed by application of a phytoremediation technology through purposeful planting of selected plants. Thus, the use of different tools will provide restoration of the polluted environment and prevent the dissemination of toxic compounds from hotbeds of pollution for a considerable length of time. The main idea and novelty of this work is the development of a new approach to ecological safety. A wide spectrum of contaminants has been used in this work: the organochlorine pesticide DDT, the heavy metal Cu, an oil hydrocarbon (hexadecane), and wax. The presented complex biotechnology is important from the viewpoint of prevention, providing total rehabilitation of soil. It is suited to chemical pollutants, ecologically friendly, and provides control of soil erosion.

Keywords: bioremediation, phytoremediation, pollutants, soil contamination

Procedia PDF Downloads 290
28049 Performance Analysis of Scalable Secure Multicasting in Social Networking

Authors: R. Venkatesan, A. Sabari

Abstract:

Developments in the social networking Internet scenario call for a scalable, authenticated, and secure group communication model such as multicasting. Multicasting is an inter-network service that offers efficient delivery of data from a source to multiple destinations. Even though multicast has been very successful at providing an efficient, best-effort data delivery service for huge groups, extending other features to multicast in a scalable way has proven to be a complex process. Separately, the requirement for securing electronic information has become gradually more apparent. Since multicast applications are deployed for mainstream purposes, the need to secure multicast communications will become significant.

Keywords: multicasting, scalability, security, social network

Procedia PDF Downloads 288
28048 Clustering and Modelling Electricity Conductors from 3D Point Clouds in Complex Real-World Environments

Authors: Rahul Paul, Peter Mctaggart, Luke Skinner

Abstract:

Maintaining public safety and network reliability are the core objectives of all electricity distributors globally. For many electricity distributors, managing vegetation clearances from their above-ground assets (poles and conductors) is the most important and costly risk mitigation control employed to meet these objectives. Light Detection and Ranging (LiDAR) is widely used by utilities as a cost-effective method to inspect their spatially-distributed assets at scale, often captured using high powered LiDAR scanners attached to fixed wing or rotary aircraft. The resulting 3D point cloud model is used by these utilities to perform engineering-grade measurements that guide the prioritisation of vegetation cutting programs. Advances in computer vision and machine learning approaches are increasingly applied to increase automation and reduce inspection costs and time; however, real-world LiDAR capture variables (e.g., aircraft speed and height) create complexity, noise, and missing data, reducing the effectiveness of these approaches. This paper proposes a method for identifying each conductor from LiDAR data via clustering methods that can precisely reconstruct conductors in complex real-world configurations in the presence of high levels of noise. It proposes 3D catenary models for individual clusters, fitted to the captured LiDAR data points using a least squares method. An iterative learning process is used to identify potential conductor models between pole pairs. The proposed method identifies the optimum parameters of the catenary function and then fits the LiDAR points to reconstruct the conductors.
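
As an illustration of the catenary-fitting step (assuming points have already been clustered per conductor and projected onto a vertical plane; the numbers are synthetic), a least squares fit might look like:

```python
# Fit a catenary z = z0 + a*(cosh((x - x0)/a) - 1) to one conductor's
# projected LiDAR points. Data below are synthetic with added noise.
import numpy as np
from scipy.optimize import least_squares

def catenary(params, x):
    a, x0, z0 = params                     # a: catenary parameter [m]
    return z0 + a * (np.cosh((x - x0) / a) - 1.0)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 120.0, 200)           # position along the span [m]
z = catenary([80.0, 60.0, 15.0], x) + rng.normal(0, 0.05, x.size)

fit = least_squares(lambda p: catenary(p, x) - z, x0=[50.0, 50.0, 10.0])
a, x0, z0 = fit.x
print(f"a={a:.1f} m, lowest point at x={x0:.1f} m, z={z0:.2f} m")
```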

Keywords: point cloud, LiDAR data, machine learning, computer vision, catenary curve, vegetation management, utility industry

Procedia PDF Downloads 94
28047 Structuring and Visualizing Healthcare Claims Data Using Systems Architecture Methodology

Authors: Inas S. Khayal, Weiping Zhou, Jonathan Skinner

Abstract:

Healthcare delivery systems around the world are in crisis. The need to improve health outcomes while decreasing healthcare costs has led to an imminent call to action to transform the healthcare delivery system. While bioinformatics and biomedical engineering have primarily focused on biological-level data and biomedical technology, there is clear evidence of the importance of the delivery of care to patient outcomes. Classic singular decomposition approaches from reductionist science are not capable of explaining complex systems. Approaches and methods from systems science and systems engineering are utilized to structure healthcare delivery system data. Specifically, systems architecture is used to develop a multi-scale and multi-dimensional characterization of the healthcare delivery system, defined here as the Healthcare Delivery System Knowledge Base. This paper is the first to contribute a new method of structuring and visualizing a multi-dimensional and multi-scale healthcare delivery system using systems architecture in order to better understand healthcare delivery.

Keywords: health informatics, systems thinking, systems architecture, healthcare delivery system, data analytics

Procedia PDF Downloads 345
28046 Encapsulation of Probiotic Bacteria in Complex Coacervates

Authors: L. A. Bosnea, T. Moschakis, C. Biliaderis

Abstract:

Two probiotic strains of Lactobacillus paracasei subsp. paracasei (E6) and Lactobacillus paraplantarum (B1), isolated from traditional Greek dairy products, were microencapsulated by complex coacervation using whey protein isolate (WPI, 3% w/v) and gum arabic (GA, 3% w/v) solutions mixed at different polymer ratios (1:1, 2:1 and 4:1). The effect of total biopolymer concentration on cell viability was assessed using WPI and GA solutions of 1, 3 and 6% w/v at a constant ratio of 2:1. Several parameters were also examined to optimize microcapsule formation, such as inoculum concentration and the effect of ionic strength. The viability of the bacterial cells during heat treatment and under simulated gut conditions was also evaluated. Among the different WPI/GA weight ratios tested (1:1, 2:1, and 4:1), the highest survival rate was observed for the coacervate structures made with the 2:1 ratio. The protection efficiency at low pH values is influenced by both the concentration and the ratio of the added biopolymers. Moreover, the inoculum concentration seems to affect the efficiency of the microcapsules in entrapping the bacterial cells, since an optimum level was noted below 8 log cfu/ml. Generally, entrapment of lactobacilli in the complex coacervate structure enhanced the viability of the microorganisms when exposed to a low pH environment (pH 2.0). Both encapsulated strains retained high viability in simulated gastric juice (>73%), especially in comparison with non-encapsulated (free) cells (<19%). The encapsulated lactobacilli also exhibited enhanced viability after 10-30 min of heat treatment (65°C), as well as at different NaCl concentrations (pH 4.0). Overall, the results of this study suggest that complex coacervation with WPI/GA has the potential to deliver live probiotics in low pH food systems and fermented dairy products; the complexes can dissolve at pH 7.0 (gut environment), releasing the microbial cells.

Keywords: probiotic, complex coacervation, whey, encapsulation

Procedia PDF Downloads 292
28045 Integration of Resistivity and Seismic Refraction Using Combined Inversion for Ancient River Findings at Sungai Batu, Lembah Bujang, Malaysia

Authors: Rais Yusoh, Rosli Saad, Mokhtar Saidin, Fauzi Andika, Sabiu Bala Muhammad

Abstract:

Resistivity and seismic refraction profiling have become common methods in pre-investigations for visualizing subsurface structure. The integration of the two methods can reduce interpretation ambiguity. Both methods have their own software packages for data inversion, but the potential to combine certain geophysical methods is restricted; however, research algorithms that have this functionality do exist and were evaluated in this work. The subsurface interpretation was improved by combining the inversions from both methods, letting each model influence the other through closure coupling; by implementing both methods to support each other, the subsurface interpretation can be improved. These methods were applied to a field dataset from an archaeological pre-investigation aimed at finding the ancient river. Combining the data inversions produced no major changes in the inverted model in this case, probably due to the complex geology. The combined data analysis nevertheless provides an additional interpretation technique for features such as alluvium, which can have a strong influence on the ancient river findings.

Keywords: ancient river, combined inversion, resistivity, seismic refraction

Procedia PDF Downloads 325
28044 In vivo Mechanical Characterization of Facial Skin Combining Digital Image Correlation and Finite Element Modeling

Authors: Huixin Wei, Shibin Wang, Linan Li, Lei Zhou, Xinhao Tu

Abstract:

Facial skin is a biomedical material with complex mechanical properties: anisotropy, viscoelasticity, and hyperelasticity. The mechanical properties of facial skin are crucial for a number of applications, including facial plastic surgery, animation, dermatology, the cosmetic industry, and impact biomechanics. Skin is a complex multi-layered material which can be broadly divided into three main layers: the epidermis, the dermis, and the hypodermis. Collagen fibers account for 75% of the dry weight of dermal tissue, and it is these fibers which are responsible for the mechanical properties of skin. Most research on the anisotropic mechanical properties has concentrated on in vitro measurements, but the mechanical properties of skin differ greatly between in vivo and in vitro conditions. In this study, we present a method to measure the mechanical properties of facial skin in vivo. Digital image correlation (DIC) and indentation tests were used to obtain the experimental data, including the deformation of the facial surface and the indentation force-displacement curve. The experiment was then simulated using a finite element (FE) model. Computed tomography (CT) and reconstruction techniques were applied to obtain the real tissue geometry. A three-dimensional FE model of facial skin, comprising a bi-layer system, was obtained. As the epidermis is relatively thin, the epidermis and dermis were regarded as one layer in this study, with the hypodermis below it. The upper layer was modeled with a Gasser-Ogden-Holzapfel (GOH) model to describe the hyperelastic and anisotropic behaviors of the dermis. The lower layer was modeled as linear elastic. In conclusion, the material properties of the two layers were determined by minimizing the error between the FE data and the experimental data.
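
For reference, the GOH strain-energy density for fiber-reinforced soft tissue is commonly written (following Gasser, Ogden and Holzapfel, 2006; the abstract does not state the exact parameterization used) as

```latex
\Psi = \frac{\mu}{2}\left(\bar{I}_1 - 3\right)
     + \frac{k_1}{2k_2}\sum_{i=4,6}\left[\exp\!\left(k_2 \bar{E}_i^{\,2}\right) - 1\right],
\qquad
\bar{E}_i = \kappa\left(\bar{I}_1 - 3\right) + (1 - 3\kappa)\left(\bar{I}_i - 1\right),
```

where μ is the ground-matrix shear modulus, k1 and k2 control fiber stiffening, κ ∈ [0, 1/3] is the fiber dispersion parameter, and Ī1, Ī4, Ī6 are invariants of the modified right Cauchy-Green tensor.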

Keywords: facial skin, indentation test, finite element, digital image correlation, computed tomography

Procedia PDF Downloads 106
28043 Risks beyond Cyber in IoT Infrastructure and Services

Authors: Mattias Bergstrom

Abstract:

Significance of the Study: This research will provide new insights into the risks associated with digital embedded infrastructure. Through this research, we analyze each risk and its potential negation strategies, especially for AI and autonomous automation. Moreover, the analysis presented in this paper conveys valuable information for future research toward more stable, secure, and efficient autonomous systems. To learn and understand the risks, a large IoT system was envisioned, and risks involving hardware, tampering, and cyberattacks were collected, researched, and evaluated to create a comprehensive understanding of the potential risks. Potential solutions were then evaluated on an open source IoT hardware setup. The following are the identified passive and active risks evaluated in the research. Passive risks: (1) Hardware failures: critical systems relying on high-rate data and data quality are growing; SCADA systems for infrastructure are good examples of such systems. (2) Hardware delivering erroneous data: sensors break, and when they do, they don't always go silent; they can keep going, except that the data they deliver is garbage, and if that data is not filtered out, it becomes disruptive noise in the system. (3) Bad hardware injection: erroneous sensor data can be pumped into a system by malicious actors with the intent of creating disruptive noise in critical systems. (4) Data gravity: the weight of the data collected affects data mobility. (5) Cost inhibitors: running services that need huge centralized computing is cost-inhibiting; large, complex AI can be extremely expensive to run. Active risks: Denial of service: one of the simplest attacks, where an attacker overloads the system with bogus requests so that valid requests disappear in the noise. Malware: anything from simple viruses to complex botnets created with specific goals, where the creator steals computing power and bandwidth from you to attack someone else. Ransomware: a kind of malware, but so different in its implementation that it deserves its own mention; the goal of this software is to encrypt your system so that it can only be unlocked with a key that is held for ransom. DNS spoofing: by spoofing DNS calls, valid requests and data dumps can be sent to bad destinations, where the data can be extracted for extortion, or corrupted and re-injected into a running system, creating a data-echo noise loop. After testing multiple potential solutions, we found that the most prominent solution to these risks was to use a peer-to-peer consensus algorithm over a blockchain to validate the data and behavior of the devices (sensors, storage, and computing) in the system. With the devices autonomously policing themselves for deviant behavior, all the risks listed above can be negated. In conclusion, an Internet middleware that provides these features would be an easy and secure solution for any future autonomous IoT deployments, as it provides separation from the open Internet while remaining accessible over the blockchain keys.

Keywords: IoT, security, infrastructure, SCADA, blockchain, AI

Procedia PDF Downloads 102
28042 Estimating Groundwater Seepage Rates: Case Study at Zegveld, Netherlands

Authors: Wondmyibza Tsegaye Bayou, Johannes C. Nonner, Joost Heijkers

Abstract:

This study aimed to identify and estimate dynamic groundwater seepage rates using four comparative methods: the Darcian approach, the water balance approach, the tracer method, and modeling. The theoretical background of these methods is brought together in this study. The methodology was applied to a case study area at Zegveld following the advice of the Water Board Stichtse Rijnlanden. Data were collected from various offices and a field campaign in the winter of 2008/09. In the complex confining layer of the study area, the phreatic groundwater table lies at shallow depth, above the piezometric water level. Data were available for the model years 1989 to 2000 and winter 2008/09. The higher groundwater table produces predominantly downward seepage in the study area. Results of the study indicated that net recharge to the groundwater table (precipitation excess) and the ditch system are the principal sources of seepage across the complex confining layer. Especially in the summer season, the contribution from the ditches is significant. Water is supplied from the River Meije through a pumping system to meet the ditches' water demand. The groundwater seepage rate was distributed unevenly throughout the study area, averaging 0.60 mm/day at the nature reserve for the model years 1989 to 2000 and 0.70 mm/day for winter 2008/09. Due to data restrictions, the seepage rates were mainly determined with the Darcian method. Furthermore, the water balance approach and the tracer method were applied to compute the flow exchange within the ditch system. The site had various validated sources of groundwater level and vertical flow resistance data. The phreatic groundwater level map, compared with TNO-DINO groundwater level data, overestimated the groundwater level depth by 28 cm. The hydraulic resistance values obtained from the 3D geological map agreed with the TNO-DINO data and with the model values before calibration. On the other hand, the calibrated model significantly underestimated the downward seepage in the area compared with the field-based computations following the Darcian approach.
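
The Darcian estimate behind these figures is simply the head difference across the confining layer divided by its hydraulic (vertical flow) resistance; a small sketch with illustrative numbers, not the Zegveld field data:

```python
# Darcian vertical seepage through a confining layer:
#   q = (h_phreatic - h_piezometric) / c,  with c in days.
h_phreatic = -1.80     # phreatic water table [m relative to datum], illustrative
h_piezometric = -2.05  # piezometric head below the layer [m], illustrative
c = 400.0              # hydraulic resistance of the confining layer [days]

q = (h_phreatic - h_piezometric) / c       # positive = downward seepage [m/day]
print(f"seepage ≈ {q * 1000:.2f} mm/day")  # ≈ 0.63 mm/day here
```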

Keywords: groundwater seepage, phreatic water table, piezometric water level, nature reserve, Zegveld, The Netherlands

Procedia PDF Downloads 80
28041 ROOP: Translating Sequential Code Fragments to Distributed Code Fragments Using Deep Reinforcement Learning

Authors: Arun Sanjel, Greg Speegle

Abstract:

Every second, massive amounts of data are generated, and Data Intensive Scalable Computing (DISC) frameworks have evolved into effective tools for analyzing such massive amounts of data. Since the underlying architecture of these distributed computing platforms is often new to users, building a DISC application can be time-consuming and prone to errors. The automated conversion of a sequential program to a DISC program would consequently improve productivity significantly. However, synthesizing a user's intended program from an input specification is complex, with several important applications, such as distributed program synthesis and code refactoring. Existing works such as Tyro and Casper rely entirely on deductive synthesis techniques or similar program synthesis approaches. Our approach is to develop a data-driven synthesis technique to identify sequential components and translate them to equivalent distributed operations. We emphasize using reinforcement learning and unit testing as feedback mechanisms to achieve our objectives.
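
As a caricature of the unit-test feedback loop (the helper names below are hypothetical stand-ins, not ROOP's actual API), an epsilon-greedy selection over candidate distributed rewrites might look like:

```python
# Sketch: unit-test pass rate as a reinforcement-learning reward for choosing
# among candidate distributed rewrites. All names here are hypothetical.
import random

def run_unit_tests(rewrite):
    """Stand-in: return the fraction of unit tests the rewrite passes."""
    return random.random()

def select_rewrite(candidates, episodes=200, epsilon=0.1):
    value = {c: 0.0 for c in candidates}     # running reward estimates
    count = {c: 0 for c in candidates}
    for _ in range(episodes):
        if random.random() < epsilon:        # explore
            choice = random.choice(candidates)
        else:                                # exploit the best so far
            choice = max(value, key=value.get)
        reward = run_unit_tests(choice)      # pass rate drives learning
        count[choice] += 1
        value[choice] += (reward - value[choice]) / count[choice]
    return max(value, key=value.get)

best = select_rewrite(["map_reduce_v1", "map_reduce_v2", "spark_join_v1"])
```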

Keywords: program synthesis, distributed computing, reinforcement learning, unit testing, DISC

Procedia PDF Downloads 95
28040 A Fermatean Fuzzy MAIRCA Approach for Maintenance Strategy Selection of Process Plant Gearbox Using Sustainability Criteria

Authors: Soumava Boral, Sanjay K. Chaturvedi, Ian Howard, Kristoffer McKee, V. N. A. Naikan

Abstract:

Given strict government regulations aimed at enhancing sustainability practices in industry, and noting the advances in sustainable manufacturing, it is necessary that the associated processes are also sustainable. Maintenance of large-scale and complex machines is a pivotal task for maintaining an uninterrupted flow of manufacturing processes. Appropriate maintenance practices can prolong the lifetime of machines and prevent breakdowns, which subsequently reduces different cost heads. Selection of the best maintenance strategies for such machines is a burdensome task, as it requires the consideration of multiple technical criteria, complex mathematical calculations, previous fault data, maintenance records, etc. In the era of the fourth industrial revolution, organizations are rapidly changing their way of doing business, giving utmost importance to sensor technologies, artificial intelligence, data analytics, automation, etc. In this work, the effectiveness of several maintenance strategies (e.g., preventive, failure-based, reliability-centered, condition-based, total productive maintenance, etc.) for a large-scale, complex gearbox operating in a steel processing plant is evaluated in terms of economic, social, environmental, and technical criteria. As it is not possible to describe some criteria with exact numerical values, these criteria are evaluated linguistically by cross-functional experts. Fuzzy sets are a potent soft-computing technique that has proven useful for dealing with linguistic data and providing inferences in many complex situations. To prioritize different maintenance practices based on the identified sustainability criteria, multi-criteria decision making (MCDM) approaches are potential tools. Multi-Attributive Ideal Real Comparative Analysis (MAIRCA) is a recent addition to the MCDM family and has proven its superiority over some well-known MCDM approaches, like TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) and ELECTRE (ELimination Et Choix Traduisant la REalité). It has a simple but robust mathematical approach, which is easy to comprehend. On the other hand, due to some inherent drawbacks of Intuitionistic Fuzzy Sets (IFS) and Pythagorean Fuzzy Sets (PFS), the use of Fermatean Fuzzy Sets (FFSs) has recently been proposed. In this work, we propose the novel concept of FF-MAIRCA. We obtain the weights of the criteria by expert evaluation and use them to prioritize the different maintenance practices according to their suitability with the FF-MAIRCA approach. Finally, a sensitivity analysis is carried out to highlight the robustness of the approach.
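
For readers unfamiliar with the notation: a Fermatean fuzzy value pairs a membership degree μ with a non-membership degree ν under the constraint μ³ + ν³ ≤ 1, and a common score function is μ³ − ν³. The sketch below shows only this defuzzification step followed by a weighted aggregation; the ratings and weights are invented, and it is not the full FF-MAIRCA procedure.

```python
# Minimal Fermatean fuzzy scoring sketch (illustrative values only).
import numpy as np

def ff_score(mu, nu):
    # Fermatean fuzzy constraint and score function.
    assert mu**3 + nu**3 <= 1.0 + 1e-9, "not a valid Fermatean fuzzy pair"
    return mu**3 - nu**3

# Rows: maintenance strategies; columns: criteria (e.g., economic, social).
ratings = [[(0.9, 0.3), (0.6, 0.5)],
           [(0.7, 0.4), (0.8, 0.2)]]
weights = np.array([0.6, 0.4])   # expert-derived criteria weights (invented)

scores = np.array([[ff_score(mu, nu) for mu, nu in row] for row in ratings])
print(scores @ weights)          # crisp, weighted score per strategy
```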

Keywords: Fermatean fuzzy sets, Fermatean fuzzy MAIRCA, maintenance strategy selection, sustainable manufacturing, MCDM

Procedia PDF Downloads 136
28039 Logical-Probabilistic Modeling of the Reliability of Complex Systems

Authors: Sergo Tsiramua, Sulkhan Sulkhanishvili, Elisabed Asabashvili, Lazare Kvirtia

Abstract:

The paper presents logical-probabilistic methods, models, and algorithms for the reliability assessment of complex systems, on the basis of which a web application for structural analysis and reliability assessment of systems was created. The reliability assessment process includes the following stages, which are reflected in the application: 1) construction of a graphical scheme of the structural reliability of the system; 2) transformation of the graphical scheme into a logical representation and modeling of the shortest paths of successful functioning of the system; 3) description of the system operability condition with a logical function in disjunctive normal form (DNF); 4) transformation of the DNF into orthogonal disjunctive normal form (ODNF) using the orthogonalization algorithm; 5) replacement of logical elements with probabilistic elements in the ODNF, yielding a reliability estimation polynomial and a quantitative reliability figure; and 6) calculation of the weights of the elements. Using the logical-probabilistic methods, models, and algorithms discussed in the paper, special software was created that produces a quantitative assessment of the reliability of systems with complex structure. As a result, structural analysis of systems and the research and design of systems with optimal structure are carried out.
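
Stages 4 and 5 can be illustrated on a tiny example: for two redundant elements, the operability function x₁ ∨ x₂ orthogonalizes to x₁ ∨ (¬x₁ ∧ x₂); because the terms are now disjoint, probabilities substitute directly, giving R = p₁ + (1 − p₁)p₂. The sketch below checks an ODNF-derived polynomial against brute-force enumeration for an invented three-element system:

```python
# Verify an ODNF-derived reliability polynomial by brute-force enumeration.
# Illustrative system: element 1 in series with the parallel pair (2, 3).
from itertools import product

def works(x1, x2, x3):                 # logical operability function
    return x1 and (x2 or x3)

p = {1: 0.95, 2: 0.90, 3: 0.85}        # element reliabilities (illustrative)

# ODNF: (x1 & x2) OR (x1 & ~x2 & x3) -- disjoint terms, so probabilities add:
R_odnf = p[1] * p[2] + p[1] * (1 - p[2]) * p[3]

# Brute force over all 2^3 element states:
R_enum = sum(
    (p[1] if s[0] else 1 - p[1]) *
    (p[2] if s[1] else 1 - p[2]) *
    (p[3] if s[2] else 1 - p[3])
    for s in product([1, 0], repeat=3) if works(*s)
)
print(R_odnf, R_enum)                  # both ≈ 0.93575
```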

Keywords: complex systems, logical-probabilistic methods, orthogonalization algorithm, reliability, weight of element

Procedia PDF Downloads 66
28038 Structural Equation Modeling Semiparametric Truncated Spline Using Simulation Data

Authors: Adji Achmad Rinaldo Fernandes

Abstract:

SEM analysis is a complex multivariate analysis because it involves a number of exogenous and endogenous variables that are interconnected to form a model. The measurement model is divided into two types: the reflective model (reflecting) and the formative model (forming). Before carrying out further tests on SEM, there are assumptions that must be met, namely the linearity assumption, to determine the form of the relationship. There are three modeling approaches for path analysis: parametric, nonparametric, and semiparametric. The aim of this research is to develop semiparametric SEM and obtain the best model. The data used in the research are secondary data, which serve as the basis for generating simulation data. Simulation data were generated with sample sizes of 100, 300, and 500. In the semiparametric SEM analysis, the forms of relationship studied were linear and quadratic, with one and two knot points and various levels of error variance (EV = 0.5, 1, 5). Three levels of closeness of relationship were used in the analysis of the measurement model: low (0.1-0.3), medium (0.4-0.6), and high (0.7-0.9). The best model was obtained for the linear form of the relationship between X1 and Y1. In the measurement model, a characteristic of the reflective model was observed, namely that the higher the closeness of the relationship, the better the model obtained. The originality of this research is the development of semiparametric SEM, which has not been widely studied by researchers.
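
The truncated spline underlying the semiparametric terms has the form f(x) = β₀ + β₁x + Σₖ βₖ₊₁(x − κₖ)₊. A minimal one-knot sketch on simulated data (values invented, echoing the paper's simulation setup only loosely):

```python
# Truncated linear spline with one knot, fit by ordinary least squares:
#   f(x) = b0 + b1*x + b2*max(x - knot, 0)
import numpy as np

rng = np.random.default_rng(0)
n, knot = 300, 0.5
x = rng.uniform(0, 1, n)
y = 1.0 + 2.0 * x + 3.0 * np.maximum(x - knot, 0.0) + rng.normal(0, 0.1, n)

X = np.column_stack([np.ones(n), x, np.maximum(x - knot, 0.0)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)   # estimates of (b0, b1, b2), close to (1, 2, 3)
```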

Keywords: semiparametric SEM, measurement model, structural model, reflective model, formative model

Procedia PDF Downloads 33
28037 A Systematic Review of the Psychometric Properties of Augmentative and Alternative Communication Assessment Tools in Adolescents with Complex Communication Needs

Authors: Nadwah Onwi, Puspa Maniam, Azmawanie A. Aziz, Fairus Mukhtar, Nor Azrita Mohamed Zin, Nurul Haslina Mohd Zin, Nurul Fatehah Ismail, Mohamad Safwan Yusoff, Susilidianamanalu Abd Rahman, Siti Munirah Harris, Maryam Aizuddin

Abstract:

Objective: Malaysia has a growing number of individuals with complex communication needs (CCN). The initiation of augmentative and alternative communication (AAC) intervention may help individuals with CCN understand and express themselves optimally and actively participate in their daily-life activities. AAC is defined as the multimodal use of communication abilities, allowing individuals to use every mode possible to communicate with others through a set of symbols or systems that may include symbols, aids, techniques, and strategies. It is consequently critical to evaluate the deficits in order to inform treatment for AAC intervention. However, no measurement tools for evaluating users with CCN are available locally. Design: A systematic review (SR) was designed to analyze the psychometric properties of AAC assessments for adolescents with CCN published in peer-reviewed journals. Tools are rated by the methodological quality of the studies and the psychometric measurement qualities of each tool. Method: A literature search was conducted to identify AAC assessment tools with psychometrically robust properties and a sound conceptual framework. Two independent reviewers screened the abstracts and full-text articles and reviewed bibliographies for further references. Data were extracted using standardized forms, and study risk of bias was assessed. Result: The review highlights the psychometric properties of AAC assessment tools that speech-language therapists can apply in the Malaysian context. The work outlines how systematic review methods may be applied to published material, providing valuable data to initiate the development of Malay-language AAC assessment tools. Conclusion: The synthesis of evidence has provided a framework for Malaysian speech-language therapists to make informed decisions on AAC intervention within the standard operating procedures of the Ministry of Health, Malaysia.

Keywords: augmentative and alternative communication, assessment, adolescents, complex communication needs

Procedia PDF Downloads 146
28036 Project Progress Prediction in Software Development Integrating Time Prediction Algorithms and Large Language Modeling

Authors: Dong Wu, Michael Grenn

Abstract:

Managing software projects effectively is crucial for meeting deadlines, ensuring quality, and managing resources well. Traditional methods often struggle to predict project timelines accurately due to uncertain schedules and complex data. This study addresses these challenges by combining time prediction algorithms with Large Language Models (LLMs). It makes use of real-world software project data to construct and validate a model. The model takes detailed project progress data, such as task completion dynamics, team interaction, and development metrics, as its input and outputs predictions of project timelines. To evaluate the effectiveness of this model, a comprehensive methodology is employed, involving simulations and practical applications in a variety of real-world software project scenarios. This multifaceted evaluation strategy is designed to validate the model's significant role in enhancing forecast accuracy and elevating overall management efficiency, particularly in complex software project environments. The results indicate that the integration of time prediction algorithms with LLMs has the potential to optimize software project progress management, and the quantitative results suggest the effectiveness of the method in practical applications. In conclusion, this study demonstrates that integrating time prediction algorithms with LLMs can significantly improve the predictive accuracy and efficiency of software project management. This offers an advanced project management tool for the industry, with the potential to improve operational efficiency, optimize resource allocation, and ensure timely project completion.
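
The abstract does not give the exact architecture; one plausible reading of the integration is a regression that augments numeric progress features with an LLM-derived risk score. In the sketch below, the `llm_risk_score` column is assumed to be precomputed elsewhere, and all values are invented.

```python
# Sketch: blend numeric progress features with an (assumed precomputed)
# LLM-derived risk score to predict remaining project days. Data invented.
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.DataFrame({
    "tasks_done_per_week": [4, 7, 3, 6, 5, 8],
    "open_defects":        [12, 5, 20, 8, 10, 3],
    "llm_risk_score":      [0.7, 0.2, 0.9, 0.4, 0.5, 0.1],  # from LLM analysis
    "days_to_completion":  [60, 25, 90, 35, 45, 18],
})

X = df[["tasks_done_per_week", "open_defects", "llm_risk_score"]]
model = LinearRegression().fit(X, df["days_to_completion"])
print(model.predict(X.iloc[[0]]))  # timeline forecast for one project snapshot
```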

Keywords: software project management, time prediction algorithms, large language models (LLMs), forecast accuracy, project progress prediction

Procedia PDF Downloads 71
28035 Geomagnetic Jerks Observed in Geomagnetic Observatory Data Over Southern Africa Between 2017 and 2023

Authors: Sanele Lionel Khanyile, Emmanuel Nahayo

Abstract:

Geomagnetic jerks are abrupt changes observed in the second derivative of the main magnetic field that occur on annual to decadal timescales. Understanding these jerks is crucial, as they provide valuable insights into the complex dynamics of the Earth's outer liquid core. In this study, we investigate the occurrence of geomagnetic jerks in data collected at the southern African magnetic observatories of Hermanus (HER), Tsumeb (TSU), Hartebeesthoek (HBK), and Keetmanshoop (KMH) between 2017 and 2023. The observatory data were processed and analyzed by retaining quiet night-time data recorded during quiet geomagnetic activity, identified with the help of the Kp, Dst, and ring current (RC) indices. Results confirm the occurrence of the 2019-2020 geomagnetic jerk in the region and identify the recent 2021 jerk, detected as V-shaped secular variation changes in the X and Z components at all four observatories. The highest estimated 2021 jerk secular acceleration amplitudes in the X and Z components were found at HBK: 12.7 nT/year² and 19.1 nT/year², respectively. Notably, the global CHAOS-7 model aptly identifies this 2021 jerk in the Z component at all magnetic observatories in the region.
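
Secular variation and secular acceleration are the first and second time derivatives of the field; on monthly-mean observatory series they are typically approximated by annual differences, as in this minimal sketch with synthetic values:

```python
# Estimate secular variation (SV) and secular acceleration (SA) of one field
# component from monthly means via annual differences. B is synthetic [nT].
import numpy as np

months = np.arange(120)                                   # ten years of data
B = 28000 + 30 * (months / 12) + 5 * np.sin(months / 6)   # fake X component

sv = B[12:] - B[:-12]        # nT/year: first annual differences
sa = sv[12:] - sv[:-12]      # nT/year^2: second differences
print(sv[:3], sa[:3])        # a jerk appears as an abrupt change of slope in SV
```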

Keywords: geomagnetic jerks, secular variation, magnetic observatory data, South Atlantic Anomaly

Procedia PDF Downloads 66