Search results for: Hidden Markov Chain
1351 Gender Based Variability Time Series Complexity Analysis
Authors: Ramesh K. Sunkaria, Puneeta Marwaha
Abstract:
Nonlinear methods of heart rate variability (HRV) analysis are becoming more popular. It has been observed that complexity measures quantify the regularity and uncertainty of cardiovascular RR-interval time series. In the present work, SampEn has been evaluated in healthy Normal Sinus Rhythm (NSR) male and female subjects for different data lengths and tolerance levels r. It is demonstrated that SampEn is small for higher values of tolerance r. Also, the SampEn value of the healthy female group is higher than that of the healthy male group for short data lengths; with increasing data length the two groups overlap and become difficult to distinguish. SampEn gives inaccurate results by assigning a higher value to the female group, because male subjects have a more complex HRV pattern than female subjects. This traditional algorithm therefore exhibits higher complexity for healthy female subjects than for healthy male subjects, which is a misleading observation. This may be due to the fact that SampEn does not account for the multiple time scales inherent in physiologic time series, so the hidden spatial and temporal fluctuations remain unexplored.
Keywords: heart rate variability, normal sinus rhythm group, RR interval time series, sample entropy
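As a concrete companion to the abstract, the following is a minimal sketch of the standard SampEn(m, r) computation on a synthetic RR-interval series; the series, m = 2, and r = 0.2 × SD are common illustrative defaults, not the study's data or code.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r) of a 1-D series; r is given as a fraction of the SD."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    n = len(x)

    def count_matches(length):
        # n - m overlapping templates, so both template lengths are
        # counted over the same number of vectors (the usual convention).
        templates = np.array([x[i:i + length] for i in range(n - m)])
        count = 0
        for i in range(len(templates) - 1):
            # Chebyshev distance between template i and all later ones,
            # which automatically excludes self-matches.
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist <= tol)
        return count

    b = count_matches(m)        # template matches of length m
    a = count_matches(m + 1)    # template matches of length m + 1
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

# Example on a synthetic RR-interval series (seconds).
rng = np.random.default_rng(0)
rr = 0.8 + 0.05 * rng.standard_normal(300)
print(sample_entropy(rr, m=2, r=0.2))
```

Shorter series leave fewer templates to match, which is one reason the abstract's short-data-length comparisons behave differently from the long ones.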
Procedia PDF Downloads 282
1350 Complex Dynamics in a Morphologically Heterogeneous Biological Medium
Authors: Turky Al-Qahtani, Roustem Miftahof
Abstract:
Introduction: Under common assumptions of excitability, morphological (cellular) homogeneity, and spatial structural anomalies added as required, it has been shown that biological systems are able to display travelling wave dynamics. Not being self-sustainable, their existence depends on the electrophysiological state of transmembrane ion channels and requires an extrinsic/intrinsic periodic source. However, organs in the body are highly multicellular and heterogeneous, and their functionality is the outcome of electro-mechanical conjugation, rather than excitability only. Thus, peristalsis in the gut relies on spatiotemporal myoelectrical pattern formation between the mechanical component, represented by smooth muscle cells (SM), and the control component, comprised of a chain of primary sensory and motor neurones. Synaptically linked through the afferent and efferent pathways, they form a functional unit (FU) of the gut. Aims: These are: i) to study numerically the complex dynamics, and ii) to investigate the possibility of self-sustained myoelectrical activity in the FU. Methods: The FU recreates the following sequence of physiological events: deformation of mechanoreceptors located in SM; generation and propagation of electrical waves of depolarisation - spikes - along the axon to the soma of the primary neurone; discharge of the primary neurone and spike propagation towards the motor neurone; burst of the motor neurone and transduction of spikes to SM, subsequently producing forces of contraction. These are governed by a system of nonlinear partial and ordinary differential equations, a modified version of the Hodgkin-Huxley model combined with SM fibre mechanics. In numerical experiments, the source of excitation is mechanical stretching of SM at a fixed amplitude and variable frequencies. Results: Low frequency (0.5 < v < 2 Hz) stimuli cause the propagation of spikes in the neuronal chain and, finally, the generation of active forces by SM. However, the induced contractions are not sufficient to initiate travelling wave dynamics in the control system. At frequencies 2 < v < 4 Hz, multiple low amplitude and short-lasting contractions are observed in SM after the termination of stretching. For frequencies 0.5 < v < 4 Hz, the primary sensory and motor neurones demonstrate strong connectivity and coherent electrical activity. Significant qualitative and quantitative changes in the dynamics of myoelectrical patterns, with a transition to a self-organised mode, are recorded with a high degree of stretch at v = 4.5 Hz. Increased rates of deformation lead to the production of high amplitude signals at the mechanoreceptors with subsequent self-sustained excitation within the neuronal chain. Remarkably, the connection between neurones weakens, resulting in incoherent firing. A further increase in the frequency of stimulation (v > 4.5 Hz) has a detrimental effect on the system. The mechanical and control systems become disconnected and exhibit uncoordinated electromechanical activity. Conclusion: To our knowledge, the existence of periodic activity in a multicellular, functionally heterogeneous biological system with mechano-electrical dynamics, such as the FU, has been demonstrated for the first time. These findings support the notion of possible peristalsis in the gut even in the absence of intrinsic sources - pacemaker cells. The results could be implicated in the pathogenesis of intestinal dysrhythmia, a medical condition associated with motor dysfunction.
Keywords: complex dynamics, functional unit, the gut, dysrhythmia
Procedia PDF Downloads 204
1349 Detection of Polymorphism of Growth Hormone Gene in Holstein Cattle
Authors: Emine Şahin, Murat Soner Balcıoğlu
Abstract:
The aim of this study was to determine growth hormone (bGH) gene polymorphism in Holstein cattle raised around Antalya in Turkey. In order to determine the bGH-AluI polymorphism, the polymerase chain reaction - restriction fragment length polymorphism (PCR-RFLP) method was performed. An 891 bp fragment of bGH was amplified, and two alleles, C and D, were observed for bGH. In this study, the frequencies of the C and D alleles were 0.8438 and 0.1562, respectively. The genotype frequencies for CC, CD and DD were 0.787, 0.191 and 0.022, respectively. According to the results of the chi-square test, no significant deviation from the Hardy-Weinberg equilibrium was found for the bGH locus in the population.
Keywords: Growth Hormone Gene, Holstein, Polymorphism, RFLP
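For illustration, here is a minimal sketch of the allele-frequency and Hardy-Weinberg chi-square computation reported above; the genotype counts are hypothetical (chosen to reproduce the reported frequencies at an assumed sample size), and scipy is assumed available.

```python
from scipy.stats import chi2

counts = {"CC": 181, "CD": 44, "DD": 5}          # hypothetical genotype counts
n = sum(counts.values())

# Gene counting: each CC animal carries two C alleles, each CD carries one.
p = (2 * counts["CC"] + counts["CD"]) / (2 * n)  # frequency of allele C
q = 1 - p                                        # frequency of allele D

# Hardy-Weinberg expected genotype counts: p^2, 2pq, q^2 times n.
expected = {"CC": p * p * n, "CD": 2 * p * q * n, "DD": q * q * n}
chi_sq = sum((counts[g] - expected[g]) ** 2 / expected[g] for g in counts)
p_value = chi2.sf(chi_sq, df=1)                  # df = 3 genotypes - 2 alleles
print(f"p(C)={p:.4f}  p(D)={q:.4f}  chi2={chi_sq:.3f}  p={p_value:.3f}")
```

A p-value above 0.05, as this toy example produces, is consistent with the abstract's conclusion of no significant deviation from equilibrium.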
Procedia PDF Downloads 371
1348 The Need for Automation in the Domestic Food Processing Sector and its Impact
Authors: Shantam Gupta
Abstract:
The objective of this study is to address the critical need for automation in the domestic food processing sector and to study its impact. Food is one of the most basic physiological needs essential for the survival of a living being. Some organisms have the capacity to prepare their own food (like most plants) and are hence designated as primary food producers; those who depend on these primary food producers for food form the primary consumers' class (herbivores). Organisms relying on the primary consumers are the secondary food consumers (carnivores). There is a third class of consumers, called tertiary or apex food consumers, that feed on both primary and secondary food consumers. Humans form an essential part of the apex predators and are generally at the top of the food chain. Yet a closer examination of the food habits of the modern human, i.e. Homo sapiens, reveals that humans depend on other individuals for preparing their own food. The old notion of eating raw food is long gone, and food processing has become entrenched in the lives of modern humans. This has led to an increase in dependence on other individuals for 'processing' the food before it can actually be consumed, and hence to a further shift of humans within the consumer classification of the food chain. The effects of this shift are systematically investigated in this paper. The processing of food has a direct impact on the economy of the individual (consumer). Also, most individuals depend on other processing individuals for the preparation of food. This dependency establishes a vital link in the food web which, when altered, can adversely affect the food web and have dire consequences for the health of the individual. This study investigates the challenges arising from this dependency and the impact of food processing on the economy of the individual. A comparison of industrial food processing and processing at domestic platforms (households and restaurants) has been made to provide an idea of the present state of automation in the food processing sector. A lot of time and energy is also consumed while processing food at home for consumption, and the high frequency of meals (greater than two times a day) makes it even more laborious. Through this study, a pressing need for the development of an automatic cooking machine is proposed, with a mission to reduce the inter-dependency and human effort required for the preparation of food (by automating the food preparation process) and to make individuals more self-reliant. The impact of developing this product is also discussed in depth. Assumption used: the individuals who process food also consume the food that they produce (they are also termed 'independent' or 'self-reliant' modern human beings).
Keywords: automation, food processing, impact on economy, processing individual
Procedia PDF Downloads 470
1347 Quantitative Polymerase Chain Reaction Analysis of Phytoplankton Composition and Abundance to Assess Eutrophication: A Multi-Year Study in Twelve Large Rivers across the United States
Authors: Chiqian Zhang, Kyle D. McIntosh, Nathan Sienkiewicz, Ian Struewing, Erin A. Stelzer, Jennifer L. Graham, Jingrang Lu
Abstract:
Phytoplankton plays an essential role in freshwater aquatic ecosystems and is the primary group synthesizing organic carbon and providing food sources or energy to ecosystems. Therefore, the identification and quantification of phytoplankton are important for estimating and assessing ecosystem productivity (carbon fixation), water quality, and eutrophication. Microscopy is the current gold standard for identifying and quantifying phytoplankton composition and abundance. However, microscopic analysis of phytoplankton is time-consuming, has a low sample throughput, and requires deep knowledge and rich experience in microbial morphology to implement. To improve this situation, quantitative polymerase chain reaction (qPCR) was considered for phytoplankton identification and quantification. Using qPCR to assess phytoplankton composition and abundance, however, has not been comprehensively evaluated. This study focused on: 1) conducting a comprehensive performance comparison of qPCR and microscopy techniques in identifying and quantifying phytoplankton and 2) examining the use of qPCR as a tool for assessing eutrophication. Twelve large rivers located throughout the United States were evaluated using data collected from 2017 to 2019 to understand the relation between qPCR-based phytoplankton abundance and eutrophication. This study revealed that temporal variation of phytoplankton abundance in the twelve rivers was limited within years (from late spring to late fall) and among different years (2017, 2018, and 2019). Midcontinent rivers had moderately greater phytoplankton abundance than eastern and western rivers, presumably because midcontinent rivers were more eutrophic. The study also showed that qPCR- and microscope-determined phytoplankton abundance had a significant positive linear correlation (adjusted R² = 0.772, p-value < 0.001). In addition, phytoplankton abundance assessed via qPCR showed promise as an indicator of the eutrophication status of those rivers, with oligotrophic rivers having low phytoplankton abundance and eutrophic rivers having (relatively) high phytoplankton abundance. This study demonstrated that qPCR could serve as an alternative tool to traditional microscopy for phytoplankton quantification and eutrophication assessment in freshwater rivers.
Keywords: phytoplankton, eutrophication, river, qPCR, microscopy, spatiotemporal variation
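As a sketch of the statistical comparison described above, the ordinary least squares fit and adjusted R² can be computed as follows; the paired abundance values are simulated stand-ins, not the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(1)
qpcr = rng.uniform(3, 8, size=60)                  # log10 gene copies per mL
micro = 0.9 * qpcr + 0.4 + rng.normal(0, 0.5, 60)  # log10 cells per mL

# OLS fit of micro = slope * qpcr + intercept.
X = np.column_stack([qpcr, np.ones_like(qpcr)])
coef, *_ = np.linalg.lstsq(X, micro, rcond=None)

resid = micro - X @ coef
ss_res = np.sum(resid ** 2)
ss_tot = np.sum((micro - micro.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
n, k = len(micro), 1                               # samples, predictors
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - k - 1)      # penalize model size
print(f"slope={coef[0]:.3f}  intercept={coef[1]:.3f}  adj R2={adj_r2:.3f}")
```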
Procedia PDF Downloads 101
1346 The Determinants of Country Corruption: Unobserved Heterogeneity and Individual Choice - An Empirical Application with Finite Mixture Models
Authors: Alessandra Marcelletti, Giovanni Trovato
Abstract:
Corruption in public offices is found to be the reflection of country-specific features; however, the exact magnitude and statistical significance of the effects of its determinants have not yet been identified. The paper proposes an estimation method to measure the impact of country fundamentals on corruption, showing that covariates can affect the extent of corruption differently across countries. Thus, we exploit a model able to take into account different factors affecting the incentive to ask or to be asked for a bribe, consistent with the use of the Corruption Perception Index. We assume that the discordant results achieved in the literature may be explained by omitted hidden factors affecting the agents' decision process. Moreover, assuming a homogeneous covariate effect may lead to unreliable conclusions, since the country-specific environment is not accounted for. We apply a Finite Mixture Model with concomitant variables to 129 countries from 1995 to 2006, accounting for the impact of initial conditions in the socio-economic structure on corruption patterns. Our findings confirm the hypothesis that the decision process of accepting or asking for a bribe varies with specific country fundamentals.
Keywords: Corruption, Finite Mixture Models, Concomitant Variables, Countries Classification
Procedia PDF Downloads 263
1345 A Machine Learning Approach for Classification of Directional Valve Leakage in the Hydraulic Final Test
Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter
Abstract:
Due to increasing cost pressure in global markets, artificial intelligence is becoming a technology that is decisive for competition. Predictive quality enables machinery and plant manufacturers to ensure product quality by using data-driven forecasts via machine learning models as a decision-making basis for test results. The use of cross-process Bosch production data along the value chain of hydraulic valves is a promising approach to classifying the quality characteristics of workpieces.
Keywords: predictive quality, hydraulics, machine learning, classification, supervised learning
Procedia PDF Downloads 230
1344 Modelling Urban Rigidity and Elasticity Growth Boundaries: A Spatial Constraints-Suitability Based Perspective
Authors: Pengcheng Xiang Jr., Xueqing Sun, Dong Ngoduy
Abstract:
In the context of rapid urbanization, urban sprawl has brought about extensive negative impacts on ecosystems and the environment, resulting in a gradual shift from 'incremental growth' to 'stock growth' in cities. A detailed urban growth boundary is a prerequisite for urban renewal and management. This study takes Shenyang City, China, as the study area, evaluates the spatial distribution of urban spatial suitability from the perspective of spatial constraints-suitability using multi-source data, and simulates the future rigid and elastic growth boundaries of the city using the CA-Markov model. The results show that (1) the suitable construction area and the moderate construction area account for 8.76% and 19.01% of the total area, respectively, and both show a trend of distribution from the urban centre to the periphery, mainly in Shenhe District, the southern part of Heping District, the western part of Dongling District, and the central part of Dadong District; (2) the area of expansion of construction land in the study area in the period 2023-2030 is 153274.6977 hm², accounting for 44.39% of the total area of the study area; (3) the rigid boundary of the study area occupies an area of 153274.6977 hm², accounting for 44.39% of the total area, and the elastic boundary contains an area of 75362.61 hm², accounting for 21.69% of the total area. The study constructed a method for urban growth boundary delineation, which helps apply remote sensing to guide future urban spatial growth management and urban renewal.
Keywords: urban growth boundary, spatial constraints, spatial suitability, urban sprawl
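A minimal sketch of the Markov component of a CA-Markov simulation like the one above: land-cover class areas are propagated with a transition-probability matrix. The classes, areas, and matrix here are hypothetical, and the full model additionally applies cellular-automata suitability rules cell by cell.

```python
import numpy as np

classes = ["built-up", "cropland", "forest", "water"]
area_2023 = np.array([120_000.0, 150_000.0, 60_000.0, 15_000.0])  # hm²

# Row-stochastic transition probabilities per 7-year step (hypothetical).
P = np.array([
    [0.96, 0.02, 0.01, 0.01],
    [0.10, 0.85, 0.04, 0.01],
    [0.05, 0.05, 0.89, 0.01],
    [0.01, 0.01, 0.01, 0.97],
])
assert np.allclose(P.sum(axis=1), 1.0)   # each row must sum to one

area_2030 = area_2023 @ P                # one Markov step: 2023 -> 2030
for name, a in zip(classes, area_2030):
    print(f"{name:9s} {a:12,.1f} hm²")
```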
Procedia PDF Downloads 32
1343 Incorporation of Noncanonical Amino Acids into Hard-to-Express Antibody Fragments: Expression and Characterization
Authors: Hana Hanaee-Ahvaz, Monika Cserjan-Puschmann, Christopher Tauer, Gerald Striedner
Abstract:
The incorporation of noncanonical amino acids (ncAAs) into proteins has become an interesting topic, as proteins featuring ncAAs offer a wide range of different applications. Nowadays, technologies and systems exist that allow for the site-specific introduction of ncAAs in vivo, but the efficient production of proteins modified this way is still a big challenge. This is especially true for 'hard-to-express' proteins, where low yields are encountered even with the native sequence. In this study, site-specific incorporation of azido-ethoxy-carbonyl-lysine (azk) into an anti-tumor-necrosis-factor-α Fab (FTN2) was investigated. According to well-established parameters, possible positions for ncAA incorporation were determined, and corresponding FTN2 genes were constructed. Each of the modified FTN2 variants has one amber codon for azk incorporation, either in its heavy or light chain. The expression level of all variants produced was determined by ELISA, and all azk variants could be produced with a satisfactory yield in the range of 50-70% of the original FTN2 variant. In terms of expression yield, neither the azk incorporation position nor the subunit modified (heavy or light chain) had a significant effect. We confirmed correct protein processing and azk incorporation by mass spectrometry analysis, and antigen-antibody interaction was determined by surface plasmon resonance analysis. The next step is to characterize the effect of azk incorporation on protein stability and aggregation tendency via differential scanning calorimetry and light scattering, respectively. In summary, the incorporation of ncAAs into our Fab candidate FTN2 worked better than expected. The quantities produced allowed a detailed characterization of the variants in terms of their properties, and we can now turn our attention to potential applications. By using click chemistry, we can equip the Fabs with additional functionalities and make them suitable for a wide range of applications. We will now use this option in a first approach and develop an assay that will allow us to follow the degradation of the recombinant target protein in vivo. Special focus will be laid on the proteolytic activity in the periplasm and how it is influenced by cultivation/induction conditions.
Keywords: degradation, FTN2, hard-to-express protein, non-canonical amino acids
Procedia PDF Downloads 231
1342 A Decision-Support Tool for Humanitarian Distribution Planners in the Face of Congestion at Security Checkpoints: A Real-World Case Study
Authors: Mohanad Rezeq, Tarik Aouam, Frederik Gailly
Abstract:
In times of armed conflict, various security checkpoints are placed by authorities to control the flow of merchandise into and within areas of conflict. The flow of humanitarian trucks, added to the regular flow of commercial trucks, together with the complex security procedures, creates congestion and long waiting times at the security checkpoints. This causes distribution costs to increase and shortages of relief aid for the affected people to occur. Our research proposes a decision-support tool to assist planners and policymakers in building efficient plans for the distribution of relief aid, taking into account congestion at security checkpoints. The proposed tool is built around a multi-item humanitarian distribution planning model, developed following a multi-phase design science methodology, whose objective is to minimize distribution and backordering costs subject to capacity constraints that reflect congestion effects using nonlinear clearing functions. Using the 2014 Gaza War as a case study, we illustrate the application of the proposed tool, model the underlying relief-aid humanitarian supply chain, estimate clearing functions at different security checkpoints, and conduct computational experiments. The decision-support tool generated a shipment plan that was compared to two benchmarks in terms of total distribution cost, average lead time and work in progress (WIP) at security checkpoints, and average inventory and backorders at distribution centers. The first benchmark is the shipment plan generated by the fixed-capacity model, and the second is the actual shipment plan implemented by the planners during the armed conflict. According to our findings, modeling and optimizing supply chain flows reduces total distribution costs, average truck wait times at security checkpoints, and average backorders when compared to the executed plan and the fixed-capacity model. Finally, scenario analysis concludes that increasing capacity at security checkpoints can lower total operations costs by reducing the average lead time.
Keywords: humanitarian distribution planning, relief-aid distribution, congestion, clearing functions
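To make the congestion mechanism concrete, here is a small sketch of a saturating (nonlinear) clearing function of the kind such models use: expected checkpoint throughput rises with work in progress (WIP) but flattens at capacity, so lead time grows with congestion. The functional form and all parameter values are purely illustrative.

```python
def clearing(wip, capacity=40.0, k=15.0):
    """Trucks cleared per day as a concave, saturating function of WIP."""
    return capacity * wip / (k + wip)

for wip in (5, 15, 40, 120):
    throughput = clearing(wip)
    lead_time = wip / throughput      # Little's law: WIP = throughput x time
    print(f"WIP={wip:4d}  cleared/day={throughput:5.1f}  "
          f"lead time={lead_time:4.2f} days")
```

The fixed-capacity benchmark in the abstract corresponds to replacing this curve with a constant, which ignores how waiting time grows as more trucks queue up.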
Procedia PDF Downloads 82
1341 A Two-Step, Temperature-Staged, Direct Coal Liquefaction Process
Authors: Reyna Singh, David Lokhat, Milan Carsky
Abstract:
The world crude oil demand is projected to rise to 108.5 million bbl/d by the year 2035. With reserves estimated at 869 billion tonnes worldwide, coal is an abundant resource. This work was aimed at producing a high value hydrocarbon liquid product from the Direct Coal Liquefaction (DCL) process at comparatively mild operating conditions. A temperature-staged hydrogenation approach was investigated. In a two-reactor lab-scale pilot plant facility, the objectives included maximising thermal dissolution of the coal in the presence of a hydrogen donor solvent in the first stage, and subsequently promoting hydrogen saturation and hydrodesulphurization (HDS) performance in the second. The feed slurry consisted of high grade, pulverized bituminous coal on a moisture-free basis with a size fraction of < 100 μm, and Tetralin mixed in 2:1 and 3:1 solvent/coal ratios. Magnetite (Fe3O4) at 0.25 wt% of the dry coal feed was added for the catalysed runs. For both stages, hydrogen gas was used to maintain a system pressure of 100 barg. In the first stage, temperatures of 250℃ and 300℃ and reaction times of 30 and 60 minutes were investigated in an agitated batch reactor. The first stage liquid product was pumped into the second stage vertical reactor, which was designed to counter-currently contact the hydrogen-rich gas stream and the incoming liquid flow in the fixed catalyst bed. Two commercial hydrotreating catalysts, Cobalt-Molybdenum (CoMo) and Nickel-Molybdenum (NiMo), were compared in terms of their conversion, selectivity and HDS performance at temperatures 50℃ higher than the respective first stage tests. The catalysts were activated at 300°C with a hydrogen flowrate of approximately 10 ml/min prior to the testing. A gas-liquid separator at the outlet of the reactor ensured that the gas was exhausted to the online VARIOplus gas analyser. The liquid was collected and sampled for analysis using Gas Chromatography-Mass Spectrometry (GC-MS). Internal standard quantification methods for the sulphur content, the BTX (benzene, toluene, and xylene) and alkene quality, and the alkanes and polycyclic aromatic hydrocarbon (PAH) compounds in the liquid products were guided by ASTM standards of practice for hydrocarbon analysis. In the first stage, using a 2:1 solvent/coal ratio, increased coal-to-liquid conversion was favoured by a lower operating temperature of 250℃, a 60-minute reaction time, and a system catalysed by magnetite. Tetralin functioned effectively as the hydrogen donor solvent. A 3:1 ratio favoured increased concentrations of the long chain alkanes undecane and dodecane, the unsaturated alkenes octene and nonene, and PAH compounds such as indene. The second stage product distribution showed an increase in the BTX quality of the liquid product and in branched chain alkanes, and a reduction in the sulphur concentration. In terms of HDS performance and selectivity towards the production of long and branched chain alkanes, NiMo performed better than CoMo, while CoMo was more selective towards cyclohexane. Over 16 days on stream each, NiMo had a higher activity than CoMo. The potential of the said process to cover the demand for low-sulphur crude diesel and solvents through the production of a high value hydrocarbon liquid is thus demonstrated.
Keywords: catalyst, coal, liquefaction, temperature-staged
Procedia PDF Downloads 648
1340 Evaluation of Main Factors Affecting the Choice of a Freight Forwarder: A Sri Lankan Exporter’s Perspective
Authors: Ishani Maheshika
Abstract:
The intermediary role performed by freight forwarders in exportation has become significant in fulfilling businesses' supply chain needs in this dynamic world. Since the success of an exporter's business is at present highly reliant on supply chain optimization, cost efficiency, profitability, consistent service and responsiveness, the decision of selecting the most beneficial freight forwarder has become crucial for exporters. Although similar studies exist for other countries, there is no prior research covering the Sri Lankan setting. Moreover, results vary with time, the nature of the industry, and business environment factors. Therefore, a study from the perspective of Sri Lankan exporters was identified as a requisite to be researched. In order to identify and prioritize the key factors which affect an exporter's decision in selecting freight forwarders in the Sri Lankan context, the Sri Lankan export industry was stratified into 22 sectors based on commodity using a stratified sampling technique. One exporter from each sector was then selected using judgmental sampling, to form a sample of 22. Factors identified through a pilot survey were organized under 6 main criteria. A questionnaire was developed as pairwise comparisons on a 9-point semantic differential scale, and comparisons were made within main criteria and subcriteria. After pre-testing, interviews and an e-mail questionnaire survey were conducted. Data were analyzed using the Analytic Hierarchy Process to determine the priority vectors of the criteria. Customer service was found to be the most important main criterion for Sri Lankan exporters, followed by reliability and operational efficiency, respectively. The criterion of least importance is company background and reputation. Whereas small-sized exporters pay more attention to rate, reliability is the major concern among medium and large scale exporters. Irrespective of the seniority of the exporter, reliability is given prominence. Responsiveness is the most important subcriterion among Sri Lankan exporters. The consistency of judgments with respect to the main criteria was verified through the consistency ratio, which was less than 10%. To be more competitive, freight forwarders should come up with customized marketing strategies based on each target group's requirements and expectations when offering services, to retain existing exporters and attract new ones.
Keywords: analytic hierarchy process, freight forwarders, main criteria, Sri Lankan exporters, subcriteria
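A hedged sketch of the Analytic Hierarchy Process computation behind such results: the priority vector comes from the principal eigenvector of a pairwise-comparison matrix and is accepted when the consistency ratio is below 10%. The 3×3 judgment matrix below is illustrative, not the survey data.

```python
import numpy as np

# Pairwise comparisons (9-point scale) for three hypothetical criteria:
# customer service vs reliability vs operational efficiency.
A = np.array([
    [1.0, 2.0, 3.0],
    [1/2, 1.0, 2.0],
    [1/3, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)            # principal eigenvalue index
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                           # normalized priority vector

n = A.shape[0]
lam_max = eigvals[k].real
ci = (lam_max - n) / (n - 1)           # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]    # Saaty's random index
cr = ci / ri                           # consistency ratio (accept if < 0.10)
print("priorities:", np.round(w, 3), " CR:", round(cr, 3))
```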
Procedia PDF Downloads 406
1339 A Review on Artificial Neural Networks in Image Processing
Authors: B. Afsharipoor, E. Nazemi
Abstract:
Artificial neural networks (ANNs) are a powerful tool for prediction which can be trained on a set of examples, and they are thus useful for nonlinear image processing. The present paper reviews several papers on applications of ANNs in image processing to shed light on the advantages and disadvantages of ANNs in this field. Different steps in the image processing chain, including pre-processing, enhancement, segmentation, object recognition, image understanding and optimization using ANNs, are summarized. Furthermore, results on using multiple artificial neural networks (MANNs) are presented.
Keywords: neural networks, image processing, segmentation, object recognition, image understanding, optimization, MANN
Procedia PDF Downloads 406
1338 Exploring Managerial Approaches towards Green Manufacturing: A Thematic Analysis
Authors: Hakimeh Masoudigavgani
Abstract:
Since manufacturing firms deplete non-renewable resources and pollute air, soil, and water in a greatly unsustainable manner, industrial activities and the production of products are considered key contributors to adverse environmental impacts. Hence, management strategies and approaches that involve an effective supply chain decision process in the manufacturing sector could be extremely significant to the application of environmental initiatives. Green manufacturing (GM) is one of these strategies; it minimises negative effects on the environment by reducing greenhouse gas emissions, waste, and the consumption of energy and natural resources. This paper aims to explore which greening methods and mechanisms can be applied in the manufacturing supply chain, and what the outcomes of adopting these methods are in terms of abating environmental burdens. The study is interpretive research with an exploratory approach, using thematic analysis by coding text and breaking down and grouping the content of the collected literature into various themes and categories. It is found that a green supply chain can be attained through the execution of pre-production strategies, including green building, eco-design, and green procurement, as well as a number of in-production and post-production strategies involving green manufacturing and green logistics. To achieve effective GM, the pre-production strategies are suggested to be employed. This paper defines GM as (1) the analysis of the ecological impacts generated by practices, products, production processes, and operational functions, and (2) the implementation of greening methods to reduce their damaging influences on the natural environment. Analysis means assessing, monitoring, and auditing practices in order to measure and pinpoint their harmful impacts. Moreover, the greening methods involved within GM (arranged from the lowest to the highest level of environmental compliance and technique) consist of:
• product stewardship (e.g. less use of toxic, non-renewable, and hazardous materials in the manufacture of the product; stewardship of the environmental problems of the product in all production, use, and end-of-life stages);
• process stewardship (e.g. controlling carbon emissions, energy and resource usage, transportation methods, and disposal; reengineering polluting processes; recycling waste materials generated in production);
• lean and clean production practices (e.g. elimination of waste, materials replacement, materials reduction, resource-efficient consumption, energy-efficient usage, emission reduction, managerial assessment, waste re-use);
• use of eco-industrial parks (e.g. a shared warehouse, a shared logistics management system, an energy co-generation plant, effluent treatment).
However, the focus of this paper is only on methods related to the in-production phase; both pre-production and post-production environmental innovations need further research. The methods outlined in this investigation may be taken into account by policy/decision makers, and the proposed future research directions and identified gaps can be filled by scholars and researchers. The paper compares and contrasts a variety of viewpoints and enhances the body of knowledge by building a definition of GM through synthesising the literature and categorising the strategic concept of greening methods, drivers, barriers, and successful implementation tactics.
Keywords: green manufacturing (GM), product stewardship, process stewardship, clean production, eco-industrial parks (EIPs)
Procedia PDF Downloads 581
1337 Deep Reinforcement Learning-Based Computation Offloading for 5G Vehicle-Aware Multi-Access Edge Computing Network
Authors: Ziying Wu, Danfeng Yan
Abstract:
Multi-Access Edge Computing (MEC) is one of the key technologies of the future 5G network. By deploying edge computing centers at the edge of the wireless access network, computation tasks can be offloaded to edge servers rather than to the remote cloud server, to meet the requirements of 5G low-latency and high-reliability application scenarios. Meanwhile, with the development of IoV (Internet of Vehicles) technology, various delay-sensitive and compute-intensive in-vehicle applications continue to appear. Compared with traditional internet business, these computation tasks have higher processing priority and lower delay requirements. In this paper, we design a 5G-based Vehicle-Aware Multi-Access Edge Computing Network (VAMECN) and pose a joint optimization problem of minimizing total system cost. In view of this problem, a deep reinforcement learning-based joint computation offloading and task migration optimization (JCOTM) algorithm is proposed, considering the influence of multiple factors such as concurrent computation tasks, the distribution of system computing resources, and network communication bandwidth. The mixed-integer nonlinear programming problem is described as a Markov Decision Process. Experiments show that our proposed algorithm can effectively reduce task processing delay and equipment energy consumption, optimize the computation offloading and resource allocation scheme, and improve system resource utilization, compared with other computation offloading policies.
Keywords: multi-access edge computing, computation offloading, 5th generation, vehicle-aware, deep reinforcement learning, deep q-network
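As a toy illustration of the reinforcement-learning decision at the heart of such offloading schemes, the sketch below uses tabular Q-learning (a stand-in for the paper's deep Q-network) on an offload-vs-local choice; the states, costs, and transition dynamics are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 5, 2            # states: discretized edge-server load
Q = np.zeros((n_states, n_actions))   # actions: 0 = compute locally, 1 = offload
alpha, gamma, eps = 0.1, 0.9, 0.1

def cost(state, action):
    local_cost = 1.0                  # fixed local delay + energy cost
    offload_cost = 0.3 + 0.2 * state  # transmission + queueing grows with load
    return local_cost if action == 0 else offload_cost

for _ in range(5000):
    s = rng.integers(n_states)
    a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmin())
    c = cost(s, a)
    s_next = rng.integers(n_states)   # toy random load transition
    # Q-learning on costs: move Q(s, a) toward c + gamma * min_a' Q(s', a').
    Q[s, a] += alpha * (c + gamma * Q[s_next].min() - Q[s, a])

print("offload preferred at load levels:",
      [s for s in range(n_states) if Q[s, 1] < Q[s, 0]])
```

A deep Q-network replaces the table with a neural network so the state can include many continuous factors (task sizes, bandwidth, resource distribution) at once.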
Procedia PDF Downloads 117
1336 Theoretical Modelling of Molecular Mechanisms in Stimuli-Responsive Polymers
Authors: Catherine Vasnetsov, Victor Vasnetsov
Abstract:
Context: Thermo-responsive polymers are materials that undergo significant changes in their physical properties in response to temperature changes. These polymers have gained significant attention in research due to their potential applications in various industries and medicine. However, the molecular mechanisms underlying their behavior are not well understood, particularly in relation to cosolvency, which is crucial for practical applications. Research Aim: This study aimed to theoretically investigate the phenomenon of cosolvency in long-chain polymers using the Flory-Huggins statistical-mechanical framework. The main objective was to understand the interactions between the polymer, solvent, and cosolvent under different conditions. Methodology: The research employed a combination of Monte Carlo computer simulations and advanced machine-learning methods. The Flory-Huggins mean field theory was used as the basis for the simulations. Spinodal graphs and ternary plots were utilized to develop an initial computer model for predicting polymer behavior. Molecular dynamic simulations were conducted to mimic real-life polymer systems. Machine learning techniques were incorporated to enhance the accuracy and reliability of the simulations. Findings: The simulations revealed that the addition of very low or very high volumes of cosolvent molecules resulted in smaller radii of gyration for the polymer, indicating poor miscibility. However, intermediate volume fractions of cosolvent led to higher radii of gyration, suggesting improved miscibility. These findings provide a possible microscopic explanation for the cosolvency phenomenon in polymer systems. Theoretical Importance: This research contributes to a better understanding of the behavior of thermo-responsive polymers and the role of cosolvency. The findings provide insights into the molecular mechanisms underlying cosolvency and offer specific predictions for future experimental investigations. The study also presents a more rigorous analysis of the Flory-Huggins free energy theory in the context of polymer systems. Data Collection and Analysis Procedures: The data for this study were collected through Monte Carlo computer simulations and molecular dynamic simulations. The interactions between the polymer, solvent, and cosolvent were analyzed using the Flory-Huggins mean field theory. Machine learning techniques were employed to enhance the accuracy of the simulations. The collected data were then analyzed to determine the impact of cosolvent volume fractions on the radii of gyration of the polymer. Question Addressed: The research addressed the question of how cosolvency affects the behavior of long-chain polymers. Specifically, the study aimed to investigate the interactions between the polymer, solvent, and cosolvent under different volume fractions and to understand the resulting changes in the radii of gyration. Conclusion: In conclusion, this study utilized theoretical modeling and computer simulations to investigate the phenomenon of cosolvency in long-chain polymers. The findings suggest that moderate cosolvent volume fractions can lead to improved miscibility, as indicated by higher radii of gyration. These insights contribute to a better understanding of the molecular mechanisms underlying cosolvency in polymer systems and provide predictions for future experimental studies. The research also enhances the theoretical analysis of the Flory-Huggins free energy theory.
Keywords: molecular modelling, flory-huggins, cosolvency, stimuli-responsive polymers
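A minimal numeric sketch of the Flory-Huggins free energy of mixing per lattice site that underlies the framework above, ΔG/kT = (φ/N) ln φ + (1 - φ) ln(1 - φ) + χ φ(1 - φ), scanned over composition for a few interaction parameters; the chain length N and χ values are illustrative only.

```python
import numpy as np

def fh_free_energy(phi, chi, n_chain):
    """Flory-Huggins mixing free energy per site, in units of kT."""
    return (phi / n_chain) * np.log(phi) + (1 - phi) * np.log(1 - phi) \
        + chi * phi * (1 - phi)

phi = np.linspace(1e-3, 1 - 1e-3, 2001)   # polymer volume fraction
for chi in (0.2, 0.5, 1.0):
    g = fh_free_energy(phi, chi, n_chain=100)
    curvature = np.gradient(np.gradient(g, phi), phi)
    # Negative curvature anywhere means a spinodal (demixing) region exists.
    demixing = bool((curvature < 0).any())
    print(f"chi={chi:.1f}  demixing region present: {demixing}")
```

The spinodal condition d²(ΔG/kT)/dφ² = 1/(Nφ) + 1/(1-φ) - 2χ = 0 is what the study's spinodal graphs trace; a ternary extension adds a third composition variable for the cosolvent.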
Procedia PDF Downloads 70
1335 Calculating Non-Unique Sliding Modes for Switched Dynamical Systems
Authors: Eugene Stepanov, Arkadi Ponossov
Abstract:
Ordinary differential equations with switching nonlinearities constitute a very useful tool in many applications. The solutions of such equations can usually be calculated analytically if they cross the discontinuities transversally. Otherwise, one has trajectories that slide along the discontinuity, and the calculations become less straightforward in this case. For instance, one of the problems one faces is the non-uniqueness of the sliding modes. In the presentation, it is proposed to apply the theory of hybrid dynamical systems to calculate the solutions that are 'hidden' in the discontinuities. Roughly, one equips the underlying switched system with an explicitly designed discrete dynamical system ('automaton'), which governs the dynamics of the switched system. This construction 'splits' the dynamics which, as is shown in the presentation, gives uniqueness of the resulting hybrid trajectories and at the same time provides explicit formulae for them. Projecting the hybrid trajectories back onto the original continuous system explains the non-uniqueness of its trajectories. The automaton is designed with the help of the attractors of a specially constructed adjoint dynamical system. Several examples are provided in the presentation, which support the efficiency of the suggested scheme. The method can be of interest in control theory, gene regulatory networks, neural field models and other fields where switched dynamics is a part of the analysis.
Keywords: hybrid dynamical systems, singular perturbation analysis, sliding modes, switched dynamics
Procedia PDF Downloads 162
1334 Analysing Techniques for Fusing Multimodal Data in Predictive Scenarios Using Convolutional Neural Networks
Authors: Philipp Ruf, Massiwa Chabbi, Christoph Reich, Djaffar Ould-Abdeslam
Abstract:
In recent years, convolutional neural networks (CNNs) have demonstrated high performance in image analysis, but oftentimes only structured data are available for a specific problem. By interpreting structured data as images, CNNs can effectively learn and extract valuable insights from tabular data, leading to improved predictive accuracy and uncovering hidden patterns that may not be apparent in traditional structured data analysis. By applying a single neural network to multimodal data, i.e., both structured and unstructured information, significant advantages in terms of time complexity and energy efficiency can be achieved. Converting structured data into images and merging them with existing visual material offers a promising solution for applying CNNs to multimodal datasets, as they often occur in a medical context. By employing suitable preprocessing techniques, structured data are transformed into image representations, where the respective features are expressed as different formations of colors and shapes. In an additional step, these representations are fused with existing images to incorporate both types of information. The resulting image is then analyzed using a CNN.
Keywords: CNN, image processing, tabular data, mixed dataset, data transformation, multimodal fusion
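One simple instance of the tabular-to-image transformation described above, sketched under assumed choices (min-max scaling, zero padding, an 8×8 grayscale target): each record becomes a small image that can be stacked or merged with real images for CNN input.

```python
import numpy as np

def row_to_image(row, side=8):
    """Map one tabular record to a side x side grayscale image in [0, 1]."""
    v = np.asarray(row, dtype=float)
    v = (v - v.min()) / (v.max() - v.min() + 1e-9)  # min-max scale to [0, 1]
    img = np.zeros(side * side)
    k = min(len(v), side * side)
    img[:k] = v[:k]                                 # pad with zeros / truncate
    return img.reshape(side, side)

record = np.random.default_rng(0).normal(size=40)   # one 40-feature row
image = row_to_image(record)
print(image.shape)  # (8, 8); stack as an extra channel alongside real images
```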
Procedia PDF Downloads 123
1333 Analysis of Genomics Big Data in Cloud Computing Using Fuzzy Logic
Authors: Mohammad Vahed, Ana Sadeghitohidi, Majid Vahed, Hiroki Takahashi
Abstract:
In the genomics field, huge amounts of data are produced by next-generation sequencers (NGS). Data volumes are growing very rapidly: it has been postulated that more than one billion bases will be produced per year in 2020. The growth rate of the produced data is much faster than Moore's law in computer technology. This makes it more difficult to deal with genomics data, including storing the data, searching for information, and finding hidden information. An analysis platform for genomics big data is therefore required. Newly developed cloud computing enables us to deal with big data more efficiently. Hadoop is a distributed computing framework and lies at the core of Big Data as a Service (BDaaS). Although many services have adopted this technology, e.g., Amazon, there are few applications in the biology field. Here, we propose a new algorithm to deal more efficiently with genomics big data, e.g. sequencing data. Our algorithm consists of two parts: first, BDaaS is applied to handle the data more efficiently; second, a hybrid method of MapReduce and fuzzy logic is applied for data processing. This step can be parallelized in implementation. Our algorithm has great potential in the computational analysis of genomics big data, e.g. de novo genome assembly and sequence similarity search. We will discuss our algorithm and its feasibility.
Keywords: big data, fuzzy logic, MapReduce, Hadoop, cloud computing
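A single-machine sketch of the MapReduce-plus-fuzzy-logic idea on toy sequencing reads: map emits (k-mer, 1) pairs, reduce aggregates counts, and a simple fuzzy membership grades how 'common' each k-mer is. On a real deployment this would run on Hadoop; the reads, k, and membership function are illustrative, not the paper's algorithm.

```python
from collections import defaultdict

reads = ["ACGTAC", "CGTACG", "TTACGT"]   # toy sequencing reads
K = 3

def mapper(read):
    # Emit (k-mer, 1) for every overlapping k-mer in the read.
    for i in range(len(read) - K + 1):
        yield read[i:i + K], 1

def reducer(pairs):
    # Shuffle and reduce collapsed into one step: sum counts per key.
    counts = defaultdict(int)
    for kmer, one in pairs:
        counts[kmer] += one
    return counts

counts = reducer(p for read in reads for p in mapper(read))
top = max(counts.values())

def fuzzy_common(c):
    # Membership in the fuzzy set "common k-mer", rising linearly to 1.
    return min(1.0, c / top)

for kmer, c in sorted(counts.items()):
    print(kmer, c, f"common-ness={fuzzy_common(c):.2f}")
```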
Procedia PDF Downloads 299
1332 Fast Bayesian Inference of Multivariate Block-Nearest Neighbor Gaussian Process (NNGP) Models for Large Data
Authors: Carlos Gonzales, Zaida Quiroz, Marcos Prates
Abstract:
Several spatial variables collected at the same locations that share a common spatial distribution can be modeled simultaneously through a multivariate geostatistical model that takes into account the correlation between these variables and the spatial autocorrelation. The main goal of this model is to perform spatial prediction of these variables in the region of study. Here we focus on a geostatistical multivariate formulation that relies on sharing common spatial random effect terms. In particular, the first response variable can be modeled by a mean that incorporates a shared random spatial effect, while the other response variables depend on this shared spatial term, in addition to specific random spatial effects. Each spatial random effect is defined through a Gaussian process with a valid covariance function, but in order to improve the computational efficiency when the data are large, each Gaussian process is approximated by a Gaussian Markov random field (GMRF), specifically by the block nearest neighbor Gaussian process (Block-NNGP). This approach involves dividing the spatial domain into several dependent blocks under certain constraints, where the cross blocks capture the spatial dependence on a large scale, while each individual block captures the spatial dependence on a smaller scale. The multivariate geostatistical model belongs to the class of latent Gaussian models; thus, to achieve fast Bayesian inference, the integrated nested Laplace approximation (INLA) method is used. The good performance of the proposed model is shown through simulations and applications to massive data.
Keywords: Block-NNGP, geostatistics, Gaussian process, GMRF, INLA, multivariate models
Procedia PDF Downloads 97
1331 An Empirical Study on Switching Activation Functions in Shallow and Deep Neural Networks
Authors: Apoorva Vinod, Archana Mathur, Snehanshu Saha
Abstract:
Though there exists a plethora of activation functions (AFs) used in single and multiple hidden layer neural networks (NN), their behavior has always raised curiosity, whether used in combination or singly. The popular AFs (Sigmoid, ReLU, and Tanh) have performed prominently well for shallow and deep architectures. Most of the time, AFs are used singly in multi-layered NN, and, to the best of our knowledge, their performance has never been studied and analyzed deeply when used in combination. In this manuscript, we experiment with multi-layered NN architectures (both shallow and deep: convolutional NN and VGG16) and investigate how well the network responds to using two different AFs (Sigmoid-Tanh, Tanh-ReLU, ReLU-Sigmoid) alternately, against a traditional single combination (Sigmoid-Sigmoid, Tanh-Tanh, ReLU-ReLU). Our results show that, using two different AFs, the network achieves better accuracy, substantially lower loss, and faster convergence on 4 computer vision (CV) and 15 non-CV (NCV) datasets. When using different AFs, not only was the accuracy greater by 6-7%, but we also accomplished convergence twice as fast. We present a case study to investigate the probability of networks suffering vanishing and exploding gradients when using two different AFs. Additionally, we show theoretically that a composition of two or more AFs satisfies the Universal Approximation Theorem (UAT).
Keywords: activation function, universal approximation function, neural networks, convergence
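A minimal forward-pass sketch of the alternating-activation idea (Tanh in one hidden layer, ReLU in the next, instead of the same AF twice); the shapes and random weights are illustrative, and the paper's actual experiments train full CNN/VGG16 models.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 16))             # batch of 4 samples, 16 features

def layer(inp, n_out):
    """Random linear layer with simple 1/sqrt(fan_in) scaling."""
    w = rng.normal(scale=inp.shape[1] ** -0.5, size=(inp.shape[1], n_out))
    return inp @ w

h1 = np.tanh(layer(x, 32))               # first hidden layer: Tanh
h2 = np.maximum(0.0, layer(h1, 32))      # second hidden layer: ReLU
logits = layer(h2, 3)                    # 3-class output head
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
print(probs.round(3))                    # each row sums to 1
```

Pairing a saturating AF with a non-saturating one is one intuition for the gradient behavior the paper's case study examines.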
Procedia PDF Downloads 158
1330 Identity Management in Virtual Worlds Based on Biometrics Watermarking
Authors: S. Bader, N. Essoukri Ben Amara
Abstract:
With technological development and the rise of virtual worlds, these spaces are becoming more and more attractive for cybercriminals, hidden behind avatars and fictitious identities. Since access to these spaces is not restricted or controlled, some impostors take advantage of this to gain unauthorized access and engage in cybercrime. This paper proposes an identity management approach for securing access to virtual worlds. The major purpose of the suggested solution is to install a strong security mechanism to protect virtual identities represented by avatars. Thus, only legitimate users, through their corresponding avatars, are allowed to access the platform resources. Access is controlled by integrating an authentication process based on biometrics. In the registration request process, a user fingerprint is enrolled and then encrypted into a watermark utilizing a cancelable and non-invertible algorithm for its protection. After a user personalizes their representative character, the biometric mark is embedded into the avatar through a watermarking procedure. The authenticity of the avatar identity is verified when it requests authorization for access. We have evaluated the proposed approach on a dataset of avatars from various virtual worlds, and we have registered promising performance results in terms of authentication accuracy and acceptance and rejection rates.
Keywords: identity management, security, biometrics authentication and authorization, avatar, virtual world
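As a minimal sketch of the embedding step, least-significant-bit (LSB) watermarking is shown below hiding a random stand-in for an encrypted fingerprint bit string in an avatar image; the study's actual cancelable, non-invertible scheme is more robust than this illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
avatar = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # grayscale image
template = rng.integers(0, 2, size=256, dtype=np.uint8)       # watermark bits

# Embed: clear each carrier pixel's least significant bit, then set it
# to the corresponding watermark bit.
flat = avatar.flatten()
flat[: len(template)] = (flat[: len(template)] & 0xFE) | template
watermarked = flat.reshape(avatar.shape)

# Extract: read the LSBs back and verify the round trip.
recovered = watermarked.flatten()[: len(template)] & 1
assert np.array_equal(recovered, template)
print("fingerprint mark embedded and verified")
```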
Procedia PDF Downloads 265
1329 Solving Extended Linear Complementarity Problems (XLCP) - Wood and Environment
Authors: Liberto Pombal, Christian Dieter Jaekel
Abstract:
The objective of this work is to establish theoretical and numerical conditions for solving extended linear complementarity problems (XLCP), with emphasis on the horizontal linear complementarity problem (HLCP). Two new strategies for solving complementarity problems are presented, using differentiable and penalized functions, which result in a natural formulation for the horizontal linear case. The computational results of all suggested strategies are also discussed in depth in this paper. In practice, this allows solving and optimizing, in an innovative way, the (forestry) problems of the value chain of the industrial wood sector in Angola.
Keywords: complementarity, box constrained, optimality conditions, wood and environment
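To illustrate the differentiable/penalized reformulation idea on the standard LCP special case (find z ≥ 0 with w = Mz + q ≥ 0 and zᵀw = 0), the sketch below minimizes a squared Fischer-Burmeister merit function, whose zeros are exactly the complementary pairs; M and q are toy data, and the paper's HLCP treatment generalizes this.

```python
import numpy as np
from scipy.optimize import minimize

M = np.array([[2.0, 1.0], [1.0, 3.0]])
q = np.array([-4.0, -5.0])

def merit(z):
    """Squared Fischer-Burmeister merit: zero exactly at an LCP solution."""
    w = M @ z + q
    phi = np.sqrt(z ** 2 + w ** 2) - z - w   # componentwise FB function
    return np.sum(phi ** 2)

res = minimize(merit, x0=np.ones(2), method="BFGS")
z = res.x
print("z =", z.round(4), " w =", (M @ z + q).round(4),
      " merit =", round(res.fun, 8))
```

For this toy instance the minimizer is z ≈ (1.4, 1.2) with w = 0, so complementarity holds with zero merit value.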
Procedia PDF Downloads 56
1328 How Rational Decision-Making Mechanisms of Individuals Are Corrupted under the Presence of Others and the Reflection of This on Financial Crisis Management Situations
Authors: Gultekin Gurcay
Abstract:
It is known that the most crucial influence of the psychological, social, and emotional factors that affect any human behavior is to corrupt the rational decision-making mechanisms of individuals and cause them to display irrational behaviors. In this regard, the social context of human beings influences the rationality of our decisions, and people tend to display different behaviors when they are alone compared to when they are surrounded by others. At this point, the interaction and interdependence of behavioral finance and economics with the field of social psychology, where the intentions and behaviors of individuals are analyzed in the actual or implied presence of others, comes into prominence. Within the context of this study, the prevalent theories of behavioral finance, namely Prospect Theory, Utility Theory Given Uncertainty and the Five Axioms of Choice under Uncertainty, Veblen's Hidden Utility Theory, and the concept of 'overreaction', have been examined and demonstrated, and the meaning, existence, and validity of these theories within the social context have been assessed. Finally, this study analyzes the behavior of individuals in financial crisis situations, where the majority of society is affected by the same negative conditions at the same time, by taking into account how individual behavior changes in the presence of others.
Keywords: conditional variance coefficient, financial crisis, garch model, stock market
Procedia PDF Downloads 240
1327 Inherited Intergenerational Trauma – The Society for Black People in South Central Los Angeles
Authors: Kevin R. Collins Sr.
Abstract:
In South Central Los Angeles, Black people have endured various forms of trauma that span generations. This includes the horrors of slavery and the aftermath of the Jim Crow laws, institutionalized racism, and legislative segregation, just to name a few. The individuals born from the 1900s until today have continued to transmit the traumas experienced across generations. Parents unconsciously transmit the hidden trauma, and the children take these experiences and apply them to the society they live in. Although there are some who attempt to break the cycle of transmitted trauma, its remnants still remain and play a huge role in how they interact with others. The attempt of this discussion is to bring these traumatic experiences to the surface and attack them head on. It is important that we do this to allow not only the suffering individuals but also the suffering society to heal. As a society, this means looking at the humane side of the issue and attempting to stop the racial injustice placed on Black people, to relieve them of the stress that some, if not all, endure in this great United States of America; it means changing our behavior as a country to create an improved sense of common unity within. If we solve our own racial and social issues within this country, maybe we can solve these same issues that have been the footstool to the many wars we see around the world, thus breaking the cycle of inherited intergenerational trauma.
Keywords: intergenerational trauma, inherited trauma, transmission of trauma, blacks in South central LA, black trauma in America
Procedia PDF Downloads 97
1326 Multiscale Modelling of Citrus Black Spot Transmission Dynamics along the Pre-Harvest Supply Chain
Authors: Muleya Nqobile, Winston Garira
Abstract:
We present a compartmental deterministic multi-scale model which encompasses the internal plant defensive mechanism and pathogen interaction, and we then consider nesting this model into the epidemiological model. The objective was to improve our understanding of the within-host and between-host transmission dynamics of Guignardia citricarpa Kiely. The inflow of the infected class was scaled down to the individual level, while the outflow was scaled up to the average population level. A conceptual model and a mathematical model were constructed to provide a theoretical framework which can be used for predicting or identifying disease patterns.
Keywords: epidemiological model, mathematical modelling, multi-scale modelling, immunological model
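A toy numeric sketch of the nesting idea described above: a within-host pathogen load, held in check by plant defenses, drives the between-host transmission rate of a simple SI-type epidemiological model. All rates and the coupling form are hypothetical, not the study's equations.

```python
def step(S, I, P, dt=0.1):
    """One Euler step of a coupled within-/between-host toy model."""
    dP = 2.0 * P * (1 - P / 100.0) - 0.5 * P   # pathogen growth vs plant defense
    beta = 0.002 * P / (P + 20.0)              # load-dependent transmission rate
    dS = -beta * S * I                         # susceptible trees
    dI = beta * S * I - 0.1 * I                # infected trees (with removal)
    return S + dS * dt, I + dI * dt, P + dP * dt

S, I, P = 990.0, 10.0, 5.0
for _ in range(600):                           # integrate 60 time units
    S, I, P = step(S, I, P)
print(f"S={S:.1f}  I={I:.1f}  within-host load P={P:.1f}")
```

The down-scaling/up-scaling the abstract mentions corresponds here to beta being computed from the individual-level load P while acting on population-level flows.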
Procedia PDF Downloads 458
1325 High Purity Lignin for Asphalt Applications: Using the Dawn Technology™ Wood Fractionation Process
Authors: Ed de Jong
Abstract:
Avantium is a leading technology development company and a frontrunner in renewable chemistry. Avantium develops disruptive technologies that enable the production of sustainable, high value products from renewable materials, and actively seeks out collaborations and partnerships with like-minded companies and academic institutions globally to speed up the introduction of chemical innovations in the marketplace. In addition, Avantium helps companies accelerate their catalysis R&D to improve efficiencies and deliver increased sustainability, growth, and profits, by providing proprietary systems and services in this regard. Many chemical building blocks and materials can be produced from biomass, nowadays mainly from 1st generation carbohydrates, but the potential for competition with the human food chain leads brand-owners to look for strategies to transition from 1st to 2nd generation feedstock. The use of non-edible lignocellulosic feedstock is an equally attractive source for producing chemical intermediates and an important part of the solution addressing these global issues (Paris targets). Avantium's Dawn Technology™ separates the glucose, mixed sugars, and lignin available in non-food agricultural and forestry residues such as wood chips, wheat straw, bagasse, empty fruit bunches or corn stover. The resulting very pure lignin is dense in energy and can be used for energy generation. However, such a material might preferably be deployed in higher added value applications. Bitumen, which is fossil-based, is mostly used for paving applications. Traditional hot mix asphalt emits large quantities of the GHGs CO₂, CH₄, and N₂O, which is unfavorable for obvious environmental reasons. Another challenge for the bitumen industry is that the petrochemical industry is becoming more and more efficient in breaking down higher chain hydrocarbons to lower chain hydrocarbons with higher added value than bitumen. This has a negative effect on the availability of bitumen. The asphalt market, as well as governments, is looking for alternatives with higher sustainability in terms of GHG emissions. The use of alternative sustainable binders, which can (partly) replace bitumen, contributes to reducing GHG emissions and at the same time broadens the availability of binders. As lignin is a major component (around 25-30%) of lignocellulosic material, which includes terrestrial plants (e.g., trees, bushes, and grass) and agricultural residues (e.g., empty fruit bunches, corn stover, sugarcane bagasse, straw, etc.), it is highly available globally. Its chemical structure resembles the structure of bitumen, and it could therefore be used as an alternative for bitumen in applications like roofing or asphalt. Applications such as the use of lignin in asphalt need both fundamental research and practical proof under relevant use conditions. From a fundamental point of view, rheological aspects as well as mixing are key criteria. From a practical point of view, behavior under real road conditions is key (how easily can the asphalt be prepared, how easily can it be applied on the road, what is its durability, etc.). The paper will discuss the fundamentals of the use of lignin as a bitumen replacement, as well as the status of the different demonstration projects in Europe using lignin as a partial bitumen replacement in asphalt, and will especially present the results of using Dawn Technology™ lignin as a partial replacement of bitumen.
Keywords: biorefinery, wood fractionation, lignin, asphalt, bitumen, sustainability
Procedia PDF Downloads 154
1324 Human Action Recognition Using Variational Bayesian HMM with Dirichlet Process Mixture of Gaussian Wishart Emission Model
Authors: Wanhyun Cho, Soonja Kang, Sangkyoon Kim, Soonyoung Park
Abstract:
In this paper, we present a human action recognition method using a variational Bayesian HMM with a Dirichlet process mixture (DPM) of Gaussian-Wishart emission models (GWEM). First, we define the Bayesian HMM based on the Dirichlet process, which allows an infinite number of Gaussian-Wishart components to support continuous emission observations. Second, we consider an efficient variational Bayesian inference method that can be applied to derive the posterior distribution of hidden variables and model parameters for the proposed model based on training data. We then derive the predictive distribution that may be used to classify new actions. Third, the paper proposes a process for extracting appropriate spatio-temporal feature vectors that can be used to recognize a wide range of human behaviors from input video images. Finally, we have conducted experiments to evaluate the performance of the proposed method. The experimental results show that the presented method is more effective for human action recognition than existing methods.
Keywords: human action recognition, Bayesian HMM, Dirichlet process mixture model, Gaussian-Wishart emission model, variational Bayesian inference, prior distribution and approximate posterior distribution, KTH dataset
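For orientation, the sketch below shows the basic HMM forward recursion that underlies likelihood evaluation in any HMM-based recognizer; the paper's DPM Gaussian-Wishart emissions are replaced here by a small fixed discrete emission matrix purely for brevity.

```python
import numpy as np

pi = np.array([0.6, 0.4])          # initial state distribution
A = np.array([[0.7, 0.3],          # state transition matrix
              [0.2, 0.8]])
B = np.array([[0.9, 0.1],          # emission probs: rows = states, cols = symbols
              [0.3, 0.7]])
obs = [0, 1, 1, 0]                 # observed symbol sequence

alpha = pi * B[:, obs[0]]          # forward initialization
for o in obs[1:]:
    alpha = (alpha @ A) * B[:, o]  # forward step: propagate, then weight
print("sequence likelihood:", alpha.sum())
```

In a recognizer, one such likelihood per action class (or the variational predictive distribution in the paper's case) scores a new feature sequence, and the highest-scoring class is chosen.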
Procedia PDF Downloads 353
1323 An Optimization Modelling to Evaluate Flights Scheduling at Tourist Airports
Authors: Dimitrios J. Dimitriou
Abstract:
Airports serving a tourist destination are an essential counterpart of the tourist demand supply chain, and their productivity is related to the region's attractiveness and is enhanced by the air transport business. In this paper, the evaluation framework for the scheduled flights between two tourist airports is taken into consideration. Adopting a systemic approach, the arrivals at an airport whose connectivity depends heavily on the departures of another major airport are reviewed. The methodological framework, based on inventory control theory, together with a numerical example, promotes the use of the modelling formulation. The results would be essential for comparison and application to other similar cases.
Keywords: airport connectivity, inventory control, optimization, optimum allocation
Procedia PDF Downloads 334
1322 The Potential for Tourism Development in the Greater Chinhoyi Area in Zimbabwe: A Systems Approach in an Appetizer Destination
Authors: Phillip F. Kanokanga, Patrick W. Mamimine, Molline Mwando, Charity Mapingure
Abstract:
Tourism development tends to follow anchor attractions, including cities and their surroundings, while marginalizing small towns and their environs, even though small towns and their hinterlands can also offer competitive tourism products. The Zimbabwe Tourism Authority (ZTA) gathers visitor statistics for major tourist destinations only, thereby overlooking the density of tourist traffic that either passes through or visits the small towns in the country. Unless this problem is addressed, the tourism potential of small towns and their hinterlands will not be fully tapped for economic development. Using a qualitative research methodology, this study investigated the opportunities for tourism development in the Greater Chinhoyi area. The study revealed that the Greater Chinhoyi area has potential for heritage tourism, village tourism, urban tourism, educational tourism, dark tourism, recreational tourism, agrotourism, and nature tourism. There is a need to link the various tourism resources in the Greater Chinhoyi area to anchor attractions in dominant resorts, and then to develop and present the tourism products in transit towns as 'appetisers' or 'appetiser attractions' before one gets to the main destination.
Keywords: anchor attractions, appetisers, heritage tourism, agrotourism, small towns, tourism corridor, systems approach, hidden treasures
Procedia PDF Downloads 74