Search results for: Weibull distribution model
12168 Effect of Climate Change on the Genomics of Invasiveness of the Whitefly Bemisia tabaci Species Complex by Estimating the Effective Population Size via a Coalescent Method
Authors: Samia Elfekih, Wee Tek Tay, Karl Gordon, Paul De Barro
Abstract:
Invasive species represent an increasing threat to food biosecurity, causing significant economic losses in agricultural systems. An example is the sweet potato whitefly, Bemisia tabaci, which is a complex of morphologically indistinguishable species causing average annual global damage estimated at US$2.4 billion. The Bemisia complex represents an interesting model for evolutionary studies because of their extensive distribution and potential for invasiveness and population expansion. Within this complex, two species, Middle East-Asia Minor 1 (MEAM1) and Mediterranean (MED) have invaded well beyond their home ranges whereas others, such as Indian Ocean (IO) and Australia (AUS), have not. In order to understand why some Bemisia species have become invasive, genome-wide sequence scans were used to estimate population dynamics over time and relate these to climate. The Bayesian Skyline Plot (BSP) method as implemented in BEAST was used to infer the historical effective population size. In order to overcome sampling bias, the populations were combined based on geographical origin. The datasets used for this particular analysis are genome-wide SNPs (single nucleotide polymorphisms) called separately in each of the following groups: Sub-Saharan Africa (Burkina Faso), Europe (Spain, France, Greece and Croatia), USA (Arizona), Mediterranean-Middle East (Israel, Italy), Middle East-Central Asia (Turkmenistan, Iran) and Reunion Island. The non-invasive ‘AUS’ species endemic to Australia was used as an outgroup. The main findings of this study show that the BSP for the Sub-Saharan African MED population is different from that observed in MED populations from the Mediterranean Basin, suggesting evolution under a different set of environmental conditions. For MED, the effective size of the African (Burkina Faso) population showed a rapid expansion ≈250,000-310,000 years ago (YA), preceded by a period of slower growth. 
The European MED populations (i.e., Spain, France, Croatia, and Greece) showed a single burst of expansion at ≈160,000-200,000 YA. The MEAM1 populations from Israel and Italy and those from Iran and Turkmenistan are similar, as both show the earlier expansion at ≈250,000-300,000 YA. The single IO population lacked the latter expansion but had the earlier one. This pattern is shared with the Sub-Saharan African (Burkina Faso) MED, suggesting IO also faced a similar history of environmental change, which seems plausible given their relatively close geographical distributions. In conclusion, populations within the invasive species MED and MEAM1 exhibited signatures of population expansion, lacking in the non-invasive species (IO and AUS), during the Pleistocene, a geological epoch marked by repeated climatic oscillations with cycles of glacial and interglacial periods. These expansions strongly suggest that the genomes of some Bemisia species shape their adaptability and invasiveness.
Keywords: whitefly, RADseq, invasive species, SNP, climate change
Procedia PDF Downloads 126
12167 Freight Time and Cost Optimization in Complex Logistics Networks, Using a Dimensional Reduction Method and K-Means Algorithm
Authors: Egemen Sert, Leila Hedayatifar, Rachel A. Rigg, Amir Akhavan, Olha Buchel, Dominic Elias Saadi, Aabir Abubaker Kar, Alfredo J. Morales, Yaneer Bar-Yam
Abstract:
The complexity of providing timely and cost-effective distribution of finished goods from industrial facilities to customers makes effective operational coordination difficult, yet effectiveness is crucial for maintaining customer service levels and sustaining a business. Logistics planning becomes increasingly complex with growing numbers of customers, varied geographical locations, the uncertainty of future orders, and sometimes extreme competitive pressure to reduce inventory costs. Linear optimization methods become cumbersome or intractable due to the large number of variables and nonlinear dependencies involved. Here we develop a complex systems approach to optimizing logistics networks based upon dimensional reduction methods and apply our approach to a case study of a manufacturing company. In order to characterize the complexity in customer behavior, we define a “customer space” in which individual customer behavior is described by only the two most relevant dimensions: the distance to production facilities over current transportation routes and the customer's demand frequency. These dimensions provide essential insight into the domain of effective strategies for customers: direct and indirect strategies. In the direct strategy, goods are sent to the customer directly from a production facility using box or bulk trucks. In the indirect strategy, in advance of an order by the customer, goods are shipped to an external warehouse near the customer using trains and then "last-mile" shipped by trucks when orders are placed. Each strategy applies to an area of the customer space with an indeterminate boundary between them; in general, specific company policies determine the location of the boundary. We then identify the optimal delivery strategy for each customer by constructing a detailed model of the costs of transportation and temporary storage in a set of specified external warehouses.
Customer spaces help give an aggregate view of customer behaviors and characteristics. They allow policymakers to compare customers and develop strategies based on the aggregate behavior of the system as a whole. In addition to optimization over existing facilities, using customer logistics and the k-means algorithm, we propose additional warehouse locations. We apply these methods to a medium-sized American manufacturing company with a particular logistics network, consisting of multiple production facilities, external warehouses, and customers along with three types of shipment methods (box truck, bulk truck and train). For the case study, our method forecasts 10.5% savings on yearly transportation costs and an additional 4.6% savings with three new warehouses.
Keywords: logistics network optimization, direct and indirect strategies, K-means algorithm, dimensional reduction
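The two-dimensional customer space and the k-means warehouse proposal described above can be illustrated with a minimal sketch. The customer coordinates, units, and cluster count below are invented for illustration; they are not the company's data, and the hand-rolled k-means is a plain textbook version, not the authors' implementation.

```python
import numpy as np

def kmeans(points, k, iters=100, seed=0):
    """Plain k-means: assign each point to its nearest centroid, then
    recompute centroids, until assignments stop changing."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # distances of every point to every centroid
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        new_centroids = np.array([points[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids

# Hypothetical customer space: (distance to facility in km, orders per month).
customers = np.array([
    [ 50.0, 12.0], [ 60.0, 10.0], [ 55.0, 11.0],   # near, frequent: direct strategy
    [400.0,  2.0], [420.0,  1.0], [390.0,  2.0],   # far, infrequent: indirect strategy
])

labels, centroids = kmeans(customers, k=2)
# The centroid of the "far, infrequent" cluster suggests a candidate
# region (in customer-space coordinates) for an additional warehouse.
```

In this toy setting the two clusters recover the direct/indirect boundary directly from the data, which is the spirit of the dimensional-reduction step.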
Procedia PDF Downloads 139
12166 A Training Perspective for Sustainability and Partnership to Achieve Sustainable Development Goals in Sub-Saharan Africa
Authors: Nwachukwu M. A., Nwachukwu J. I., Anyanwu J., Emeka U., Okorondu J., Acholonu C.
Abstract:
Actualization of the 17 sustainable development goals (SDGs) conceived by the United Nations in 2015 is a global challenge that may not be feasible in sub-Saharan Africa by the year 2030 unless universities play a committed role. This is because there is a need to educate the people of the region about the concepts of sustainability and sustainable development in order to make the desired change. This is a sensitization paper that proposes a model of intervention and curricular planning to advance understanding and knowledge of the SDGs. The proposed Model Center for Sustainability Studies (MCSS) will enable partnerships with institutions in Africa and in advanced nations, thereby creating a global network for sustainability studies not currently found in sub-Saharan Africa. MCSS will train and certify public servants, government agencies, policymakers, entrepreneurs, personnel from organizations, and students on aspects of the SDGs and sustainability science. There is a need to add sustainability knowledge to environmental education, and to make environmental education a compulsory course in higher institutions and a secondary school certificate exam subject in sub-Saharan Africa. MCSS has 11 training modules that can be replicated anywhere in the world.
Keywords: sustainability, higher institutions, training, SDGs, collaboration, sub-Saharan Africa
Procedia PDF Downloads 99
12165 Innovative Pump Design Using the Concept of Viscous Fluid Sinusoidal Excitation
Authors: Ahmed H. Elkholy
Abstract:
The concept of applying a prescribed oscillation to viscous fluids to aid or increase flow is used to produce a maintenance-free pump. Application of this technique to fluids presents unique problems, such as physical separation; control of heat and mass transfer in certain industrial applications; and improvement of some fluid process methods. The problem as stated is to obtain the velocity distribution, wall shear stress and energy expended when a pipe containing a stagnant viscous fluid is externally excited by a sinusoidal pulse, one end of the pipe being pinned. In addition, the effects of different parameters on the results are presented. Such parameters include fluid viscosity, frequency of oscillations and pipe geometry. It was found that the flow velocity through the pump is maximum at the pipe wall, and it decreases rapidly towards the pipe centerline. The frequency of oscillation should be above a certain value in order to obtain meaningful flow velocity. The amount of energy absorbed in the system is mainly due to pipe wall strain energy, while the fluid pressure and kinetic energies are comparatively small.
Keywords: sinusoidal excitation, pump, shear stress, flow
Procedia PDF Downloads 315
12164 Model Based Fault Diagnostic Approach for Limit Switches
Authors: Zafar Mahmood, Surayya Naz, Nazir Shah Khattak
Abstract:
The degree of freedom relates to our capability to observe or model the energy paths within the system. The more energy paths are modeled, the higher the degree of freedom, but this increases modeling time and complexity, rendering the approach impractical for today's need for minimum time to market. Since the number of residuals that can be uniquely isolated depends on the number of independent outputs of the system, the number of sensors required increases. Examples of discrete position sensors that may be used to form an array include limit switches, Hall effect sensors, optical sensors, magnetic sensors, etc. Their mechanical design can usually be tailored to fit in the transitional path of an STME in a variety of mechanical configurations. Case studies of a multi-sensor system were carried out, and actual sensor data are used to test this generic framework. We investigate how the proper modeling of limit switches as timing sensors could lead to a unified and neutral residual space while keeping the implementation cost reasonably low.
Keywords: low-cost limit sensors, fault diagnostics, Single Throw Mechanical Equipment (STME), parameter estimation, parity-space
Procedia PDF Downloads 617
12163 The Impact of Voluntary Disclosure Level on the Cost of Equity Capital in Tunisian Listed Firms
Authors: Nouha Ben Salah, Mohamed Ali Omri
Abstract:
This paper examines the association between disclosure level and the cost of equity capital in Tunisian listed firms. This relation is tested using two models. The first tests the relation directly by regressing firm-specific estimates of the cost of equity capital on market beta, firm size and a measure of disclosure level. The second tests the relation by introducing information asymmetry as a mediator variable, following the approach suggested by Baron and Kenny (1986) to demonstrate the role of mediator variables in general. Based on a sample of 21 non-financial Tunisian listed firms over the period 2000 to 2004, the results show that greater disclosure is associated with a lower cost of equity capital. However, the results for the indirect relationship indicate a significant positive association between the level of voluntary disclosure and information asymmetry, and a significant negative association between information asymmetry and the cost of equity capital, contrary to our predictions. This result may be due to bias in the measure of information asymmetry.
Keywords: cost of equity capital, voluntary disclosure, information asymmetry, Tunisian listed non-financial firms
Procedia PDF Downloads 517
12162 Glorification Trap in Combating Human Trafficking in Indonesia: An Application of Three-Dimensional Model of Anti-Trafficking Policy
Authors: M. Kosandi, V. Susanti, N. I. Subono, E. Kartini
Abstract:
This paper discusses the risk of a glorification trap in combating human trafficking, as shown in the case of Indonesia. Based on research into Indonesia's combat against trafficking in 2017-2018, this paper shows a tendency toward misinterpretation and misapplication of the Indonesian anti-trafficking law, and its misuse for glorification, i.e., to create an image of achievement in combating human trafficking. The objective of this paper is to explain the persistent occurrence of human trafficking crimes despite the significant progress of the Indonesian government's anti-trafficking efforts. The research was conducted in 2017-2018 using a qualitative approach through observation, in-depth interviews, discourse analysis, and document study, applying the three-dimensional model for analyzing human trafficking in the source country. This paper argues that the drive for glorification of achievement in the combat against trafficking has trapped the Indonesian government in a loop of misinterpretation, misapplication, and misuse of the anti-trafficking law. As a result, this crime against humanity remains frequent and tends to increase in Indonesia.
Keywords: human trafficking, anti-trafficking policy, transnational crime, source country, glorification trap
Procedia PDF Downloads 167
12161 The Cost of Non-Communicable Diseases in the European Union: A Projection towards the Future
Authors: Desiree Vandenberghe, Johan Albrecht
Abstract:
Non-communicable diseases (NCDs) are responsible for the vast majority of deaths in the European Union (EU) and represent a large share of total health care spending. A future increase in this health and financial burden is likely to be driven by population ageing, lifestyle changes and technological advances in medicine. Without adequate prevention measures, this burden can severely threaten population health and economic development. To tackle this challenge, a correct assessment of the current burden of NCDs is required, as well as a projection of potential increases of this burden. The contribution of this paper is to offer perspective on the evolution of the NCD burden towards the future and to give an indication of the potential of prevention policy. A non-homogeneous semi-Markov model for the EU was constructed, which allowed for a projection of the cost burden for the four main NCDs (cancer, cardiovascular disease, chronic respiratory disease and diabetes mellitus) towards 2030 and 2050. This simulation is based on multiple baseline scenarios that vary in demand and supply factors such as health status, population structure, and technological advances. Finally, in order to assess the potential of preventive measures to curb the cost explosion of NCDs, a simulation is executed which includes increased efforts for preventive health care measures. According to the Markov model, by 2030 and 2050, total costs (direct and indirect costs) in the EU could increase by 30.1% and 44.1% respectively, compared to 2015 levels. An ambitious prevention policy framework for NCDs will be required if the EU wants to meet this challenge of rising costs. To conclude, significant cost increases due to non-communicable diseases are likely to occur due to demographic and lifestyle changes.
Nevertheless, an ambitious prevention program throughout the EU can aid in making this cost burden manageable for future generations.
Keywords: non-communicable diseases, preventive health care, health policy, Markov model, scenario analysis
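As a rough sketch of how such a cohort-level cost projection works, the toy Markov chain below pushes a population forward through health states by repeated multiplication with a transition matrix and reprices the cohort each year. The states, transition probabilities, and per-state costs are invented for illustration; the paper's actual model is a non-homogeneous semi-Markov model with far richer detail.

```python
import numpy as np

# Toy states: healthy, living with an NCD, dead (absorbing).
# All numbers below are illustrative assumptions, not the paper's estimates.
P = np.array([
    [0.94, 0.05, 0.01],   # healthy -> healthy / NCD / dead
    [0.00, 0.90, 0.10],   # NCD     -> NCD / dead
    [0.00, 0.00, 1.00],   # dead is absorbing
])
annual_cost = np.array([500.0, 8000.0, 0.0])    # cost per person per state

pop = np.array([1_000_000.0, 200_000.0, 0.0])   # baseline cohort, "2015"
baseline_cost = pop @ annual_cost

for _ in range(15):                              # project 15 years ahead
    pop = pop @ P                                # one year of transitions
projected_cost = pop @ annual_cost
growth = projected_cost / baseline_cost - 1.0    # relative cost increase
```

Because each row of `P` sums to one, the total cohort size is conserved; the cost growth comes purely from the shifting state occupancy, which is the mechanism behind the 30.1% and 44.1% figures in the abstract.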
Procedia PDF Downloads 140
12160 Fuzzy Optimization for Identifying Anticancer Targets in Genome-Scale Metabolic Models of Colon Cancer
Authors: Feng-Sheng Wang, Chao-Ting Cheng
Abstract:
Developing a drug from conception to launch is costly and time-consuming. Computer-aided methods can reduce research costs and accelerate the development process during the early drug discovery and development stages. This study developed a fuzzy multi-objective hierarchical optimization framework for identifying potential anticancer targets in a metabolic model. First, RNA-seq expression data of colorectal cancer samples and their healthy counterparts were used to reconstruct tissue-specific genome-scale metabolic models. The aim of the optimization framework was to identify anticancer targets that lead to cancer cell death and to evaluate the metabolic flux perturbations in normal cells caused by cancer treatment. Four objectives were established in the optimization framework to evaluate the mortality of cancer cells under treatment while minimizing side effects, i.e., toxicity-induced tumorigenesis in normal cells and large metabolic perturbations. Through fuzzy set theory, the multi-objective optimization problem was converted into a trilevel maximizing decision-making (MDM) problem. Nested hybrid differential evolution was applied to solve the trilevel MDM problem using each of two nutrient media to identify anticancer targets in the genome-scale metabolic model of colorectal cancer. Using Dulbecco’s Modified Eagle Medium (DMEM), the computational results reveal that the identified anticancer targets were mostly involved in cholesterol biosynthesis, pyrimidine and purine metabolism, the glycerophospholipid biosynthetic pathway and the sphingolipid pathway. However, using Ham’s medium, the genes involved in cholesterol biosynthesis were unidentifiable. A comparison of the uptake reactions for DMEM and Ham’s medium revealed that no cholesterol uptake reaction was included in DMEM.
Two additional media, i.e., DMEM with a cholesterol uptake reaction added and Ham’s medium with it removed, were used to investigate the relationship of tumor cell growth with nutrient components and anticancer target genes. The genes involved in cholesterol biosynthesis again proved identifiable when no cholesterol uptake reaction was active in the culture medium, but became unidentifiable when such a reaction was active.
Keywords: cancer metabolism, genome-scale metabolic model, constraint-based model, multilevel optimization, fuzzy optimization, hybrid differential evolution
Procedia PDF Downloads 80
12159 Effectiveness of Crystallization Coating Materials on Chloride Ions Ingress in Concrete
Authors: Mona Elsalamawy, Ashraf Ragab Mohamed, Abdellatif Elsayed Abosen
Abstract:
This paper aims to evaluate the effectiveness of different crystalline coating materials with respect to chloride ion penetration. The concrete age at coating installation and its moisture condition were addressed, as these two factors may play a dominant role in the effectiveness of the materials used. The rapid chloride penetration test (RCPT) was conducted at different ages and moisture conditions according to the relevant standard. In addition, the contaminated area and the penetration depth of the chloride ions were investigated immediately after the RCPT test using a chemical identifier, 0.1 M silver nitrate (AgNO3) solution. Results show that very low chloride ion penetrability was found for the studied crystallization materials only with the older concrete (G1). A significant reduction in chloride ion penetrability was observed after 7 days of installing the crystalline coating layers. Using ImageJ is more reliable for describing the contaminated area of chloride ions, since the distribution of aggregate and the heterogeneity of the cement mortar are considered in the image analysis.
Keywords: chloride permeability, contaminated area, crystalline waterproofing materials, RCPT, XRD
Procedia PDF Downloads 251
12158 Actual and Perceived Financial Sophistication and Wealth Accumulation: The Role of Education and Gender
Authors: Christina E. Bannier, Milena Neubert
Abstract:
This study examines the role of actual and perceived financial sophistication (i.e., financial literacy and confidence) in individuals’ wealth accumulation. Using survey data from the German SAVE initiative, we find strong gender- and education-related differences in the distribution of the two variables: whereas financial literacy rises with formal education, confidence increases with education for men but decreases for women. As a consequence, highly educated women become strongly underconfident, while men remain overconfident. We show that these differences influence wealth accumulation: the positive effect of financial literacy is stronger for women than for men and is increasing in women’s education but decreasing in men’s. For highly educated men, however, overconfidence closes this gap by increasing wealth via stronger financial engagement. Interestingly, female underconfidence does not reduce current wealth levels, though it weakens future-oriented financial engagement and may thus impair future wealth accumulation.
Keywords: financial literacy, financial sophistication, confidence, wealth, household finance, behavioral finance, gender, formal education
Procedia PDF Downloads 268
12157 Downstream Supply Chain Collaboration: The Cornerstone of the Global Supply Chain
Authors: Fatiha Naaoui-Outini
Abstract:
Purpose – The purpose of this paper is to shed light on how a downstream supply chain facilitates customer service performance (B2B) through more collaborative practices between the different stakeholders in the chain. Methodology/approach – The paper developed a theoretical framework and conducted a qualitative exploratory study based on six semi-structured interviews with two international groups in the distribution sector, with the aim of understanding and analyzing how companies have changed their supply chains to ensure optimal customer service. Findings/Implications – The study contributes to the global supply chain management and collaboration literature by integrating the role of the downstream supply chain into research on what actually influences customer service performance in B2B. Our findings also provide firms with some guidelines on building successful downstream supply chain collaboration, which has a significant influence on customer service performance in B2B. Because of the exploratory nature of the study, the research results are limited to the data collected, and these preliminary findings require further confirmation.
Keywords: customer service performance (B2B), global supply chain, downstream supply collaboration, qualitative case study
Procedia PDF Downloads 148
12156 Proposing an Architecture for Drug Response Prediction by Integrating Multiomics Data and Utilizing Graph Transformers
Authors: Nishank Raisinghani
Abstract:
Efficiently predicting drug response remains a challenge in the realm of drug discovery. To address this issue, we propose four model architectures that combine graphical representation with varying positions of multiheaded self-attention mechanisms. By leveraging two types of multi-omics data, transcriptomics and genomics, we create a comprehensive representation of target cells and enable drug response prediction in precision medicine. A majority of our architectures utilize multiple transformer models, one with a graph attention mechanism and the other with a multiheaded self-attention mechanism, to generate latent representations of the drug and omics data, respectively. Our model architectures apply an attention mechanism to both drug and multiomics data, with the goal of procuring more comprehensive latent representations. The latent representations are then concatenated and input into a fully connected network to predict the IC-50 score, a measure of cell drug response. We experiment with all four of these architectures and extract results from all of them. Our study greatly contributes to the future of drug discovery and precision medicine by looking to optimize the time and accuracy of drug response prediction.
Keywords: drug discovery, transformers, graph neural networks, multiomics
Procedia PDF Downloads 153
12155 Evaluating Forecasts Through Stochastic Loss Order
Authors: Wilmer Osvaldo Martinez, Manuel Dario Hernandez, Juan Manuel Julio
Abstract:
We propose to assess the performance of k forecast procedures by exploring the distributions of forecast errors and error losses. We argue that non-systematic forecast errors are minimized when their distributions are symmetric and unimodal, and that forecast accuracy should be assessed through stochastic loss order rather than expected loss order, which is the way it is customarily assessed in previous work. Moreover, since forecast performance evaluation can be understood as a one-way analysis of variance, we propose to explore loss distributions under two circumstances: when a strict (but unknown) joint stochastic order exists among the losses of all forecast alternatives, and when such an order holds among subsets of the alternative procedures. Although loss stochastic order is stronger than loss moment order, our proposals are at least as powerful as competing tests and are robust to the correlation, autocorrelation and heteroskedasticity settings they consider. In addition, since our proposals do not require samples of the same size, their scope is wider; and because they test the whole loss distribution instead of just loss moments, they can also be used to study forecast distributions. We illustrate the usefulness of our proposals by evaluating a set of real-world forecasts.
Keywords: forecast evaluation, stochastic order, multiple comparison, non-parametric test
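A minimal version of the idea, comparing two samples of forecast losses through their empirical CDFs (first-order stochastic order) rather than through mean loss alone, might look like the sketch below. It is a simplified illustration, not the authors' test; the loss samples are synthetic, clearly separated by construction, and deliberately of different sizes, echoing the point that equal sample sizes are not required.

```python
import numpy as np

def stochastically_smaller(losses_a, losses_b, grid_size=200):
    """Check (empirically) whether procedure A's losses are stochastically
    smaller than B's: F_A(x) >= F_B(x) at every grid point, i.e. A piles
    more probability mass on small losses. Sample sizes may differ."""
    lo = min(losses_a.min(), losses_b.min())
    hi = max(losses_a.max(), losses_b.max())
    grid = np.linspace(lo, hi, grid_size)
    F_a = (losses_a[:, None] <= grid[None, :]).mean(axis=0)  # empirical CDF of A
    F_b = (losses_b[:, None] <= grid[None, :]).mean(axis=0)  # empirical CDF of B
    return bool(np.all(F_a >= F_b))

rng = np.random.default_rng(1)
loss_a = np.abs(rng.normal(size=500))        # losses of a sharper procedure
loss_b = np.abs(rng.normal(size=300)) + 1.5  # same shape, shifted to larger losses
```

Here `loss_a` stochastically dominates (in the loss-smaller sense) `loss_b` at every point of the grid, a strictly stronger statement than merely having a smaller mean loss.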
Procedia PDF Downloads 89
12154 Competitive Advantage Challenges in the Apparel Manufacturing Industries of South Africa: Application of Porter’s Factor Conditions
Authors: Sipho Mbatha, Anne Mastament-Mason
Abstract:
South African manufacturing global competitiveness was ranked 22nd (out of 38 countries), dropped to 24th in 2013, and is expected to drop further to 25th by 2018. This impacts negatively on the industrialisation project of South Africa. For industrialisation to be achieved through labour-intensive industries like the Apparel Manufacturing Industries of South Africa (AMISA), South Africa needs to identify and respond to factors negatively impacting the development of competitive advantage. This paper applied factor conditions from Porter’s Diamond Model (1990) to understand the various challenges facing the AMISA. Factor conditions highlighted in Porter’s model fall into two groups: basic and advanced factors. Two AMISA associations, representing over 10 000 employees, were interviewed, along with the largest clothing, textiles and leather (CTL) apparel retail group and a government department implementing the industrialisation policy. The paper points out that while the AMISA have the basic factor conditions necessary for competitive advantage in the clothing and textiles industries, advanced factor coordination has proven to be a challenging task for the AMISA, higher education institutions (HEIs) and government. Poor infrastructural maintenance has contributed to high manufacturing costs and poor quick response, as a result of a lack of advanced technologies. The use of Porter’s factor conditions as a tool to analyse the sector’s competitive advantage challenges and opportunities has increased knowledge of the factors that limit the AMISA’s competitiveness.
It is therefore argued that studies of the other Porter’s Diamond Model factors, i.e., demand conditions; firm strategy, structure and rivalry; and related and supporting industries, can be used to analyse the situation of the AMISA for the purposes of improving competitive advantage.
Keywords: compliance rule, apparel manufacturing industry, factor conditions, advanced skills, South African industrial policy
Procedia PDF Downloads 362
12153 Multi-Spectral Deep Learning Models for Forest Fire Detection
Authors: Smitha Haridasan, Zelalem Demissie, Atri Dutta, Ajita Rattani
Abstract:
Aided by the wind, all it takes is one ember and a few minutes to start a wildfire. Wildfires are growing in frequency and size due to climate change, and wildfires and their consequences are among the major environmental concerns. Every year, millions of hectares of forests are destroyed around the world, causing mass destruction and human casualties. Early detection of wildfire thus becomes a critical component in mitigating this threat. Many computer vision-based techniques have been proposed for the early detection of forest fire using video surveillance, predicting and detecting forest fires in various spectrums, namely RGB, HSV, and YCbCr. The aim of this paper is to propose a multi-spectral deep learning model that combines information from different spectrums at intermediate layers for accurate fire detection. A heterogeneous dataset assembled from publicly available datasets is used for model training and evaluation in this study. The experimental results show that multi-spectral deep learning models obtain an improvement of about 4.68% over those based on a single spectrum for fire detection.
Keywords: deep learning, forest fire detection, multi-spectral learning, natural hazard detection
Procedia PDF Downloads 241
12152 Prediction of the Crustal Deformation of Volcán Nevado del Ruíz in the Year 2020 Using Tropomi Tropospheric Information, DInSAR Technique, and Neural Networks
Authors: Juan Sebastián Hernández
Abstract:
The Nevado del Ruíz volcano, located on the border between the Departments of Caldas and Tolima in Colombia, exhibited unstable behaviour over the course of 2020. This volcanic activity led to secondary effects on the crust, making the prediction of deformation a key task for geoscientists. This article uses tropospheric variables such as evapotranspiration, UV aerosol index, carbon monoxide, nitrogen dioxide, methane and surface temperature, among others, to train a set of neural networks that predict the behaviour of the resulting phase of an interferogram unwrapped with the DInSAR technique, with the main objective of identifying and characterising the behaviour of the crust based on environmental conditions. For this purpose, the variables were collected, a generalised linear model was created, and a set of neural networks was trained. After training, the network was validated with the test data, giving an MSE of 0.17598 and an associated r-squared of approximately 0.88454. The resulting model provided a dataset with good thematic accuracy, reflecting the behaviour of the volcano in 2020 given a set of environmental characteristics.
Keywords: crustal deformation, Tropomi, neural networks (ANN), volcanic activity, DInSAR
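The quoted validation metrics can be reproduced in form (not in value) with a small sketch: fit a model on a training split, then compute MSE and r-squared on held-out data. The synthetic predictors below merely stand in for the tropospheric variables, and ordinary least squares stands in for the article's generalised linear model and neural networks; none of this is the article's actual data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-ins for tropospheric predictors (e.g. CO, NO2, surface
# temperature) and the unwrapped interferogram phase regressed on them.
X = rng.normal(size=(300, 3))
true_w = np.array([0.8, -0.5, 0.3])
y = X @ true_w + rng.normal(scale=0.2, size=300)   # "phase" + noise

# Train/validation split, mirroring the article's train-then-validate workflow.
X_tr, y_tr = X[:240], y[:240]
X_val, y_val = X[240:], y[240:]

# Ordinary least squares as a simple stand-in for the fitted model.
w, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)
pred = X_val @ w

mse = float(np.mean((y_val - pred) ** 2))
ss_res = float(np.sum((y_val - pred) ** 2))
ss_tot = float(np.sum((y_val - y_val.mean()) ** 2))
r2 = 1.0 - ss_res / ss_tot   # coefficient of determination on held-out data
```

The key point is that both metrics are computed only on the validation split, which is what makes figures like MSE = 0.17598 and r² ≈ 0.88454 statements about generalisation rather than about fit.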
Procedia PDF Downloads 103
12151 Across-Breed Genetic Evaluation of New Zealand Dairy Goats
Authors: Nicolas Lopez-Villalobos, Dorian J. Garrick, Hugh T. Blair
Abstract:
Many New Zealand dairy goat farmers milk herds of mixed-breed does. Simultaneous evaluation of sires and does across breeds is required to select the best animals for breeding on a common basis. Across-breed estimated breeding values (EBV) and estimated producing values for 208-day lactation yields of milk (MY), fat (FY), protein (PY) and somatic cell score (SCS; LOG2(SCC)) of Saanen, Nubian, Alpine, Toggenburg and crossbred dairy goats from 75 herds were estimated using a test-day model. Evaluations were based on 248,734 herd-test records representing 125,374 lactations from 65,514 does sired by 930 sires over 9 generations. Averages of MY, FY and PY were 642 kg, 21.6 kg and 19.8 kg, respectively. Average SCC and SCS were 936,518 cells/ml milk and 9.12. Purebred Saanen does out-produced the other breeds in MY, FY and PY. Average EBVs for MY, FY and PY compared to a Saanen base were: Nubian -98 kg, 0.1 kg and -1.2 kg; Alpine -64 kg, -1.0 kg and -1.7 kg; and Toggenburg -42 kg, -1.0 kg and -0.5 kg. First-cross heterosis estimates were 29 kg MY, 1.1 kg FY and 1.2 kg PY. Average EBVs for SCS compared to a Saanen base were Nubian 0.041, Alpine -0.083 and Toggenburg 0.094. Heterosis for SCS was 0.03. Breeding values are combined with their respective economic values to calculate an economic index used for ranking sires and does to reflect farm profit.
Keywords: breed effects, dairy goats, milk traits, test-day model
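The closing step, combining breeding values with economic values into a ranking index, can be sketched as a weighted sum. The per-breed average EBVs below are taken from the abstract (relative to a Saanen base); the economic weights are invented for illustration, not the evaluation's actual values.

```python
import numpy as np

# Average EBVs per breed for milk (kg), fat (kg), protein (kg) and SCS,
# relative to a Saanen base, as reported in the abstract.
breeds = ["Saanen", "Nubian", "Alpine", "Toggenburg"]
ebv = np.array([
    [  0.0,  0.0,  0.0,  0.000],
    [-98.0,  0.1, -1.2,  0.041],
    [-64.0, -1.0, -1.7, -0.083],
    [-42.0, -1.0, -0.5,  0.094],
])

# Illustrative economic values ($ per unit of each trait); SCS is penalised.
econ = np.array([0.10, 4.0, 6.0, -15.0])

index = ebv @ econ            # one economic index per breed average
order = np.argsort(-index)    # highest index (most profitable) first
ranking = [breeds[i] for i in order]
```

In practice the same weighted sum is applied to individual sires and does rather than breed averages, giving the profit-oriented ranking the abstract describes.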
Procedia PDF Downloads 330
12150 Optimized Dynamic Bayesian Networks and Neural Verifier Test Applied to On-Line Isolated Characters Recognition
Authors: Redouane Tlemsani, Belkacem Kouninef, Abdelkader Benyettou
Abstract:
In this paper, our system is a Markovian system that can be viewed as a Dynamic Bayesian Network. One of the major attractions of these systems is that the models (topology and parameters) can be trained entirely from training data. Bayesian Networks represent models of uncertain knowledge about complex phenomena. They are a union between probability theory and graph theory, providing effective tools to represent a joint probability distribution over a set of random variables: knowledge is represented by describing, through graphs, the causal relations that exist between the variables defining the field of study. The theory of Dynamic Bayesian Networks generalises Bayesian networks to dynamic processes. Our objective amounts to finding the best structure representing the relationships (dependencies) between the variables of a dynamic Bayesian network. In pattern-recognition applications, the structure is fixed in advance, which obliges us to accept some strong assumptions (for example, independence between some variables).
Keywords: Arabic on-line character recognition, dynamic Bayesian network, pattern recognition, networks
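The core factorisation behind a dynamic Bayesian network can be illustrated with a toy two-state chain: the joint probability of a state sequence is the initial probability times the per-step transition probabilities, P(x1..xT) = P(x1) · Π P(xt | xt-1). This is a minimal sketch, not the paper's character-recognition model; the probabilities are hypothetical.

```python
# Toy two-state dynamic Bayesian network (a first-order Markov chain):
# the joint probability of a sequence factorises over time slices.

initial = {"A": 0.6, "B": 0.4}
transition = {  # P(next | current)
    "A": {"A": 0.7, "B": 0.3},
    "B": {"A": 0.2, "B": 0.8},
}

def sequence_probability(states):
    p = initial[states[0]]
    for prev, curr in zip(states, states[1:]):
        p *= transition[prev][curr]
    return p

print(sequence_probability(["A", "A", "B"]))  # 0.6 * 0.7 * 0.3
```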
Procedia PDF Downloads 618
12149 How Cultural Tourists Perceive Authenticity in World Heritage Historic Centers: An Empirical Research
Authors: Odete Paiva, Cláudia Seabra, José Luís Abrantes, Fernanda Cravidão
Abstract:
There is a clear ‘cult of authenticity’, at least in modern Western society, so there is a need to analyze tourists' perception of authenticity, bearing in mind the destination, its attractions, motivations, cultural distance, and contact with other tourists. Our study investigates the relationships among cultural values, image, sense of place, perception of authenticity and behavioral intentions at World Heritage Historic Centers. From a theoretical perspective, little research has focused on the impact of cultural values, image and sense of place on tourists' perceived authenticity and behavioral intentions; this study helps close that gap. A survey was applied to collect data from tourists visiting two World Heritage Historic Centers, Guimarães in Portugal and Córdoba in Spain. Data were analyzed to estimate a structural equation model (SEM). The discussion centers on the implications of the model for theory and for the development of tourism management strategies. Recommendations are addressed to destination managers and promoters and to administrators of tourist organizations.
Keywords: authenticity perception, behavior intentions, cultural tourism, cultural values, world heritage historic centers
Procedia PDF Downloads 316
12148 A General Framework for Knowledge Discovery from Echocardiographic and Natural Images
Authors: S. Nandagopalan, N. Pradeep
Abstract:
The aim of this paper is to propose a general framework for storing, analyzing, and extracting knowledge from two-dimensional echocardiographic images, color Doppler images, non-medical images, and general data sets. A number of high-performance data mining algorithms have been used to carry out this task. Our framework encompasses four layers: physical storage, object identification, knowledge discovery, and the user level. Techniques such as an active contour model to identify the cardiac chambers, pixel classification to segment the color Doppler echo image, a universal model for image retrieval, a Bayesian method for classification, and parallel algorithms for image segmentation were employed. Using the feature-vector database that has been efficiently constructed, one can perform various data mining tasks such as clustering and classification with efficient algorithms, along with image mining given a query image. All these facilities are included in the framework, which is supported by a state-of-the-art user interface (UI). The algorithms were tested with actual patient data and the Corel image database, and the results show that their performance is better than previously reported results.
Keywords: active contour, Bayesian, echocardiographic image, feature vector
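The query-by-example retrieval that such a framework supports can be sketched as a nearest-neighbour search over the feature-vector database; the vectors below are hypothetical stand-ins for extracted image features, not the framework's actual descriptors.

```python
# Query-by-example retrieval: return the stored image whose feature vector is
# closest (Euclidean distance) to the query image's feature vector.
import math

database = {
    "echo_001": [0.9, 0.1, 0.4],
    "echo_002": [0.2, 0.8, 0.5],
    "natural_17": [0.1, 0.2, 0.9],
}

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest(query):
    return min(database, key=lambda k: euclidean(database[k], query))

print(nearest([0.85, 0.15, 0.35]))  # closest to echo_001
```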
Procedia PDF Downloads 445
12147 DTI Connectome Changes in the Acute Phase of Aneurysmal Subarachnoid Hemorrhage Improve Outcome Classification
Authors: Sarah E. Nelson, Casey Weiner, Alexander Sigmon, Jun Hua, Haris I. Sair, Jose I. Suarez, Robert D. Stevens
Abstract:
Aneurysmal subarachnoid hemorrhage (aSAH) can lead to significant morbidity and mortality, and its outcome prediction has traditionally been fraught with poor methods. In this study, graph-theoretical information from structural connectomes indicated significant connectivity changes and improved acute prognostication in a Random Forest (RF) model. The study's hypothesis was that structural connectivity changes occur in canonical brain networks of acute aSAH patients and that these changes are associated with functional outcome at six months. In a prospective cohort of patients admitted to a single institution for management of acute aSAH, patients underwent diffusion tensor imaging (DTI) as part of a multimodal MRI scan. A weighted undirected structural connectome was created from each patient's images using Constant Solid Angle (CSA) tractography, with 176 regions of interest (ROIs) defined by the Johns Hopkins Eve atlas. ROIs were sorted into four networks: Default Mode Network, Executive Control Network, Salience Network, and Whole Brain. The resulting nodes and edges were characterized using graph-theoretic features, including Node Strength (NS), Betweenness Centrality (BC), Network Degree (ND), and Connectedness (C). Clinical features (including demographics and the World Federation of Neurosurgical Societies scale) and graph features were used separately and in combination to train RF and Logistic Regression classifiers to predict two outcomes: dichotomized modified Rankin Score (mRS) at discharge and at six months after discharge (favorable outcome mRS 0-2, unfavorable outcome mRS 3-6). A total of 56 aSAH patients underwent DTI a median of 7 (IQR 8.5) days after admission. The best performing model (RF), combining clinical and DTI graph features, had a mean Area Under the Receiver Operator Characteristic Curve (AUROC) of 0.88 ± 0.00 and Area Under the Precision Recall Curve (AUPRC) of 0.95 ± 0.00 over 500 trials.
The combined model performed better than the clinical model alone (AUROC 0.81 ± 0.01, AUPRC 0.91 ± 0.00). The highest-ranked graph features for prediction were NS, BC, and ND. These results indicate reorganization of the connectome early after aSAH. The performance of clinical prognostic models was increased significantly by the inclusion of DTI-derived graph connectivity metrics. This methodology could significantly improve the prognostication of aSAH.
Keywords: connectomics, diffusion tensor imaging, graph theory, machine learning, subarachnoid hemorrhage
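The AUROC metric used to compare the classifiers above can be computed as the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case; the labels and risk scores below are hypothetical, not the study's predictions.

```python
# Rank-based AUROC: fraction of positive/negative pairs in which the positive
# case is scored higher (ties count as half).

def auroc(labels, scores):
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# 1 = unfavorable outcome (mRS 3-6), 0 = favorable (mRS 0-2); hypothetical
labels = [1, 0, 1, 1, 0, 0]
scores = [0.9, 0.3, 0.7, 0.35, 0.4, 0.2]  # model-predicted risk

print(round(auroc(labels, scores), 3))
```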
Procedia PDF Downloads 189
12146 Hydrological Analysis for Urban Water Management
Authors: Ranjit Kumar Sahu, Ramakar Jha
Abstract:
Urban water management is the practice of managing freshwater, wastewater, and storm water as components of a basin-wide management plan. It builds on existing water supply and sanitation considerations within an urban settlement by incorporating urban water management within the scope of the entire river basin. The pervasive problems generated by urban development prompted the present work to study the spatial extent of urbanization in the Golden Triangle of Odisha connecting the cities of Bhubaneswar (20.2700° N, 85.8400° E), Puri (19.8106° N, 85.8314° E) and Konark (19.9000° N, 86.1200° E), and the patterns of periodic change in urban development (systematic/random), in order to develop future plans for (i) urbanization promotion areas and (ii) urbanization control areas. Using remote sensing with USGS (U.S. Geological Survey) Landsat 8 imagery, supervised classification of the urban sprawl was carried out for 1980-2014, with particular focus on the period after 2000. This work presents: (i) time series analysis of hydrological data (groundwater and rainfall), (ii) application of SWMM (Storm Water Management Model) and other soft computing techniques to urban water management, and (iii) uncertainty analysis of model parameters (urban sprawl and correlation analysis). The outcome of the study shows drastic growth in urbanization and depletion of groundwater levels in the area, which are discussed briefly. Other related outcomes, such as the declining trend of rainfall and the rise of sand mining in the local vicinity, are also discussed.
Work of this kind will (i) improve water supply and consumption efficiency, (ii) upgrade drinking water quality and wastewater treatment, (iii) increase the economic efficiency of services to sustain operations and investments for water, wastewater, and storm water management, and (iv) engage communities to reflect their needs and knowledge for water management.
Keywords: Storm Water Management Model (SWMM), uncertainty analysis, urban sprawl, land use change
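The time-series trend analysis mentioned above can be sketched as an ordinary-least-squares linear trend fitted to an annual series; the groundwater-depth values below are hypothetical, not the study's observations.

```python
# Ordinary-least-squares linear trend: slope of value vs. year.
# A positive slope in depth-to-water indicates a declining water table.

def linear_trend(years, values):
    n = len(years)
    mx = sum(years) / n
    my = sum(values) / n
    num = sum((x - mx) * (y - my) for x, y in zip(years, values))
    den = sum((x - mx) ** 2 for x in years)
    return num / den  # units per year

years = [2000, 2005, 2010, 2014]
depth_to_water_m = [8.0, 9.1, 10.3, 11.2]  # hypothetical pre-monsoon depths

print(round(linear_trend(years, depth_to_water_m), 3))
```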
Procedia PDF Downloads 425
12145 Species Diversity of Some Remarkable New Records of Lichens from Chwarta District, Kurdistan Region of Iraq
Authors: Salah Abdulla Salih, Muhamad Sohrabi
Abstract:
This paper studies the species, morphology, habitats and geographical distribution of new records of lichens from the Chwarta district, Kurdistan Region of Iraq, from September 2022 to June 2023. As a result of the survey of different locations in the district, eleven lichen species are reported from Iraq for the first time. Of these, Lecanora thallophila H. Magn. is the first parasitic lichen species reported from Iraq, and the lichen genus Acarospora A. Massal. is newly recorded. The lichenized fungi are Acarospora bullata Anzi, Acarospora laqueata Stizenb., Acarospora umbilicata Bagl., Acrocordia gemmata (Ach.) A. Massal., Squamulea subsoluta (Nyl.) Arup, Søchting & Frödén, Coppinsiella ulcerosa (Coppins & P.James) S.Y.Kondr. & L.Lőkös, Myriolecis flowersiana (H. Magn.) Śliwa, Zhao Xin & Lumbsch, Megaspora rimisorediata Valadbeigi & A. Nordin sp. nov., Physcia subtilis Degel., and Polycauliona polycarpa (Hoffm.) Frödén, Arup & Søchting.
Keywords: apothecia, lichen species, new records, Kurdistan region
Procedia PDF Downloads 82
12144 Designing Sustainable Building Based on Iranian's Windmills
Authors: Negar Sartipzadeh
Abstract:
Energy-conscious design, which coordinates with the Earth's ecological systems throughout its life cycle, has the least negative impact on the environment and the least waste of resources. Due to the increase in world population, as well as the consumption of fossil fuels that produce greenhouse gases and environmental pollution, mankind is looking for renewable and sustainable energies. Iranian vernacular construction is clear evidence of energy-aware design: our predecessors were forced to rely on natural resources and sustainable energies, the same environmental considerations now being taken up in the modern world. One of these endless energies is wind energy. The foundations of traditional Iranian architecture are an appropriate model for solving the contemporary environmental and energy crisis. This paper is an effort to recognise and introduce the unique characteristics of Iranian architecture in the application of aerodynamic and hydraulic energies derived from the wind, which are the most common and important uses of sustainable energy in the traditional architecture of Iran. The research therefore attempts to offer hybrid-system suggestions for the design of new buildings in a region such as Nashtifan, which has this potential, by reviewing windmills and how they exploit sustainable energy sources as a model of Iranian vernacular construction.
Keywords: renewable energy, sustainable building, windmill, Iranian architecture
Procedia PDF Downloads 422
12143 Resisting Adversarial Assaults: A Model-Agnostic Autoencoder Solution
Authors: Massimo Miccoli, Luca Marangoni, Alberto Aniello Scaringi, Alessandro Marceddu, Alessandro Amicone
Abstract:
The susceptibility of deep neural networks (DNNs) to adversarial manipulations is a recognized challenge within the computer vision domain. Adversarial examples, crafted by adding subtle yet malicious alterations to benign images, exploit this vulnerability. Various defense strategies, stemming from diverse research hypotheses, have been proposed to safeguard DNNs against such attacks. Building upon prior work, our approach utilizes autoencoder models. Autoencoders, a type of neural network, are trained to learn representations of training data and to reconstruct inputs from these representations, typically by minimizing a reconstruction error such as mean squared error (MSE). Our autoencoder was trained on a dataset of benign examples, learning features specific to them. Consequently, when presented with significantly perturbed adversarial examples, the autoencoder exhibited high reconstruction errors. The architecture of the autoencoder was tailored to the dimensions of the images under evaluation; we considered various image sizes, constructing models differently for 256x256 and 512x512 images. Moreover, the choice of computer vision model is crucial, as most adversarial attacks are designed with specific AI structures in mind. To mitigate this, we proposed a method that replaces image-specific dimensions with a structure independent of both image dimensions and neural network models, thereby enhancing robustness. Our multi-modal autoencoder reconstructs the spectral representation of images across the red-green-blue (RGB) color channels. To validate our approach, we conducted experiments using diverse datasets and subjected them to adversarial attacks using models such as ResNet50 and ViT_L_16 from the torchvision library.
The autoencoder's extracted features were used in a classification model, resulting in an MSE (RGB) of 0.014, a classification accuracy of 97.33%, and a precision of 99%.
Keywords: adversarial attacks, malicious images detector, binary classifier, multimodal transformer autoencoder
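The detection rule underlying this kind of defense, flagging an input when its reconstruction error exceeds a threshold calibrated on benign data, can be sketched as follows. The `reconstruct` function here is a toy stand-in for the trained multi-modal autoencoder, and the threshold and sample vectors are hypothetical.

```python
# Reconstruction-error detector: an input is flagged as adversarial when the
# MSE between the input and its (stand-in) autoencoder reconstruction exceeds
# a threshold calibrated on benign examples.

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def reconstruct(x):
    # Stand-in for autoencoder inference: projects values back to the benign
    # pixel range [0, 1], so in-range inputs reconstruct almost perfectly.
    return [min(max(v, 0.0), 1.0) for v in x]

def is_adversarial(x, threshold=0.01):
    return mse(x, reconstruct(x)) > threshold

benign = [0.2, 0.5, 0.8, 0.1]          # reconstructs exactly: error 0
perturbed = [0.2, 1.6, -0.7, 0.1]      # large out-of-distribution perturbation

print(is_adversarial(benign), is_adversarial(perturbed))
```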
Procedia PDF Downloads 113
12142 Application of Hydrological Engineering Centre – River Analysis System (HEC-RAS) to Estuarine Hydraulics
Authors: Julia Zimmerman, Gaurav Savant
Abstract:
This study aims to evaluate the efficacy of applying the U.S. Army Corps of Engineers’ River Analysis System (HEC-RAS) to modeling the hydraulics of estuaries. HEC-RAS has been broadly used for a variety of riverine applications; however, it has not been widely applied to the study of circulation in estuaries. This report details the model development and validation of a combined 1D/2D unsteady-flow hydraulic model, using HEC-RAS, for estuaries and their associated tidally influenced rivers. Two estuaries, Galveston Bay and Delaware Bay, were used as case studies. Galveston Bay, a bar-built, vertically mixed estuary, was modeled for the 2005 calendar year. Delaware Bay, a drowned-river-valley estuary, was modeled from October 22, 2019, to November 5, 2019. Water surface elevation was used to validate both models by comparing simulation results to gauge data from NOAA’s Center for Operational Oceanographic Products and Services (CO-OPS). Simulations were run using the Diffusion Wave Equations (DW), the Shallow Water Equations with the Eulerian-Lagrangian Method (SWE-ELM), and the Shallow Water Equations with the Eulerian Method (SWE-EM), and were compared for both accuracy and the computational resources required. In general, the Diffusion Wave Equations results were comparable to those of the two Shallow Water equation sets while requiring less computational power. The 1D/2D combined approach was valid for study areas within the 2D flow area, with the 1D flow serving mainly as an inflow boundary condition. Within the Delaware Bay estuary, the HEC-RAS DW model ran in 22 minutes and had an average R² value of 0.94 within the 2D mesh. The Galveston Bay HEC-RAS DW model ran in 6 hours and 47 minutes and had an average R² value of 0.83 within the 2D mesh. The longer run time and lower R² for Galveston Bay can be attributed to the longer time frame modeled and the greater complexity of the estuarine system.
The models did not accurately capture tidal effects within the 1D flow area.
Keywords: Delaware Bay, estuarine hydraulics, Galveston Bay, HEC-RAS, one-dimensional modeling, two-dimensional modeling
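A tidal stage boundary condition of the kind driving such estuary models can be sketched from a single harmonic constituent; the amplitude, phase, and datum offset below are hypothetical values, not the NOAA gauge constituents used in the study.

```python
# Tidal stage from a single M2 harmonic constituent:
# stage(t) = datum + amplitude * cos(2*pi*t/T - phase)
import math

M2_PERIOD_H = 12.42   # M2 tidal period in hours
amplitude_m = 0.4     # hypothetical
mean_stage_m = 0.1    # hypothetical datum offset
phase_rad = 0.0

def tidal_stage(t_hours):
    return mean_stage_m + amplitude_m * math.cos(
        2 * math.pi * t_hours / M2_PERIOD_H - phase_rad)

series = [tidal_stage(t) for t in range(25)]  # hourly stages for one day
print(round(tidal_stage(0), 3))
```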
Procedia PDF Downloads 199
12141 A Calibration Method of Portable Coordinate Measuring Arm Using Bar Gauge with Cone Holes
Authors: Rim Chang Hyon, Song Hak Jin, Song Kwang Hyok, Jong Ki Hun
Abstract:
The calibration of the articulated arm coordinate measuring machine (AACMM) is key to improving calibration accuracy and saving calibration time. To reduce the time consumed by calibration, we should choose proper calibration gauges and develop a reasonable calibration method. In addition, we should reach the exact optimal solution by accurately removing the rough errors from the experimental data. In this paper, we present a calibration method for the portable coordinate measuring arm (PCMA) using a 1.2 m long bar gauge with cone holes. First, we determine the locations of the bar gauge and establish an optimal objective function for identifying the structural parameter errors. Next, we build a mathematical model of the calibration algorithm and present a new mathematical method to remove the rough errors from the calibration data. Finally, we find the optimal solution identifying the kinematic parameter errors using the Levenberg-Marquardt algorithm. The experimental results show that our calibration method is very effective in saving calibration time and improving calibration accuracy.
Keywords: AACMM, kinematic model, parameter identification, measurement accuracy, calibration
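The rough-error removal step can be illustrated with a generic robust screen, here a median-absolute-deviation (MAD) rule; this is an illustrative stand-in, not the paper's specific method, and the residuals are hypothetical bar-gauge measurement residuals.

```python
# Robust rough-error screening: residuals more than k robust standard
# deviations from the median are discarded before parameter optimisation.
# The median/MAD pair resists the influence of the gross error itself.

def remove_rough_errors(residuals, k=3.0):
    s = sorted(residuals)
    median = s[len(s) // 2]
    mad = sorted(abs(r - median) for r in residuals)[len(residuals) // 2]
    scale = 1.4826 * mad  # converts MAD to a sigma estimate for normal data
    return [r for r in residuals if abs(r - median) <= k * scale]

# Hypothetical distance residuals (mm); 9.0 is a gross (rough) error
residuals = [0.02, -0.01, 0.03, 0.00, -0.02, 9.0, 0.01]
clean = remove_rough_errors(residuals)
print(clean)
```

A mean/standard-deviation (3-sigma) screen would fail here: the gross error inflates the standard deviation enough to hide itself, which is why a robust scale estimate is used.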
Procedia PDF Downloads 83
12140 Comparison between Hardy-Cross Method and Water Software to Solve a Pipe Networking Design Problem for a Small Town
Authors: Ahmed Emad Ahmed, Zeyad Ahmed Hussein, Mohamed Salama Afifi, Ahmed Mohammed Eid
Abstract:
Water is of great importance in life. In order to deliver water from resources to users, many procedures must be undertaken by water engineers. One of the main procedures for delivering water to a community is designing pressurized pipe networks. The main aim of this work is to calculate the water demand of a small town and then design a simple water network to distribute water resources across the town with the smallest losses. The literature covering the main points of water distribution is reviewed. The methodology introduces two approaches to solving the research problem: the iterative Hardy-Cross method and the water software Pipe Flow. The results comprise two designs satisfying the same requirements. Finally, the researchers conclude that the use of water software provides more capabilities and options for water engineers.
Keywords: looping pipe networks, Hardy-Cross network accuracy, relative error of the Hardy-Cross method
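The Hardy-Cross iteration itself can be sketched for a single loop of two parallel pipes: the loop flow correction ΔQ = -Σh / Σ(dh/dQ) is applied repeatedly until the head losses around the loop balance. The resistances and demand below are hypothetical, not the town's network data.

```python
# Hardy-Cross loop correction for one loop with head loss h = r * |Q| * Q
# (Darcy-Weisbach form, exponent n = 2). Signed flows are in the loop's
# positive direction, so adding the same dq to every pipe preserves continuity.

n = 2.0  # head-loss exponent

def hardy_cross(resistances, flows, iterations=50):
    q = list(flows)
    for _ in range(iterations):
        h = [r * abs(qi) ** (n - 1) * qi for r, qi in zip(resistances, q)]
        dh_dq = [n * r * abs(qi) ** (n - 1) for r, qi in zip(resistances, q)]
        dq = -sum(h) / sum(dh_dq)  # loop correction
        q = [qi + dq for qi in q]
    return q

# Two parallel pipes carrying a total of 100 L/s between two nodes
r = [4.0, 1.0]        # hypothetical resistance coefficients
q0 = [60.0, -40.0]    # initial guess (pipe 2 opposes the loop direction)
q1, q2 = hardy_cross(r, q0)
print(round(q1, 2), round(q2, 2))
```

At convergence the flows split so that both pipes have equal head loss (4·q1² = q2²) while continuity (q1 − q2 = 100) is preserved by construction.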
Procedia PDF Downloads 165
12139 Exploring Syntactic and Semantic Features for Text-Based Authorship Attribution
Authors: Haiyan Wu, Ying Liu, Shaoyun Shi
Abstract:
Authorship attribution extracts features to identify the authors of anonymous documents. Much previous work on authorship attribution focuses on statistical style features (e.g., sentence/word length) and content features (e.g., frequent words, n-grams). Modeling these features by regression or other transparent machine learning methods gives a portrait of an author's writing style, but does not capture syntactic (e.g., dependency relationship) or semantic (e.g., topic) information. In recent years, some researchers have modeled syntactic trees or latent semantic information with neural networks, but few works take them together. Besides, predictions by neural networks are difficult to explain, and explainability is vital in authorship attribution tasks. In this paper, we utilize not only statistical style and content features but also both syntactic and semantic features. Unlike an end-to-end neural model, our method separates feature selection and prediction into two steps: an attentive n-gram network selects useful features, and logistic regression produces predictions and an understandable representation of writing style. Experiments show that our extracted features improve on state-of-the-art methods on three benchmark datasets.
Keywords: authorship attribution, attention mechanism, syntactic feature, feature extraction
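The character n-gram features central to such attribution systems can be sketched with a simple profile-similarity baseline: each author's text becomes a normalised n-gram frequency vector, and an anonymous text is attributed to the author whose profile is most similar (cosine similarity). This toy two-author example is illustrative only, not the paper's attentive n-gram network.

```python
# Character n-gram profile attribution: build L2-normalised trigram frequency
# vectors and attribute a query text by cosine similarity to author profiles.
import math
from collections import Counter

def ngram_profile(text, n=3):
    counts = Counter(text[i:i + n] for i in range(len(text) - n + 1))
    norm = math.sqrt(sum(c * c for c in counts.values()))
    return {g: c / norm for g, c in counts.items()}

def cosine(p, q):
    return sum(p[g] * q[g] for g in p if g in q)

def attribute(text, corpora):
    profile = ngram_profile(text)
    return max(corpora, key=lambda a: cosine(profile, ngram_profile(corpora[a])))

corpora = {
    "author_A": "the ship sailed at dawn and the crew sang the old songs",
    "author_B": "quarterly revenue increased whereas operating costs decreased",
}

print(attribute("the crew sailed the old ship", corpora))
```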
Procedia PDF Downloads 136