Search results for: intelligent methods
13037 Postoperative Pain Management: Efficacy of Caudal Tramadol in Pediatric Lower Abdominal Surgery: A Randomized Clinical Study
Authors: Reza Farahmand Rad, Farnad Imani, Azadeh Emami, Reza Salehi, Ali Reza Ghavamy, Ali Nima Shariat
Abstract:
Background: One of the methods of pain control after pediatric surgical procedures is regional techniques, including caudal block, despite their limitations. Objectives: In this study, the pain score and complications of caudal tramadol were evaluated in pediatric patients following lower abdominal surgery. Methods: In this study, 46 children aged 3 to 10 years were allocated into two equal groups (R and TR) for performing caudal analgesia after lower abdominal surgery. The injectate contained 0.2% ropivacaine 1 mL/kg in the R group (control group), and tramadol (2 mg/kg) plus ropivacaine in the TR group. The pain score, duration of pain relief, amount of paracetamol consumption, hemodynamic alterations, and possible complications at specific times (1, 2, and 6 hours) were evaluated in both groups. Results: No considerable difference was observed in the pain score between the groups in the first and second hours (P > 0.05). However, in the sixth hour, the TR group had a significantly lower pain score than the R group (P < 0.05). Compared to the R group, the TR group had a longer period of analgesia and lower consumption of analgesic drugs (P < 0.05). Heart rate and blood pressure differences were not significant between the two groups (P > 0.05). Similarly, the duration of operation and recovery time were not remarkably different between the two groups (P > 0.05). Complications had no apparent differences between these two groups, as well (P > 0.05). Conclusions: In this study, the addition of tramadol to caudal ropivacaine in pediatric lower abdominal surgery promoted pain relief without complications.
Keywords: tramadol, ropivacaine, caudal block, pediatric, lower abdominal surgery, postoperative pain
Procedia PDF Downloads 14
13036 Seismic Vulnerability Assessment of Masonry Buildings in Seismic Prone Regions: The Case of Annaba City, Algeria
Authors: Allaeddine Athmani, Abdelhacine Gouasmia, Tiago Ferreira, Romeu Vicente
Abstract:
Seismic vulnerability assessment of masonry buildings is a fundamental issue even for moderate to low seismic hazard regions. This fact is even more important when dealing with old structures such as those located in Annaba city (Algeria), the majority of which date back to the French colonial era from 1830. This category of buildings is at high risk due to their highly degraded state, heterogeneous materials, and intrusive modifications to structural and non-structural elements. Furthermore, they usually shelter a dense population, which is exposed to such risk. In order to undertake suitable seismic risk mitigation strategies and a reinforcement process for such structures, it is essential to estimate their seismic resistance capacity at a large scale. In this sense, two seismic vulnerability index and damage estimation methods have been adapted and applied to a pilot-scale building area located in the moderate seismic hazard region of Annaba city: the first one based on the EMS-98 building typologies, and the second one derived from the Italian GNDT approach. To perform this task, the authors took advantage of an existing data survey previously performed for other purposes. The results obtained from the application of the two methods were integrated and compared using a geographic information system (GIS) tool, with the ultimate goal of supporting the city council of Annaba in the implementation of risk mitigation and emergency planning strategies.
Keywords: Annaba city, EMS98 concept, GNDT method, old city center, seismic vulnerability index, unreinforced masonry buildings
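A GNDT-derived vulnerability index of the kind adapted in this abstract is, in essence, a weighted sum of parameter class scores normalised to a 0-100 scale. A minimal sketch of that idea follows; the class scores and weights are purely illustrative placeholders, not the calibrated values used by the authors.

```python
# Illustrative GNDT-style vulnerability index: each building parameter is
# assigned a class (A = best to D = worst); class scores are combined in a
# weighted sum and normalised so the worst possible building scores 100.
# Scores and weights below are placeholders, not calibrated study values.
CLASS_SCORES = {"A": 0.0, "B": 5.0, "C": 20.0, "D": 45.0}

def vulnerability_index(assigned_classes, weights):
    """Weighted sum of class scores, normalised to a 0-100 scale."""
    raw = sum(CLASS_SCORES[c] * w for c, w in zip(assigned_classes, weights))
    worst = sum(CLASS_SCORES["D"] * w for w in weights)
    return 100.0 * raw / worst
```

A building rated best-class on every parameter scores 0, and worst-class on every parameter scores 100, regardless of the weight vector chosen.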
Procedia PDF Downloads 618
13035 Green Construction in Egypt
Authors: Hanan A. Anwar
Abstract:
This paper introduces green building construction in Egypt with different concepts and practices. The study includes an applied definition of green building, together with guidelines, regulations, and standards. An evaluation of the cost/benefit of green construction methods and of green construction rating systems is presented. Relevant case studies are reviewed, covering four sites.
Keywords: green construction, eco-friendly, self-sufficient town, carbon neutral atmosphere
Procedia PDF Downloads 656
13034 How Consumers Perceive Health and Nutritional Information and How It Affects Their Purchasing Behavior: Comparative Study between Colombia and the Dominican Republic
Authors: Daniel Herrera Gonzalez, Maria Luisa Montas
Abstract:
Several factors affect consumer decision-making regarding the use of front-of-package labels intended to benefit consumer well-being. Currently, several labels help influence or change the purchase decision for food products. These labels communicate the impact that food has on human health; therefore, consumers are more critical and informed when buying and consuming food products. The research explores the association between front-of-pack labeling and food choice; the association between label content and purchasing decisions is complex and influenced by different factors, including the packaging itself. The main objective of this study was to examine the perception of health labels and nutritional declarations and their influence on buying decisions in the non-alcoholic beverages sector. This comparative study of two developing countries shows how consumers take nutritional labels into account when deciding to buy certain foods. The research applied a quantitative methodology with a correlational scope, in order to analyze the degree of association between variables. Likewise, the confirmatory factor analysis (CFA) method and structural equation modeling (SEM), a powerful multivariate technique, were used as statistical techniques to find the relationships between observable and unobservable variables. The main finding of this research was the identification of three large consumer groups, distinguished by their perceptions of nutritional and wellness labels. The first group is characterized by high interest in the requirement of nutritional information labels on products and would agree that all products should carry them, given their importance in preventing illness in the consumer.
Likewise, they almost always care about the brand, the size, the list of ingredients, and the nutritional information of the food, as well as the effects of these on health. The second group stands out for showing some interest in the importance of on-product labels in the purchase decision; its members almost always take into account characteristics such as size, price, and ingredients when deciding what to consume, but are almost never interested in the effect of these products on their health or nutrition. The third group differs from the others by being more neutral regarding nutritional information labels, and less interested in product characteristics as part of the purchase decision and in their influence on health and nutrition. This new knowledge is essential for companies that manufacture and market food products, because they will have information to adapt to or anticipate the new laws of developing countries, as well as the new needs of health-conscious consumers when they buy food products.
Keywords: healthy labels, consumer behavior, nutritional information, healthy products
Procedia PDF Downloads 107
13033 Removal of Nickel and Vanadium from Crude Oil by Using Solvent Extraction and Electrochemical Process
Authors: Aliya Kurbanova, Nurlan Akhmetov, Abilmansur Yeshmuratov, Yerzhigit Sugurbekov, Ramiz Zulkharnay, Gulzat Demeuova, Murat Baisariyev, Gulnar Sugurbekova
Abstract:
In recent decades, crude oils have become more challenging to process due to increasing amounts of sour and heavy crudes. Some crude oils contain high vanadium and nickel content; for example, Pavlodar LLP crude oil contains more than 23.09 g/t nickel and 58.59 g/t vanadium. In this study, we used two types of metal removal methods: solvent extraction and electrochemical treatment. The present research provides a comparative analysis of deasphalting with organic solvents (cyclohexane, carbon tetrachloride, chloroform) and the electrochemical method. Applying cyclic voltammetric analysis (CVA) and inductively coupled plasma mass spectrometry (ICP-MS), these metal extraction methods were compared. The maximum efficiency of deasphalting, with cyclohexane as the solvent in a Soxhlet extractor, was 66.4% for nickel and 51.2% for vanadium content in the crude oil. The percentage of Ni extraction reached a maximum of approximately 55% using the electrochemical method in an electrolysis cell developed for this research, which consists of three sections: an oil and protonating agent (EtOH) solution between two conducting membranes, which separate it from two compartments of 10% sulfuric acid, and two graphite electrodes that connect all three parts in an electrical circuit. Metal ions pass through the membranes and remain in the acid solutions. The best result was obtained in 60 minutes with an ethanol-to-oil ratio of 25% to 75%, with the current in the range of 0.3 A to 0.4 A and the voltage varying from 12.8 V to 17.3 V.
Keywords: demetallization, deasphalting, electrochemical removal, heavy metals, petroleum engineering, solvent extraction
Procedia PDF Downloads 326
13032 Considerations for Effectively Using Probability of Failure as a Means of Slope Design Appraisal for Homogeneous and Heterogeneous Rock Masses
Authors: Neil Bar, Andrew Heweston
Abstract:
Probability of failure (PF) often appears alongside factor of safety (FS) in design acceptance criteria for rock slope, underground excavation, and open pit mine designs. However, the design acceptance criteria generally provide no guidance on how PF should be calculated for homogeneous and heterogeneous rock masses, or on what qualifies as a ‘reasonable’ PF assessment for a given slope design. Observational and kinematic methods were widely used in the 1990s until advances in computing permitted the routine use of numerical modelling. In the 2000s and early 2010s, PF in numerical models was generally calculated using the point estimate method. More recently, some limit equilibrium analysis software offers statistical parameter inputs along with Monte-Carlo or Latin-Hypercube sampling methods to automatically calculate PF. Factors including rock type and density, weathering and alteration, intact rock strength, rock mass quality and shear strength, the location and orientation of geologic structure, the shear strength of geologic structure, and groundwater pore pressure influence the stability of rock slopes. Significant engineering and geological judgment, interpretation, and data interpolation are usually applied in determining these factors and amalgamating them into a geotechnical model which can then be analysed. Most factors are estimated ‘approximately’ or with allowances for some variability rather than ‘exactly’. When it comes to numerical modelling, some of these factors are then treated deterministically (i.e. as exact values), while others have probabilistic inputs based on the user’s discretion and understanding of the problem being analysed. This paper discusses the importance of understanding the key aspects of slope design for homogeneous and heterogeneous rock masses and how they can be translated into reasonable PF assessments where the data permits.
A case study from a large open pit gold mine in a complex geological setting in Western Australia is presented to illustrate how PF can be calculated using different methods, yielding markedly different results. Ultimately, sound engineering judgement and logic are often required to decipher the true meaning and significance (if any) of some PF results.
Keywords: probability of failure, point estimate method, Monte-Carlo simulations, sensitivity analysis, slope stability
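As context for how a sampling-based PF is computed, a minimal Monte-Carlo sketch is shown below: cohesion and friction angle are drawn from assumed normal distributions, a simplified dry infinite-slope factor of safety is evaluated for each draw, and PF is the fraction of draws with FS < 1. All geometry and strength parameters are illustrative assumptions, not values from the case study.

```python
import math
import random

def factor_of_safety(c, phi_deg, gamma=20.0, h=15.0, beta_deg=45.0):
    """Dry infinite-slope limit equilibrium FS: resisting shear strength
    over driving shear stress on the slip plane (illustrative geometry)."""
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    resisting = c + gamma * h * math.cos(beta) ** 2 * math.tan(phi)
    driving = gamma * h * math.sin(beta) * math.cos(beta)
    return resisting / driving

def probability_of_failure(n=10000, seed=1):
    """PF = fraction of sampled (cohesion, friction) draws with FS < 1."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        c = max(0.0, rng.gauss(30.0, 10.0))    # cohesion, kPa (assumed)
        phi = max(0.0, rng.gauss(35.0, 5.0))   # friction angle, deg (assumed)
        if factor_of_safety(c, phi) < 1.0:
            failures += 1
    return failures / n
```

Replacing the sampling scheme (e.g. with Latin-Hypercube draws) or the FS model changes the PF estimate, which is precisely the sensitivity the abstract highlights.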
Procedia PDF Downloads 208
13031 Body Fluids Identification by Raman Spectroscopy and Matrix-Assisted Laser Desorption/Ionization Time-of-Flight Mass Spectrometry
Authors: Huixia Shi, Can Hu, Jun Zhu, Hongling Guo, Haiyan Li, Hongyan Du
Abstract:
The identification of human body fluids during forensic investigations is a critical step in determining key details and presenting strong evidence in a criminal case. With the popularity of DNA analysis and improved detection technology, several questions must be resolved: whether a suspect’s DNA derived from saliva or semen, from menstrual or peripheral blood; how to confirm that a red substance or aged trace found at a scene is blood; and how to determine who contributed which fluid in mixed stains. In recent years, molecular approaches based on mRNA, miRNA, DNA methylation, and microbial markers have been developing rapidly, but they have the disadvantages of being expensive, time-consuming, and destructive. Physicochemical methods such as scanning electron microscopy/energy spectroscopy and X-ray fluorescence are utilized frequently, but their results show only one or two characteristics of the body fluid itself and fail when applied to unknown or mixed body fluid stains. This paper focuses on using the chemical methods Raman spectroscopy and matrix-assisted laser desorption/ionization time-of-flight mass spectrometry to discriminate among peripheral blood, menstrual blood, semen, saliva, vaginal secretions, urine, and sweat. Firstly, the non-destructive, confirmatory, convenient, and fast Raman spectroscopy method, combined with the more accurate matrix-assisted laser desorption/ionization time-of-flight mass spectrometry method, can fully distinguish each body fluid from the others. Secondly, 11 spectral signatures and specific metabolic molecules were obtained from the analysis of 70 samples. Thirdly, the Raman results showed that peripheral and menstrual blood, and saliva and vaginal secretions, have highly similar spectroscopic features; advanced statistical analysis of the multiple Raman spectra is required to classify one from another.
On the other hand, lactic acid appears to differentiate peripheral and menstrual blood when detected by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry, but it is not a specific metabolic molecule, and more sensitive markers will be analyzed in a future study. These results demonstrate the great potential of the developed chemical methods for forensic applications, although more work is needed for method validation.
Keywords: body fluids, identification, Raman spectroscopy, matrix-assisted laser desorption/ionization time-of-flight mass spectrometry
Procedia PDF Downloads 137
13030 Local Cultural Beliefs and Practices of the Indigenous Communities Related to Wildlife in the Buffer Zone of Chitwan National Park
Authors: Neeta Pokharel
Abstract:
Cultural beliefs and practices have been shaping indigenous communities’ resource use and attitudes toward the conservation of the natural flora and fauna around them. Understanding these cultural dimensions is vital for identifying effective strategies that align with conservation efforts. This study investigated the wildlife-related cultural beliefs and practices of two indigenous communities: the Bote and the Musahars. The study applied ethnographic methods that included key-informant interviews, focus group discussions, and household surveys. Of the 100 respondents, 51% were male and 49% female. A significant portion (65%) of the respondents confirmed animal worship, with a majority worshipping tigers (81.5%), rhinos (73.8%), crocodiles (66%), and dolphins (40%). Additionally, 16.9% disclosed worshipping elephants, while 10% affirmed animal worship without specifying the particular animals. Ritualistic practices often involve the sacrifice of pigs, goats, hens, and pigeons. Their cultural ethics place a significant emphasis on biodiversity conservation: the results show 41% of respondents refraining from causing harm to wild animals for cultural reasons and 9% doing so for ethical considerations. Moreover, the majority of the respondents believe that cultural practices could enhance conservation efforts. However, the encroachment of modernization and religious conversion within the community poses a tangible risk of cultural degradation, highlighting the urgent need to preserve these cultural practices. Integrating such indigenous practices into the National Biodiversity Strategy and conservation policies can ensure the sustainable conservation of endangered animals with appropriate cultural safeguards.
Keywords: tribal communities, societal belief, wild fauna, “barana”, safeguarding
Procedia PDF Downloads 82
13029 Evaluating Forecasting Strategies for Day-Ahead Electricity Prices: Insights From the Russia-Ukraine Crisis
Authors: Alexandra Papagianni, George Filis, Panagiotis Papadopoulos
Abstract:
The liberalization of the energy market and the increasing penetration of fluctuating renewables (e.g., wind and solar power) have heightened the importance of the spot market for ensuring efficient electricity supply. This is further emphasized by the EU’s goal of achieving net-zero emissions by 2050. The day-ahead market (DAM) plays a key role in European energy trading, accounting for 80-90% of spot transactions and providing critical insights for next-day pricing. Therefore, short-term electricity price forecasting (EPF) within the DAM is crucial for market participants to make informed decisions and improve their market positioning. Existing literature highlights out-of-sample performance as a key factor in assessing EPF accuracy, with influencing factors such as predictors, forecast horizon, model selection, and strategy. Several studies indicate that electricity demand is a primary price determinant, while renewable energy sources (RES) like wind and solar significantly impact price dynamics, often lowering prices. Additionally, incorporating data from neighboring countries, due to market coupling, further improves forecast accuracy. Most studies predict up to 24 steps ahead using hourly data, while some extend forecasts using higher-frequency data (e.g., half-hourly or quarter-hourly). Short-term EPF methods fall into two main categories: statistical and computational intelligence (CI) methods, with hybrid models combining both. While many studies use advanced statistical methods, particularly through different versions of traditional AR-type models, others apply computational techniques such as artificial neural networks (ANNs) and support vector machines (SVMs). Recent research combines multiple methods to enhance forecasting performance. Despite extensive research on EPF accuracy, a gap remains in understanding how forecasting strategy affects prediction outcomes. While iterated strategies are commonly used, they are often chosen without justification. 
This paper contributes by examining whether the choice of forecasting strategy impacts the quality of day-ahead price predictions, especially for multi-step forecasts. We evaluate both iterated and direct methods, exploring alternative ways of conducting iterated forecasts on benchmark and state-of-the-art forecasting frameworks. The goal is to assess whether these factors should be considered by end-users to improve forecast quality. We focus on the Greek DAM using data from July 1, 2021, to March 31, 2022. This period is chosen due to significant price volatility in Greece, driven by its dependence on natural gas and limited interconnection capacity with larger European grids. The analysis covers two phases: pre-conflict (January 1, 2022, to February 23, 2022) and post-conflict (February 24, 2022, to March 31, 2022), following the Russia-Ukraine conflict that initiated an energy crisis. We use the mean absolute percentage error (MAPE) and symmetric mean absolute percentage error (sMAPE) for evaluation, as well as the Direction of Change (DoC) measure to assess the accuracy of price movement predictions. Our findings suggest that forecasters need to apply all strategies across different horizons and models. Different strategies may be required for different horizons to optimize both accuracy and directional predictions, ensuring more reliable forecasts.
Keywords: short-term electricity price forecast, forecast strategies, forecast horizons, recursive strategy, direct strategy
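The evaluation measures named above can be stated compactly. The sketch below shows one common formulation of MAPE, sMAPE, and a hit-rate style Direction of Change measure; the exact DoC definition used by the authors may differ.

```python
def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    return 100.0 / len(actual) * sum(
        abs((a - f) / a) for a, f in zip(actual, forecast))

def smape(actual, forecast):
    """Symmetric MAPE, in percent (bounded and symmetric in a and f)."""
    return 100.0 / len(actual) * sum(
        2.0 * abs(f - a) / (abs(a) + abs(f)) for a, f in zip(actual, forecast))

def direction_of_change(actual, forecast):
    """Share of steps where the forecast move from the last known actual
    has the same sign as the realised move (one common DoC formulation)."""
    hits = sum((a1 - a0) * (f1 - a0) > 0
               for a0, a1, f1 in zip(actual, actual[1:], forecast[1:]))
    return hits / (len(actual) - 1)
```

Note that MAPE is undefined when an actual price is zero (and electricity prices can go negative), which is one motivation for also reporting sMAPE.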
Procedia PDF Downloads 8
13028 Development of Methods for Plastic Injection Mold Weight Reduction
Authors: Bita Mohajernia, R. J. Urbanic
Abstract:
Mold making techniques have focused on meeting the customers’ functional and process requirements; however, today, molds are increasing in size and sophistication, and are difficult to manufacture, transport, and set up due to their size and mass. Present mold weight saving techniques focus on pockets to reduce the mass of the mold, but the overall size remains large, which introduces costs related to stock material purchase, processing time for process planning, machining and validation, and excess waste material. Reducing the overall size of the mold is desirable for many reasons, but the functional requirements, tool life, and durability cannot be compromised in the process. It is proposed to use finite element analysis simulation tools to model the forces and pressures in order to determine where material can be removed. The potential results of this project will reduce manufacturing costs. In this study, a lightweight structure is defined by an optimal distribution of material to carry external loads. The optimization objective of this research is to determine methods that provide the optimum layout for the mold structure. The topology optimization method is utilized to improve structural stiffness while decreasing the weight, using the OptiStruct software. The optimized CAD model is compared with the primary geometry of the mold from the NX software. Results of the optimization show an 8% weight reduction, while the actual performance of the optimized structure, validated by physical testing, is similar to that of the original structure.
Keywords: finite element analysis, plastic injection molding, topology optimization, weight reduction
Procedia PDF Downloads 290
13027 Transdisciplinarity Research Approach and Transit-Oriented Development Model for Urban Development Integration in South African Cities
Authors: Thendo Mafame
Abstract:
There is a need for academic research to focus on solving, or contributing to solving, real-world societal problems. Transdisciplinary research (TDR) provides a way to produce functional and applicable research findings, which can be used to advance developmental causes. This TDR study explores ways in which South Africa’s spatial divide, entrenched through decades of discriminatory planning policies, can be restructured to bring about equitable access to places of employment, business, leisure, and service for previously marginalised South Africans. It does so by exploring the potential of the transit-oriented development (TOD) model to restructure and revitalise urban spaces in a collaborative manner. The study focuses, through a case study, on the Du Toit station precinct in the town of Stellenbosch, on the peri-urban edge of the city of Cape Town, South Africa. The TOD model is increasingly viewed as an effective strategy for creating sustainable urban redevelopment initiatives, and it has been deployed successfully in other parts of the world. The model, which emphasises development density, diversity of land-use and infrastructure, and transformative design, is customisable to a variety of country contexts. This study made use of a case study approach with mixed methods to collect and analyse data. The research methods used include focus group discussions and interviews, as well as observation and transect walks. This research contributes to the professional development of TDR studies that are focused on urbanisation issues.
Keywords: case study, integrated urban development, land-use, stakeholder collaboration, transit-oriented development, transdisciplinary research
Procedia PDF Downloads 132
13026 Border Control and Human Rights Violations: Lessons Learned from the United States and Potential Solutions for the European Union
Authors: María Elena Menéndez Ibáñez
Abstract:
After the terrorist attacks of 9/11, new measures were adopted by powerful countries and regions like the United States and the European Union in order to safeguard their security. In 2002, the US created the Department of Homeland Security with one sole objective: to protect American soil and people. The US adopted new policies that made every immigrant a potential terrorist and a threat to national security. Stronger border control became one of the key elements of the fight against organized crime and terrorism. The main objective of this paper is to compare some of the most important and radical measures adopted by the US, even those that resulted in systematic violations of human rights, with some of the European measures adopted after the 2015 Paris attacks, such as the unlawful detainment of prisoners and other measures against foreigners. Through the Schengen Agreement, the European Union has tried to eliminate tariffs and border controls in order to guarantee successful economic growth. Terrorists have taken advantage of this and have made the region vulnerable to attacks. Authorities need to strengthen their surveillance methods in order to safeguard the region and its stability. Through qualitative methods applied to the social sciences, this research will also try to explain why some of the mechanisms proven to be useful in the US would not be so in Europe, especially because they would result in human rights violations. Finally, solutions will be offered that would not put the whole Schengen Agreement at risk. Europe cannot reinstate border controls without making individuals vulnerable to human rights violations.
Keywords: border control, immigration, international cooperation, national security
Procedia PDF Downloads 138
13025 Count of Trees in East Africa with Deep Learning
Authors: Nubwimana Rachel, Mugabowindekwe Maurice
Abstract:
Trees play a crucial role in maintaining biodiversity and providing various ecological services. Traditional methods of counting trees are time-consuming, and there is a need for more efficient techniques. Deep learning, however, makes it feasible to identify the multi-scale elements hidden in aerial imagery. This research focuses on the application of deep learning techniques for automated tree detection and counting in both forest and non-forest areas using satellite imagery. The objective is to identify the most effective model for automated tree counting. We used different deep learning models such as YOLOv7, SSD, and UNET, along with Generative Adversarial Networks to generate synthetic samples for training, and other augmentation techniques, including Random Resized Crop, AutoAugment, and Linear Contrast Enhancement. These models were trained and fine-tuned using satellite imagery to identify and count trees. The performance of the models was assessed through multiple trials; after training and fine-tuning, UNET demonstrated the best performance, with a validation loss of 0.1211, validation accuracy of 0.9509, and validation precision of 0.9799. This research showcases the success of deep learning in accurate tree counting through remote sensing, particularly with the UNET model. It represents a significant contribution to the field by offering an efficient and precise alternative to conventional tree-counting methods.
Keywords: remote sensing, deep learning, tree counting, image segmentation, object detection, visualization
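Once a segmentation model such as UNET produces a binary tree mask, counting reduces to labelling connected components. A minimal, library-free sketch of that post-processing step is shown below; it is illustrative only, as the abstract does not specify the study's actual counting pipeline at this level of detail.

```python
def count_trees(mask):
    """Count 4-connected components of 1-pixels in a binary mask,
    treating each component as one detected tree crown."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                count += 1                      # new component found
                stack = [(r, c)]
                while stack:                    # flood-fill the component
                    y, x = stack.pop()
                    if (0 <= y < rows and 0 <= x < cols
                            and mask[y][x] and not seen[y][x]):
                        seen[y][x] = True
                        stack += [(y + 1, x), (y - 1, x),
                                  (y, x + 1), (y, x - 1)]
    return count
```

In practice, touching crowns make simple component counting undercount, which is one reason instance-aware detectors such as YOLOv7 and SSD are also compared in the study.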
Procedia PDF Downloads 71
13024 The Influence of Environmental Factors on Honey Bee Activities: A Quantitative Analysis
Authors: Hung-Jen Lin, Chien-Hao Wang, Chien-Peng Huang, Yu-Sheng Tseng, En-Cheng Yang, Joe-Air Jiang
Abstract:
Bees’ incoming and outgoing behavior is a decisive index of the health condition of a colony. Traditional methods for monitoring the behavior of honey bees (Apis mellifera) take too much time and are highly labor-intensive, and the lack of automation and synchronization prevents researchers and beekeepers from obtaining real-time information about beehives. To solve these problems, this study proposes an Internet of Things (IoT)-based system for counting honey bees’ incoming and outgoing activities using an infrared interruption technique, while environmental factors are recorded simultaneously. The accuracy of the established system is verified by comparing the counting results with the outcomes of manual counting. Moreover, this highly accurate device is appropriate for providing quantitative information regarding honey bees’ incoming and outgoing behavior. Different statistical analysis methods, including one-way ANOVA and two-way ANOVA, are used to investigate the influence of environmental factors, such as temperature, humidity, illumination, and ambient pressure, on bees’ incoming and outgoing behavior. With the real-time data, a standard model is established from the analysis of the relationship between environmental factors and bees’ incoming and outgoing behavior. In the future, smart control systems, such as a temperature control system, can also be combined with the proposed system to create an appropriate colony environment. It is expected that the proposed system will make a considerable contribution to apiculture and its researchers.
Keywords: ANOVA, environmental factors, honey bee, incoming and outgoing behavior
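The one-way ANOVA used in such a study tests whether group means (e.g., bee counts under different temperature bands) differ more than within-group variation would suggest. A minimal sketch of the F statistic computation follows; the grouping and data are hypothetical, not taken from the study.

```python
def one_way_anova_f(groups):
    """One-way ANOVA F statistic: between-group mean square divided by
    within-group mean square, for a list of observation groups."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g)
                    for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

The F value would then be compared against the F distribution with (k - 1, n - k) degrees of freedom to decide whether the environmental factor has a significant effect.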
Procedia PDF Downloads 368
13023 Performance Evaluation and Comparison between the Empirical Mode Decomposition, Wavelet Analysis, and Singular Spectrum Analysis Applied to the Time Series Analysis in Atmospheric Science
Authors: Olivier Delage, Hassan Bencherif, Alain Bourdier
Abstract:
Signal decomposition approaches represent an important step in time series analysis, providing useful knowledge and insight into the data and the characteristics of the underlying dynamics, while also facilitating tasks such as noise removal and feature extraction. As most observational time series are nonlinear and nonstationary, resulting from the interaction of several physical processes at different time scales, experimental time series exhibit fluctuations at all time scales and require the development of specific signal decomposition techniques. The most commonly used techniques are data-driven, enabling well-behaved signal components to be obtained without making any prior assumptions about the input data. Among the most popular time series decomposition techniques, most cited in the literature, are the empirical mode decomposition and its variants, the empirical wavelet transform, and singular spectrum analysis. With the increasing popularity and utility of these methods in wide-ranging applications, it is imperative to gain a good understanding of and insight into the operation of these algorithms. In this work, we describe all of the techniques mentioned above, as well as their ability to denoise signals, capture trends, identify components corresponding to the physical processes involved in the evolution of the observed system, and deduce the dimensionality of the underlying dynamics. Results obtained with all of these methods on experimental total ozone column and rainfall time series will be discussed and compared.
Keywords: denoising, empirical mode decomposition, singular spectrum analysis, time series, underlying dynamics, wavelet analysis
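As a toy stand-in for the trend-capture step that these decomposition methods share (and emphatically not an implementation of EMD, the wavelet transform, or SSA themselves), a series can be split into a smooth component plus a residual using a centred moving average; by construction, the components always sum back to the original signal.

```python
def moving_average_decompose(x, window=3):
    """Split a series into a smooth trend (centred moving average,
    shrinking the window at the edges) and the residual fluctuations.
    Toy illustration of additive decomposition, not EMD/wavelet/SSA."""
    half = window // 2
    trend = []
    for i in range(len(x)):
        lo, hi = max(0, i - half), min(len(x), i + half + 1)
        trend.append(sum(x[lo:hi]) / (hi - lo))
    residual = [a - t for a, t in zip(x, trend)]
    return trend, residual
```

The data-driven methods compared in the paper differ precisely in how they choose such components adaptively from the data, rather than from a fixed window.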
Procedia PDF Downloads 117
13022 The Impact of Window Opening Occupant Behavior Models on Building Energy Performance
Authors: Habtamu Tkubet Ebuy
Abstract:
Purpose: Conventional dynamic energy simulation tools go beyond the static dimension of simplified methods by providing better, more accurate predictions of building performance. However, their ability to forecast actual performance is undermined by a poor representation of human interactions. The purpose of this study is to examine the potential benefits of incorporating information on occupant diversity into the occupant behavior models used to simulate building performance. Co-simulating the stochastic behavior of occupants substantially increases the accuracy of the simulation. Design/methodology/approach: In this article, probabilistic models of occupants' window opening and closing behavior have been developed in a separate multi-agent platform, SimOcc, and implemented in the building simulation tool TRNSYS, in such a way that window behavior can be reflected in the simulation analysis of the building through this interconnection. Findings: The results of the study show that modeling complex occupant behaviors is important for predicting actual building performance. The results help identify the gap between reality and existing simulation methods. We hope this study and its results will serve as a guide for researchers interested in investigating occupant behavior in the future. Research limitations/implications: Further case studies involving multi-user behavior in complex commercial buildings are needed to better understand the impact of occupant behavior on building performance. Originality/value: This study presents a suitable tool to help stakeholders in the design phase of new or retrofitted buildings improve the performance of office buildings, supporting the national strategy.
Keywords: occupant behavior, co-simulation, energy consumption, thermal comfort
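A stochastic window-opening model of the kind described can be sketched as a temperature-driven Bernoulli process; the logistic parameters and temperature profile below are illustrative assumptions, not SimOcc's calibrated model.

```python
import math
import random

def p_open(indoor_temp_c, t50=26.0, slope=0.8):
    """Logistic probability that an occupant opens the window at a given indoor
    temperature. t50 (temperature of 50% opening probability) and slope are
    illustrative parameters, not values from the study."""
    return 1.0 / (1.0 + math.exp(-slope * (indoor_temp_c - t50)))

def simulate_window_state(temps, seed=42):
    """Markov-style simulation: each timestep, a closed window may open with
    probability p_open, and an open window may close with 1 - p_open."""
    rng = random.Random(seed)
    state, states = 0, []  # 0 = closed, 1 = open
    for temp in temps:
        p = p_open(temp)
        if state == 0 and rng.random() < p:
            state = 1
        elif state == 1 and rng.random() < (1 - p):
            state = 0
        states.append(state)
    return states

temps = [20 + 0.5 * h for h in range(24)]  # indoor temp ramping from 20 to 31.5 degC
states = simulate_window_state(temps)      # window state per hourly timestep
```

In a co-simulation, the sampled state sequence would be fed back to the thermal model (e.g., TRNSYS) each timestep, which in turn updates the indoor temperature driving the next draw.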
Procedia PDF Downloads 104
13021 A Ground Structure Method to Minimize the Total Installed Cost of Steel Frame Structures
Authors: Filippo Ranalli, Forest Flager, Martin Fischer
Abstract:
This paper presents a ground structure method to optimize the topology and discrete member sizing of steel frame structures in order to minimize total installed cost, including material, fabrication, and erection components. The proposed method improves upon existing cost-based ground structure methods by incorporating constructability considerations as well as satisfying both strength and serviceability constraints. The architecture for the method is a bi-level Multidisciplinary Feasible (MDF) architecture in which the discrete member sizing optimization is nested within the topology optimization process. For each structural topology generated, the sizing optimization process seeks to find a set of discrete member sizes that results in the lowest total installed cost while satisfying strength (member utilization) and serviceability (node deflection and story drift) criteria. To accurately assess cost, the connection details for the structure are generated automatically using accurate site-specific cost information obtained directly from fabricators and erectors. Member continuity rules are also applied to each node in the structure to improve constructability. The proposed optimization method is benchmarked against conventional weight-based ground structure optimization methods, resulting in average cost savings of up to 30% with comparable computational efficiency.
Keywords: cost-based structural optimization, cost-based topology and sizing optimization, steel frame ground structure optimization, multidisciplinary optimization of steel structures
Procedia PDF Downloads 341
13020 Azadirachta indica Leaves Extract Assisted Green Synthesis of Ag-TiO₂ for Degradation of Dyes in Aqueous Medium
Authors: Muhammad Saeed, Sheeba Khalid
Abstract:
Aqueous pollution due to the textile industry is an important issue. Photocatalysis using metal oxides as catalysts is one of the methods used for the eradication of dyes from textile industrial effluents. In this study, the synthesis, characterization, and photocatalytic activity of Ag-TiO₂ are reported. TiO₂ catalysts with 2, 4, 6, and 8% loadings of Ag were prepared by a green method using Azadirachta indica leaf extract as the reducing agent and titanium dioxide and silver nitrate as precursor materials. The 4% Ag-TiO₂ exhibited the best catalytic activity for the degradation of dyes. The prepared catalyst was characterized by advanced techniques. Catalytic degradation of methylene blue and rhodamine B was carried out in a Pyrex glass batch reactor. Deposition of Ag greatly enhanced the catalytic efficiency of TiO₂ towards the degradation of dyes. Irradiation of the catalyst excites electrons from the valence band to the conduction band, yielding an electron-hole pair. These photoexcited electrons and positive holes undergo secondary reactions and produce OH radicals. These active radicals take part in the degradation of dyes. More than 90% of the dyes were degraded in 120 minutes. No loss of catalytic efficiency of the prepared Ag-TiO₂ was found after recycling it twice. Photocatalytic degradation of methylene blue and rhodamine B followed the Eley-Rideal mechanism, in which the dye in the fluid phase reacts with adsorbed oxygen. The activation energies for the photodegradation of methylene blue and rhodamine B were found to be 27 kJ/mol and 20 kJ/mol, respectively.
Keywords: TiO₂, Ag-TiO₂, methylene blue, rhodamine B, photodegradation
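The two-temperature Arrhenius estimate behind such activation-energy values can be sketched as follows; the rate constants and temperatures below are hypothetical, not the study's measurements.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def activation_energy(k1, t1_k, k2, t2_k):
    """Two-point Arrhenius estimate: Ea = R * ln(k2/k1) / (1/T1 - 1/T2),
    from first-order rate constants measured at two absolute temperatures."""
    return R * math.log(k2 / k1) / (1.0 / t1_k - 1.0 / t2_k)

# Hypothetical first-order degradation rate constants (illustrative only)
k1, T1 = 0.012, 298.15  # min^-1 at 25 degC
k2, T2 = 0.024, 318.15  # min^-1 at 45 degC

ea_kj = activation_energy(k1, T1, k2, T2) / 1000.0
print(f"Estimated Ea = {ea_kj:.1f} kJ/mol")
```

Doubling the rate constant over a 20 K interval near room temperature corresponds to an activation energy of roughly 27 kJ/mol, the same order as the values reported.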
Procedia PDF Downloads 165
13019 Methodological Proposal, Archival Thesaurus in Colombian Sign Language
Authors: Pedro A. Medina-Rios, Marly Yolie Quintana-Daza
Abstract:
Having the opportunity to communicate in social, academic, and work contexts is very relevant for any individual, and even more so for a deaf person when oral language is not their natural language and written language is their second language. Currently, to the best of our knowledge, there is no specialized archival dictionary in Colombian sign language. Archival work is one of the areas in which the deaf community has a greater chance of finding employment. Adding new signs to dictionaries for deaf people extends the possibility that they have the appropriate signs to communicate and improve their performance. The aim of this work was to illustrate the importance of designing pedagogical and technological knowledge-management strategies for the academic inclusion of deaf people through proposals for a lexicon in Colombian sign language (LSC) in the area of archival science. As a method, an analytical study was used to identify relevant words in the technical area of archival science and their counterparts in LSC; 30 deaf people, apprentices and students of the Servicio Nacional de Aprendizaje (SENA) in Documentary or Archival Management programs, were evaluated through direct interviews in LSC. For the analysis, tools were used to evaluate correlation patterns alongside visual, gestural, and corpus-based linguistic methods; in addition, linear regression methods were used. Among the results, significant associations were found for the variables socioeconomic stratum, academic level, and labor placement, as well as a need to generate new signs on archival topics to improve communication between deaf people, hearing people, and sign language interpreters. It is concluded that generating new signs to enrich the LSC dictionary on archival subjects is necessary to improve the labor inclusion of deaf people in Colombia.
Keywords: archival, inclusion, deaf, thesaurus
Procedia PDF Downloads 278
13018 Sudden Death and Chronic Disseminated Intravascular Coagulation (DIC): Two Case Reports
Authors: Saker Lilia, Youcef Mellouki, Lakhdar Sellami, Yacine Zerairia, Abdelhaid Zetili, Fatma Guahria, Fateh Kaious, Nesrine Belkhodja, Abdelhamid Mira
Abstract:
Background: Sudden death is regarded as a suspicious demise necessitating autopsy, as stipulated by legal authorities. Chronic disseminated intravascular coagulation (DIC) is an acquired clinical and biological syndrome with a severe and often fatal prognosis, stemming from systemic, uncontrolled, diffuse coagulation activation. Irrespective of its origin, DIC is associated with a diverse spectrum of manifestations, ranging from minor biological coagulation alterations to profoundly severe conditions in which hemorrhagic complications may predominate. Simultaneously, microthrombi contribute to the development of multi-organ failure. Objective: This study seeks to evaluate the role of autopsy in determining the causes of death. Materials and Methods: We present two instances of sudden death involving females who underwent autopsy at the Forensic Medicine Department of the University Hospital of Annaba, Algeria. These autopsies were performed at the request of the prosecutor, aiming to determine the causes of death and illuminate the exact circumstances surrounding it. Methods utilized: analysis of the initial information report; findings from postmortem examinations; histological assessments and toxicological analyses. Results: The presence of DIC was noted, affecting nearly all veins, with distinct etiologies. Conclusion: For the establishment of a meaningful diagnosis, a thorough understanding of the subject matter is imperative, and precise alignment with medicolegal data is essential.
Keywords: chronic disseminated intravascular coagulation, sudden death, autopsy, causes of death
Procedia PDF Downloads 85
13017 A Robust and Efficient Segmentation Method Applied for Cardiac Left Ventricle with Abnormal Shapes
Authors: Peifei Zhu, Zisheng Li, Yasuki Kakishita, Mayumi Suzuki, Tomoaki Chono
Abstract:
Segmentation of the left ventricle (LV) from cardiac ultrasound images provides a quantitative functional analysis of the heart for diagnosing disease. The Active Shape Model (ASM) is a widely used approach for LV segmentation but suffers from the drawback that the initialization of the shape model is not sufficiently close to the target, especially when dealing with the abnormal shapes present in disease. In this work, a two-step framework is proposed to improve the accuracy and speed of model-based segmentation. First, a robust and efficient detector based on Hough forests is proposed to localize cardiac feature points, and these points are used to predict the initial fit of the LV shape model. Second, to achieve more accurate and detailed segmentation, ASM is applied to further fit the LV shape model to the cardiac ultrasound image. The performance of the proposed method is evaluated on a dataset of 800 cardiac ultrasound images, mostly of abnormal shapes. The proposed method is compared to several combinations of ASM and existing initialization methods. The experimental results demonstrate that the accuracy of feature point detection for initialization was improved by 40% compared to the existing methods. Moreover, the proposed method significantly reduces the number of necessary ASM fitting loops, thus speeding up the whole segmentation process. Therefore, the proposed method achieves more accurate and efficient segmentation results and is applicable to unusual heart shapes arising from cardiac diseases, such as left atrial enlargement.
Keywords: Hough forest, active shape model, segmentation, cardiac left ventricle
Procedia PDF Downloads 339
13016 Heuristics for Optimizing Power Consumption in the Smart Grid
Authors: Zaid Jamal Saeed Almahmoud
Abstract:
Our increasing reliance on electricity, combined with inefficient consumption trends, has resulted in several economic and environmental threats. These threats include wasting billions of dollars, draining limited resources, and elevating the impact of climate change. As a solution, the smart grid is emerging as the future power grid, with smart techniques to optimize power consumption and electricity generation. Minimizing the peak power consumption under a fixed delay requirement is a significant problem in the smart grid. In addition, matching demand to supply is a key requirement for the success of the future electricity grid. In this work, we consider the problem of minimizing the peak demand under appliance constraints by scheduling power jobs with uniform release dates and deadlines. As the problem is known to be NP-hard, we propose two versions of a heuristic algorithm for solving it. Our theoretical analysis and experimental results show that our proposed heuristics outperform existing methods by providing a better approximation to the optimal solution. In addition, we consider dynamic pricing methods to minimize the peak load and match demand to supply in the smart grid. Our contribution is the proposal of generic, as well as customized, pricing heuristics to minimize the peak demand and match demand with supply. In addition, we propose optimal pricing algorithms that can be used when the maximum deadline period of the power jobs is relatively small. Finally, we provide theoretical analysis and conduct several experiments to evaluate the performance of the proposed algorithms.
Keywords: heuristics, optimization, smart grid, peak demand, power supply
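The peak-minimization idea can be illustrated with a simple greedy placement heuristic; the job list, horizon, and largest-power-first placement rule below are assumptions for illustration and not the paper's two heuristics.

```python
def schedule_min_peak(jobs, horizon):
    """Greedy heuristic sketch: jobs are (power, duration) pairs, all released at
    t = 0 with a common deadline at `horizon` (uniform release dates and deadlines).
    Each job is placed at the start slot that minimizes the resulting peak load."""
    load = [0.0] * horizon
    schedule = []
    for power, duration in sorted(jobs, reverse=True):  # place largest power first
        best_start, best_peak = 0, float("inf")
        for start in range(horizon - duration + 1):
            # Peak load if this job were to run in [start, start + duration)
            peak = max(load[t] + power for t in range(start, start + duration))
            if peak < best_peak:
                best_peak, best_start = peak, start
        for t in range(best_start, best_start + duration):
            load[t] += power
        schedule.append((power, duration, best_start))
    return schedule, max(load)

jobs = [(3.0, 2), (2.0, 3), (2.0, 2), (1.0, 4)]  # (kW, timeslots), illustrative
schedule, peak = schedule_min_peak(jobs, horizon=8)
print(f"peak demand = {peak} kW")
```

With a common release date and deadline, the scheduler's only freedom is where inside the horizon each job runs, so spreading high-power jobs apart directly flattens the load profile.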
Procedia PDF Downloads 88
13015 The Reliability of Wireless Sensor Network
Authors: Bohuslava Juhasova, Igor Halenar, Martin Juhas
Abstract:
Wireless communication is one of the most widely used methods of data transfer today. The benefits of this communication method are its partial independence from infrastructure and the possibility of mobility. In some special applications, it is the only way to connect. This paper presents some problems in the implementation of a sensor network connection for measuring environmental parameters in the area of manufacturing plants.
Keywords: network, communication, reliability, sensors
Procedia PDF Downloads 652
13014 Climate Change and Sustainable Development among Agricultural Communities in Tanzania; An Analysis of Southern Highland Rural Communities
Authors: Paschal Arsein Mugabe
Abstract:
This paper examines sustainable development planning in the context of environmental concerns in rural areas of Tanzania. It challenges mainstream approaches to development, focusing instead upon transformative action for environmental justice. The goal is to help shape future sustainable development agendas in local government, international agencies, and civil society organisations. Research methods: The approach of the study is geographical but also involves various transdisciplinary elements, particularly from development studies, sociology and anthropology, management, geography, agriculture, and environmental science. The research methods included thematic and questionnaire interviews and participatory tools such as focus group discussions, participatory rural appraisal, and expert interviews for primary data. Secondary data were gathered through the analysis of land use/cover data and official documents on climate, agriculture, marketing, and health. Several earlier studies conducted in the area also provided an important reference base. Findings: The findings show that agricultural sustainability in Tanzania appears likely to deteriorate as a consequence of climate change. Noteworthy differences in impacts across households are also present, both by district and by income category. Furthermore, food security cannot be explained by climate as the only influencing factor; a combination of the economic, political, and socio-cultural contexts of the community is crucial. In conclusion, it is worth noting that people understand the relationship between climate change and their livelihoods.
Keywords: agriculture, climate change, environment, sustainable development
Procedia PDF Downloads 325
13013 Building Information Modelling: A Solution to the Limitations of Prefabricated Construction
Authors: Lucas Peries, Rolla Monib
Abstract:
The construction industry plays a vital role in the global economy, contributing billions of dollars annually. However, the industry has been struggling with persistently low productivity levels for years, unlike other sectors that have shown significant improvements. Modular and prefabricated construction methods have been identified as potential solutions to boost productivity in the construction industry. These methods offer time advantages over traditional construction methods. Despite their potential benefits, modular and prefabricated construction face hindrances and limitations that are not present in traditional building systems. Building information modelling (BIM) has the potential to address some of these hindrances, but barriers are preventing its widespread adoption in the construction industry. This research aims to enhance understanding of the shortcomings of modular and prefabricated building systems and develop BIM-based solutions to alleviate or eliminate these hindrances. The research objectives include identifying and analysing key issues hindering the use of modular and prefabricated building systems, investigating the current state of BIM adoption in the construction industry and factors affecting its successful implementation, proposing BIM-based solutions to address the issues associated with modular and prefabricated building systems, and assessing the effectiveness of the developed solutions in removing barriers to their use. The research methodology involves conducting a critical literature review to identify the key issues and challenges in modular and prefabricated construction and BIM adoption. Additionally, an online questionnaire will be used to collect primary data from construction industry professionals, allowing for feedback and evaluation of the proposed BIM-based solutions. 
The data collected will be analysed to evaluate the effectiveness of the solutions and their potential impact on the adoption of modular and prefabricated building systems. The main findings of the research indicate that the issues identified in the literature review align with the opinions of industry professionals, and the proposed BIM-based solutions are considered effective in addressing the challenges associated with modular and prefabricated construction. However, the research has limitations, such as a small sample size and the need to assess the feasibility of implementing the proposed solutions. In conclusion, this research contributes to enhancing the understanding of the limitations of modular and prefabricated building systems and proposes BIM-based solutions to overcome them. The findings are valuable to construction industry professionals and BIM software developers, providing insights into the challenges and potential solutions for implementing modular and prefabricated construction systems in future projects. Further research should focus on addressing the limitations and assessing the feasibility of implementing the proposed solutions from technical and legal perspectives.
Keywords: building information modelling, modularisation, prefabrication, technology
Procedia PDF Downloads 98
13012 Artificial Law: Legal AI Systems and the Need to Satisfy Principles of Justice, Equality and the Protection of Human Rights
Authors: Begum Koru, Isik Aybay, Demet Celik Ulusoy
Abstract:
The discipline of law is quite complex and has its own terminology. Apart from written legal rules, there is also living law, which refers to legal practice. Basic legal rules aim at the happiness of individuals in social life and have different characteristics in different branches such as public or private law. On the other hand, law is a national phenomenon. The law of one nation and the legal system applied on the territory of another nation may be completely different. People who are experts in a particular field of law in one country may have insufficient expertise in the law of another country. Today, in addition to the local nature of law, international and even supranational law rules are applied in order to protect basic human values and ensure the protection of human rights around the world. Systems that offer algorithmic solutions to legal problems using artificial intelligence (AI) tools will perhaps serve to produce very meaningful results in terms of human rights. However, algorithms to be used should not be developed by only computer experts, but also need the contribution of people who are familiar with law, values, judicial decisions, and even the social and political culture of the society to which it will provide solutions. Otherwise, even if the algorithm works perfectly, it may not be compatible with the values of the society in which it is applied. The latest developments involving the use of AI techniques in legal systems indicate that artificial law will emerge as a new field in the discipline of law. More AI systems are already being applied in the field of law, with examples such as predicting judicial decisions, text summarization, decision support systems, and classification of documents. Algorithms for legal systems employing AI tools, especially in the field of prediction of judicial decisions and decision support systems, have the capacity to create automatic decisions instead of judges. 
When the judge is removed from this equation, artificial intelligence-made law, created by an intelligent algorithm on its own, emerges, whether the domain is national or international law. In this work, the aim is to make a general analysis of this new topic. Such an analysis needs both a literature survey and a perspective from computer experts' and lawyers' points of view. In some societies, the use of prediction or decision support systems may be useful for integrating international human rights safeguards. In this case, artificial law can serve to produce more comprehensive and human rights-protective results than written or living law. In non-democratic countries, it may even be thought that direct decisions and artificial intelligence-made law would be more protective than a decision "support" system. Since the values of law are directed towards "human happiness or well-being", AI algorithms should always be capable of serving this purpose and be based on the rule of law, the principle of justice and equality, and the protection of human rights.
Keywords: AI and law, artificial law, protection of human rights, AI tools for legal systems
Procedia PDF Downloads 74
13011 Management of ASD with Co-Morbid OCD: A Literature Review to Compare the Pharmacological and Psychological Treatment Options in Individuals Under the Age of 18
Authors: Melissa Nelson, Simran Jandu, Hana Jalal, Mia Ingram, Chrysi Stefanidou
Abstract:
There is a significant overlap between autism spectrum disorder (ASD) and obsessive compulsive disorder (OCD), with up to 90% of young people diagnosed with ASD having this co-morbidity. Difficulty distinguishing between the symptoms of the two leads to issues with accurate treatment, yet accurate treatment is paramount in benefitting the young person. There are two distinct methods of treatment, psychological and pharmacological, with clinicians tending to choose one or the other, potentially due to the lack of research available. This report reviews the efficacy of psychological and pharmacological treatments for young people diagnosed with ASD and co-morbid OCD. A literature review was performed on papers from the last fifteen years that included "ASD," "OCD," and individuals under the age of 18. Eleven papers were selected as relevant. The report compares more traditional methods, such as selective serotonin reuptake inhibitors (SSRIs) and cognitive behavior therapy (CBT), with newer therapies, such as modified or intensive ASD-focused psychotherapies and the use of other medication classes. On reviewing the data, a distinct lack of information on this important topic was identified. The most widely used treatment was medication such as fluoxetine, an SSRI, which rarely showed an improvement in symptoms or outcomes. This is in contrast to modified forms of CBT, which often reduce symptoms or even result in OCD remission. With increased research into the non-traditional management of these co-morbid conditions, it is clear there is scope for modified CBT to become the future treatment of choice for OCD in young people with ASD.
Keywords: autism spectrum disorder, intensive or adapted cognitive behavioral therapy, obsessive compulsive disorder, pharmacological management
Procedia PDF Downloads 9
13010 Characterization of 2,4,6-Trinitrotoluene (TNT)-Metabolizing Bacillus cereus sp. TUHP2 Isolated from TNT-Polluted Soils in the Vellore District, Tamilnadu, India
Authors: S. Hannah Elizabeth, A. Panneerselvam
Abstract:
Objective: The main objective was to evaluate the degradative properties of Bacillus cereus sp. TUHP2, isolated from TNT-polluted soils in the Vellore District, Tamil Nadu, India. Methods: Among the three bacterial genera isolated from different soil samples, one potent TNT-degrading strain, Bacillus cereus sp. TUHP2, was identified. The morphological, physiological, and biochemical properties of the strain were confirmed by conventional methods, and genotypic characterization was carried out using 16S rDNA partial gene amplification and sequencing. The breakdown products of TNT in the extract were determined by gas chromatography-mass spectrometry (GC-MS). Supernatant samples from the broth, studied at 24 h intervals, were analyzed by HPLC, and the effects of various nutritional and environmental factors were analysed and optimized for the isolate. Results: Of the three isolates, one strain, TUHP2, was found to degrade TNT efficiently and was assigned to the genus Bacillus. 16S rDNA gene sequence analysis showed the highest homology (98%) with Bacillus cereus, and the strain was designated Bacillus cereus sp. TUHP2. Based on the energy of the predicted models, the secondary structure predicted by MFE showed the most stable structure with minimum energy. TNT transformation was accompanied by a colour change in the medium during cultivation. TNT derivatives such as 2HADNT and 4HADNT were detected in the HPLC chromatogram, and 2ADNT and 4ADNT by GC-MS analysis. Conclusion: This study presents clear evidence for the biodegradation of TNT by strain Bacillus cereus sp. TUHP2.
Keywords: bioremediation, biodegradation, biotransformation, sequencing
Procedia PDF Downloads 462
13009 Real Estate Trend Prediction with Artificial Intelligence Techniques
Authors: Sophia Liang Zhou
Abstract:
For investors, businesses, consumers, and governments, an accurate assessment of future housing prices is crucial to critical decisions in resource allocation, policy formation, and investment strategies. Previous studies are contradictory about the macroeconomic determinants of housing prices and largely focused on one or two areas using point prediction. This study aims to develop data-driven models to accurately predict future housing market trends in different markets. This work studied five different metropolitan areas representing different market trends and compared three time-lag situations: no lag, a 6-month lag, and a 12-month lag. Linear regression (LR), random forest (RF), and artificial neural networks (ANN) were employed to model real estate prices using datasets with the S&P/Case-Shiller home price index and 12 demographic and macroeconomic features, such as gross domestic product (GDP), resident population, and personal income, in five metropolitan areas: Boston, Dallas, New York, Chicago, and San Francisco. The data from March 2005 to December 2018 were collected from the Federal Reserve Bank, FBI, and Freddie Mac. In the original data, some factors are monthly, some quarterly, and some yearly. Thus, two methods to compensate for missing values, backfill and interpolation, were compared. The models were evaluated by accuracy, mean absolute error, and root mean square error. The LR and ANN models outperformed the RF model due to RF's inherent limitations. Both the ANN and LR methods generated predictive models with high accuracy (>95%). It was found that personal income, GDP, population, and measures of debt consistently appeared as the most important factors. It was also shown that the technique used to compensate for missing values in the dataset and the implementation of time lag can have a significant influence on model performance and require further investigation.
The best performing models varied for each area, but the backfilled 12-month-lag LR models and the interpolated no-lag ANN models showed the best stable performance overall, with accuracies above 95% for each city. This study reveals the influence of input variables in different markets. It also provides evidence to support future studies in identifying the optimal time lag and data imputation methods for establishing accurate predictive models.
Keywords: linear regression, random forest, artificial neural network, real estate price prediction
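The backfill-versus-interpolation comparison can be illustrated with pandas; the quarterly GDP-like series below is invented for illustration and is not the study's data.

```python
import numpy as np
import pandas as pd

# A quarterly indicator reported on a monthly index: two of every three months
# are missing until the next quarterly release (illustrative values only)
idx = pd.date_range("2018-01-01", periods=7, freq="MS")
gdp = pd.Series([100.0, np.nan, np.nan, 103.0, np.nan, np.nan, 107.0], index=idx)

backfilled = gdp.bfill()          # each missing month takes the NEXT reported value
interpolated = gdp.interpolate()  # linear ramp between reported values

print(backfilled.tolist())    # step pattern: jumps to the next quarter's figure
print(interpolated.tolist())  # smooth pattern: gradual monthly transition
```

Backfilling leaks the next quarter's figure into earlier months, while interpolation assumes a smooth monthly transition; as the abstract notes, this choice can measurably change model performance.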
Procedia PDF Downloads 103
13008 Frequent Pattern Mining for Digenic Human Traits
Authors: Atsuko Okazaki, Jurg Ott
Abstract:
Some genetic diseases (‘digenic traits’) are due to the interaction between two DNA variants. For example, certain forms of Retinitis Pigmentosa (a genetic form of blindness) occur in the presence of two mutant variants, one in the ROM1 gene and one in the RDS gene, while the occurrence of only one of these mutant variants leads to a completely normal phenotype. Detecting such digenic traits by genetic methods is difficult. A common approach to finding disease-causing variants is to compare hundreds of thousands of variants between individuals with a trait (cases) and those without the trait (controls). Such genome-wide association studies (GWASs) have been very successful but hinge on genetic effects of single variants; that is, there should be a difference in allele or genotype frequencies between cases and controls at a disease-causing variant. Frequent pattern mining (FPM) methods offer an avenue for detecting digenic traits even in the absence of single-variant effects. The idea is to enumerate pairs of genotypes (genotype patterns), with the two genotypes originating from different variants that may be located at very different genomic positions. What is needed is for genotype patterns to be significantly more common in cases than in controls. Let Y = 2 refer to cases and Y = 1 to controls, with X denoting a specific genotype pattern. We are seeking association rules, ‘X → Y’, with high confidence, P(Y = 2|X), significantly higher than the proportion of cases, P(Y = 2), in the study. Clearly, generally available FPM methods are very suitable for detecting disease-associated genotype patterns. We use fpgrowth as the basic FPM algorithm and build a framework around it to enumerate high-frequency digenic genotype patterns and to evaluate their statistical significance by permutation analysis. Application to a published dataset on opioid dependence furnished results that could not be found with classical GWAS methodology.
There were 143 cases and 153 healthy controls, each genotyped for 82 variants in eight genes of the opioid system. The aim was to find out whether any of these variants were disease-associated. The single-variant analysis did not lead to significant results. Application of our FPM implementation resulted in one significant (p < 0.01) genotype pattern, with both genotypes in the pattern being heterozygous and originating from two variants on different chromosomes. This pattern occurred in 14 cases and none of the controls. Thus, the pattern seems quite specific to this form of substance abuse and is also rather predictive of disease. An algorithm called Multifactor Dimensionality Reduction (MDR) was developed some 20 years ago and has been in use in human genetics ever since. MDR and our algorithm share some similar properties, but they are also very different in other respects. The main difference is that our algorithm focuses on patterns of genotypes, while the main object of inference in MDR is the 3 × 3 table of genotypes at two variants.
Keywords: digenic traits, DNA variants, epistasis, statistical genetics
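The rule-confidence computation P(Y = 2|X) described above can be illustrated on a toy dataset; the genotype codes and records below are hypothetical, and this sketch is not the fpgrowth-based framework itself.

```python
# Toy case/control data: each record is (set of genotype codes, label),
# where label 2 = case and 1 = control, matching the abstract's convention.
# Genotype codes like "ROM1=Aa" are hypothetical identifiers for illustration.
records = [
    ({"ROM1=Aa", "RDS=Bb"}, 2), ({"ROM1=Aa", "RDS=Bb"}, 2), ({"ROM1=Aa", "RDS=bb"}, 2),
    ({"ROM1=AA", "RDS=Bb"}, 1), ({"ROM1=Aa", "RDS=BB"}, 1), ({"ROM1=AA", "RDS=BB"}, 1),
]

def rule_confidence(records, pattern):
    """Confidence of the rule pattern -> case, i.e. P(Y = 2 | X = pattern):
    among records containing every genotype in the pattern, the fraction of cases."""
    matching = [label for genotypes, label in records if pattern <= genotypes]
    return sum(1 for l in matching if l == 2) / len(matching) if matching else 0.0

baseline = sum(1 for _, l in records if l == 2) / len(records)  # P(Y = 2)
conf = rule_confidence(records, {"ROM1=Aa", "RDS=Bb"})
print(f"baseline P(case) = {baseline:.2f}, rule confidence = {conf:.2f}")
```

A pattern is interesting when its confidence clearly exceeds the baseline case proportion; in the framework described, significance of that excess is then assessed by permuting the case/control labels.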
Procedia PDF Downloads 122