Search results for: fuzzy set Models
4524 The Layout Analysis of Handwriting Characters and the Fusion of Multi-style Ancient Books’ Background
Authors: Yaolin Tian, Shanxiong Chen, Fujia Zhao, Xiaoyu Lin, Hailing Xiong
Abstract:
Ancient books are significant cultural inheritances, and their background textures convey potential historical information. However, multi-style texture recovery of ancient books has received little attention. Restricted by insufficient ancient textures and a complex handling process, the generation of ancient textures confronts new challenges. For instance, training without sufficient data usually brings about overfitting or mode collapse, so some of the outputs are prone to be fake. Recently, image generation and style transfer based on deep learning have been widely applied in computer vision. Breakthroughs within the field make it possible to conduct research on multi-style texture recovery of ancient books. Under these circumstances, we proposed a layout analysis network and an image fusion system. Firstly, we trained models using Deep Convolutional Generative Adversarial Networks (DCGAN) to synthesize multi-style ancient textures; then, we analyzed layouts based on the Position Rearrangement (PR) algorithm that we proposed to adjust the layout structure of the foreground content; at last, we realized our goal by fusing the rearranged foreground texts and the generated backgrounds. In the experiments, diversified samples such as ancient Yi, Jurchen, and Seal scripts were selected as our training sets. Then, the performance of different fine-tuning models was gradually improved by adjusting the parameters as well as the structure of the DCGAN model. In order to evaluate the results scientifically, the cross-entropy loss function and the Fréchet Inception Distance (FID) were selected as our assessment criteria. Eventually, we obtained model M8 with the lowest FID score. Compared with the DCGAN model proposed by Radford et al., the FID score of M8 improved by 19.26%, profoundly enhancing the quality of the synthetic images.
Keywords: deep learning, image fusion, image generation, layout analysis
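The FID used as the assessment criterion above compares Gaussian fits to Inception activations of real and generated textures. A minimal sketch, in which the activation arrays and their shapes are stand-ins rather than data from the paper:

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_inception_distance(act_real, act_fake):
    """FID between two activation sets of shape (n_samples, n_features)."""
    mu1, mu2 = act_real.mean(axis=0), act_fake.mean(axis=0)
    sigma1 = np.cov(act_real, rowvar=False)
    sigma2 = np.cov(act_fake, rowvar=False)
    covmean = sqrtm(sigma1 @ sigma2)      # matrix square root of the product
    if np.iscomplexobj(covmean):          # drop tiny imaginary residue
        covmean = covmean.real
    diff = mu1 - mu2
    return diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean)

# Random stand-ins for Inception feature vectors of real/generated textures
rng = np.random.default_rng(0)
fid = frechet_inception_distance(rng.normal(size=(500, 64)),
                                 rng.normal(size=(500, 64)))
print(f"FID: {fid:.2f}")
```

A lower FID indicates that the generated texture statistics are closer to the real ones, which is why model M8 with the lowest score was selected.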
Procedia PDF Downloads 157
4523 Feature Engineering Based Detection of Buffer Overflow Vulnerability in Source Code Using Deep Neural Networks
Authors: Mst Shapna Akter, Hossain Shahriar
Abstract:
One of the most important challenges in the field of software code audit is the presence of vulnerabilities in software source code. Every year, more and more software flaws are found, either internally in proprietary code or revealed publicly. These flaws are highly likely to be exploited and to lead to system compromise, data leakage, or denial of service. C and C++ open-source code are now available in order to create a large-scale, machine-learning system for function-level vulnerability identification. We assembled a sizable dataset of millions of open-source functions that point to potential exploits. We developed an efficient and scalable vulnerability detection method based on deep neural network models that learn features extracted from the source code. The source code is first converted into a minimal intermediate representation to remove the pointless components and shorten the dependencies. Moreover, we keep the semantic and syntactic information using state-of-the-art word embedding algorithms such as GloVe and fastText. The embedded vectors are subsequently fed into deep learning networks such as LSTM, BiLSTM, LSTM-Autoencoder, word2vec, BERT, and GPT-2 to classify the possible vulnerabilities. Furthermore, we proposed a neural network model which can overcome issues associated with traditional neural networks. Evaluation metrics such as F1 score, precision, recall, accuracy, and total execution time have been used to measure the performance. We made a comparative analysis between results derived from features containing a minimal text representation and those containing semantic and syntactic information. We found that all of the deep learning models provide comparatively higher accuracy when we use semantic and syntactic information as the features, but require higher execution time, as the word embedding algorithm adds complexity to the overall system.
Keywords: cyber security, vulnerability detection, neural networks, feature extraction
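As a hedged illustration of the pipeline described above (tokenized source code, embedding, recurrent classifier), the following minimal BiLSTM binary classifier is a sketch only; the vocabulary size, dimensions, and pooling choice are assumptions, not the paper's configuration:

```python
import torch
import torch.nn as nn

class BiLSTMVulnClassifier(nn.Module):
    """Token ids -> embeddings -> BiLSTM -> binary vulnerability score."""
    def __init__(self, vocab_size=20000, embed_dim=100, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True,
                            bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, token_ids):
        x = self.embed(token_ids)         # (batch, seq, embed_dim)
        out, _ = self.lstm(x)             # (batch, seq, 2*hidden)
        pooled = out.mean(dim=1)          # average over the token axis
        return torch.sigmoid(self.head(pooled)).squeeze(-1)

model = BiLSTMVulnClassifier()
dummy = torch.randint(1, 20000, (8, 256))  # 8 functions, 256 tokens each
print(model(dummy).shape)                  # torch.Size([8])
```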
Procedia PDF Downloads 90
4522 Effect of Different Porous Media Models on Drug Delivery to Solid Tumors: Mathematical Approach
Authors: Mostafa Sefidgar, Sohrab Zendehboudi, Hossein Bazmara, Madjid Soltani
Abstract:
Based on findings from clinical applications, most drug treatments fail to eliminate malignant tumors completely, even though drug delivery through systemic administration may inhibit their growth. Therefore, a better understanding of tumor formation is crucial in developing more effective therapeutics. For this purpose, solid tumor modeling and simulation results are nowadays used to predict how therapeutic drugs are transported to tumor cells by blood flow through capillaries and tissues. A solid tumor is investigated as a porous medium for fluid flow simulation. Most of the studies use the Darcy model for porous media. In the Darcy model, the fluid friction is neglected and a few simplifying assumptions are implemented. In this study, the effect of these assumptions is studied by considering the Brinkman model. A multi-scale mathematical method which calculates fluid flow to a solid tumor is used in this study to investigate how neglecting fluid friction affects the solid tumor simulation. In this work, the mathematical model of our previous studies is developed by considering two models of the momentum equation for porous media: Darcy and Brinkman. The mathematical method involves processes such as fluid flow through the solid tumor as a porous medium, extravasation of blood flow from vessels, blood flow through vessels, and solute diffusion and convective transport in the extracellular matrix. The sprouting angiogenesis model is used for generating the capillary network, and then the fluid flow governing equations are implemented to calculate blood flow through the tumor-induced capillary network. Finally, the two models of porous media are used for modeling fluid flow in normal and tumor tissues in three different shapes of tumors. Simulations of interstitial fluid transport in a solid tumor demonstrate that the simplifications used in the Darcy model affect the interstitial velocity, and the Brinkman model predicts a lower value for the interstitial velocity than the Darcy model does.
Keywords: solid tumor, porous media, Darcy model, Brinkman model, drug delivery
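For reference, the two momentum models being compared can be written in their standard textbook forms (taken from the general porous-media literature, not reproduced from the paper), with u the interstitial fluid velocity, p the pressure, K the permeability, mu the dynamic viscosity, and mu_e an effective viscosity:

```latex
\text{Darcy:}\quad \mathbf{u} = -\frac{K}{\mu}\,\nabla p
\qquad
\text{Brinkman:}\quad \nabla p = -\frac{\mu}{K}\,\mathbf{u} + \mu_{e}\,\nabla^{2}\mathbf{u}
```

The extra term \(\mu_{e}\nabla^{2}\mathbf{u}\) is the fluid-friction (viscous shear) contribution that the Darcy model neglects, which is why the two models predict different interstitial velocities.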
Procedia PDF Downloads 307
4521 Leisure, Domestic or Professional Activities so as to Prevent Cognitive Decline: Results of the FrèLE Longitudinal Study
Authors: Caroline Dupre, David Hupin, Christ Goumou, Francois Belan, Frederic Roche, Thomas Celarier, Bienvenu Bongue
Abstract:
Background: Previous cohorts have been notably criticized for not studying the different types of physical activity and not investigating household activities. The objective of this work was to analyse the relationship between physical activity and cognitive decline in older people living in the community. The impact of the type of physical activity on the results was also analysed. Methods: The study used data from the longitudinal and observational FrèLE study (FRagility: Longitudinal Study of Expressions). The collected data included: socio-demographic variables, lifestyle, and health status (frailty, comorbidities, cognitive status, depression). Cognitive decline was assessed using the Mini-Mental State Examination (MMSE) and the Montreal Cognitive Assessment (MoCA). Physical activity was assessed by the Physical Activity Scale for the Elderly (PASE). This tool is structured in three sections: leisure activity, domestic activity, and professional activity. Logistic regressions and proportional hazards regression models (Cox) were used to estimate the risk of cognitive disorders. Results: At baseline, the prevalence of cognitive disorders was 6.9% according to the MMSE. In total, 1167 participants without cognitive disorders were included in the analysis. The mean age was 77.4 years, and 52.1% of the participants were women. After a two-year follow-up, we found cognitive disorders in 53 participants (4.5%). Physical activity at baseline was lower in older adults for whom cognitive decline was observed after two years of follow-up. Subclass analyses showed that leisure and domestic activities were associated with cognitive decline, but not professional activities. Conclusions: The analysis showed a relationship between cognitive disorders and the type of physical activity. The current study will be completed by the MoCA for mild cognitive impairment. These findings, compared to other ongoing studies, will contribute to the debate on the beneficial effects of physical activity on cognition.
Keywords: aging, cognitive function, physical activity, mixed models
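The Cox proportional hazards step described above can be sketched with the lifelines library; the bundled recidivism dataset below is only a runnable stand-in, and in the actual study the duration, event, and covariate columns would be the follow-up time, the cognitive-disorder indicator, and the PASE subscale scores:

```python
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi

df = load_rossi()                  # stand-in survival dataset
cph = CoxPHFitter()
cph.fit(df, duration_col="week", event_col="arrest")
cph.print_summary()                # hazard ratios with 95% CIs per covariate
```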
Procedia PDF Downloads 126
4520 An Ecological Systems Approach to Risk and Protective Factors of Sibling Conflict for Children in the United Kingdom
Authors: C. A. Bradley, D. Patsios, D. Berridge
Abstract:
This paper presents evidence to better understand the risk and protective factors related to sibling conflict and the patterns of association between sibling conflict and negative adjustment outcomes by incorporating additional familial and societal factors within statistical models of risk and adjustment. It was conducted through the secondary analysis of a large representative cross-sectional dataset of children in the UK. The original study includes proxy interviews for young children and self-report interviews for adolescents. The study applies an ecological systems framework for the analyses. Hierarchical regression models assess risk and protective factors and adjustment outcomes associated with sibling conflict. Interactions reveal differential effects between contextual risk factors and the social context of influence. The general pattern of findings suggested that, although the factors affecting the likelihood of experiencing sibling conflict were often determined by child age, some remained consistent across childhood. These factors were often conditional on each other, reinforcing the importance of an ecological framework. Across both age groups, sibling conflict was associated with siblings closer in age; male sibling groups; the most advantaged socio-economic group; and exposure to community violence, such as witnessing violent assault or robbery. The study develops the evidence base on the influence of ethnicity and socio-economic group on sibling conflict by exploring interactions between social contexts. It also identifies key new areas of influence, such as family structure, disability, and community violence, in exacerbating or reducing the risk of conflict. The study found negative associations between sibling conflict and young children's mental well-being and adolescents' mental well-being and anti-social behaviour, but also more context-specific associations, such as sibling conflict moderating the negative impact of adversity and high-risk experiences for young children, such as parental violence toward the child.
Keywords: adjustment, conflict, ecological systems, family systems, risk and protective factors, sibling
Procedia PDF Downloads 107
4519 Space Weather and Earthquakes: A Case Study of Solar Flare X9.3 Class on September 6, 2017
Authors: Viktor Novikov, Yuri Ruzhin
Abstract:
The studies completed to date on the relation between the Earth's seismicity and solar processes provide fuzzy and contradictory results. For verification of the idea that solar flares can trigger earthquakes, we have analyzed a powerful surge of solar flare activity early in September 2017, during the approach to the minimum of the 24th solar cycle, which was accompanied by significant disturbances of space weather. On September 6, 2017, a group of sunspots, AR2673, generated a large solar flare of class X9.3, the strongest flare over the past twelve years. Its explosion produced a coronal mass ejection partially directed towards the Earth. We carried out a statistical analysis of the USGS and EMSC earthquake catalogs to determine the effect of solar flares on global seismic activity. New evidence of earthquake triggering due to the Sun-Earth interaction has been demonstrated by a simple comparison of the behavior of the Earth's seismicity before and after the strong solar flare. The global number of earthquakes with magnitudes of 2.5 to 5.5 within 11 days after the solar flare increased by 30 to 100%. A possibility of electric/electromagnetic triggering of earthquakes due to space weather disturbances is supported by the results of field and laboratory studies, where earthquakes (both natural and laboratory) were initiated by injection of electrical current into the Earth's crust. For the specific case of artificial electric earthquake triggering, the current density at the depth of earthquake sources is comparable with estimations of the density of telluric currents induced by variations of space weather conditions due to solar flares. Acknowledgment: The work was supported by RFBR grant No. 18-05-00255.
Keywords: solar flare, earthquake activity, earthquake triggering, solar-terrestrial relations
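The before/after comparison described above reduces to counting catalog events in symmetric windows around the flare. A minimal sketch, with a synthetic catalog standing in for the USGS/EMSC exports:

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for a merged catalog; real use would load exported
# USGS/EMSC CSVs that provide equivalent "time" and "mag" columns.
rng = np.random.default_rng(0)
times = pd.Timestamp("2017-08-26") + pd.to_timedelta(
    rng.uniform(0, 22, 4000), unit="D")
catalog = pd.DataFrame({"time": times, "mag": rng.uniform(1.0, 6.5, 4000)})

flare = pd.Timestamp("2017-09-06 12:02")   # X9.3 flare peak (UTC)
window = pd.Timedelta(days=11)

in_range = catalog["mag"].between(2.5, 5.5)
before = catalog[in_range & catalog["time"].between(flare - window, flare)]
after = catalog[in_range & catalog["time"].between(flare, flare + window)]
print(f"{len(before)} events before, {len(after)} events after the flare")
```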
Procedia PDF Downloads 143
4518 Clustering and Modelling Electricity Conductors from 3D Point Clouds in Complex Real-World Environments
Authors: Rahul Paul, Peter Mctaggart, Luke Skinner
Abstract:
Maintaining public safety and network reliability are the core objectives of all electricity distributors globally. For many electricity distributors, managing vegetation clearances from their above-ground assets (poles and conductors) is the most important and costly risk mitigation control employed to meet these objectives. Light Detection and Ranging (LiDAR) is widely used by utilities as a cost-effective method to inspect their spatially distributed assets at scale, often captured using high-powered LiDAR scanners attached to fixed-wing or rotary aircraft. The resulting 3D point cloud model is used by these utilities to perform engineering-grade measurements that guide the prioritisation of vegetation cutting programs. Advances in computer vision and machine-learning approaches are increasingly applied to increase automation and reduce inspection costs and time; however, real-world LiDAR capture variables (e.g., aircraft speed and height) create complexity, noise, and missing data, reducing the effectiveness of these approaches. This paper proposes a method for identifying each conductor from LiDAR data via clustering methods that can precisely reconstruct conductors in complex real-world configurations in the presence of high levels of noise. It proposes 3D catenary models for individual clusters fitted to the captured LiDAR data points using a least-squares method. An iterative learning process is used to identify potential conductor models between pole pairs. The proposed method identifies the optimum parameters of the catenary function and then fits the LiDAR points to reconstruct the conductors.
Keywords: point cloud, LiDAR data, machine learning, computer vision, catenary curve, vegetation management, utility industry
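The least-squares catenary fit at the core of the method can be sketched as follows; the span geometry and noise level are invented for illustration, and the real pipeline would fit each conductor cluster in its own vertical plane:

```python
import numpy as np
from scipy.optimize import curve_fit

def catenary(x, x0, y0, c):
    """Catenary in the vertical plane of a conductor span."""
    return y0 + c * (np.cosh((x - x0) / c) - 1.0)

# Synthetic stand-in for one clustered conductor: (distance, height) pairs
x = np.linspace(0.0, 80.0, 60)
true = catenary(x, 40.0, 12.0, 150.0)
z = true + np.random.default_rng(1).normal(scale=0.05, size=x.size)

params, _ = curve_fit(catenary, x, z, p0=[x.mean(), z.min(), 100.0])
print("x0={:.1f} m, y0={:.2f} m, c={:.1f} m".format(*params))
```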
Procedia PDF Downloads 99
4517 Efficacy and Mechanisms of Acupuncture for Depression: A Meta-Analysis of Clinical and Preclinical Evidence
Authors: Yimeng Zhang
Abstract:
Major depressive disorder (MDD) is a prevalent mental health condition with a substantial economic impact and limited treatment options. Acupuncture has gained attention as a promising non-pharmacological intervention for alleviating depressive symptoms. However, its mechanisms and clinical effectiveness remain incompletely understood. This meta-analysis aims to (1) synthesize existing evidence on the mechanisms and clinical effectiveness of acupuncture for depression and (2) compare these findings with pharmacological interventions, providing insights for future research. Evidence from animal models and clinical studies indicates that acupuncture may enhance hippocampal and network neuroplasticity and reduce brain inflammation, potentially alleviating depressive disorders. Clinical studies suggest that acupuncture can effectively relieve primary depression, particularly in milder cases, and is beneficial in managing post-stroke depression, pain-related depression, and postpartum depression, both as a standalone and as an adjunctive treatment. Notably, combining acupuncture with antidepressant pharmacotherapy appears to enhance treatment outcomes and reduce medication side effects, addressing a critical issue behind conventional drug therapy's high dropout rates. This meta-analysis, encompassing 12 studies and 710 participants, draws data from eight digital databases (PubMed, EMBASE, Web of Science, EBSCOhost, CNKI, CBM, Wanfang, and CQVIP) covering the period from 2012 to 2022. Utilizing Stata software 15.0, the meta-analysis employed random-effects and fixed-effects models to assess the distribution of depression in Traditional Chinese Medicine (TCM). The results underscore the substantial evidence supporting acupuncture's beneficial effects on depression. However, the small sample sizes of many clinical trials raise concerns about the generalizability of the findings, indicating a need for further research to validate these outcomes and optimize acupuncture's role in treating depression.
Keywords: Chinese medicine, acupuncture, depression, meta-analysis
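The random-effects pooling that Stata performs can be illustrated with the standard DerSimonian-Laird estimator; the effect sizes and variances below are invented placeholders, not values from the 12 included studies:

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate with DerSimonian-Laird tau^2."""
    effects, variances = np.asarray(effects), np.asarray(variances)
    w = 1.0 / variances                          # fixed-effect weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)       # Cochran's Q
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                # between-study variance
    w_re = 1.0 / (variances + tau2)
    pooled = np.sum(w_re * effects) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return pooled, se, tau2

pooled, se, tau2 = dersimonian_laird(
    effects=[-0.42, -0.31, -0.55, -0.20],
    variances=[0.04, 0.06, 0.05, 0.09])
print(f"pooled SMD = {pooled:.2f} +/- {1.96 * se:.2f}, tau^2 = {tau2:.3f}")
```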
Procedia PDF Downloads 35
4516 Monetary Evaluation of Dispatching Decisions in Consideration of Choice of Transport
Authors: Marcel Schneider, Nils Nießen
Abstract:
Microscopic simulation programs enable the description of the two processes of railway operation and the preceding timetabling. Occupation conflicts are often solved based on defined train priorities on both process levels. These conflict resolutions produce knock-on delays for the involved trains. The sum of knock-on delays is commonly used to evaluate the quality of railway operations. It is either compared to an acceptable level of service, or the delays are evaluated economically by linear monetary functions. It is impossible to properly evaluate dispatching decisions without a well-founded objective function. This paper presents a new approach for the evaluation of dispatching decisions. It uses models of choice of transport and considers the behaviour of the end-customers. These models evaluate the knock-on delays in more detail than linear monetary functions and consider other competing modes of transport. The new approach pursues the coupling of a microscopic model of railway operation with a macroscopic model of choice of transport. First, it will be implemented for the railway operations process, but it can also be used for timetabling. The evaluation considers the possibility that end-customers change over to other transport modes. The new approach first looks at rail-bound and road transport, but it can also be extended to air transport. The split of the end-customers is described by the modal split. The reactions of the end-customers have an effect on the revenues of the railway undertakings. Various travel purposes have different buffer reserves and tolerances towards delays. Besides revenue changes, longer journey times also cause additional costs. The costs depend either on time or on track and arise from the circulation of workers and vehicles. Only the variable values are summarised in the contribution margin, which is the basis for the monetary evaluation of the delays. The contribution margin is calculated for different resolution decisions of the same conflict. The conflict resolution is improved until the monetary loss becomes minimised. The iterative process therefore determines an optimum conflict resolution by observing the change of the contribution margin. Furthermore, a monetary value of each dispatching decision can also be determined.
Keywords: choice of transport, knock-on delays, monetary evaluation, railway operations
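A common way to describe the modal split mentioned above is a logit model over mode utilities. The sketch below, with invented utility coefficients, shows how a knock-on delay on rail shifts demand share to competing modes; it is a generic illustration, not the paper's calibrated model:

```python
import numpy as np

def modal_split(journey_times, betas=(-0.05, -0.04, -0.03)):
    """Logit modal split over rail, road, and air journey times (minutes)."""
    utilities = np.array(betas) * np.asarray(journey_times)
    expu = np.exp(utilities)
    return expu / expu.sum()

# A 20-minute knock-on delay on rail shifts demand to the other modes
base = modal_split([120, 150, 90])
delayed = modal_split([140, 150, 90])
print("rail/road/air shares before:", np.round(base, 3))
print("rail/road/air shares after: ", np.round(delayed, 3))
```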
Procedia PDF Downloads 328
4515 Acoustic Finite Element Analysis of a Slit Model with Consideration of Air Viscosity
Authors: M. Sasajima, M. Watanabe, T. Yamaguchi, Y. Kurosawa, Y. Koike
Abstract:
In very narrow pathways, the speed of sound propagation and the phase of sound waves change due to the air viscosity. We have developed a new Finite Element Method (FEM) that includes the effects of air viscosity for modeling a narrow sound pathway. This method is developed as an extension of the existing FEM for porous sound-absorbing materials. The numerical calculation results for several three-dimensional slit models using the proposed FEM are validated against existing calculation methods.
Keywords: simulation, FEM, air viscosity, slit
Procedia PDF Downloads 369
4514 Algorithmic Obligations: Proactive Liability for AI-Generated Content and Copyright Compliance
Authors: Aleksandra Czubek
Abstract:
As AI systems increasingly shape content creation, existing copyright frameworks face significant challenges in determining liability for AI-generated outputs. Current legal discussions largely focus on who bears responsibility for infringing works, be it developers, users, or entities benefiting from AI outputs. This paper introduces a novel concept of algorithmic obligations, proposing that AI developers be subject to proactive duties that ensure their models prevent copyright infringement before it occurs. Building on principles of obligations law traditionally applied to human actors, the paper suggests a shift from reactive enforcement to proactive legal requirements. AI developers would be legally mandated to incorporate copyright-aware mechanisms within their systems, turning optional safeguards into enforceable standards. These obligations could vary in implementation across international, EU, UK, and U.S. legal frameworks, creating a multi-jurisdictional approach to copyright compliance. This paper explores how the EU’s existing copyright framework, exemplified by the Copyright Directive (2019/790), could evolve to impose a duty of foresight on AI developers, compelling them to embed mechanisms that prevent infringing outputs. By drawing parallels to GDPR’s “data protection by design,” a similar principle could be applied to copyright law, where AI models are designed to minimize copyright risks. In the UK, post-Brexit text and data mining exemptions are seen as pro-innovation but pose risks to copyright protections. This paper proposes a balanced approach, introducing algorithmic obligations to complement these exemptions. AI systems benefiting from text and data mining provisions should integrate safeguards that flag potential copyright violations in real time, ensuring both innovation and protection. In the U.S., where copyright law focuses on human-centric works, this paper suggests an evolution toward algorithmic due diligence. AI developers would have a duty similar to product liability, ensuring that their systems do not produce infringing outputs, even if the outputs themselves cannot be copyrighted. This framework introduces a shift from post-infringement remedies to preventive legal structures, where developers actively mitigate risks. The paper also breaks new ground by addressing obligations surrounding the training data of large language models (LLMs). Currently, training data is often treated under exceptions such as the EU’s text and data mining provisions or U.S. fair use. However, this paper proposes a proactive framework where developers are obligated to verify and document the legal status of their training data, ensuring it is licensed or otherwise cleared for use. In conclusion, this paper advocates for an obligations-centered model that shifts AI-related copyright law from reactive litigation to proactive design. By holding AI developers to a heightened standard of care, this approach aims to prevent infringement at its source, addressing both the outputs of AI systems and the training processes that underlie them.
Keywords: IP, technology, copyright, data, infringement, comparative analysis
Procedia PDF Downloads 19
4513 Multi-National Corporations and International Communication: An Analysis of Arçelik Global’s Online Presence
Authors: Aisha Iddrsiu
Abstract:
Public Relations (PR) has rapidly evolved around the world, just as companies have expanded to reach other parts of the world. While most multinational corporations conduct business in more than one country, only a few of these Multinational Corporations (MNCs) are actual public relations firms; many have public relations departments or divisions that conduct public relations practices internationally. Hence, international public relations is seen as a fast-growing specialty in the field of Public Relations. Multinational companies have devised strategies to effectively communicate and execute their roles within and between foreign publics and the other cultures in which they operate, through various means including the internet, which is among the major inventions that have enabled corporations to establish their presence while targeting anonymous and diverse publics from varied cultures. International public relations practitioners rely on strategies coupled with internet use to communicate among and with foreign publics. Corporate websites and various social media handles have served as important channels for public relations activities targeting both internal and international publics. With the incessant expansion of corporations and their interactions with publics from different cultures, it has become imperative to understand the public relations strategies used by MNCs in their international communication. This study therefore seeks to establish the international public relations strategies or models employed by Multinational Corporations, specifically Arçelik Global, in the management of its subsidiaries and in communicating with the international public. This study analyses both Arçelik Global's (one of the largest multinational companies in Turkey) website and social media accounts to understand the management strategy used with its subsidiaries as well as the strategies used to communicate with its global and local publics. Other underlying objectives of this study are: 1. To examine the dominant international public relations models used by Multinational Corporations (Arçelik Global). 2. To understand how Multinational Corporations (Arçelik Global) manage their subsidiaries. 3. To understand how Multinational Corporations (Arçelik Global) communicate with international or global publics. Research questions: 1. The main global PR strategies employed by multinational corporations (Arçelik Global). 2. How subsidiaries of multinational corporations like Arçelik Global are managed. 3. How multinational corporations, like Arçelik Global, interact with international publics.
Keywords: multinational corporation, ethnocentric model, polycentric model, international public relations
Procedia PDF Downloads 85
4512 Dynamic Model for Forecasting Rainfall Induced Landslides
Authors: R. Premasiri, W. A. H. A. Abeygunasekara, S. M. Hewavidana, T. Jananthan, R. M. S. Madawala, K. Vaheeshan
Abstract:
Forecasting the potential for disastrous events such as landslides has become one of the major necessities in the current world. Most of the landslides that occurred in Sri Lanka are found to be triggered mostly by intense rainfall events. The study area is the landslide near the Gerandiella waterfall, which is located by the 41st kilometer post on the Nuwara Eliya-Gampola main road in the Kotmale Division in Sri Lanka. The landslide endangers the entire Kotmale town beneath the slope. The Geographic Information System (GIS) platform is very useful when it comes to emulating real-world processes. Such models are used in a wide array of applications, ranging from simple evaluations to forecasting future events. This project investigates the possibility of developing a dynamic model to map the spatial distribution of slope stability. The model incorporates several theoretical models, including the infinite slope model, the Green-Ampt infiltration model, and the perched groundwater flow model. A series of rainfall values can be fed to the model as the main input to simulate the dynamics of slope stability. A hydrological model developed using GIS is used to quantify the perched water table height, which is one of the most critical parameters affecting slope stability. The infinite slope stability model is used to quantify the degree of slope stability in terms of the factor of safety. The DEM was built with the use of digitized contour data. Stratigraphy was modeled in Surfer using borehole data and resistivity images. Data available from rainfall gauges and piezometers were used in calibrating the model. During the calibration, the parameters were adjusted until a good fit between the simulated groundwater levels and the piezometer readings was obtained. This model, equipped with predicted rainfall values, can be used to forecast the slope dynamics of the area of interest. Therefore, the slope stability associated with rainfall-induced landslides can be investigated by adjusting the temporal dimension.
Keywords: factor of safety, geographic information system, hydrological model, slope stability
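For an infinite slope with a perched water table, the factor of safety is commonly written as follows (a standard textbook form, not an equation reproduced from the paper), where c' is the effective cohesion, phi' the effective friction angle, gamma and gamma_w the unit weights of soil and water, z the depth of the failure surface, h_w the perched water table height above it, and beta the slope angle:

```latex
FS = \frac{c' + \left(\gamma z - \gamma_w h_w\right)\cos^{2}\beta\,\tan\phi'}{\gamma z \sin\beta \cos\beta}
```

Feeding rainfall through the infiltration and perched groundwater models raises h_w, which lowers FS toward the instability threshold FS = 1.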
Procedia PDF Downloads 423
4511 3D-Shape-Perception Studied Exemplarily with Tetrahedron and Icosahedron as Prototypes of the Polarities Sharp versus Round
Authors: Iris Sauerbrei, Jörg Trojan, Erich Lehner
Abstract:
Introduction and significance of the study: This study examines whether three-dimensional shapes elicit distinct patterns of perception. If so, this is relevant for all fields of design, especially for the design of the built environment. Description of basic methodologies: The five Platonic solids are the geometrical basis for all other three-dimensional shapes, among which the tetrahedron and the icosahedron provide the clearest representation of the qualities sharp and round. The component pair of attributes 'sharp versus round' has already been examined in various surveys in the psychology of perception and in neuroscience by means of graphics, images of products of daily use, as well as photographs and walk-through videos of landscapes and architecture. To verify a transfer of the outcomes of the existing surveys to the perception of three-dimensional shapes, walk-in models (total height 2.2 m) of a tetrahedron and an icosahedron were set up in a public park in Frankfurt am Main, Germany. Preferences of park visitors were tested by questionnaire; they were also asked to write down associations in free text. In summer 2015, the tetrahedron was assembled eight times and the icosahedron seven times. In total, 288 participants took part in the study; 116 rated the tetrahedron and 172 rated the icosahedron. Findings: Preliminary analyses of the collected data using Wilcoxon rank-sum tests show that the perceptions of the two solids differ with respect to several attributes and that each of the tested models shows significance for specific attributes. Conclusion: These findings confirm the assumptions and provide first evidence that the perception of three-dimensional shapes is associated with characteristic attributes, and which ones. In order to enable conscious choices for spatial arrangements in design processes for the built environment, future studies should examine attributes for the other three basic solids: the octahedron, cube, and dodecahedron. Additionally, similarities and differences between the perceptions of two- and three-dimensional shapes, as well as more complex shapes, need further research.
Keywords: 3D shapes, architecture, geometrical features, space perception, walk-in models
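The Wilcoxon rank-sum comparison used in the preliminary analyses can be sketched with SciPy; the ratings below are invented stand-ins for one perceptual attribute:

```python
from scipy.stats import ranksums

# Hypothetical 5-point ratings of one attribute for each solid
tetrahedron_ratings = [2, 3, 1, 2, 4, 2, 3, 1, 2, 3]
icosahedron_ratings = [4, 5, 3, 4, 4, 5, 3, 4, 5, 4]

stat, p = ranksums(tetrahedron_ratings, icosahedron_ratings)
print(f"Wilcoxon rank-sum: statistic={stat:.2f}, p={p:.4f}")
```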
Procedia PDF Downloads 228
4510 Advances of Image Processing in Precision Agriculture: Using Deep Learning Convolution Neural Network for Soil Nutrient Classification
Authors: Halimatu S. Abdullahi, Ray E. Sheriff, Fatima Mahieddine
Abstract:
Agriculture is essential to the continuous existence of human life, as humans directly depend on it for the production of food. The exponential rise in population calls for a rapid increase in food production, with the application of technology to reduce laborious work and maximize production. Technology can aid and improve agriculture in several ways, through pre-planning and post-harvest, by the use of computer vision technology through image processing to determine the soil nutrient composition; the right-amount, right-time, right-place application of farm input resources like fertilizers, herbicides, and water; weed detection; early detection of pests and diseases; etc. This is precision agriculture, which is thought to be the solution required to achieve these goals. There has been significant improvement in the areas of image processing and data processing, which has been a major challenge. A database of images is collected through remote sensing, analyzed, and a model is developed to determine the right treatment plans for different crop types and different regions. Features of images from vegetation need to be extracted, classified, segmented, and finally fed into the model. Different techniques have been applied to these processes, from the use of neural networks, support vector machines, and fuzzy logic approaches to, most recently, the most effective approach generating excellent results: the deep learning approach of convolutional neural networks for image classification. A deep convolutional neural network is used to determine the soil nutrients required in a plantation for maximum production. The experimental results on the developed model yielded results with an average accuracy of 99.58%.
Keywords: convolution, feature extraction, image analysis, validation, precision agriculture
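As a hedged sketch of the classification step (the patch size, number of nutrient classes, and network depth are assumptions, not the paper's architecture):

```python
import torch
import torch.nn as nn

class SoilNutrientCNN(nn.Module):
    """Small CNN mapping 128x128 RGB patches to soil-nutrient classes."""
    def __init__(self, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 32 * 32, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = SoilNutrientCNN()
logits = model(torch.randn(8, 3, 128, 128))   # batch of 8 image patches
print(logits.shape)                            # torch.Size([8, 4])
```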
Procedia PDF Downloads 316
4509 Training to Evaluate Creative Activity in a Training Context, Analysis of a Learner Evaluation Model
Authors: Massy Guillaume
Abstract:
Introduction: The implementation of creativity in educational policies or curricula raises several issues, including the evaluation of creativity and the means to do so. This doctoral research focuses on the appropriation and transposition of creativity assessment models by future teachers. Our objective is to identify the elements of the models that are most transferable to practice in order to improve their implementation in the students' curriculum, while seeking to create a new model for assessing creativity in the school environment. Methods: In order to meet our objective, this preliminary quantitative exploratory study by questionnaire was conducted at two points in the participants' training: at the beginning of the training module and throughout the practical work. The population is composed of 40 people of diverse origins with an average age of 26 years (SD = 8.623). In order to be as close as possible to our research objective and to test our questionnaires, we set up a pre-test phase during the spring semester of 2022. Results: The results presented focus on aspects of the OECD Creative Competencies Assessment Model. Overall, 72% of participants support the model's focus on skill levels as appropriate for the school context. More specifically, the data indicate that the separation of production and process in the rubric facilitates observation by the assessor. From the point of view of transposing the grid into teaching practice, the participants emphasised that production is easier to plan and observe in students than the process. This difference is reinforced by a lack of knowledge about certain concepts, such as innovation or risk-taking in schools. Finally, the qualitative results indicate that the addition of multiple levels of competencies to the OECD rubric would allow for better implementation in the classroom. Conclusion: The identification by the students of the elements allowing the evaluation of creativity in the school environment generates an innovative approach to the training contents. These first data, from the test phase of our research, demonstrate the difficulty that exists between the implementation of an evaluation model in a training program and its potential transposition by future teachers.
Keywords: creativity, evaluation, schooling, training
Procedia PDF Downloads 95
4508 Enhancing Residential Architecture through Generative Design: Balancing Aesthetics, Legal Constraints, and Environmental Considerations
Authors: Milena Nanova, Radul Shishkov, Martin Georgiev, Damyan Damov
Abstract:
This research paper presents an in-depth exploration of the use of generative design in urban residential architecture, with a dual focus on aligning aesthetic values with legal and environmental constraints. The study aims to demonstrate how generative design methodologies can innovate residential building designs that are not only legally compliant and environmentally conscious but also aesthetically compelling. At the core of our research is a specially developed generative design framework tailored for urban residential settings. This framework employs computational algorithms to produce diverse design solutions, meticulously balancing aesthetic appeal with practical considerations. By integrating site-specific features, urban legal restrictions, and environmental factors, our approach generates designs that resonate with the unique character of urban landscapes while adhering to regulatory frameworks. The paper explores how modern digital tools, particularly computational design and algorithmic modelling, can optimize the early stages of residential building design. By creating a basic parametric model of a residential district, the paper investigates how automated design tools can explore multiple design variants based on predefined parameters (e.g., building cost, dimensions, orientation) and constraints. The paper aims to demonstrate how these tools can rapidly generate and refine architectural solutions that meet the required criteria for quality of life, cost efficiency, and functionality. The study utilizes computational design for database processing and algorithmic modelling within the fields of applied geodesy and architecture. It focuses on optimizing the forms of residential development by adjusting specific parameters and constraints. The results of multiple iterations are analysed, refined, and selected based on their alignment with predefined quality and cost criteria. The findings of this research will contribute to a modern, complex approach to residential area design. The paper demonstrates the potential for integrating BIM models into the design process and their application in virtual 3D Geographic Information Systems (GIS) environments. The study also examines the transformation of BIM models into suitable 3D GIS file formats, such as CityGML, to facilitate the visualization and evaluation of urban planning solutions. In conclusion, our research demonstrates that a generative parametric approach based on real geodesic data and collaborative decision-making could be introduced in the early phases of the design process. This gives the designers powerful tools to explore diverse design possibilities, significantly improving the qualities of the investment during its entire lifecycle.
Keywords: architectural design, residential buildings, urban development, geodesic data, generative design, parametric models, workflow optimization
Procedia PDF Downloads 12
4507 Constraint-Based Computational Modelling of Bioenergetic Pathway Switching in Synaptic Mitochondria from Parkinson's Disease Patients
Authors: Diana C. El Assal, Fatima Monteiro, Caroline May, Peter Barbuti, Silvia Bolognin, Averina Nicolae, Hulda Haraldsdottir, Lemmer R. P. El Assal, Swagatika Sahoo, Longfei Mao, Jens Schwamborn, Rejko Kruger, Ines Thiele, Kathrin Marcus, Ronan M. T. Fleming
Abstract:
Degeneration of substantia nigra pars compacta dopaminergic neurons is one of the hallmarks of Parkinson's disease. These neurons have a highly complex axonal arborisation and a high energy demand, so any reduction in ATP synthesis could lead to an imbalance between supply and demand, thereby impeding normal neuronal bioenergetic requirements. Synaptic mitochondria exhibit increased vulnerability to dysfunction in Parkinson's disease. After biogenesis in and transport from the cell body, synaptic mitochondria become highly dependent upon oxidative phosphorylation. We applied a systems biochemistry approach to identify the metabolic pathways used by neuronal mitochondria for energy generation. The mitochondrial component of an existing manual reconstruction of human metabolism was extended with manual curation of the biochemical literature and specialised using omics data from Parkinson's disease patients and controls, to generate reconstructions of synaptic and somal mitochondrial metabolism. These reconstructions were converted into stoichiometrically and flux-consistent constraint-based computational models. These models predict that Parkinson's disease is accompanied by an increase in the rate of glycolysis and a decrease in the rate of oxidative phosphorylation within synaptic mitochondria. This is consistent with independent experimental reports of a compensatory switching of bioenergetic pathways in the putamen of post-mortem Parkinson's disease patients. Ongoing work, in the context of the SysMedPD project, is aimed at the computational prediction of mitochondrial drug targets to slow the progression of neurodegeneration in the subset of Parkinson's disease patients with overt mitochondrial dysfunction.
Keywords: bioenergetics, mitochondria, Parkinson's disease, systems biochemistry
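Constraint-based models of this kind are typically interrogated with flux balance analysis. A hedged sketch with COBRApy, using its bundled E. coli "textbook" model and its reaction identifiers (ATPM, PFK, CYTBD) purely as stand-ins for the synaptic and somal mitochondrial reconstructions described above:

```python
from cobra.io import load_model

# Stand-in model; the study's reconstructions would be loaded from SBML
model = load_model("textbook")

# Maximise ATP maintenance flux as a proxy objective for energy supply
model.objective = "ATPM"
solution = model.optimize()
print("Max ATP flux:", round(solution.objective_value, 2))

# Compare fluxes through glycolytic vs respiratory-chain marker reactions
for rxn in ("PFK", "CYTBD"):
    print(rxn, round(solution.fluxes[rxn], 2))
```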
Procedia PDF Downloads 294
4506 Characterization of the Groundwater Aquifers at El Sadat City by Joint Inversion of VES and TEM Data
Authors: Usama Massoud, Abeer A. Kenawy, El-Said A. Ragab, Abbas M. Abbas, Heba M. El-Kosery
Abstract:
Vertical Electrical Sounding (VES) and Transient Electromagnetic (TEM) surveys have been applied for characterizing the groundwater aquifers at the El Sadat industrial area. El Sadat city is one of the most important industrial cities in Egypt. It was constructed more than three decades ago, about 80 km northwest of Cairo along the Cairo-Alexandria desert road. Groundwater is the main source of the water supplies required for domestic, municipal, and industrial activities in this area due to the lack of surface water sources. So, it is important to maintain this vital resource in order to sustain the development plans of this city. In this study, VES and TEM data were identically measured at 24 stations along three profiles trending NE-SW with the elongation of the study area. The measuring points were arranged in a grid-like pattern with both an inter-station spacing and a line-to-line distance of about 2 km. After performing the necessary processing steps, the VES and TEM data sets were inverted individually to multi-layer models, followed by a joint inversion of both data sets. The joint inversion process succeeded in overcoming the model-equivalence problem encountered in the inversion of the individual data sets. Then, the joint models were used for the construction of a number of cross sections and contour maps showing the lateral and vertical distribution of the geo-electrical parameters in the subsurface medium. Interpretation of the obtained results and correlation with the available geological and hydrogeological information revealed two aquifer systems in the area. The shallow Pleistocene aquifer consists of sand and gravel saturated with fresh water and exhibits a large thickness exceeding 200 m. The deep Pliocene aquifer is composed of clay and sand and shows low resistivity values. The water-bearing layer of the Pleistocene aquifer and the upper surface of the Pliocene aquifer are continuous, and no structural features have cut this continuity through the investigated area.
Keywords: El Sadat city, joint inversion, VES, TEM
Procedia PDF Downloads 370
4505 Investigation of Aerodynamic and Design Features of Twisting Tall Buildings
Authors: Sinan Bilgen, Bekir Ozer Ay, Nilay Sezer Uzol
Abstract:
After decades of conventional shapes, irregular forms with complex geometries are getting more popular for the form generation of tall buildings all over the world. This trend has recently brought out diverse building forms such as twisting tall buildings. This study investigates both the aerodynamic and design features of twisting tall buildings through comparative analyses. Since twisting a tall building gives rise to additional complexities related to the form and structural system, lateral load effects become of greater importance for these buildings. The aim of this study is to analyze the inherent characteristics of these iconic forms by comparing the wind loads on twisting tall buildings with those on their prismatic twins. Through case study research, aerodynamic analyses of an existing twisting tall building and its prismatic counterpart were performed, and the results have been compared. The prismatic twin of the original building was generated by removing the progressive rotation of its floors, with the same plan area and story height. The performance-based measures under investigation have been evaluated in conjunction with the architectural design. Aerodynamic effects have been analyzed by both wind tunnel tests and computational methods. High-frequency base balance tests and pressure measurements on 3D models were performed to evaluate wind load effects on a global and local scale. Comparisons of flat and real surface models were conducted to further evaluate the effects of the twisting form without the contribution of the façade texture. The comparisons highlighted that the twisting form under investigation shows better aerodynamic behavior in the along-wind and particularly the across-wind direction. Compared to the prismatic counterpart, the twisting model is superior at reducing the vortex-shedding dynamic response by disorganizing the wind vortices. Consequently, despite the difficulties arising from the inherent complexity of twisted forms, they can still be feasible and viable, with their attractive images, in the realm of tall buildings.
Keywords: aerodynamic tests, motivation for twisting, tall buildings, twisted forms, wind excitation
Procedia PDF Downloads 234
4504 Sorghum Grains Grading for Food, Feed, and Fuel Using NIR Spectroscopy
Authors: Irsa Ejaz, Siyang He, Wei Li, Naiyue Hu, Chaochen Tang, Songbo Li, Meng Li, Boubacar Diallo, Guanghui Xie, Kang Yu
Abstract:
Background: Near-infrared spectroscopy (NIR) is a non-destructive, fast, and low-cost method to measure the grain quality of different cereals. Previously reported NIR model calibrations using whole-grain spectra had moderate accuracy. Improved predictions are achievable by using spectra collected from flour samples, when compared with the use of whole-grain spectra. However, the feasibility of determining the critical biochemicals related to the classification into food, feed, and fuel products has not been adequately investigated. Objectives: To evaluate the feasibility of using NIRS and the influence of four sample types (whole grains, flours, hulled grain flours, and hull-less grain flours) on the prediction of chemical components to improve the grain sorting efficiency for human food, animal feed, and biofuel. Methods: NIR was applied in this study to determine the eight biochemicals in four types of sorghum samples: hulled grain flours, hull-less grain flours, whole grains, and grain flours. A total of 20 hybrids of sorghum grains were selected from two locations in China. Based on the NIR spectra and the wet-chemically measured biochemical data, partial least squares regression (PLSR) was used to construct the prediction models. Results: The results showed that sorghum grain morphology and sample format affected the prediction of biochemicals. Using NIR data of grain flours generally improved the prediction compared with the use of NIR data of whole grains. In addition, using the spectra of whole grains enabled comparable predictions, which is recommended when a non-destructive and rapid analysis is required. Compared with the hulled grain flours, hull-less grain flours allowed for improved predictions of tannin, cellulose, and hemicellulose using NIR data. Conclusion: The established PLSR models could enable food, feed, and fuel producers to efficiently evaluate a large number of samples by predicting the required biochemical components in sorghum grains without destruction.
Keywords: FT-NIR, sorghum grains, biochemical composition, food, feed, fuel, PLSR
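The PLSR calibration step can be sketched with scikit-learn; the spectra, wavelength count, and target variable below are synthetic stand-ins for the NIR and wet-chemistry data:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-ins: 100 samples x 700 NIR channels, one biochemical target
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 700))                         # absorbance spectra
y = X[:, 120] * 0.8 + rng.normal(scale=0.1, size=100)   # toy target

pls = PLSRegression(n_components=10)
r2 = cross_val_score(pls, X, y, cv=5, scoring="r2")
print(f"Cross-validated R^2: {r2.mean():.2f} +/- {r2.std():.2f}")
```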
Procedia PDF Downloads 69
4503 Automatic Adult Age Estimation Using Deep Learning of the ResNeXt Model Based on CT Reconstruction Images of the Costal Cartilage
Authors: Ting Lu, Ya-Ru Diao, Fei Fan, Ye Xue, Lei Shi, Xian-e Tang, Meng-jun Zhan, Zhen-hua Deng
Abstract:
Accurate adult age estimation (AAE) is a significant and challenging task in the forensic and archeological fields. Attempts have been made to explore optimal adult age metrics, and the rib is considered a potential age marker. The traditional way is to extract age-related features designed by experts from macroscopic or radiological images, followed by classification or regression analysis. Those results still have not met the high-level requirements for practice, and the limitation of using feature design and manual extraction methods is the loss of information, since the features are likely not designed explicitly for extracting information relevant to age. Deep learning (DL) has recently garnered much interest in image learning and computer vision. It enables learning features that are important without a prior bias or hypothesis and could be supportive of AAE. This study aimed to develop DL models for AAE based on CT images and compare their performance to the manual visual scoring method. Chest CT data were reconstructed using volume rendering (VR). Retrospective data of 2500 patients aged 20.00-69.99 years were obtained between December 2019 and September 2021. Five-fold cross-validation was performed, and the datasets were randomly split into training and validation sets in a 4:1 ratio for each fold. Before feeding the inputs into the networks, all images were augmented with random rotation and vertical flip, normalized, and resized to 224×224 pixels. ResNeXt was chosen as the DL baseline due to its advantages of higher efficiency and accuracy in image classification. Mean absolute error (MAE) was the primary parameter. Independent data from 100 patients acquired between March and April 2022 were used as a test set. The manual method completely followed the prior study, which reported the lowest MAEs (5.31 in males and 6.72 in females) among similar studies. CT data and VR images were used. The radiation density of the first costal cartilage was recorded using CT data on the workstation. The osseous and calcified projections of the first to seventh costal cartilages were scored based on VR images using an eight-stage staging technique. According to the results of the prior study, the optimal models were the decision tree regression model in males and the stepwise multiple linear regression equation in females. Predicted ages of the test set were calculated separately using different models by sex. A total of 2600 patients (training and validation sets, mean age = 45.19 years ± 14.20 [SD]; test set, mean age = 46.57 ± 9.66) were evaluated in this study. In ResNeXt model training, MAEs of 3.95 in males and 3.65 in females were obtained. On the test set, DL achieved MAEs of 4.05 in males and 4.54 in females, which were far better than the MAEs of 8.90 and 6.42, respectively, for the manual method. Those results showed that the DL of the ResNeXt model outperformed the manual method in AAE based on CT reconstruction of the costal cartilage, and the developed system may be a supportive tool for AAE.
Keywords: forensic anthropology, age determination by the skeleton, costal cartilage, CT, deep learning
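A minimal sketch of adapting ResNeXt from classification to single-output age regression and scoring it with MAE; the weights, augmentation, and training loop are omitted, and the values below are placeholders:

```python
import torch
import torch.nn as nn
from torchvision import models

# Replace the classification head with a single regression output
backbone = models.resnext50_32x4d(weights=None)
backbone.fc = nn.Linear(backbone.fc.in_features, 1)   # age as one scalar

criterion = nn.L1Loss()                               # optimises MAE directly
images = torch.randn(4, 3, 224, 224)                  # stand-in VR images
ages = torch.tensor([[34.0], [52.5], [61.2], [45.8]])

pred = backbone(images)
loss = criterion(pred, ages)
print(f"batch MAE: {loss.item():.2f} years")
```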
Procedia PDF Downloads 73
4502 Studies on Non-Isothermal Crystallization Kinetics of PP/SEBS-g-MA Blends
Authors: Rishi Sharma, S. N. Maiti
Abstract:
The non-isothermal crystallization kinetics of PP/SEBS-g-MA blends with 0-50% copolymer concentration were studied by differential scanning calorimetry at four different cooling rates. Crystallization parameters were analyzed by the Avrami and Jeziorny models. Primary and secondary crystallization processes were described by the Avrami equation. The Avrami model showed that all types of shapes grow from small dimensions during primary crystallization. However, three-dimensional crystal growth was observed during the secondary crystallization process. The crystallization peak and onset temperature decrease, however
Keywords: crystallization kinetics, non-isothermal, polypropylene, SEBS-g-MA
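For context, the two models named above take the following standard forms (textbook forms, not equations reproduced from the truncated abstract): the Avrami equation relates the relative crystallinity X(t) to a rate constant Z_t and exponent n, and Jeziorny corrects the rate constant for a constant cooling rate Phi:

```latex
1 - X(t) = \exp\!\left(-Z_t\, t^{\,n}\right),
\qquad
\log Z_c = \frac{\log Z_t}{\Phi}
```

The exponent n reflects the nucleation and growth geometry, which is how the abstract infers growth dimensionality for the primary and secondary crystallization stages.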
Procedia PDF Downloads 622
4501 Micro-Droplet Formation in a Microchannel under the Effect of an Electric Field: Experiment
Authors: Sercan Altundemir, Pinar Eribol, A. Kerem Uguz
Abstract:
Microfluidic systems allow many large-scale laboratory applications to be miniaturized on a single device in order to reduce cost and advance fluid control. Moreover, such systems enable the generation and control of droplets, which have a significant role in improved analysis for many chemical and biological applications. For example, they can be employed as models for cells in microfluidic systems. In this work, the interfacial instability of two immiscible Newtonian liquids flowing in a microchannel is investigated. When two immiscible liquids are in the laminar regime, a flat interface is formed between them. If a direct current electric field is applied, the interface may deform, i.e., it may become unstable, rupture, and form micro-droplets. First, the effects of the thickness ratio, total flow rate, and viscosity ratio of the silicone oil and ethylene glycol liquid couple on the critical voltage, at which the interface starts to destabilize, are investigated. Then the droplet sizes are measured under the effect of these parameters at various voltages. Moreover, the effect of the total flow rate on the time elapsed for the interface to be ruptured and form droplets by hitting the wall of the channel is analyzed. It is observed that an increase in the viscosity or the thickness ratio of the silicone oil to the ethylene glycol has a stabilizing effect, i.e., a higher voltage is needed, while the total flow rate has no effect on it. However, it is observed that an increase in the total flow rate results in a shortening of the time elapsed for the interface to hit the wall. Moreover, the droplet size decreases down to 0.1 μL with an increase in the applied voltage, the viscosity ratio, or the total flow rate, or with a decrease in the thickness ratio. In addition to these observations, two empirical models are established: one for determining the critical electric number, i.e., the dimensionless voltage, and one for the droplet size, together with a third model, a combination of both, for determining the droplet size at the critical voltage.
Keywords: droplet formation, electrohydrodynamics, microfluidics, two-phase flow
Procedia PDF Downloads 176
4500 Machine Learning in Agriculture: A Brief Review
Authors: Aishi Kundu, Elhan Raza
Abstract:
"Necessity is the mother of invention" - Rapid increase in the global human population has directed the agricultural domain toward machine learning. The basic need of human beings is considered to be food which can be satisfied through farming. Farming is one of the major revenue generators for the Indian economy. Agriculture is not only considered a source of employment but also fulfils humans’ basic needs. So, agriculture is considered to be the source of employment and a pillar of the economy in developing countries like India. This paper provides a brief review of the progress made in implementing Machine Learning in the agricultural sector. Accurate predictions are necessary at the right time to boost production and to aid the timely and systematic distribution of agricultural commodities to make their availability in the market faster and more effective. This paper includes a thorough analysis of various machine learning algorithms applied in different aspects of agriculture (crop management, soil management, water management, yield tracking, livestock management, etc.).Due to climate changes, crop production is affected. Machine learning can analyse the changing patterns and come up with a suitable approach to minimize loss and maximize yield. Machine Learning algorithms/ models (regression, support vector machines, bayesian models, artificial neural networks, decision trees, etc.) are used in smart agriculture to analyze and predict specific outcomes which can be vital in increasing the productivity of the Agricultural Food Industry. It is to demonstrate vividly agricultural works under machine learning to sensor data. Machine Learning is the ongoing technology benefitting farmers to improve gains in agriculture and minimize losses. This paper discusses how the irrigation and farming management systems evolve in real-time efficiently. Artificial Intelligence (AI) enabled programs to emerge with rich apprehension for the support of farmers with an immense examination of data.Keywords: machine Learning, artificial intelligence, crop management, precision farming, smart farming, pre-harvesting, harvesting, post-harvesting
Procedia PDF Downloads 1054499 Intergenerational Trauma: Patterns of Child Abuse and Neglect Across Two Generations in a Barbados Cohort
Authors: Rebecca S. Hock, Cyralene P. Bryce, Kevin Williams, Arielle G. Rabinowitz, Janina R. Galler
Abstract:
Background: Findings have been mixed regarding whether offspring of parents who were abused or neglected as children have a greater risk of experiencing abuse or neglect themselves. In addition, many studies on this topic are restricted to physical abuse and take place in a limited number of countries, representing a small segment of the world's population. Methods: We examined relationships between the childhood maltreatment histories of a subset (N=68) of the original longitudinal birth cohort (G1) of the Barbados Nutrition Study and those of their now-adult offspring (G2) (N=111), using the Childhood Trauma Questionnaire-Short Form (CTQ-SF). We used Pearson correlations to assess relationships between parent and offspring CTQ-SF total and subscale scores (physical, emotional, and sexual abuse; physical and emotional neglect). Next, we ran multiple regression analyses, using the parental CTQ-SF total score and the parental Sexual Abuse score as primary predictors separately in our models of G2 CTQ-SF (total and subscale scores). Results: G1 total CTQ-SF scores were correlated with G2 offspring Emotional Neglect and total scores. G1 Sexual Abuse history was significantly correlated with G2 Emotional Abuse, Sexual Abuse, Emotional Neglect, and Total Score. In fully adjusted regression models, parental (G1) total CTQ-SF scores remained significantly associated with G2 offspring reports of Emotional Neglect, and parental (G1) Sexual Abuse was associated with offspring (G2) reports of Emotional Abuse, Physical Abuse, Emotional Neglect, and overall CTQ-SF scores. Conclusions: Our findings support a link between parental exposure to childhood maltreatment and their offspring's self-reported exposure to childhood maltreatment. Of note, there was not an exact correspondence between the subcategories of maltreatment experienced from one generation to the next. Compared with other subcategories, G1 Sexual Abuse history was the most likely to predict G2 offspring maltreatment. Further studies are needed to delineate underlying mechanisms and to develop intervention strategies aimed at preventing intergenerational transmission.Keywords: trauma, family, adolescents, intergenerational trauma, child abuse, child neglect, global mental health, North America
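For readers unfamiliar with the analysis pipeline, the following minimal Python sketch reproduces its shape: a Pearson correlation between parent and offspring scale scores, followed by an adjusted multiple regression. The data are synthetic and the covariate (offspring age) is an assumed example; the study's actual variables and models are not reproduced here.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import pearsonr

# Synthetic stand-in for parent (G1) and offspring (G2) CTQ-SF scores.
rng = np.random.default_rng(1)
n = 111
g1_total = rng.normal(45, 12, n)
g2_emot_neglect = 0.3 * g1_total + rng.normal(10, 4, n)
age = rng.uniform(18, 40, n)  # hypothetical covariate

df = pd.DataFrame({"g1_total": g1_total,
                   "g2_emot_neglect": g2_emot_neglect,
                   "age": age})

# Step 1: unadjusted Pearson correlation between generations.
r, p = pearsonr(df["g1_total"], df["g2_emot_neglect"])
print(f"Pearson r = {r:.2f}, p = {p:.3g}")

# Step 2: adjusted multiple regression (covariate assumed for illustration).
fit = smf.ols("g2_emot_neglect ~ g1_total + age", data=df).fit()
print(fit.summary().tables[1])
```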
Procedia PDF Downloads 844498 Principal Component Analysis Combined Machine Learning Techniques on Pharmaceutical Samples by Laser Induced Breakdown Spectroscopy
Authors: Kemal Efe Eseller, Göktuğ Yazici
Abstract:
Laser-induced breakdown spectroscopy (LIBS) is a rapid optical atomic emission spectroscopy technique used for material identification and analysis, with the advantages of in-situ analysis, elimination of intensive sample preparation, and micro-destructive testing. LIBS delivers short laser pulses onto the material to create a plasma by exciting the material beyond a certain threshold. The plasma characteristics, which consist of wavelength values and intensity amplitudes, depend on the material and the experimental environment. In the present work, spectrum profiles of medicine samples were obtained via LIBS. The datasets include two different concentrations of each of two paracetamol-based medicines, Aferin and Parafon. The spectra were preprocessed by filling outliers based on quartiles, smoothing to eliminate noise, and normalizing both the wavelength and intensity axes. Statistical information was obtained, and principal component analysis (PCA) was applied to both the preprocessed and raw datasets. Machine learning models were built with two different train-test splits: 70% training / 30% test and 80% training / 20% test. Cross-validation was used to protect the models against overfitting, since the sample count is small. The machine learning results on the preprocessed and raw datasets were compared for both splits. This is the first time that all of the supervised machine learning classification algorithms considered (decision trees, discriminant analysis, naïve Bayes, support vector machines (SVM), k-NN (k-nearest neighbor), ensemble learning, and neural networks) have been applied to LIBS data of paracetamol-based pharmaceutical samples at different concentrations, on both preprocessed and raw datasets, in order to observe the effect of preprocessing.Keywords: machine learning, laser-induced breakdown spectroscopy, medicines, principal component analysis, preprocessing
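The preprocessing-plus-PCA-plus-classifier workflow can be sketched as a scikit-learn pipeline, as below. The synthetic "spectra", component count, and classifier settings are illustrative assumptions, not the study's configuration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for LIBS spectra: n samples x m wavelength channels,
# two classes for two medicines. Not the actual measured spectra.
rng = np.random.default_rng(7)
n, m = 60, 500
X = rng.normal(0, 1, (n, m))
y = rng.integers(0, 2, n)
X[y == 1, 100:110] += 2.0  # a toy emission-line difference between classes

# Scaling, then PCA for dimensionality reduction, then a classifier.
# Cross-validation guards against overfitting with a small sample count.
for name, clf in [("svm", SVC()),
                  ("knn", KNeighborsClassifier(n_neighbors=3))]:
    pipe = make_pipeline(StandardScaler(), PCA(n_components=10), clf)
    scores = cross_val_score(pipe, X, y, cv=5)
    print(f"{name}: accuracy = {scores.mean():.2f} +/- {scores.std():.2f}")
```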
Procedia PDF Downloads 874497 Analysing Time Series for a Forecasting Model to the Dynamics of Aedes Aegypti Population Size
Authors: Flavia Cordeiro, Fabio Silva, Alvaro Eiras, Jose Luiz Acebal
Abstract:
Aedes aegypti is present in the tropical and subtropical regions of the world and is a vector of several diseases such as dengue fever, yellow fever, chikungunya, and Zika. The growth in the number of arbovirus cases in recent decades has become a matter of great concern worldwide. Meteorological factors such as mean temperature and precipitation are known to influence infestation by the species through effects on physiology and ecology, altering the fecundity, mortality, lifespan, dispersal behaviour, and abundance of the vector. Models that describe the dynamics of the vector population size should therefore take meteorological variables into account. The relationship between meteorological factors and the population dynamics of Ae. aegypti adult females is studied here to provide a good set of predictors for modelling the dynamics of the mosquito population size. Time-series data on captures of adult females from a public health surveillance program in the city of Lavras, MG, Brazil were analysed for their association with precipitation, humidity, and temperature using statistical methods for time-series analysis commonly adopted in signal processing, information theory, and neuroscience. Cross-correlation, a multicollinearity test, and whitened cross-correlation were applied to determine at which time lags the meteorological variables influence the dynamics of mosquito abundance. Among the findings, the studied case indicated strong collinearity between humidity and precipitation, and precipitation was selected to form a pair of descriptors together with temperature. Significant associations were observed between infestation indicators and both temperature and precipitation over short, mid, and long terms, evincing that those variables should be considered in entomological models and as public health indicators. A descriptive model used to test the results exhibits a strong correlation with the data.Keywords: Aedes aegypti, cross-correlation, multicollinearity, meteorological variables
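A minimal sketch of two of the named techniques, lagged cross-correlation and a multicollinearity check via variance inflation factors, is given below on synthetic weekly series. All series and parameters are assumptions for illustration; whitened cross-correlation is omitted for brevity.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools import add_constant

# Synthetic weekly series standing in for trap captures and weather;
# the real surveillance data from Lavras are not reproduced here.
rng = np.random.default_rng(3)
n = 156
precip = rng.gamma(2.0, 10.0, n)
humidity = 60 + 0.5 * precip + rng.normal(0, 3, n)   # collinear with precipitation
temp = 22 + 5 * np.sin(2 * np.pi * np.arange(n) / 52) + rng.normal(0, 1, n)
captures = np.roll(0.8 * precip + 2.0 * temp, 4) + rng.normal(0, 5, n)  # lagged response

# Lagged Pearson cross-correlation between a driver and the captures.
def cross_corr(x, y, max_lag=12):
    return {lag: np.corrcoef(x[:-lag or None], y[lag:])[0, 1]
            for lag in range(max_lag + 1)}

cc = cross_corr(precip, captures)
best = max(cc, key=lambda k: abs(cc[k]))
print(f"strongest association at lag {best} weeks (r = {cc[best]:.2f})")

# Variance inflation factors flag the precipitation/humidity collinearity.
Xc = add_constant(pd.DataFrame({"precip": precip, "humidity": humidity,
                                "temp": temp}))
for i, col in enumerate(Xc.columns):
    if col != "const":
        print(col, "VIF =", round(variance_inflation_factor(Xc.values, i), 1))
```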
Procedia PDF Downloads 1804496 Investigations on the Application of Avalanche Simulations: A Survey Conducted among Avalanche Experts
Authors: Korbinian Schmidtner, Rudolf Sailer, Perry Bartelt, Wolfgang Fellin, Jan-Thomas Fischer, Matthias Granig
Abstract:
This study focuses on the evaluation of snow avalanche simulations, based on a survey carried out among avalanche experts. In recent decades, the application of avalanche simulation tools has gained recognition within the realm of hazard management. Traditionally, avalanche runout models were used to predict extreme avalanche runout and to prepare avalanche maps. This has changed rather dramatically with the application of numerical models. For safety applications such as road safety, simulation tools are now being coupled with real-time meteorological measurements to predict frequent avalanche hazard. That places new demands on model accuracy and requires the simulation of physical processes that previously could be ignored. These simulation tools are based on a deterministic description of the avalanche movement, allowing certain quantities of the avalanche flow (e.g., pressure, velocities, flow heights, runout lengths) to be predicted. Because of the highly variable regimes of flowing snow, no uniform rheological law describing the motion of an avalanche is known; therefore, analogies are drawn to the fluid-dynamical laws of other materials. To transfer these constitutive laws to snow flows, certain assumptions and adjustments have to be imposed. Beyond these limitations, there are high uncertainties regarding the initial and boundary conditions. Further challenges arise when implementing the underlying flow-model equations in an algorithm executable by a computer: the implementation is constrained by the choice of adequate numerical methods and their computational feasibility, so model development is compelled to introduce further simplifications and related uncertainties. In light of these issues, many questions arise about avalanche simulations: their assets and drawbacks, potentials for improvement, and their application in practice. To address these questions, a survey was conducted among experts in the field of avalanche science (e.g., researchers, practitioners, engineers) from various countries. In the questionnaire, special attention is given to the experts' opinions regarding the influence of certain variables on the simulation result, their uncertainty, and the reliability of the results. Furthermore, it was tested to which degree a simulation result influences decision making in a hazard assessment. A discrepancy was found between the large uncertainty of the simulation input parameters and the comparatively high reliability attributed to the results. This contradiction can be explained by considering how the experts employ the simulations: the credibility of the simulations results from a rather thorough simulation study in which different assumptions are tested and the results of different flow models are compared, along with the use of supplemental data such as chronicles, field observations, and silent witnesses, which are regarded as essential for the hazard assessment and for sanctioning simulation results. As the importance of avalanche simulations within hazard management grows, along with their further development, studies focusing on the manner of modelling could contribute to a better understanding of how knowledge of the avalanche process can be gained by running simulations.Keywords: expert interview, hazard management, modeling, simulation, snow avalanche
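To make the deterministic character of such tools concrete, the following is a textbook-style sketch of a one-dimensional point-mass avalanche model with Voellmy friction (a dry-friction term plus a velocity-squared drag term). It is not a reimplementation of any operational simulation tool discussed by the experts, and all parameter values are assumed for illustration.

```python
import numpy as np

# Minimal 1-D point-mass avalanche model with Voellmy friction:
# a = g*(sin(theta) - mu*cos(theta)) - g*v^2/(xi*h).
# Textbook sketch only; parameters are assumed illustrative values.
g = 9.81
mu, xi, hgt = 0.2, 1500.0, 1.5   # dry friction, turbulent friction, flow height

def slope_angle(x):
    """Two-segment track: steep release zone, then a gentle runout zone."""
    return np.radians(30.0) if x < 500.0 else np.radians(5.0)

dt, t, v, x = 0.05, 0.0, 1.0, 0.0
while v > 0 and t < 600:
    th = slope_angle(x)
    a = g * (np.sin(th) - mu * np.cos(th)) - g * v**2 / (xi * hgt)
    v = max(v + a * dt, 0.0)
    x += v * dt
    t += dt

print(f"runout distance ~ {x:.0f} m, stop time ~ {t:.0f} s")
```

Even this toy model shows why input uncertainty matters: small changes in the assumed friction coefficients mu and xi shift the predicted runout considerably.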
Procedia PDF Downloads 3264495 Prioritization Assessment of Housing Development Risk Factors: A Fuzzy Hierarchical Process-Based Approach
Authors: Yusuf Garba Baba
Abstract:
The construction industry and its housing subsector are fraught with risks that can negatively impact the achievement of project objectives. The success or otherwise of most construction projects depends to a large extent on how well these risks are managed. The subsector's recent paradigm shift from rules of thumb to a formal risk management approach means that risks must not only be identified but also properly assessed and responded to in a systematic manner. This study focused on identifying the risks associated with housing development projects and on a prioritization assessment of the identified risks in order to provide a basis for informed decisions. The study used a three-step identification framework: a review of the literature on similar projects, expert consultation, and a questionnaire-based survey to identify potential risk factors. The Delphi survey method was employed to carry out the relative prioritization assessment of the risk factors using computer-based Analytic Hierarchy Process (AHP) software. The results show that 19 out of the 50 risks significantly impact housing development projects. The study concludes that although a significant number of risk factors have been identified as relevant to housing construction projects, the economic risk group, and in particular 'changes in demand for houses', is prioritised by most developers as posing a threat to the achievement of their housing development objectives. Unless these risks are carefully managed, their effects will continue to impede success in these projects. The study recommends the adoption of the combined multi-technique identification framework and AHP prioritization methodology as a suitable model for assessing risks in housing development projects.Keywords: risk management, risk identification, risk analysis, analytic hierarchical process
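For concreteness, a minimal sketch of the AHP priority computation is shown below: a pairwise comparison matrix is reduced to priority weights via its principal eigenvector, and a consistency ratio is computed. The 3x3 matrix of hypothetical risk groups is illustrative only; it is not the study's 50-risk survey data.

```python
import numpy as np

# Pairwise comparison matrix over hypothetical risk groups
# (economic, political, technical) on Saaty's 1-9 scale; illustrative only.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Priority weights from the principal eigenvector, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency check: CI = (lambda_max - n) / (n - 1), CR = CI / RI,
# with RI taken from Saaty's random-index table.
n = A.shape[0]
lam_max = eigvals[k].real
ci = (lam_max - n) / (n - 1)
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]
print("weights:", np.round(w, 3), " CR =", round(ci / ri, 3))
```

A comparison matrix is conventionally accepted when the consistency ratio CR is below 0.1; otherwise the pairwise judgements are revisited.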
Procedia PDF Downloads 118