Search results for: neural network models
2292 Assessment of Landfill Pollution Load on Hydroecosystem by Use of Heavy Metal Bioaccumulation Data in Fish
Authors: Gintarė Sauliutė, Gintaras Svecevičius
Abstract:
Landfill leachates contain a number of persistent pollutants, including heavy metals. These have the ability to spread in ecosystems and accumulate in fish, most of which are classified as top consumers of trophic chains. Fish are free-swimming organisms, but due to their species-specific ecological and behavioral properties, they often prefer the most suitable biotopes and therefore do not necessarily avoid harmful substances or environments. That is why it is necessary to evaluate the dispersion of persistent pollutants in a hydroecosystem using fish tissue metal concentrations. In hydroecosystems of hybrid type (e.g., river-pond-river), the distance from the pollution source can be a useful indicator of such metal distribution. The studies were carried out in the hybrid-type ecosystem neighboring the Kairiai landfill, located 5 km east of Šiauliai City. Fish tissue (gills, liver, and muscle) metal concentration measurements were performed on two ecologically different types of fish according to their feeding characteristics: benthophagous (Gibel carp, roach) and predatory (Northern pike, perch). A number of mathematical models (linear, non-linear, using log and other transformations) were applied in order to identify the most satisfactory description of the interdependence between fish tissue metal concentration and the distance from the pollution source. However, only the log-multiple regression model revealed the pattern that the distance from the pollution source is closely and positively correlated with metal concentration in all predatory fish tissues studied (gills, liver, and muscle).
Keywords: bioaccumulation in fish, heavy metals, hydroecosystem, landfill leachate, mathematical model
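A minimal sketch of the kind of log-regression fit described above, on synthetic data; the variable names and values are illustrative assumptions, not the study's dataset:

```python
# Sketch: log-log regression of tissue metal concentration on distance
# from the pollution source. Data values are synthetic; variable names
# (distance_km, concentration) are illustrative only.
import numpy as np
from scipy import stats

distance_km = np.array([0.5, 1.0, 2.0, 3.5, 5.0, 7.0])      # sampling sites
concentration = np.array([0.8, 1.1, 1.9, 2.6, 3.4, 4.9])    # e.g. mg/kg in gills

# Fit log(concentration) = a + b*log(distance)
slope, intercept, r, p, se = stats.linregress(np.log(distance_km),
                                              np.log(concentration))
print(f"b = {slope:.2f}, r = {r:.2f}, p = {p:.4f}")
```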
Procedia PDF Downloads 288
2291 Effect of Climate Change on Groundwater Recharge in a Sub-Humid Sub-Tropical Region of Eastern India
Authors: Suraj Jena, Rabindra Kumar Panda
Abstract:
The study region was in Eastern India, having a sub-humid sub-tropical climate and sandy loam soil. The rainfall in this region has wide temporal and spatial variation. Due to a lack of adequate surface water to meet irrigation and household demands, groundwater is being overexploited in the region, leading to continuous depletion of the groundwater level. Therefore, there is an obvious urgency in reversing the depleting groundwater level through induced recharge, which becomes even more critical under climate change scenarios. The major goal of the study was to investigate the effects of climate change on groundwater recharge and subsequent adaptation strategies. Groundwater recharge was modelled using HELP3, a quasi-two-dimensional, deterministic, water-routing model, along with global climate models (GCMs) and three global warming scenarios, to examine the changes in groundwater recharge rates for a 2030 climate under a variety of soil and vegetation covers. The relationship between the changing mean annual recharge and mean annual rainfall was evaluated for every combination of soil and vegetation using sensitivity analysis. The relationship was found to be statistically significant (p<0.05), with a coefficient of determination of 0.81. Vegetation dynamics and water use, affected by the increase in potential evapotranspiration under the large-climate-variability scenario, led to a significant decrease in recharge, from 49–658 mm to 18–179 mm. Therefore, appropriate conjunctive use, irrigation scheduling and enhanced recharge practices under the climate variability and land use/land cover change scenarios impacting groundwater recharge need to be properly understood for groundwater sustainability.
Keywords: groundwater recharge, climate variability, land use/cover, GCM
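A worked illustration of the recharge-rainfall sensitivity relationship reported above (coefficient of determination 0.81); the data pairs below are synthetic placeholders showing only how R² is computed for such a fit:

```python
# Sketch: compute R^2 for a linear fit of mean annual recharge vs
# mean annual rainfall. The data pairs are synthetic, not the study's.
import numpy as np

rainfall = np.array([900., 1000., 1100., 1200., 1300., 1400.])  # mm/year
recharge = np.array([60.,   95.,  140.,  170.,  220.,  250.])   # mm/year

b, a = np.polyfit(rainfall, recharge, 1)   # slope, intercept
pred = a + b * rainfall
ss_res = np.sum((recharge - pred) ** 2)
ss_tot = np.sum((recharge - recharge.mean()) ** 2)
print(f"R^2 = {1 - ss_res / ss_tot:.2f}")
```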
Procedia PDF Downloads 284
2290 The Metabolite Profiling of Fulvestrant-3 Boronic Acid under Biological Oxidation
Authors: Changde Zhang, Qiang Zhang, Shilong Zheng, Jiawang Liu, Shanchun Guo, Qiu Zhong, Guangdi Wang
Abstract:
Fulvestrant was approved by the FDA to treat breast cancer as a selective estrogen receptor downregulator (SERD) administered by intramuscular injection. ZB716, a fulvestrant-3 boronic acid, is a SERD with an anticancer effect comparable to fulvestrant, but it produces good pharmacokinetic properties under oral administration in mouse and rat models. To understand why ZB716 produced much better oral bioavailability, it was proposed that the boronic acid blocks the phase II direct biotransformation at the hydroxyl group on the 3-position of the aromatic ring of fulvestrant. In this study, ZB716 or fulvestrant was incubated with human liver microsomes and the oxidation cofactor NADPH in vitro. Their metabolites after oxidation were profiled with the Q-Exactive, a high-resolution mass spectrometer. The results showed that ZB716 blocked the formation of hydroxyl groups on its benzene ring, except for the oxidation of the C–B bond forming fulvestrant among its metabolites, and that the concentration of fulvestrant bearing one additional hydroxyl group found in the metabolites from the incubation with fulvestrant was about 34-fold as high as that formed from the incubation with ZB716. Compared to fulvestrant, ZB716 is expected to be much more difficult to further biotransform into more hydrophilic compounds and to excrete from the blood system, and thus to have a longer residence time in blood, which can lead to higher oral bioavailability. This study provides evidence to explain the high bioavailability of ZB716 after oral administration from the perspective of its resistance to oxidation, a phase I biotransformation, at positions on its aromatic ring.
Keywords: biotransformation, fulvestrant, metabolite profiling, ZB716
Procedia PDF Downloads 261
2289 The Effect of Socio-Affective Variables in the Relationship between Organizational Trust and Employee Turnover Intention
Authors: Paula A. Cruise, Carvell McLeary
Abstract:
Employee turnover leads to lowered productivity, decreased morale and work quality, and psychological effects associated with employee separation and replacement. Yet, it remains unknown why talented employees willingly withdraw from organizations. This uncertainty is worsened as studies: a) prioritize organizational over individual predictors, resulting in restriction of range in turnover measurement; b) focus on actual rather than intended turnover, thereby limiting conceptual understanding of the turnover construct and its relationship with other variables; and c) produce inconsistent findings across cultures, contexts and industries despite a clear need for a unified perspective. The current study addressed these gaps by adopting the theory of planned behavior (TPB) framework to examine socio-cognitive factors in organizational trust and individual turnover intentions among bankers and energy employees in Jamaica. In a comparative study of n=369 [n_bank=264, male=57 (22.73%); n_energy=105, male=45 (42.86%)], it was hypothesized that organizational trust was a predictor of employee turnover intention and that the effect of individual, group, cognitive and socio-affective variables varied across industry. Findings from structural equation modelling confirmed the hypothesis, with a model including both cognitive and socio-affective variables being a better fit [CMIN (χ²) = 800.067, df = 364, p < .001; CFI = 0.950; RMSEA = 0.057 with 90% C.I. (0.052–0.062); PCLOSE = 0.016; PNFI = 0.818] in predicting turnover intention. The findings are discussed in relation to socio-cognitive components of trust models and predicting negative employee behaviors across cultures and industries.
Keywords: context-specific organizational trust, cross-cultural psychology, theory of planned behavior, employee turnover intention
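The reported fit statistics can be screened against commonly cited cutoffs; the thresholds in this small helper are conventional rules of thumb, not values taken from the study:

```python
# Sketch: screen structural-equation-model fit indices against common
# rule-of-thumb cutoffs (chi2/df <= 3, CFI >= 0.95, RMSEA <= 0.06).
def sem_fit_checks(chi2, df, cfi, rmsea):
    return {
        "chi2/df <= 3": chi2 / df <= 3,
        "CFI >= 0.95": cfi >= 0.95,
        "RMSEA <= 0.06": rmsea <= 0.06,
    }

# Values reported in the abstract above
for name, ok in sem_fit_checks(800.067, 364, 0.950, 0.057).items():
    print(name, "->", "pass" if ok else "fail")
```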
Procedia PDF Downloads 250
2288 ExactData Smart Tool For Marketing Analysis
Authors: Aleksandra Jonas, Aleksandra Gronowska, Maciej Ścigacz, Szymon Jadczak
Abstract:
ExactData is a smart tool that helps with meaningful marketing content creation. It helps marketers by analyzing the text of an advertisement before and after its publication on social media sites like Facebook or Instagram. In our research, we focus on four areas of natural language processing (NLP): grammar correction, sentiment analysis, irony detection and advertisement interpretation. Our research has identified a considerable lack of NLP tools for the Polish language which specifically aid online marketers. In light of this, our research team has set out to create a robust and versatile NLP tool for the Polish language. The primary objective of our research is to develop a tool that can perform a range of language processing tasks in this language, such as sentiment analysis, text classification, text correction and text interpretation. Our team has been working diligently to create a tool that is accurate, reliable, and adaptable to the specific linguistic features of Polish, and that can provide valuable insights for a wide range of marketers' needs. In addition to the Polish language version, we are also developing an English version of the tool, which will enable us to expand the reach and impact of our research to a wider audience. Another area of focus in our research involves tackling the challenge of the limited availability of linguistically diverse corpora for non-English languages, which presents a significant barrier to the development of NLP applications. One approach we have been pursuing is the translation of existing English corpora, which would enable us to use the wealth of linguistic resources available in English for other languages. Furthermore, we are looking into other methods, such as gathering language samples from social media platforms. By analyzing the language used in social media posts, we can collect a wide range of data that reflects the unique linguistic characteristics of specific regions and communities, which can then be used to enhance the accuracy and performance of NLP algorithms for non-English languages. In doing so, we hope to broaden the scope and capabilities of NLP applications. Our research focuses on several key NLP techniques, including sentiment analysis, text classification, text interpretation and text correction. To ensure the best possible performance for these techniques, we are evaluating and comparing different approaches and strategies for implementing them. We are exploring a range of different methods, including transformers and convolutional neural networks (CNNs), to determine which ones are most effective for different types of NLP tasks. By analyzing the strengths and weaknesses of each approach, we can identify the most effective techniques for specific use cases and further enhance the performance of our tool. Our research aims to create a tool which can provide a comprehensive analysis of advertising effectiveness, allowing marketers to identify areas for improvement and optimize their advertising strategies. The results of this study suggest that a smart tool for advertisement analysis can provide valuable insights for businesses seeking to create effective advertising campaigns.
Keywords: NLP, AI, IT, language, marketing, analysis
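A minimal sketch of the sentiment-analysis building block named above, using the Hugging Face transformers pipeline; the pipeline's default (English) model is used here as a stand-in, since the tool's actual models are not specified in the abstract, and a Polish-specific model would be substituted for the Polish version:

```python
# Sketch: sentiment analysis of ad copy with a transformer pipeline.
# Uses the pipeline's default model; a Polish model would be swapped in
# for the Polish version of the tool.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")
ads = [
    "Our new sneakers make every run feel effortless!",
    "Hurry, this boring offer ends soon.",
]
for ad, result in zip(ads, sentiment(ads)):
    print(f"{result['label']} ({result['score']:.2f}): {ad}")
```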
Procedia PDF Downloads 88
2287 Effect of Atrial Flutter on Alcoholic Cardiomyopathy
Authors: Ibrahim Ahmed, Richard Amoateng, Akhil Jain, Mohamed Ahmed
Abstract:
Alcoholic cardiomyopathy (ACM) is a type of acquired cardiomyopathy caused by chronic alcohol consumption. ACM is frequently associated with arrhythmias such as atrial flutter. Our aim was to characterize the patient demographics and investigate the effect of atrial flutter (AF) on ACM. This was a retrospective cohort study using the Nationwide Inpatient Sample database to identify admissions in adults with principal and secondary diagnoses of alcoholic cardiomyopathy and atrial flutter from 2019. Multivariate linear and logistic regression models were adjusted for age, gender, race, household income, insurance status, Elixhauser comorbidity score, hospital location, bed size, and teaching status. The primary outcome was all-cause mortality, and secondary outcomes were length of stay (LOS) and total charge in USD. There was a total of 21,855 admissions with alcoholic cardiomyopathy, of which 1,635 had atrial flutter (AF-ACM). Compared to the non-AF-ACM cohort, the AF-ACM cohort had fewer females (4.89% vs 14.54%, p<0.001), was older (58.66 vs 56.13 years, p<0.001), had fewer Native Americans (0.61% vs 2.67%, p<0.01), fewer small (19.27% vs 22.45%, p<0.01) and medium-sized hospitals (23.24% vs 28.98%, p<0.01) but more large-sized hospitals (57.49% vs 48.57%, p<0.01), more Medicare-insured (40.37% vs 34.08%, p<0.05) and fewer Medicaid-insured patients (23.55% vs 33.70%, p<0.001), less hypertension (10.7% vs 15.01%, p<0.05), and more obesity (24.77% vs 16.35%, p<0.001). Compared to the non-AF-ACM cohort, there was no difference in the AF-ACM cohort mortality rate (6.13% vs 4.20%, p=0.0998), unadjusted mortality OR 1.49 (95% CI 0.92–2.40, p=0.102), or adjusted mortality OR 1.36 (95% CI 0.83–2.24, p=0.221), but there was a difference in LOS of 1.23 days (95% CI 0.34–2.13, p<0.01) and in total charge of $28,860.30 (95% CI 11,883.96–45,836.60, p<0.01). In patients admitted with ACM, the presence of AF was not associated with a higher all-cause mortality rate or odds of all-cause mortality; however, it was associated with a 1.23-day increase in LOS and a $28,860.30 increase in total hospitalization charge. Native American race, older age and obesity were risk factors for the presence of AF in ACM.
Keywords: alcoholic cardiomyopathy, atrial flutter, cardiomyopathy, arrhythmia
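A sketch of the kind of adjusted logistic model used above, on a simulated data frame; the column names are illustrative and the covariate list is abbreviated relative to the study's full adjustment set:

```python
# Sketch: adjusted odds ratio for mortality in AF-ACM vs non-AF-ACM
# admissions. The data frame is simulated; the real study used the
# Nationwide Inpatient Sample with a larger covariate set.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "died": rng.binomial(1, 0.05, n),
    "af": rng.binomial(1, 0.08, n),
    "age": rng.normal(57, 10, n),
    "female": rng.binomial(1, 0.14, n),
})
fit = smf.logit("died ~ af + age + female", data=df).fit(disp=0)
or_af = np.exp(fit.params["af"])
ci = np.exp(fit.conf_int().loc["af"])
print(f"adjusted OR (AF) = {or_af:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")
```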
Procedia PDF Downloads 113
2286 Roundabout Implementation Analyses Based on Traffic Microsimulation Model
Authors: Sanja Šurdonja, Aleksandra Deluka-Tibljaš, Mirna Klobučar, Irena Ištoka Otković
Abstract:
Roundabouts are a common choice in the case of reconstruction of an intersection, whether to improve the capacity of the intersection or traffic safety, especially in urban conditions. Regulations for the design of roundabouts are often related to driving culture, the tradition of using this type of intersection, etc. Individual values in the regulation are usually recommended within a wide range (this is the case in Croatian regulation), and the final design of a roundabout largely depends on the designer's experience and choice of design elements. Therefore, before-after analyses are a good way to monitor the performance of roundabouts and possibly improve the recommendations of the regulation. This paper presents a comprehensive before-after analysis of a roundabout on the country road network near Rijeka, Croatia. The analysis is based on a thorough collection of traffic data (operating speeds and traffic load) and design element data, both before and after the reconstruction into a roundabout. At the chosen location, the roundabout solution aimed to improve capacity and traffic safety; the paper therefore analyzed the collected data to see whether the roundabout achieved the expected effect. A traffic microsimulation model (VISSIM) of the roundabout was created based on the collected data, and the influence of increased traffic load and different traffic structures, as well as of the selected design elements, on the capacity of the roundabout was analyzed. Also, through the analysis of operating speeds and of potential conflicts by application of the Surrogate Safety Assessment Model (SSAM), the traffic safety effect of the roundabout was analyzed. The results of this research show the practical value of before-after analysis as an indicator of roundabout effectiveness at a specific location. The application of a microsimulation model provides a practical method for analyzing intersection functionality from a capacity and safety perspective under present and changed traffic and design conditions.
Keywords: before-after analysis, operating speed, capacity, design
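One simple way to formalize the before-after comparison of operating speeds is a two-sample test; the speed samples below are synthetic placeholders, not the measured data:

```python
# Sketch: test whether operating speeds dropped after the roundabout
# was built. Speeds (km/h) are synthetic placeholders.
from scipy import stats

speeds_before = [62, 58, 65, 70, 61, 67, 63, 59]
speeds_after = [41, 45, 38, 44, 40, 43, 39, 42]

t, p = stats.ttest_ind(speeds_before, speeds_after, equal_var=False)
print(f"Welch t = {t:.2f}, p = {p:.4f}")
```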
Procedia PDF Downloads 25
2285 LTE Performance Analysis in the City of Bogota Northern Zone for Two Different Mobile Broadband Operators over Qualipoc
Authors: Víctor D. Rodríguez, Edith P. Estupiñán, Juan C. Martínez
Abstract:
The evolution of mobile broadband technologies has allowed users' download rates to increase, considering the current services. The evaluation of technical parameters at the link level is of vital importance to validate the quality and veracity of the connection, thus avoiding large losses of data, time and productivity. Some of these failures may occur between the eNodeB (Evolved Node B) and the user equipment (UE), so the link between the end device and the base station can be observed. LTE (Long Term Evolution) is considered one of the IP-oriented mobile broadband technologies that work stably for data and, for devices that support it, VoIP (Voice over IP). This research presents a technical analysis of the connection and channeling processes between the UE and the eNodeB with the TAC (Tracking Area Code) variables, and an analysis of performance variables (throughput, Signal to Interference and Noise Ratio (SINR)). Three measurement scenarios were proposed in the city of Bogotá using QualiPoc, where two operators were evaluated (Operator 1 and Operator 2). Once the data were obtained, an analysis of the variables was performed, determining that the data obtained in the transmission modes vary depending on the parameters BLER (Block Error Rate), throughput and SNR (Signal-to-Noise Ratio). For both operators, differences in transmission modes were detected, and this is reflected in the quality of the signal. In addition, because the two operators work at different frequencies, it can be seen that Operator 1, despite having spectrum in Band 7 (2600 MHz), is, together with Operator 2, reassigning users to a lower band, AWS (1700 MHz); the difference in signal quality with respect to the data connection established by Operator 2, and the difference found in the transmission modes determined by the eNodeB for Operator 1, are remarkable.
Keywords: BLER, LTE, network, QualiPoc, SNR
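The dependence of throughput on SINR noted above can be illustrated with the Shannon capacity bound; the bandwidth and SINR values below are assumptions for illustration only:

```python
# Sketch: theoretical throughput ceiling from measured SINR using the
# Shannon bound C = B * log2(1 + SINR). Values are illustrative.
import math

bandwidth_hz = 20e6            # 20 MHz LTE channel (assumed)
for sinr_db in (0, 10, 20, 30):
    sinr = 10 ** (sinr_db / 10)
    capacity_mbps = bandwidth_hz * math.log2(1 + sinr) / 1e6
    print(f"SINR {sinr_db:>2} dB -> {capacity_mbps:6.1f} Mbit/s")
```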
Procedia PDF Downloads 116
2284 An Image Processing Scheme for Skin Fungal Disease Identification
Authors: A. A. M. A. S. S. Perera, L. A. Ranasinghe, T. K. H. Nimeshika, D. M. Dhanushka Dissanayake, Namalie Walgampaya
Abstract:
Nowadays, skin fungal diseases are mostly found in people of tropical countries like Sri Lanka. A skin fungal disease is a particular kind of illness caused by fungus. These diseases have various dangerous effects on the skin and keep on spreading over time. It is therefore important to identify these diseases at their initial stage to control their spread. This paper presents an automated skin fungal disease identification system implemented to speed up the diagnosis process by identifying skin fungal infections in digital images. An image of the diseased skin lesion is acquired, and a comprehensive computer vision and image processing scheme is used to process the image for the disease identification. This includes colour analysis using RGB and HSV colour models; texture classification using the Grey Level Run Length Matrix, Grey Level Co-Occurrence Matrix and Local Binary Pattern; object detection; shape identification; and more. This paper presents the approach and its outcome for the identification of four of the most common skin fungal infections, namely Tinea Corporis, Sporotrichosis, Malassezia and Onychomycosis. The main intention of this research is to provide an automated skin fungal disease identification system that increases diagnostic quality, shortens the time-to-diagnosis and improves the efficiency of detection and successful treatment of skin fungal diseases.
Keywords: circularity index, Grey Level Run Length Matrix, Grey Level Co-Occurrence Matrix, Local Binary Pattern, object detection, ring detection, shape identification
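A minimal sketch of one of the texture descriptors named above, the Grey Level Co-Occurrence Matrix, using scikit-image; a random array stands in for a real lesion patch:

```python
# Sketch: GLCM texture features for a grayscale lesion patch.
# A random array stands in for the real image.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

patch = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
glcm = graycomatrix(patch, distances=[1], angles=[0], levels=256,
                    symmetric=True, normed=True)
for prop in ("contrast", "homogeneity", "energy", "correlation"):
    print(prop, float(graycoprops(glcm, prop)[0, 0]))
```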
Procedia PDF Downloads 233
2283 Flexible PVC Based Nanocomposites With the Incorporation of Electric and Magnetic Nanofillers for the Shielding Against EMI and Thermal Imaging Signals
Authors: H. M. Fayzan Shakir, Khadija Zubair, Tingkai Zhao
Abstract:
Electromagnetic (EM) waves are used widely nowadays. Cell phone signals, Wi-Fi signals, wireless telecommunications, etc., all use EM waves, which then create EM pollution. EM pollution can have serious effects on both human health and nearby electronic devices. EM waves have electric and magnetic components that disturb the flow of charged particles in both the human nervous system and electronic devices. The shielding of both humans and electronic devices is a prime concern today. EM waves can cause headaches, anxiety, suicide and depression, nausea, fatigue and loss of libido in humans, and malfunctioning in electronic devices. Polyaniline (PANI) and polypyrrole (PPY) were successfully synthesized by chemical polymerization using ammonium persulfate and DBSNa as oxidants, respectively. Barium ferrites (BaFe) were also prepared using the co-precipitation method and calcined at 1050 °C for 8 h. Nanocomposite thin films with various combinations and compositions of polyvinyl chloride (PVC), PANI, PPY and BaFe were prepared. X-ray diffraction was first used to confirm the successful fabrication of all nanofillers, a particle size analyzer to measure the exact size, and scanning electron microscopy (SEM) for the shape. According to electromagnetic interference theory, electrical conductivity is the prime property required for electromagnetic interference shielding; the 4-probe technique was therefore used to evaluate the DC conductivity of all samples. Samples with a high concentration of PPY and PANI exhibit remarkably increased electrical conductivity due to the formation of an interconnected network structure inside the PVC matrix, which is also confirmed by SEM analysis. Less than 1% transmission was observed over the whole NIR region (700 nm – 2500 nm). Also, an electromagnetic interference shielding effectiveness below -80 dB was observed in the microwave region (0.1 GHz to 20 GHz).
Keywords: nanocomposites, polymers, EMI shielding, thermal imaging
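The two headline figures above are related by the usual shielding-effectiveness definition; a worked check under that sign convention:

```python
# Sketch: relate transmittance to shielding effectiveness,
# SE(dB) = 10*log10(P_transmitted / P_incident).
import math

def se_db(transmittance):
    return 10 * math.log10(transmittance)

# <1% transmission corresponds to SE below -20 dB, and an SE of
# -80 dB means only 10^-8 of the incident power passes through.
print(se_db(0.01))          # -20.0
print(10 ** (-80 / 10))     # 1e-08
```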
Procedia PDF Downloads 108
2282 Interaction Between Gut Microorganisms and Endocrine Disruptors - Effects on Hyperglycaemia
Authors: Karthika Durairaj, Buvaneswari G., Gowdham M., Gilles M., Velmurugan G.
Abstract:
Background: Hyperglycaemia is a primary cause of metabolic illness. Recently, researchers have focused on the possibility that chemical exposure could promote metabolic disease. Hyperglycaemia causes a variety of metabolic diseases depending on its etiologic conditions. According to animal and population-based research, individual chemical exposure causes health problems through alteration of endocrine function under microbial influence. We were intrigued by the role of gut microbiota variation in high-fat- and chemically induced hyperglycaemia. Methodology: C57BL/6 mice were subjected to two different treatments to generate etiology-based diabetes models: I – a high-fat (45 kcal%) diet, and II – an endocrine-disrupting chemicals (EDCs) cocktail. The mice were monitored periodically for changes in body weight and fasting glucose. After 120 days of the experiment, blood anthropometry, faecal metagenomics and metabolomics were performed and analyzed statistically using one-way ANOVA and Student's t-test. Results: After 120 days of exposure, we found hyperglycaemic changes in both experimental models. The treatment groups also differed in terms of plasma lipid levels, creatinine, and hepatic markers. Microbial profiles and metabolite levels were significantly different between groups, indicating an influence on glucose metabolism. Gene expression associated with glucose metabolism varied between hosts and their treatments. Conclusion: This research will result in the identification of biomarkers and molecular targets for better diabetes control and treatment.
Keywords: hyperglycaemia, endocrine-disrupting chemicals, gut microbiota, host metabolism
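A sketch of the one-way ANOVA comparison named in the methodology, on synthetic fasting-glucose values for the three groups:

```python
# Sketch: one-way ANOVA across control, high-fat-diet and EDC groups.
# Glucose values (mg/dL) are synthetic placeholders.
from scipy import stats

control = [95, 102, 98, 101, 97]
high_fat = [145, 152, 160, 149, 155]
edc = [130, 138, 127, 141, 133]

f, p = stats.f_oneway(control, high_fat, edc)
print(f"F = {f:.1f}, p = {p:.2e}")
```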
Procedia PDF Downloads 47
2281 Evaluation of Surface Roughness Condition Using App Roadroid
Authors: Diego de Almeida Pereira
Abstract:
The roughness index of a road is considered the most important parameter of pavement quality, as it is closely related to the comfort and safety of road users. This condition can be established by means of a functional evaluation of pavement surface deviations, measured by the International Roughness Index (IRI), an index that came out of the international evaluation of pavements coordinated by the World Bank and whose current limit value for the acceptance of roads in Brazil is 2.7 m/km. This work makes use of the e.IRI parameter, obtained by the Roadroid app for smartphones running the Android operating system. This application was chosen for its practical user interaction, its own cloud data storage, and the support given to universities all around the world. Data were collected for six months, once each month. The studies began in March 2018, during the rainy season, which worsens the condition of the roads and also offered the opportunity to follow the damage and the quality of the interventions performed. About 350 kilometers of sections of four federal highways were analyzed (BR-020, BR-040, BR-060 and BR-070), which connect the Federal District (where Brasília is located) and its surroundings and were chosen for their economic and tourist importance; two of them are under federal administration and two under private concession. Like much of the road network, the analyzed stretches are surfaced with Hot Mix Asphalt (HMA). Thus, the present research contrasts the comfort and safety conditions of the roads under private concession, on which users pay a fee to the concessionaires to travel on a road that meets minimum usage requirements, with the quality of service offered on the roads under Federal Government jurisdiction. Finally, data collected by the National Department of Transport Infrastructure (DNIT) by means of a laser profilometer are contrasted with data obtained by Roadroid, checking the app's applicability, practicality and cost-effectiveness, considering its limitations.
Keywords: Roadroid, International Roughness Index, Brazilian roads, pavement
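The acceptance rule described above reduces to a threshold comparison; a sketch with invented segment values:

```python
# Sketch: flag road segments whose estimated roughness exceeds the
# Brazilian acceptance limit of 2.7 m/km. e.IRI values are invented.
IRI_LIMIT = 2.7  # m/km

segments = {"BR-020 km 10-20": 2.1, "BR-040 km 35-45": 3.4,
            "BR-060 km 05-15": 2.6, "BR-070 km 50-60": 4.0}
for name, e_iri in segments.items():
    status = "OK" if e_iri <= IRI_LIMIT else "above limit"
    print(f"{name}: e.IRI = {e_iri} m/km ({status})")
```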
Procedia PDF Downloads 87
2280 Advancements in Laser Welding Process: A Comprehensive Model for Predictive Geometrical, Metallurgical, and Mechanical Characteristics
Authors: Seyedeh Fatemeh Nabavi, Hamid Dalir, Anooshiravan Farshidianfar
Abstract:
Laser welding is pivotal in modern manufacturing, offering unmatched precision, speed, and efficiency. Its versatility in minimizing heat-affected zones, seamlessly joining dissimilar materials, and working with various metals makes it indispensable for crafting intricate automotive components. Integration into automated systems ensures consistent delivery of high-quality welds, thereby enhancing overall production efficiency. Noteworthy are the safety benefits of laser welding, including reduced fumes and consumable materials, which align with industry standards and environmental sustainability goals. As the automotive sector increasingly demands advanced materials and stringent safety and quality standards, laser welding emerges as a cornerstone technology. A comprehensive model encompassing thermal-dynamic and characteristic sub-models accurately predicts the geometrical, metallurgical, and mechanical aspects of the laser beam welding process. Notably, Model 2 shows exceptional accuracy, achieving remarkably low error rates in predicting primary and secondary dendrite arm spacing (PDAS and SDAS). These findings underscore the model's reliability and effectiveness, providing invaluable insights and predictive capabilities crucial for optimizing welding processes and ensuring superior productivity, efficiency, and quality in the automotive industry.
Keywords: laser welding process, geometrical characteristics, mechanical characteristics, metallurgical characteristics, comprehensive model, thermal dynamic
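The "low error rates" claim for PDAS and SDAS predictions can be made concrete with a percentage-error metric; the values below are placeholders, not the paper's data:

```python
# Sketch: mean absolute percentage error (MAPE) for predicted vs
# measured dendrite arm spacings. Numbers are illustrative only.
def mape(measured, predicted):
    return 100 * sum(abs(m - p) / m
                     for m, p in zip(measured, predicted)) / len(measured)

pdas_measured = [4.2, 3.8, 4.5, 4.0]    # micrometres (assumed)
pdas_predicted = [4.1, 3.9, 4.4, 4.1]
print(f"PDAS MAPE = {mape(pdas_measured, pdas_predicted):.1f}%")
```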
Procedia PDF Downloads 51
2279 The Effect of Fibre Orientation on the Mechanical Behaviour of Skeletal Muscle: A Finite Element Study
Authors: Christobel Gondwe, Yongtao Lu, Claudia Mazzà, Xinshan Li
Abstract:
Skeletal muscle plays an important role in the human body system and function by generating voluntary forces and facilitating body motion. However, the mechanical properties and behaviour of skeletal muscle are still not comprehensively known. As such, various robust engineering techniques have been applied to better elucidate the mechanical behaviour of skeletal muscle. Muscle mechanics are considered to be highly governed by the architecture of the fibre orientations. Therefore, the aim of this study was to investigate the effect of different fibre orientations on the mechanical behaviour of skeletal muscle. In this study, a continuum mechanics approach, finite element (FE) analysis, was applied to the left biceps femoris long head to determine the contractile mechanism of the muscle using Hill's three-element model. The geometry of the muscle was segmented from magnetic resonance images. The muscle was modelled as a quasi-incompressible hyperelastic (Mooney-Rivlin) material. Two types of fibre orientation were implemented: one with an idealised fibre arrangement, i.e. parallel single-direction fibres going from the muscle origin to the insertion sites, and the other with a curved fibre arrangement aligned with the muscle shape. The second fibre arrangement was implemented through the finite element method with the non-uniform rational B-spline (FEM-NURBS) technique by means of user material (UMAT) subroutines. The stress-strain behaviour of the muscle was investigated under idealised exercise conditions and will be further analysed under physiological conditions. The results of the two different FE models have been output and qualitatively compared.
Keywords: FEM-NURBS, finite element analysis, Mooney-Rivlin hyperelastic, muscle architecture
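For reference, the strain-energy function of the Mooney-Rivlin material named above takes the standard two-parameter form with a volumetric penalty; muscle-specific parameter values are not given in the abstract:

```latex
% Two-parameter Mooney-Rivlin strain-energy density. I1bar, I2bar are
% the first two invariants of the deviatoric right Cauchy-Green tensor,
% J = det F, and C10, C01, D1 are material constants (values for muscle
% are not reported in the abstract).
W = C_{10}\,(\bar{I}_1 - 3) + C_{01}\,(\bar{I}_2 - 3) + \frac{1}{D_1}(J - 1)^2
```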
Procedia PDF Downloads 481
2278 DLtrace: Toward Understanding and Testing Deep Learning Information Flow in Deep Learning-Based Android Apps
Authors: Jie Zhang, Qianyu Guo, Tieyi Zhang, Zhiyong Feng, Xiaohong Li
Abstract:
With the widespread popularity of mobile devices and the development of artificial intelligence (AI), deep learning (DL) has been extensively applied in Android apps. Compared with traditional Android apps (traditional apps), deep-learning-based Android apps (DL-based apps) need to use more third-party application programming interfaces (APIs) to complete complex DL inference tasks. However, existing methods (e.g., FlowDroid) for detecting sensitive information leakage in Android apps cannot be directly used on DL-based apps, as they have difficulty detecting third-party APIs. To solve this problem, we design DLtrace, a new static information flow analysis tool that can effectively recognize third-party APIs. With our proposed trace and detection algorithms, DLtrace can also efficiently detect privacy leaks caused by sensitive APIs in DL-based apps. Moreover, using DLtrace, we summarize the non-sequential characteristics of DL inference tasks in DL-based apps and the specific functionalities provided by DL models for such apps. We propose two formal definitions to deal with the common polymorphism and anonymous inner-class problems in the Android static analyzer. We conducted an empirical assessment with DLtrace on 208 popular DL-based apps in the wild and found that 26.0% of the apps suffered from sensitive information leakage. Furthermore, DLtrace has more robust performance than FlowDroid in detecting and identifying third-party APIs. The experimental results demonstrate that DLtrace extends FlowDroid in understanding DL-based apps and detecting security issues therein.
Keywords: mobile computing, deep learning apps, sensitive information, static analysis
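The core of an information-flow trace like the one described above can be pictured as reachability from sensitive sources to sinks over a call graph; this toy sketch is our illustration of the general idea, not DLtrace's actual algorithm, and the API names are invented:

```python
# Sketch: naive forward taint propagation over a call graph, from
# sensitive-source APIs to sink APIs. A toy illustration only; the
# call graph and API names are invented.
from collections import deque

call_graph = {                      # caller -> callees
    "getDeviceId": ["buildFeature"],
    "buildFeature": ["dlModel.predict"],
    "dlModel.predict": ["uploadResult"],
    "uploadResult": [],
}
sources, sinks = {"getDeviceId"}, {"uploadResult"}

tainted, queue = set(sources), deque(sources)
while queue:
    node = queue.popleft()
    for callee in call_graph.get(node, []):
        if callee not in tainted:
            tainted.add(callee)
            queue.append(callee)

print("leak detected:", bool(tainted & sinks))
```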
Procedia PDF Downloads 181
2277 The Potential Role of Some Nutrients and Drugs in Providing Protection from Neurotoxicity Induced by Aluminium in Rats
Authors: Azza A. Ali, Abeer I. Abd El-Fattah, Shaimaa S. Hussein, Hanan A. Abd El-Samea, Karema Abu-Elfotuh
Abstract:
Background: Aluminium (Al) represents an environmental risk factor. Exposure to high levels of Al causes neurotoxic effects and different diseases. Vinpocetine is widely used to improve cognitive functions; it possesses memory-protective and memory-enhancing properties and has the ability to increase cerebral blood flow and glucose uptake. Cocoa bean represents a rich source of iron as well as a potent antioxidant. It can protect from the impact of free radicals, reduces stress as well as depression, and promotes better memory and concentration. Wheatgrass is primarily used as a concentrated source of nutrients. It contains vitamins, minerals, carbohydrates and amino acids and possesses antioxidant and anti-inflammatory activities. Coenzyme Q10 (CoQ10) is an intracellular antioxidant and mitochondrial membrane stabilizer. It is effective in improving cognitive disorders and has been used as an anti-aging agent. Zinc is a structural element of many proteins and a signaling messenger released by neural activity at many central excitatory synapses. Objective: To study the role of some nutrients and drugs, such as Vinpocetine, Cocoa, Wheatgrass, CoQ10 and Zinc, against neurotoxicity induced by Al in rats, and to compare their potency in providing protection. Methods: Seven groups of rats were used; the Al-toxicity model groups received AlCl3 (70 mg/kg, IP) daily for three weeks, while the control group received saline. All Al-toxicity model groups except one (non-treated) were co-administered orally, together with AlCl3, one of the following treatments: Vinpocetine (20 mg/kg), Cocoa powder (24 mg/kg), Wheatgrass (100 mg/kg), CoQ10 (200 mg/kg) or Zinc (32 mg/kg). Biochemical changes in the rat brain, such as acetylcholinesterase (AChE), Aβ, brain-derived neurotrophic factor (BDNF), inflammatory mediators (TNF-α, IL-1β) and oxidative parameters (MDA, SOD, TAC), were estimated for all groups, besides histopathological examinations in different brain regions. Results: Neurotoxicity and neurodegeneration in the rat brain after three weeks of Al exposure were indicated by the significant increase in Aβ, AChE, MDA, TNF-α, IL-1β and DNA fragmentation together with the significant decrease in SOD, TAC and BDNF, and confirmed by the histopathological changes in the brain. On the other hand, co-administration of each of Vinpocetine, Cocoa, Wheatgrass, CoQ10 or Zinc together with AlCl3 provided protection against the hazards of neurotoxicity and neurodegeneration induced by Al; their protection was indicated by the decrease in Aβ, AChE, MDA, TNF-α, IL-1β and DNA fragmentation together with the increase in SOD, TAC and BDNF, and confirmed by the histopathological examinations of different brain regions. Vinpocetine and Cocoa showed the most pronounced protection, while Zinc provided the least protective effects of the nutrients and drugs used. Conclusion: Different degrees of protection from neurotoxicity and neuronal degeneration induced by Al could be achieved through the co-administration of some nutrients and drugs during Al exposure. Vinpocetine and Cocoa provided more protection than Wheatgrass, CoQ10 or Zinc, with Zinc showing the least protective effects.
Keywords: aluminum, neurotoxicity, vinpocetine, cocoa, wheatgrass, coenzyme Q10, zinc, rats
Procedia PDF Downloads 251
2276 On the Added Value of Probabilistic Forecasts Applied to the Optimal Scheduling of a PV Power Plant with Batteries in French Guiana
Authors: Rafael Alvarenga, Hubert Herbaux, Laurent Linguet
Abstract:
The uncertainty concerning the power production of intermittent renewable energy is one of the main barriers to the integration of such assets into the power grid. Efforts have thus been made to develop methods to quantify this uncertainty, allowing producers to make more reliable and profitable commitments related to their future power delivery. Even though a diversity of probabilistic approaches has been proposed in the literature with promising results, the added value of adopting such methods for scheduling intermittent power plants is still unclear. In this study, the profits obtained by a decision-making model used to optimally schedule an existing PV power plant connected to batteries are compared when the model is fed with deterministic and probabilistic forecasts generated with two of the most recent methods proposed in the literature. Moreover, deterministic forecasts with different accuracy levels were used in the experiments, testing the utility and capability of probabilistic methods in modeling progressively increasing uncertainty. Even though probabilistic approaches are unquestionably well developed in the recent literature, the results obtained through a case study show that deterministic forecasts still provide the best performance if accurate, ensuring a gain of 14% in final profits compared to the average performance of probabilistic models conditioned on the same forecasts. When the accuracy of deterministic forecasts progressively decreases, probabilistic approaches start to become competitive options until they completely outperform deterministic forecasts when these are very inaccurate, generating 73% more profits in the case considered compared to the deterministic approach.
Keywords: PV power forecasting, uncertainty quantification, optimal scheduling, power systems
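The deterministic-versus-probabilistic comparison can be illustrated by scoring a day-ahead commitment against realized production; the prices, penalty, and forecast values below are invented for the sketch and do not reproduce the study's decision model:

```python
# Sketch: profit of a day-ahead energy commitment under a deterministic
# point forecast vs the mean of probabilistic scenarios. All numbers
# (price, penalty, forecasts) are invented for illustration.
import numpy as np

price, penalty = 100.0, 60.0          # EUR/MWh sold, EUR/MWh shortfall
realized = 8.0                        # MWh actually produced

def profit(committed, produced):
    shortfall = max(committed - produced, 0.0)
    return price * min(committed, produced) - penalty * shortfall

point_forecast = 9.5                              # deterministic commit
scenarios = np.array([6.5, 7.8, 8.2, 9.0, 10.1])  # probabilistic commit
print("deterministic:", profit(point_forecast, realized))
print("probabilistic:", profit(scenarios.mean(), realized))
```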
Procedia PDF Downloads 88
2275 A Critical Reflection of Ableist Methodologies: Approaching Interviews and Go-Along Interviews
Authors: Hana Porkertová, Pavel Doboš
Abstract:
Based on a research project studying the experience of visually disabled people with urban space in the Czech Republic, this conference contribution discusses the limits of social-science methodologies used in sociology and human geography. It draws on actor-network theory, assuming that science does not describe reality but produces it. Methodology connects theory, research questions, ways to answer them (methods), and results. A research design utilizing ableist methodologies can produce ableist realities. Therefore, it was necessary to adjust the methods so that they could mediate blind experience to the scientific community without reproducing ableism. The researchers faced multiple challenges, ranging from questionable validity to the question of how to research experience that differs from that of the able-bodied researchers. Finding a suitable theory that could be used as an analytical tool to demonstrate space and blind experience as multiple, dynamic, and mutually constructed was the first step; it could offer a range of potentially productive methods and research questions, as well as bring critically reflected results. Poststructural theory, mainly Deleuze-Guattarian philosophy, was chosen, and two methods were used: interviews and go-along interviews, which had to be adjusted to be able to explore blind experience. In spite of thorough preparation of these methods, new difficulties kept emerging, which exposed the ableist character of scientific knowledge. From the beginning of data collection, there was an agreement to work in teams with slightly different roles for each researcher, which was significant especially during the go-along interviews. In some cases, the anticipations of the researchers and participants differed, which led to unexpected and potentially dangerous situations. These were not caused only by the differences between scientific and lay communities but also between able-bodied and disabled people. Researchers were sometimes assigned the role of assistants, and this new position – doing research together – required further negotiations, which also opened various ethical questions.
Keywords: ableist methodology, blind experience, go-along interviews, research ethics, scientific knowledge
Procedia PDF Downloads 166
2274 Leadership in the Era of AI: Growing Organizational Intelligence
Authors: Mark Salisbury
Abstract:
The arrival of artificially intelligent avatars and the automation they bring is worrying many of us, not only for our livelihood but for the jobs that may be lost to our kids. We worry about what our place will be as human beings in this new economy, where much of it will be conducted online in the metaverse – in a network of 3D virtual worlds – working with intelligent machines. The Future of Leadership was written to address these fears and show what our place will be – the right place – in this new economy of AI avatars, automation, and 3D virtual worlds. But to be successful in this new economy, our job will be to bring wisdom to our workplace and the marketplace. And we will use AI avatars and 3D virtual worlds to do it. However, this book is about more than AI and the avatars that we will work with in the metaverse. It is about building organizational intelligence (OI) – the capability of an organization to comprehend and create knowledge relevant to its purpose; in other words, the intellectual capacity of the entire organization. Increasing organizational intelligence requires a new kind of knowledge worker, a wisdom worker, which in turn requires a new kind of leadership. This book begins your story of how to become a leader of wisdom workers and be successful in the emerging wisdom economy. After this presentation, conference participants will be able to do the following: recognize the characteristics of the new generation of wisdom workers and how they differ from their predecessors; recognize that new leadership methods and techniques are needed to lead this new generation of wisdom workers; apply personal and professional values (personal integrity, belief in something larger than yourself, and keeping the best interest of others in mind) to improve their work performance and lead others; exhibit an attitude of confidence, courage, and reciprocity of sharing knowledge to increase their productivity and influence others; leverage artificial intelligence to accelerate their ability to learn, augment their decision-making, and influence others; and utilize new technologies to communicate with human colleagues and intelligent machines to develop better solutions more quickly.
Keywords: metaverse, generative artificial intelligence, automation, leadership, organizational intelligence, wisdom worker
Procedia PDF Downloads 46
2273 Non-zero θ_13 and δ_CP phase with A_4 Flavor Symmetry and Deviations to Tri-Bi-Maximal mixing via Z_2 × Z_2 invariant perturbations in the Neutrino sector.
Authors: Gayatri Ghosh
Abstract:
In this work, a flavour theory of a neutrino mass model based on A_4 symmetry is considered to explain the phenomenology of neutrino mixing. The spontaneous breaking of the A_4 symmetry in this model leads to tribimaximal mixing in the neutrino sector at leading order. We consider the effect of Z_2 × Z_2 invariant perturbations in the neutrino sector and find the allowed region of correction terms in the perturbation matrix that is consistent with the 3σ ranges of the experimental values of the mixing angles. We study the implications of this formalism for other phenomenological observables, such as the δ_CP phase, the neutrino oscillation probability P(νµ → νe), the effective Majorana mass |m_ee| and the effective electron neutrino mass |m_νe^eff|. A Z_2 × Z_2 invariant perturbation in this model is introduced in the neutrino sector, which leads to testable predictions of θ_13 and CP violation. By changing the magnitudes of the perturbations in the neutrino sector, one can generate viable values of δ_CP and the neutrino oscillation parameters. Next, we investigate the feasibility of charged lepton flavour violation in type-I seesaw models with leptonic flavour symmetries at high energy that lead to tribimaximal neutrino mixing. We consider an effective theory with an A_4 × Z_2 × Z_2 symmetry which, after spontaneous symmetry breaking at a scale much higher than the electroweak scale, leads to charged lepton flavour violation processes once the heavy Majorana neutrino mass degeneracy is lifted, either by renormalization group effects or by a soft breaking of the A_4 symmetry. In this context, the implications for charged lepton flavour violation processes like µ → eγ, τ → eγ and τ → µγ are discussed.
Keywords: Z_2 × Z_2 invariant perturbations, CLFV, δ_CP phase, tribimaximal neutrino mixing
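For reference, the tribimaximal pattern that the unperturbed A_4 model reproduces, and one standard textbook way a small 1-3 rotation generates a non-zero reactor angle; this parametrization is a common choice for illustration and is not necessarily the paper's perturbation matrix:

```latex
% Tribimaximal mixing matrix and a small 1-3 rotation generating a
% non-zero theta_13; epsilon and delta parametrize the perturbation
% (a standard illustrative choice, not necessarily the paper's).
U_{\mathrm{TBM}} =
\begin{pmatrix}
 \sqrt{2/3} & 1/\sqrt{3} & 0 \\
 -1/\sqrt{6} & 1/\sqrt{3} & -1/\sqrt{2} \\
 -1/\sqrt{6} & 1/\sqrt{3} & 1/\sqrt{2}
\end{pmatrix},
\qquad
U = U_{\mathrm{TBM}}\, R_{13}(\epsilon, \delta)
\;\Rightarrow\;
\sin\theta_{13} = \sqrt{\tfrac{2}{3}}\,\sin\epsilon .
```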
Procedia PDF Downloads 80
2272 Experimental Correlation for Erythrocyte Aggregation Rate in Population Balance Modeling
Authors: Erfan Niazi, Marianne Fenech
Abstract:
Red Blood Cells (RBCs), or erythrocytes, tend to form chain-like aggregates called rouleaux under low shear rates. This is a reversible process, and rouleaux disaggregate at high shear rates. Therefore, RBC aggregation occurs in the microcirculation, where low shear rates are present, but does not occur under normal physiological conditions in large arteries. Numerical modeling of RBC interactions is fundamental to analytical models of blood flow in the microcirculation. Population Balance Modeling (PBM) is particularly useful for studying problems where particles agglomerate and break up in two-phase flow systems. In this method, the elementary particles lose their individual identity due to continuous destruction and recreation by break-up and agglomeration. The aim of this study is to determine the RBC aggregation rate in a dynamic situation. A simplified PBM was used previously to find the aggregation rate from a static observation of RBC aggregation in a drop of blood under the microscope. To find the aggregation rate in a dynamic situation, we propose an experimental setup testing RBC sedimentation. In this test, RBCs interact and aggregate to form rouleaux. In this configuration, disaggregation can be neglected due to the low shear stress. A high-speed camera is used to acquire video-microscopic pictures of the process. The sizes of the aggregates and the sedimentation velocity are extracted using image processing techniques. Based on data collected from 5 healthy human blood samples, the aggregation rate was estimated as 2.7×10³ (±0.3×10³) 1/s.
Keywords: red blood cell, rouleaux, microfluidics, image processing, population balance modeling
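A minimal sketch of a population balance for pure aggregation (disaggregation neglected, as justified above for the low-shear sedimentation test); a constant Smoluchowski kernel and the parameter values are assumptions for illustration only:

```python
# Sketch: Smoluchowski population balance for pure aggregation with a
# constant kernel K (disaggregation neglected, as under low shear).
# Kernel value and initial count are illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

K, N = 1e-3, 20                      # kernel, number of size classes

def pbm(t, n):
    # class k holds aggregates of k+1 cells
    dn = np.zeros_like(n)
    for k in range(N):
        birth = 0.5 * sum(K * n[i] * n[k - 1 - i] for i in range(k))
        death = n[k] * K * n.sum()
        dn[k] = birth - death
    return dn

n0 = np.zeros(N); n0[0] = 100.0      # start with singlets only
sol = solve_ivp(pbm, (0, 50), n0, t_eval=[0, 10, 50])
print(sol.y[:4, -1])                 # counts of 1- to 4-cell rouleaux
```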
Procedia PDF Downloads 356
2271 Photoelastic Analysis and Finite Elements Analysis of a Stress Field Developed in a Double Edge Notched Specimen
Authors: A. Bilek, M. Beldi, T. Cherfi, S. Djebali, S. Larbi
Abstract:
Finite element analysis and photoelasticity are used to determine the stress field developed in a double edge notched specimen loaded in tension. The specimen is cut from a birefringent plate. Experimental isochromatic fringes are obtained with circularly polarized light on the analyzer of a regular polariscope. The fringes represent the loci of points of equal maximum shear stress. In order to obtain the stress values corresponding to the fringe orders recorded in the notched specimen, particularly in the neighborhood of the notches, a calibration disc made of the same material is loaded in compression along its diameter to determine the photoelastic fringe value. This fringe value is also used in the finite element solution in order to obtain the simulated photoelastic fringes, the isochromatics, as well as the isoclinics. A color scale is used by the software to represent the simulated fringes over the whole model. The stress concentration factor can be readily obtained at the notches. Good agreement is obtained between the experimental and simulated fringe patterns and between the graphs of the shear stress, particularly in the neighborhood of the notches. The purpose of this paper is to show that the isochromatic and isoclinic fringe patterns in a stressed model can be obtained rapidly and accurately by finite element analysis, as the experimental procedure can be time-consuming. Stress fields can therefore be analyzed in three-dimensional models as long as the meshing and the boundary conditions are properly set in the program.
Keywords: isochromatic fringe, isoclinic fringe, photoelasticity, stress concentration factor
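For reference, the stress-optic relation that converts fringe orders to shear stress, together with the standard closed-form solution used for the diametrally compressed calibration disc:

```latex
% Stress-optic law: N is the isochromatic fringe order, f_sigma the
% material fringe value, h the model thickness. The second relation is
% the standard solution at the centre of a disc of diameter D under a
% diametral compressive load P, used to calibrate f_sigma.
\tau_{\max} = \frac{\sigma_1 - \sigma_2}{2} = \frac{N f_\sigma}{2h},
\qquad
(\sigma_1 - \sigma_2)_{\mathrm{centre}} = \frac{8P}{\pi D h}
\;\Rightarrow\;
f_\sigma = \frac{8P}{\pi D N}.
```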
Procedia PDF Downloads 230
2270 Neurosciences in Entrepreneurship: The Multitasking Case in Favor of Social Entrepreneurship Innovation
Authors: Berger Aida
Abstract:
Social entrepreneurship has emerged as an active area of practice and research within the last three decades and has called for a focus on social entrepreneurship innovation. Academics, practitioners, institutions and governments have placed social entrepreneurship on the priority list for reflection and action. It has been accepted that social entrepreneurship (SE) shares large similarities with its parent, traditional entrepreneurship (TE). SE research has grown over the past ten years, exploring entrepreneurial cognition and analyzing the ways entrepreneurs think. The research community believes that value exists in grounding entrepreneurship in neuroscience and notes that SE, like traditional entrepreneurship, needs to undergo efforts in clarification, definition and differentiation. Moreover, gaps in SE research call for an integrative, multistage and multilevel framework for further research. The cognitive processes underpinning entrepreneurial action are similar for SE and TE, even if a social entrepreneurship orientation shows an increased empathy value. Theoretically, there is a need to develop sound models of how entrepreneurs process information and work more effectively, and research on efficiency improvement calls for the analysis of the most common practices in entrepreneurship. Multitasking has been recognized as a daily and unavoidable habit of entrepreneurs. Hence, we believe in the need to analyze the multiple-task phenomenon as a methodology for skill acquisition. We will conduct our study considering social entrepreneurship within the wider spectrum of traditional entrepreneurship, for the purpose of simplifying the neuroscientific reading of entrepreneurial cognition. A question to be inquired into is whether there is a way of developing multitasking habits in order to improve entrepreneurial skills such as speed of information processing, creativity and adaptability. Nevertheless, the direct link between the neuroscientific approach to multitasking and entrepreneurial effectiveness is yet to be uncovered. That is why an extensive literature review on multitasking is apropos.
Keywords: cognitive, entrepreneurial, empathy, multitasking
Procedia PDF Downloads 174
2269 Developing a Spatial Transport Model to Determine Optimal Routes When Delivering Unprocessed Milk
Authors: Sunday Nanosi Ndovi, Patrick Albert Chikumba
Abstract:
In Malawi, smallholder dairy farmers transport unprocessed milk to sell at Milk Bulking Groups (MBGs). MBGs store and chill the milk while awaiting collection by processors. The farmers deliver milk using various modes of transportation, such as foot, bicycle, and motorcycle. As a perishable food, milk requires timely transportation to avoid deterioration. In other instances, some farmers bypass the nearest MBGs for facilities located further away. Untimely delivery worsens quality and results in rejection at the MBG. Subsequently, these rejections lead to revenue losses for dairy farmers. Therefore, the objective of this study was to optimize routes when transporting milk by selecting the shortest route, using time as a cost attribute in Geographic Information Systems (GIS). A spatially organized transport system impedes milk deterioration while promoting profitability for dairy farmers. A transportation system was modeled using the Route Analysis and Closest Facility network extensions. The final output was to find the quickest routes and identify the nearest milk facilities from incident locations. Face-to-face interviews targeted leaders from all 48 MBGs in the study area and 50 farmers from Namahoya MBG. During the field interviews, coordinates were captured in order to create maps. Subsequently, the maps supported the selection of optimal routes based on the least travel times. The questionnaire targeted 200 respondents, of whom 182 were available. Findings showed that of the 50 sampled farmers who supplied milk to Namahoya, only 8% were nearest to the facility, while 92% were closest to 9 different MBGs. Delivering milk to the nearest MBGs would minimize travel time and distance by 14.67 hours and 73.37 km, respectively.
Keywords: closest facility, milk, route analysis, spatial transport
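The quickest-route computation behind the Route Analysis step is, at its core, a shortest-path search with travel time as the edge cost; a toy network sketch in which the node names and times are invented:

```python
# Sketch: quickest route from a farm to an MBG using travel time (min)
# as the edge cost. The network, node names and times are invented.
import networkx as nx

g = nx.Graph()
g.add_weighted_edges_from([
    ("farm", "junction_a", 12), ("farm", "junction_b", 20),
    ("junction_a", "mbg_namahoya", 25), ("junction_b", "mbg_namahoya", 10),
    ("junction_a", "mbg_other", 8),
], weight="time")

path = nx.shortest_path(g, "farm", "mbg_namahoya", weight="time")
minutes = nx.shortest_path_length(g, "farm", "mbg_namahoya", weight="time")
print(path, f"({minutes} min)")
```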
Procedia PDF Downloads 59
2268 Inferring the Ecological Quality of Seagrass Beds from Using Composition and Configuration Indices
Authors: Fabrice Houngnandan, Celia Fery, Thomas Bockel, Julie Deter
Abstract:
Making water cleaner and stopping global biodiversity loss requires indices to measure changes and evaluate the achievement of objectives. The endemic and protected seagrass species Posidonia oceanica is a biological indicator used to monitor the ecological quality of Mediterranean marine waters. One ecosystem index (EBQI), two biotic indices (PREI, BiPo), and several landscape indices, which measure the composition and configuration of the P. oceanica seagrass at the population scale, have been developed. While the former are measured at monitoring sites, the landscape indices can be calculated for the entire seabed covered by this ecosystem. The present work investigates the link between these indices and the best scale to be used in order to maximize this link. We used data collected between 2014 and 2019 along the French Mediterranean coastline to calculate the EBQI, PREI, and BiPo at 100 sites. From the P. oceanica seagrass distribution map, configuration and composition indices around these sites were determined for 6 different grid sizes (100 m × 100 m to 1000 m × 1000 m). Correlation analyses were first used to find the grid size presenting the strongest and most significant link between the different types of indices. Finally, several models were compared on the basis of various metrics to identify the one that best explains the nature of the link between these indices. Our results showed a strong and significant link between the biotic indices, and the best correlations between biotic and landscape indices within the 600 m × 600 m grid cells. These results show that landscape indices can be used to monitor the health of seagrass beds at a large scale.
Keywords: ecological indicators, decline, conservation, submerged aquatic vegetation
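Selecting the grid size that maximizes the biotic-landscape correlation reduces to comparing Pearson coefficients across scales; the arrays below are synthetic stand-ins for the site indices:

```python
# Sketch: pick the grid size whose landscape index correlates best
# with the biotic index (PREI). All values are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(1)
prei = rng.uniform(0.3, 0.9, 100)            # biotic index at 100 sites
landscape = {                                 # one landscape index per scale (m)
    100: prei + rng.normal(0, 0.30, 100),
    600: prei + rng.normal(0, 0.05, 100),
    1000: prei + rng.normal(0, 0.15, 100),
}
corr = {size: np.corrcoef(prei, li)[0, 1] for size, li in landscape.items()}
best = max(corr, key=corr.get)
print(corr, "-> best grid:", best, "m")
```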
Procedia PDF Downloads 134
2267 Digital Media Use and Access among Rural Youth in South Africa: The Prospects for Female Empowerment
Authors: Fulufhelo Oscar Makananise
Abstract:
Digital technologies have played a significant role in bridging the information gap between the haves and the have-nots in society. In developing countries such as South Africa, historically marginalised groups such as women in rural communities have an opportunity to use digital technologies to network among themselves as well as interact with their government, thereby enhancing the prospects for poverty eradication, political participation, community development and democracy. However, the extent to which these goals can be achieved in a developing context by harnessing digital technologies is not quite clear, particularly given that access to these technologies is not evenly distributed and that women's access to digital technologies is hampered by factors that go beyond the question of infrastructure. Informed by technological dependency theory, this paper is about how female youth in rural South Africa are deploying digital media tools for socio-economic empowerment. In particular, the study investigated the extent to which female youth in Limpopo Province, South Africa, access and use digital media platforms and gadgets, and the extent to which those technologies are breaking down the barriers that stand in the way of female youth empowerment. Data were gathered using a self-administered questionnaire disseminated to 100 selected female youth in Limpopo Province, South Africa. The data were analysed using SPSS version 9, and the results were interpreted using descriptive statistics. The paper argues that wider and constant access to digital media by female youth in rural areas is indicative of the great potential for empowering them through harnessing digital media. The study established that the majority of female youth had access to digital media technologies and used them to share valuable information among themselves. The study further established that female youth are active users of digital media in South Africa, which is a significant driver of socio-economic empowerment.
Keywords: digital technologies, empowerment, female youth, South Africa, survey, technological dependency
Procedia PDF Downloads 133
2266 Effect of Surface Treatments on the Cohesive Response of Nylon 6/silica Interfaces
Authors: S. Arabnejad, D. W. C. Cheong, H. Chaobin, V. P. W. Shim
Abstract:
Debonding is one of the fundamental damage mechanisms in particle-filled composites. This phenomenon gains further importance in nanocomposites because of the extensive interfacial region present in these materials. An accurate understanding of the debonding mechanism can help in understanding and predicting the response of nanocomposites as the interface deteriorates. The small length scale of the phenomenon makes experimental characterization complicated, and its results may lie far from the real physical behavior. In this study, the damage process at the nylon-6/silica interface is examined through Molecular Dynamics (MD) modeling and simulation. The silica has been modeled with three forms of surface: without any surface treatment, with a 3-aminopropyltriethoxysilane (APTES) surface treatment, and with a hexamethyldisilazane (HMDZ) surface treatment. The APTES surface modification, used to create functional groups on the silica surface, reacts with and forms covalent bonds to nylon 6 chains, while the HMDZ surface treatment interacts with both particle and polymer only through non-bonded interactions. The MD model in this study uses the PCFF force field. The atomic model is generated in a periodic box with a layer of vacuum on top of the polymer layer; this vacuum layer is large enough to ensure that there is no interaction between the particle and the substrate after debonding. Results show that the three models exhibit different traction-separation responses, yet all of them show an almost bilinear traction-separation behavior. The study also reveals a strong correlation between the length of the APTES surface treatment and the cohesive strength of the interface.
Keywords: debonding, surface treatment, cohesive response, separation behaviour
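The bilinear traction-separation behavior mentioned above can be written down compactly. The following sketch encodes such a cohesive law; all numerical parameters are chosen purely for illustration and are not results from the MD simulations.

```python
# Minimal sketch of a bilinear traction-separation (cohesive) law of the kind
# the MD simulations produce: traction rises linearly to a peak strength at
# separation d0, then degrades linearly to zero at complete failure df.
# The numerical values are illustrative assumptions, not values from the paper.
import numpy as np

def bilinear_traction(d, sigma_max, d0, df):
    """Traction (MPa) at separation d (nm) for a bilinear cohesive law."""
    d = np.asarray(d, dtype=float)
    rising = (d >= 0) & (d <= d0)
    softening = (d > d0) & (d <= df)
    t = np.zeros_like(d)                                   # zero beyond failure
    t[rising] = sigma_max * d[rising] / d0                 # elastic branch
    t[softening] = sigma_max * (df - d[softening]) / (df - d0)  # damage branch
    return t

# Illustrative parameters: peak strength 50 MPa, peak at 0.5 nm, failure at 3 nm
sep = np.linspace(0.0, 3.5, 8)
print(bilinear_traction(sep, sigma_max=50.0, d0=0.5, df=3.0))
```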
Procedia PDF Downloads 460
2265 Bracing Applications for Improving the Earthquake Performance of Reinforced Concrete Structures
Authors: Diyar Yousif Ali
Abstract:
Braced frames, alongside other structural systems such as shear walls or moment-resisting frames, have been a valuable and effective technique for strengthening structures against seismic loads. Under wind or seismic excitation, the diagonal members act as truss web elements that carry tension or compression. This study considers the effect of the diagonal bracing configuration on the base shear and displacement of a building. Two models were created, and nonlinear pushover analysis was performed. Results show that bracing members enhance the lateral load performance of the Concentric Braced Frame (CBF) considerably. The purpose of this article is to study the nonlinear response of reinforced concrete structures that contain hollow-pipe steel braces as the major structural elements against earthquake loads. A five-storey reinforced concrete structure was selected, and two different reinforced concrete frames were considered: the first was an un-braced frame, while the second was a frame with diagonal bracing. Analytical models of the bare frame and the braced frame were developed in SAP 2000, and the performance of both structures was evaluated using nonlinear static analyses, from which base shears and displacements were compared. The results, plotted in diagrams and discussed extensively, showed that the braced frame could carry a greater lateral load and exhibited higher stiffness and lower roof displacement than the bare frame.
Keywords: reinforced concrete structures, pushover analysis, base shear, steel bracing
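For intuition on why diagonal braces raise stiffness and reduce roof displacement, a back-of-the-envelope sketch follows: an axial brace of stiffness EA/L contributes (EA/L)·cos²θ to a storey's lateral stiffness, where θ is the brace inclination to the horizontal. The member sizes and the bare-frame stiffness below are illustrative assumptions, not values from the SAP 2000 models.

```python
# Back-of-the-envelope sketch (not from the paper) of a diagonal brace's
# contribution to storey lateral stiffness: k = (E*A/L) * cos^2(theta).
# All numbers are illustrative assumptions.
import math

E = 200e9          # steel Young's modulus, Pa
A = 1.2e-3         # assumed hollow-pipe brace cross-section, m^2
bay, storey = 6.0, 3.0   # assumed bay width and storey height, m

L = math.hypot(bay, storey)          # brace length, m
theta = math.atan2(storey, bay)      # brace inclination to horizontal, rad
k_brace = (E * A / L) * math.cos(theta) ** 2   # lateral stiffness of one brace, N/m

k_frame = 40e6     # assumed bare-frame storey stiffness, N/m
print(f"one diagonal adds {k_brace / 1e6:.1f} MN/m "
      f"({100 * k_brace / k_frame:.0f}% of the bare frame's {k_frame / 1e6:.0f} MN/m)")
```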
Procedia PDF Downloads 90
2264 Hedgerow Detection and Characterization Using Very High Spatial Resolution SAR DATA
Authors: Saeid Gharechelou, Stuart Green, Fiona Cawkwell
Abstract:
Hedgerows play an important role in a wide range of ecological habitats, as well as in landscape and agricultural management, carbon sequestration, and wood production. Accurate hedgerow detection from satellite imagery is a challenging remote sensing problem: spatially, a hedgerow resembles a linear object such as a road, while from a spectral viewpoint a hedge is very similar to a forest. Remote sensors with very high spatial resolution (VHR) have recently enabled automatic hedge detection through the acquisition of images with sufficient spectral and spatial resolution. Indeed, VHR remote sensing data now provide the opportunity to detect hedgerows as line features, but difficulties remain in monitoring their characteristics at the landscape scale. This research uses TerraSAR-X Spotlight and Staring mode data with 3-5 m resolution, acquired in the wet and dry seasons of 2014-2015, to detect hedgerows in the Fermoy test site, Ireland. Dual-polarization (HH/VV) Spotlight data are used for hedgerow detection. Several SAR image-processing approaches are tested in a trial-and-error manner, integrating texture analysis with classification algorithms such as support vector machines, k-means, and random forests, to detect and characterize hedgerows. Shannon entropy (ShE) and single- and double-bounce backscattering analysis are applied in a polarimetric framework to support the object-oriented classification and, finally, to extract the hedgerow network. This work is still in progress, and further methods will be tested to identify the best approach for the study area; the preliminary results presented here indicate that polarimetric TSX imagery can potentially detect hedgerows.
Keywords: TerraSAR-X, hedgerow detection, high resolution SAR image, dual polarization, polarimetric analysis
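As a hedged sketch of the supervised classification step, the snippet below trains a scikit-learn random forest on per-pixel SAR features such as HH/VV backscatter, texture, and Shannon entropy. The feature names and the training file are hypothetical stand-ins, not the study's actual inputs.

```python
# Illustrative random-forest classification of hedgerow vs non-hedgerow pixels
# from tabulated SAR features. File and column names are assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

df = pd.read_csv("tsx_training_samples.csv")  # assumed: one row per labelled pixel
features = ["hh_db", "vv_db", "texture_glcm", "shannon_entropy"]
X, y = df[features], df["is_hedgerow"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```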
Procedia PDF Downloads 233
2263 A Bayesian Parameter Identification Method for Thermorheological Complex Materials
Authors: Michael Anton Kraus, Miriam Schuster, Geralt Siebert, Jens Schneider
Abstract:
Polymers have gained increasing interest as construction materials in civil engineering applications over recent years. Polymeric materials typically show time- and temperature-dependent behavior, which is accounted for within the theory of linear viscoelasticity. In this paper, the authors show that some polymeric interlayers for laminated glass cannot be considered thermorheologically simple, as they do not follow a single time-temperature superposition principle (TTSP); a methodology for identifying thermorheologically complex constitutive behavior is therefore needed. 'Dynamical-Mechanical-Thermal-Analysis' (DMTA) in tensile and shear mode as well as 'Differential Scanning Calorimetry' (DSC) tests are carried out on the interlayer material ethylene-vinyl acetate (EVA). A novel Bayesian framework for the master-curving process as well as for the detection and parameter identification of the TTSPs, along with their associated Prony series, is derived and applied to the EVA material data. To the best of our knowledge, this is the first time an uncertainty quantification of the Prony series has been presented in a Bayesian context. Within this paper, we successfully apply the derived Bayesian methodology to the EVA material data to obtain meaningful master curves and TTSPs; the uncertainties occurring in this process can be well quantified. We found that EVA requires two TTSPs with two associated generalized Maxwell models. As the methodology is kept general, the derived framework could also be applied to other thermorheologically complex polymers for parameter identification purposes.
Keywords: bayesian parameter identification, generalized Maxwell model, linear viscoelasticity, thermorheological complex
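The deterministic building blocks of this identification problem, a Prony-series (generalized Maxwell) relaxation modulus and a time-temperature shift of WLF type, can be sketched as follows. The Bayesian inference layer itself is omitted, the WLF form is one common choice of shift function rather than the paper's stated one, and all material parameters are illustrative assumptions rather than the identified EVA values.

```python
# Prony-series (generalized Maxwell) relaxation modulus and a WLF-type
# time-temperature shift, the pieces a master-curving procedure combines.
# All parameter values below are illustrative assumptions.
import numpy as np

def prony_modulus(t, E_inf, E_i, tau_i):
    """Relaxation modulus E(t) = E_inf + sum_i E_i * exp(-t / tau_i)."""
    t = np.atleast_1d(t)[:, None]               # shape (n_times, 1) for broadcasting
    return E_inf + np.sum(E_i * np.exp(-t / tau_i), axis=1)

def wlf_shift(T, T_ref, C1=17.44, C2=51.6):
    """WLF shift factor log10(a_T); C1, C2 are the 'universal' default constants."""
    return -C1 * (T - T_ref) / (C2 + (T - T_ref))

E_i = np.array([5.0, 2.0, 0.8])     # MPa, assumed Prony amplitudes
tau_i = np.array([0.1, 10.0, 1e3])  # s, assumed relaxation times
t = np.logspace(-2, 4, 5)           # s, evaluation times

a_T = 10 ** wlf_shift(T=40.0, T_ref=20.0)       # shift 40 C data to a 20 C reference
print(prony_modulus(t / a_T, E_inf=0.5, E_i=E_i, tau_i=tau_i))
```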
Procedia PDF Downloads 264