Search results for: adaptive fuzzy PI controller
106 Challenging Weak Central Coherence: An Exploration of Neurological Evidence from Visual Processing and Linguistic Studies in Autism Spectrum Disorder
Authors: Jessica Scher Lisa, Eric Shyman
Abstract:
Autism spectrum disorder (ASD) is a neurodevelopmental disorder characterized by persistent deficits in social communication and social interaction (i.e., deficits in social-emotional reciprocity, nonverbal communicative behaviors, and establishing/maintaining social relationships), as well as by the presence of repetitive behaviors and perseverative areas of interest (i.e., stereotyped or repetitive motor movements, use of objects, or speech; rigidity; restricted interests; and hypo- or hyperreactivity to sensory input or unusual interest in sensory aspects of the environment). Additionally, diagnoses of ASD require the presentation of symptoms in the early developmental period, marked impairments in adaptive functioning, and a lack of explanation by general intellectual impairment or global developmental delay (although these conditions may co-occur). Over the past several decades, many theories have been developed in an effort to explain the root cause of ASD in terms of atypical central cognitive processes. Using neuroimaging technology, the field of neuroscience is increasingly finding structural and functional differences between autistic and neurotypical individuals. One main area of focus in this research is visuospatial processing, with specific attention to the notion of ‘weak central coherence’ (WCC). This paper analyzes findings from selected studies in order to explore research that challenges the ‘deficit’ characterization of weak central coherence theory in favor of a ‘superiority’ characterization of strong local coherence. The weak central coherence theory has long been both supported and refuted in the ASD literature and has most recently been increasingly challenged by advances in neuroscience. The selected studies lend evidence to the notion of amplified localized perception rather than deficient global perception. In other words, WCC may represent superiority in ‘local processing’ rather than a deficit in global processing. Additionally, the right hemisphere, and specifically the extrastriate area, appears to be key in both visual and lexicosemantic processing. Overactivity in the striate region seems to suggest inaccuracy in semantic language, which supports the link between the striate region and the atypical organization of the lexicosemantic system in ASD.
Keywords: autism spectrum disorder, neurology, visual processing, weak coherence
Procedia PDF Downloads 126
105 A Robust Optimization of Chassis Durability/Comfort Compromise Using Chebyshev Polynomial Chaos Expansion Method
Authors: Hanwei Gao, Louis Jezequel, Eric Cabrol, Bernard Vitry
Abstract:
The chassis system is composed of complex elements that take up all the loads from the tire-ground contact area, and it thus plays an important role in numerous specifications such as durability, comfort, and crash. During the development of new vehicle projects at Renault, durability validation is always the main focus, while deployment of comfort comes later in the project. Therefore, design choices sometimes have to be reconsidered because of the natural incompatibility between these two specifications. Besides, robustness is also an important concern, as it relates to manufacturing costs as well as to performance after the ageing of components like shock absorbers. In this paper, an approach is proposed that aims to realize a multi-objective optimization between chassis endurance and comfort while taking random factors into consideration. The adaptive-sparse polynomial chaos expansion (PCE) method with Chebyshev polynomial series has been applied to predict the uncertainty intervals of a system's responses according to its uncertain-but-bounded parameters. The approach can be divided into three steps. First, an initial design of experiments is realized to build the response surfaces, which statistically represent a black-box system. Secondly, over several iterations, an optimum set is proposed and validated, which will form a Pareto front. At the same time, the robustness of each response, serving as an additional objective, is calculated from the pre-defined parameter intervals and the response surfaces obtained in the first step. Finally, an inverse strategy is carried out to determine the parameters' tolerance combination with a maximally acceptable degradation of the responses in terms of manufacturing costs. A quarter-car model has been tested as an example by applying road excitations from actual road measurements for both endurance and comfort calculations. One indicator based on Basquin's law is defined to compare the global chassis durability of different parameter settings. Another indicator, related to comfort, is obtained from the vertical acceleration of the sprung mass. An optimum set with the best robustness has finally been obtained, and the reference tests prove good robustness prediction by the Chebyshev PCE method. This example demonstrates the effectiveness and reliability of the approach, in particular its ability to save computational costs for a complex system.
Keywords: chassis durability, Chebyshev polynomials, multi-objective optimization, polynomial chaos expansion, ride comfort, robust design
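To make the surrogate step concrete, here is a minimal one-dimensional sketch in Python of fitting a Chebyshev expansion to design-of-experiments samples of a black-box response and propagating an uncertain-but-bounded parameter through it to estimate an uncertainty interval. The toy response function, polynomial degree, node count, and tolerance band are all assumptions for illustration; this is not the authors' Renault implementation.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Hypothetical black-box response (e.g., a comfort indicator); a stand-in
# for the quarter-car simulation, which is not reproduced here.
def black_box(x):
    return np.sin(3 * x) + 0.5 * x**2

# Step 1: initial design of experiments on the bounded parameter interval [-1, 1].
x_doe = np.cos(np.pi * (np.arange(16) + 0.5) / 16)   # Chebyshev nodes
y_doe = black_box(x_doe)

# Fit a degree-8 Chebyshev expansion as the response surface.
coeffs = C.chebfit(x_doe, y_doe, deg=8)

# Step 2: propagate an uncertain-but-bounded parameter through the surrogate
# to estimate the response uncertainty interval (robustness indicator).
samples = np.random.uniform(-0.3, 0.3, 10_000)       # assumed tolerance band
responses = C.chebval(samples, coeffs)
print(f"response interval: [{responses.min():.3f}, {responses.max():.3f}]")
```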
Procedia PDF Downloads 151
104 Increment of Panel Flutter Margin Using Adaptive Stiffeners
Authors: S. Raja, K. M. Parammasivam, V. Aghilesh
Abstract:
Fluid-structure interaction is a crucial consideration in the design of many engineering systems such as flight vehicles and bridges. Aircraft lifting surfaces and turbine blades can fail due to oscillations caused by fluid-structure interaction, so the present research focuses on this phenomenon. First, the effect of free vibration of the panel is studied; it is well known that the deformation of a panel and the flow-induced forces affect one another. The selected panel has a span of 300 mm, a chord of 300 mm and a thickness of 2 mm. The effect of stiffener cross-sectional area and stiffener location is then studied for the same panel. The stiffener spacing is varied along both the chordwise and spanwise directions. For the resulting optimal location, the ideal stiffener length is identified. The effect of stiffener cross-section shape (T, I, Hat, Z) on flutter velocity is also investigated. The flutter velocities of the selected panel with two rectangular stiffeners in a cantilever configuration are estimated using the MSC NASTRAN software package. As the flow passes over the panel, deformation takes place, which further changes the flow structure over it. With increasing velocity, the deformation keeps increasing, but the stiffness of the system tries to damp the excitation and maintain equilibrium. Beyond a critical velocity, however, the system damping suddenly becomes ineffective and equilibrium is lost. This is estimated in NASTRAN using the PK method. The first 10 modal frequencies of a simple panel and a stiffened panel are estimated numerically and validated against the open literature. A grid independence study is also carried out, and the modal frequency values remain the same for element lengths below 20 mm. The current investigation concludes that spanwise stiffener placement is more effective than chordwise placement. The maximum flutter velocity achieved for chordwise placement is 204 m/s, while for a spanwise arrangement it is augmented to 963 m/s with stiffeners located at ¼ and ¾ of the chord from the panel edge (50% of the chord from either side of the mid-chord line). The flutter velocity is directly proportional to the stiffener cross-sectional area. A significant increment in flutter velocity, from 218 m/s to 1024 m/s, is observed for stiffener lengths varying from 50% to 60% of the span. A maximum flutter velocity above Mach 3 is achieved. It is also observed that for a stiffened panel, the full effect of the stiffener is achieved only when the stiffener end is clamped. Stiffeners with a Z cross-section increased the flutter velocity from 142 m/s (panel with no stiffener) to 328 m/s, which is 2.3 times that of the simple panel.
Keywords: stiffener placement, stiffener cross-sectional area, stiffener length, stiffener cross-sectional shape
Procedia PDF Downloads 291
103 Management and Genetic Characterization of Local Sheep Breeds for Better Productive and Adaptive Traits
Authors: Sonia Bedhiaf-Romdhani
Abstract:
The sheep (Ovis aries) was domesticated approximately 11,000 years before present (YBP) in the Fertile Crescent from the Asian mouflon (Ovis orientalis). Northern African (NA) sheep date back 7,000 years and represent a remarkable diversity of populations reared under traditional and low-input farming systems (LIFS) over millennia. The majority of small ruminants in developing countries are found in low-input production systems, and the resilience of local communities in rural areas is often linked to the wellbeing of small ruminants. Despite the rich biodiversity of sheep ecotypes, there are four main sheep breeds in the country, with 61.6 and 35.4 percent belonging to the Barbarine (fat-tailed breed) and the Queue Fine de l’Ouest (thin-tailed breed), respectively. Phoenicians introduced the Barbarine sheep from the steppes of Central Asia in the Carthaginian period, 3,000 years ago. The Queue Fine de l’Ouest is a thin-tailed meat breed heavily concentrated in the western and central semi-arid regions. The Noire de Thibar, a composite black-coated breed producing mutton and fine wool, is found in the northern sub-humid region because of its higher nutritional requirements and intolerance of the prevailing harsher conditions, and it has been on the verge of extinction. The D'Man breed, which originated in Morocco, is mainly located in the southern oases of the extremely arid ecosystem. A genetic investigation of Tunisian sheep breeds using a genome-wide scan of approximately 50,000 SNPs was performed. Genetic analysis of the relationships between breeds highlighted the genetic differentiation of the Noire de Thibar breed from the other local breeds, reflecting past events of introgression from the European gene pool. The Queue Fine de l’Ouest breed showed genetic heterogeneity and was close to the Barbarine. The D'Man breed shared considerable gene flow with the thin-tailed Queue Fine de l'Ouest breed. Native small ruminant breeds can be efficiently productive if essential inputs and coherent breeding schemes are implemented and followed. Assessing the status of genetic variability of native sheep breeds could provide important clues for researchers and policy makers to devise better strategies for the conservation and management of genetic resources.
Keywords: sheep, farming systems, diversity, SNPs
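As a rough sketch of the kind of genome-wide analysis the abstract describes, the Python snippet below runs a principal component analysis on a synthetic genotype matrix to expose population structure. The matrix, SNP count, and sample size are fabricated stand-ins; a real pipeline would start from quality-filtered chip data rather than random genotypes.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Synthetic genotype matrix: 40 animals x 5,000 SNPs coded 0/1/2
# (a stand-in for the ~50,000-SNP chip data).
genotypes = rng.integers(0, 3, size=(40, 5000)).astype(float)

# Center each SNP before projection, a common step before breed-structure PCA.
genotypes -= genotypes.mean(axis=0)

pca = PCA(n_components=2)
coords = pca.fit_transform(genotypes)
print("variance explained:", pca.explained_variance_ratio_)
# Plotting coords colored by breed label would reveal clusters such as the
# differentiated Noire de Thibar reported in the abstract.
```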
Procedia PDF Downloads 145
102 Advanced Techniques in Semiconductor Defect Detection: An Overview of Current Technologies and Future Trends
Authors: Zheng Yuxun
Abstract:
This review critically assesses the advancements and prospective developments in defect detection methodologies within the semiconductor industry, an essential domain that significantly affects the operational efficiency and reliability of electronic components. As semiconductor devices continue to decrease in size and increase in complexity, the precision and efficacy of defect detection strategies become increasingly critical. Tracing the evolution from traditional manual inspections to the adoption of advanced technologies employing automated vision systems, artificial intelligence (AI), and machine learning (ML), the paper highlights the significance of precise defect detection in semiconductor manufacturing. It discusses various defect types, such as crystallographic errors, surface anomalies, and chemical impurities, which profoundly influence the functionality and durability of semiconductor devices, underscoring the necessity for their precise identification. The narrative transitions to the technological evolution in defect detection, depicting a shift from rudimentary methods like optical microscopy and basic electronic tests to more sophisticated techniques including electron microscopy, X-ray imaging, and infrared spectroscopy. The incorporation of AI and ML marks a pivotal advancement towards more adaptive, accurate, and expedited defect detection mechanisms. The paper addresses current challenges, particularly the constraints imposed by the diminutive scale of contemporary semiconductor devices, the elevated costs associated with advanced imaging technologies, and the demand for rapid processing that aligns with mass production standards. A critical gap is identified between the capabilities of existing technologies and the industry's requirements, especially concerning scalability and processing velocities. Future research directions are proposed to bridge these gaps, suggesting enhancements in the computational efficiency of AI algorithms, the development of novel materials to improve imaging contrast in defect detection, and the seamless integration of these systems into semiconductor production lines. By offering a synthesis of existing technologies and forecasting upcoming trends, this review aims to foster dialogue on, and development of, more effective defect detection methods, thereby facilitating the production of more dependable and robust semiconductor devices. This thorough analysis not only elucidates the current technological landscape but also paves the way for forthcoming innovations in semiconductor defect detection.
Keywords: semiconductor defect detection, artificial intelligence in semiconductor manufacturing, machine learning applications, technological evolution in defect analysis
Procedia PDF Downloads 49
101 Deep Reinforcement Learning Approach for Trading Automation in The Stock Market
Authors: Taylan Kabbani, Ekrem Duman
Abstract:
The design of adaptive systems that take advantage of financial markets while reducing risk can bring more stagnant wealth into the global market. However, most efforts made to generate successful deals in trading financial assets rely on Supervised Learning (SL), which suffers from various limitations. Deep Reinforcement Learning (DRL) offers to overcome these drawbacks of SL approaches by combining the financial asset price "prediction" step and the portfolio "allocation" step in one unified process, producing fully autonomous systems capable of interacting with their environment to make optimal decisions through trial and error. In this paper, a continuous action space approach is adopted to give the trading agent the ability to gradually adjust the portfolio's positions at each time step (dynamically re-allocating investments), resulting in better agent-environment interaction and faster convergence of the learning process. In addition, the approach supports managing a portfolio with several assets instead of a single one. This work presents a novel DRL model to generate profitable trades in the stock market, effectively overcoming the limitations of supervised learning approaches. We formulate the trading problem, i.e., the agent-environment interaction, as a Partially Observed Markov Decision Process (POMDP) model, considering the constraints imposed by the stock market, such as liquidity and transaction costs. More specifically, we design an environment that simulates the real-world trading process by augmenting the state representation with ten different technical indicators and sentiment analysis of news articles for each stock. We then solve the formulated POMDP problem using the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm, which can learn policies in high-dimensional and continuous action spaces like those typically found in the stock market environment. From the point of view of stock market forecasting and intelligent decision-making mechanisms, this paper demonstrates the superiority of deep reinforcement learning in financial markets over other types of machine learning, such as supervised learning, and proves its credibility and advantages for strategic decision-making.
Keywords: stock market, deep reinforcement learning, MDP, twin delayed deep deterministic policy gradient, sentiment analysis, technical indicators, autonomous agent
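A minimal sketch of the continuous-action trading setup described above, assuming stable-baselines3 for the TD3 implementation and a toy gymnasium environment. The reward shaping, price model, and every parameter are illustrative assumptions, not the authors' system (which additionally folds ten technical indicators and news sentiment into the state).

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import TD3

class ToyPortfolioEnv(gym.Env):
    """Toy multi-asset portfolio environment with a continuous action space."""
    def __init__(self, n_assets=3, horizon=200):
        self.n_assets, self.horizon = n_assets, horizon
        # Observation: last returns per asset (a real system would add
        # technical indicators and news-sentiment features, as in the paper).
        self.observation_space = spaces.Box(-np.inf, np.inf, (n_assets,), np.float32)
        # Action: raw portfolio weights in [-1, 1], normalized in step().
        self.action_space = spaces.Box(-1.0, 1.0, (n_assets,), np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.t = 0
        self.returns = self.np_random.normal(0.0005, 0.01, (self.horizon, self.n_assets))
        return self.returns[0].astype(np.float32), {}

    def step(self, action):
        weights = np.exp(action) / np.exp(action).sum()   # softmax allocation
        reward = float(weights @ self.returns[self.t])    # one-step portfolio return
        reward -= 0.001 * np.abs(weights).sum()           # crude transaction-cost proxy
        self.t += 1
        done = self.t >= self.horizon - 1
        return self.returns[self.t].astype(np.float32), reward, done, False, {}

model = TD3("MlpPolicy", ToyPortfolioEnv(), verbose=0)
model.learn(total_timesteps=10_000)
```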
Procedia PDF Downloads 176
100 Isolation of Nitrosoguanidine Induced NaCl Tolerant Mutant of Spirulina platensis with Improved Growth and Phycocyanin Production
Authors: Apurva Gupta, Surendra Singh
Abstract:
Spirulina spp., as a promising source of many commercially valuable products, is grown photoautotrophically in open ponds and raceways on a large scale. However, economic exploitation in an open system appears to have been limited by a lack of multiple-stress-tolerant strains. The present study aims to isolate a stable stress-tolerant mutant of Spirulina platensis with an improved growth rate and enhanced potential to produce its commercially valuable bioactive compounds. N-methyl-N'-nitro-N-nitrosoguanidine (NTG) at 250 μg/mL (a concentration permitting 1% survival) was employed for chemical mutagenesis to generate random mutants, which were screened against NaCl. In a preliminary experiment, wild-type S. platensis was treated with NaCl concentrations from 0.5-1.5 M to calculate its LC₅₀. Mutagenized colonies were then screened for tolerance at 0.8 M NaCl (the LC₅₀), and the surviving colonies were designated NaCl-tolerant mutants of S. platensis. The mutant cells exhibited 1.5 times better growth under NaCl stress than the wild-type strain under control conditions. This might be due to the ability of the mutant cells to protect their metabolic machinery against the inhibitory effects of salt stress. Salt stress is known to adversely affect the rate of photosynthesis in cyanobacteria by causing degradation of the pigments. Interestingly, the mutant cells were able to protect their photosynthetic machinery and exhibited 4.23 and 1.72 times enhanced accumulation of Chl a and phycobiliproteins, respectively, which resulted in enhanced rates of photosynthesis (2.43 times) and respiration (1.38 times) under salt stress. Phycocyanin production in mutant cells was enhanced 1.63-fold. Nitrogen metabolism plays a vital role in conferring halotolerance on cyanobacterial cells through the influx of nitrate and the efflux of Na+ ions from the cell. The NaCl-tolerant mutant cells took up 2.29 times more nitrate than the wild type and reduced it efficiently. Nitrate reductase and nitrite reductase activities in the mutant cells also improved by 2.45 and 2.31 times, respectively, under salt stress. From these preliminary results, it can be deduced that enhanced nitrogen uptake and its efficient reduction might be a reason for the adaptive, halotolerant behavior of the S. platensis mutant cells. Moreover, the NaCl-tolerant mutant of S. platensis, with significantly improved growth and phycocyanin accumulation compared to the wild type, can be commercially promising.
Keywords: chemical mutagenesis, NaCl tolerant mutant, nitrogen metabolism, photosynthetic machinery, phycocyanin
Procedia PDF Downloads 167
99 Integrating System-Level Infrastructure Resilience and Sustainability Based on Fractal: Perspectives and Review
Authors: Qiyao Han, Xianhai Meng
Abstract:
Urban infrastructures refer to the fundamental facilities and systems that serve cities. Due to global climate change and human activities in recent years, many urban areas around the world face enormous challenges from natural and man-made disasters, such as floods, earthquakes and terrorist attacks. For this reason, urban resilience to disasters has attracted increasing attention from researchers and practitioners. Given the complexity of infrastructure systems and the uncertainty of disasters, this paper suggests that studies of resilience could focus on urban functional sustainability (in social, economic and environmental dimensions) supported by infrastructure systems under disturbance. It is supposed that urban infrastructure systems with high resilience should be able to reconfigure themselves without significant declines in critical functions (services), such as primary productivity, hydrological cycles, social relations and economic prosperity. Although some methods have been developed to integrate the resilience and sustainability of individual infrastructure components, more work is needed to enable system-level integration. This research presents a conceptual analysis framework for integrating resilience and sustainability based on fractal theory. It is believed that the ability of an ecological system to maintain structure and function in the face of disturbance, and to reorganize following disturbance-driven change, is largely dependent on its self-similar and hierarchical fractal structure, in which cross-scale resilience is produced by the replication of ecosystem processes dominating at different levels. Urban infrastructure systems are analogous to ecological systems because they are interconnected, complex and adaptive, are comprised of interconnected components, and exhibit characteristic scaling properties. Therefore, analyzing the resilience of ecological systems provides a better understanding of the dynamics and interactions of infrastructure systems. This paper discusses the fractal characteristics of ecosystem resilience, reviews literature related to system-level infrastructure resilience, identifies resilience criteria associated with sustainability dimensions, and develops a conceptual analysis framework. Exploration of the relevance of the identified criteria to fractal characteristics reveals great potential for analyzing infrastructure systems based on fractals. In the conceptual analysis framework, it is proposed that in order to be resilient, an urban infrastructure system needs to be capable of "maintaining" and "reorganizing" multi-scale critical functions under disasters. Finally, the paper identifies areas where further research efforts are needed.
Keywords: fractal, urban infrastructure, sustainability, system-level resilience
Procedia PDF Downloads 273
98 Investigating Reading Comprehension Proficiency and Self-Efficacy among Algerian EFL Students within Collaborative Strategic Reading Approach and Attributional Feedback Intervention
Authors: Nezha Badi
Abstract:
It has been shown in the literature that Algerian university students suffer from low levels of reading comprehension proficiency, which hinders their overall proficiency in English. This low level is mainly related to the methodology of teaching reading employed by the teacher in the classroom (a teacher-centered environment), as well as to students' poor sense of self-efficacy in undertaking reading comprehension activities. Arguably, what is needed is an approach for enhancing students' beliefs about their abilities to deal with different reading comprehension activities. This can be done by providing them with opportunities to take responsibility for their own learning (learner autonomy). As a result of learner autonomy, learners' beliefs about their abilities to deal with certain language tasks may increase, and hence their language learning ability. Therefore, this experimental research study assesses the extent to which an integrated approach combining one particular reading approach, known as 'collaborative strategic reading' (CSR), with the teacher's attributional feedback (on students' reading performance and strategy use) can improve the reading comprehension skill and sense of self-efficacy of Algerian EFL university students. It also examines students' main reasons for their successful or unsuccessful achievements in reading comprehension activities, and whether students' attributions for their reading comprehension outcomes can be modified after exposure to the instruction. To obtain the data, different tools, including a reading comprehension test, questionnaires, an observation, an interview, and learning logs, were used with 105 second-year Algerian EFL university students. The sample was divided into three groups: one control group (with no treatment), one experimental group (the CSR group), which received CSR instruction, and a second intervention group (the CSR Plus group), which received the teacher's attributional feedback in addition to the CSR intervention. Students in the CSR Plus group received the same experiment as the CSR group using the same tools, except that they were asked to keep learning logs, on which the teacher's feedback on reading performance and strategy use was provided. The results indicate that the CSR and attributional feedback intervention was effective in improving students' reading comprehension proficiency and sense of self-efficacy. However, there was no significant change in students' adaptive and maladaptive attributions for their success and failure from the pre-test to the post-test phase. Analysis of the perception questionnaire, the interview, and the learning logs shows that students have positive perceptions of the CSR and attributional feedback instruction. Based on the findings, this study therefore seeks to provide EFL teachers in general, and Algerian EFL university teachers in particular, with pedagogical implications on how to teach reading comprehension to their students, helping them achieve well and feel more self-efficacious in reading comprehension activities and in English language learning more generally.
Keywords: attributions, attributional feedback, collaborative strategic reading, self-efficacy
Procedia PDF Downloads 118
97 Status of Sensory Profile Score among Children with Autism in Selected Centers of Dhaka City
Authors: Nupur A. D., Miah M. S., Moniruzzaman S. K.
Abstract:
Autism is a neurobiological disorder that affects the physical, social, and language skills of a person. A child with autism has difficulty processing, integrating, and responding to sensory stimuli. Current estimates show that 45% to 96% of children with Autism Spectrum Disorder demonstrate sensory difficulties. As autism is a pressing issue worldwide, related service provision has become a high priority in Bangladesh. A sensory deficit not only hampers the normal development of a child, it also hampers the learning process and functional independence. The purpose of this study was to find out the prevalence of sensory dysfunction among children with autism and to recognize common patterns of sensory dysfunction. A cross-sectional study design was chosen to carry out this research. The study enrolled eighty children with autism and their parents using the systematic sampling method. Data were collected through the Short Sensory Profile (SSP) assessment tool, which consists of a 38-item questionnaire; qualified graduate occupational therapists were directly involved in interviewing parents as well as observing children's responses to sensory-related activities at four selected autism centers in Dhaka, Bangladesh. Item analyses were conducted to identify the items yielding the highest reported sensory processing dysfunction among those children, using the SSP and the Statistical Package for the Social Sciences (SPSS) version 21.0 for data analysis. The study revealed that 78.25% of children with autism had significant sensory processing dysfunction based on their sensory responses to relevant activities. Under-responsiveness/sensation seeking and auditory filtering were the least common problems among them. On the other hand, most of them (95%) showed definite to probable differences in sensory processing, including under-responsiveness/sensation seeking, auditory filtering, and tactile sensitivity. The results also show that 64 children had a definite difference in sensory processing, meaning these children suffered from sensory difficulties that greatly affected their Activities of Daily Living (ADLs) as well as their social interaction with others. Almost 95% of children with autism require intervention to overcome or normalize the problem. These results give insight into the types of sensory processing dysfunction to consider during diagnosis and treatment planning. Early identification of sensory problems is therefore very important and will help provide appropriate sensory input to minimize maladaptive behavior and support reaching the normal range of adaptive behavior.
Keywords: autism, sensory processing difficulties, sensory profile, occupational therapy
Procedia PDF Downloads 64
96 Ambivalence, Denial, and Adaptive Responses to Vulnerable Suspects in Police Custody: The New Limits of the Sovereign State
Authors: Faye Cosgrove, Donna Peacock
Abstract:
This paper examines current state strategies for dealing with vulnerable people in police custody and identifies the underpinning discourses and practices that inform these strategies. It has previously been argued that the state has utilised contradictory and conflicting responses to the control of crime, employing opposing strategies of denial and adaptation in order to simultaneously display sovereignty and disclaim responsibility. This paper argues that these contradictory strategies are still being employed in contemporary criminal justice, although the focus and the purpose have now shifted. The focus is upon the 'vulnerable' suspect, whose social identity is as incongruous, complex and contradictory as their social environment, and the purpose is to redirect attention away from negative state practices, whilst simultaneously displaying a compassionate and benevolent countenance in order to appeal to the voting public. The findings presented here result from intensive qualitative research with police officers, with health care professionals, and with civilian volunteers who work within police custodial environments. The data have been gathered over a three-year period and include observational and interview data which have been thematically analysed to expose the underpinning mechanisms from which the properties of the system emerge. What is revealed is evidence of contemporary state practices of denial relating to the harms of austerity and the structural relations of vulnerability, alongside adaptation through processes of 'othering' of the vulnerable, 'responsibilisation' of citizens, defining deviance down through diversionary practices, and managing success through redefining the aims of the system. The 'vulnerable' suspect is subject to individual pathologising, and yet the nature of risk is aggregated. 'Vulnerable' suspects are supported in police custody by private citizens, by multi-agency partnerships, and by for-profit organisations, while the state seeks to collate and control services and thereby retain a veneer of control. Late modern ambivalence towards crime control, and the associated contradictory practices of abjuration and adjustment, have extended to state responses to vulnerable suspects. The support available in the custody environment operates to control and minimise operational and procedural risk rather than serve the welfare of the detained person, and it is in fact found to be detrimental to the very people it claims to benefit. The 'vulnerable' suspect is now subject to the bifurcated logics employed at the new limits of the sovereign state.
Keywords: custody, policing, sovereign state, vulnerability
Procedia PDF Downloads 168
95 Functional Dimension of Reuse: Use of Antalya Kaleiçi Traditional Dwellings as Hotel
Authors: Dicle Aydın, Süheyla Büyükşahin Sıramkaya
Abstract:
The concept of conservation gained importance especially in the 19th century, finding value amid the changes and developments experienced globally. The basic values at the essence of the concept are important for the continuity of historical and cultural fabrics that have a character special to them. The reuse of settlements and spaces carrying historical and cultural values, within the frame of socio-cultural and socio-economic conditions, is related to functional value. The functional dimension of reuse signifies interrogating the usage potential of a building with an aim different from its original one. If a building carrying historical and cultural values cannot be used with its own function because of environmental, economic, structural or functional reasons, maintaining it through reuse is advantageous from the point of view of environmental ecology. By giving it a new function, a requirement of society is fulfilled and a cultural entity is conserved because of its functional value. In this study, the functional dimension of reuse is exemplified in Antalya Kaleiçi, which has a special location and importance with its natural, cultural and historical heritage. The Antalya Kaleiçi settlement preserves its liveliness as a touristic urban fabric with its almost fifty thousand years of past, traditional urban form, civil architectural examples of the 18th-19th century reflecting the life style of the region, and monumental buildings. The civil architectural examples in the fabric have a special character formed according to the Mediterranean climate, with their outer sofas (open or closed), one, two or three storeys, courtyards and oriels. The study investigates the reuse of five civil architectural examples as a boutique hotel, forming a whole with their environmental arrangements, and analyzes how the spatial requirements of a boutique hotel are fulfilled in traditional dwellings. The use of a cultural entity as a boutique hotel is evaluated under the headings of (i) functional requirement, (ii) adequacy of spatial dimensions, and (iii) functional organization. The hotel, with a capacity of 70 beds and 28 rooms in total, contains closed and open restaurants, a kitchen, a pub, a lobby and administrative offices. On the second and third floors there are expansions into the urban space by means of oriels, as the hotel is surrounded by narrow streets on three sides. This boutique hotel, formed by five different dwellings with similar plan schemes in the traditional fabric, is distinctive for its structure opening to the outside and connected by means of courtyards, and for its outdoor spaces, which gain mobility from the elevation differences between courtyards.
Keywords: reuse, adaptive reuse, functional dimension of reuse, traditional dwellings
Procedia PDF Downloads 318
94 Applying Big Data Analysis to Efficiently Exploit the Vast Unconventional Tight Oil Reserves
Authors: Shengnan Chen, Shuhua Wang
Abstract:
Successful production of hydrocarbons from unconventional tight oil reserves has changed the energy landscape in North America. The oil contained within these reservoirs typically will not flow to the wellbore at economic rates without assistance from advanced horizontal wells and multi-stage hydraulic fracturing. Efficient and economic development of these reserves is a priority of society, government, and industry, especially under the current low oil prices. Meanwhile, society needs technological and process innovations to enhance oil recovery while concurrently reducing environmental impacts. Recently, big data analysis and artificial intelligence have become very popular, developing data-driven insights for better designs and decisions in various engineering disciplines. However, the application of data mining in petroleum engineering is still in its infancy. This research aims to apply intelligent data analysis and data-driven models to exploit unconventional oil reserves both efficiently and economically. More specifically, a comprehensive database including reservoir geological data, reservoir geophysical data, well completion data and production data for thousands of wells is first established to discover valuable insights and knowledge related to tight oil reserves development. Several data analysis methods are introduced to analyze such a huge dataset. For example, K-means clustering is used to partition all observations into clusters; principal component analysis is applied to emphasize variation and bring out strong patterns in the dataset, making the big data easy to explore and visualize; and exploratory factor analysis (EFA) is used to identify the complex interrelationships between well completion data and well production data. Different data mining techniques, such as artificial neural networks, fuzzy logic, and machine learning techniques, are then summarized, and appropriate ones are selected to analyze the database based on prediction accuracy, model robustness, and reproducibility. Advanced knowledge and patterns are finally recognized and integrated into a modified self-adaptive differential evolution optimization workflow to enhance oil recovery and maximize the net present value (NPV) of the unconventional oil resources. This research will advance knowledge in the development of unconventional oil reserves and bridge the gap between big data and performance optimization in these formations. The newly developed data-driven optimization workflow is a powerful approach to guide field operations, leading to better designs and higher oil recovery and economic return of future wells in the unconventional oil reserves.
Keywords: big data, artificial intelligence, enhance oil recovery, unconventional oil reserves
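For illustration, here is a short scikit-learn sketch of the clustering and dimension-reduction steps named above (K-means and principal component analysis) applied to synthetic well data; the feature columns, scales, and cluster count are assumptions, not the study's actual database.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Rows = wells; columns could stand for porosity, permeability,
# frac-stage count, and proppant mass (all synthetic here).
X = rng.normal(size=(1000, 4)) * [0.05, 1.0, 10, 500] + [0.08, 0.1, 30, 2000]

X_std = StandardScaler().fit_transform(X)          # put features on a common scale
X_pca = PCA(n_components=2).fit_transform(X_std)   # emphasize dominant variation
labels = KMeans(n_clusters=3, n_init=10).fit_predict(X_pca)  # partition the wells
print(np.bincount(labels))                         # size of each well cluster
```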
Procedia PDF Downloads 283
93 Elasto-Plastic Analysis of Structures Using Adaptive Gaussian Springs Based Applied Element Method
Authors: Mai Abdul Latif, Yuntian Feng
Abstract:
The Applied Element Method (AEM) is a method that was developed to aid in the analysis of the collapse of structures. Currently available methods cannot deal with structural collapse accurately; however, AEM can simulate the behavior of a structure from an initial state of no loading until collapse. The elements in AEM are connected by sets of normal and shear springs along the edges of the elements that represent the stresses and strains of the element in that region. The elements are rigid, and the material properties are introduced through the spring stiffness. Nonlinear dynamic analysis of the progressive collapse of structures has been widely modelled using the finite element method; however, difficulties arise in the presence of excessively deformed elements with cracking or crushing, along with a high computational cost and difficulty in choosing appropriate material models. In this work, the Applied Element Method is developed and coded to significantly improve accuracy and also reduce the computational cost of the method. The scheme works for both linear elastic and nonlinear cases, including elasto-plastic materials. This paper focuses on elastic and elasto-plastic material behaviour, where the number of springs required for an accurate analysis is tested. A steel cantilever beam is used as the structural element for the analysis. The first modification of the method is based on Gaussian quadrature for distributing the springs. Usually, the springs are equally distributed along the face of the element, but it was found that with Gaussian springs only 2 springs were required for perfectly elastic cases, while with equally spaced springs at least 5 were required. The method runs on a Newton-Raphson iteration scheme, and quadratic convergence was obtained. The second modification adapts the number of springs required depending on the elasticity of the material. After the first Newton-Raphson iteration, von Mises stress conditions are used to calculate the stresses in the springs, which are then classified as elastic or plastic. Transition springs, located exactly between the elastic and plastic regions, are then interpolated between regions to strictly identify the elastic and plastic regions in the cross-section. Since a rectangular cross-section was analyzed, there were two plastic regions (top and bottom) and one elastic region (middle). The results of the present study show that elasto-plastic cases require only 2 springs for the elastic region and 2 springs for each plastic region. This improves the computational cost, reducing the minimum number of springs in elasto-plastic cases to only 6. All the work is done using MATLAB, and the results will be compared to models of structural elements built with the finite element method in ANSYS.
Keywords: applied element method, elasto-plastic, Gaussian springs, nonlinear
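A small sketch of the Gaussian-spring idea: instead of spacing springs equally along an element edge, place them at Gauss-Legendre points, with the quadrature weights giving each spring its tributary edge length. The original code is in MATLAB; this Python version and the helper name are assumptions for illustration.

```python
import numpy as np

def gaussian_spring_positions(edge_length, n_springs):
    """Map Gauss-Legendre points from [-1, 1] onto an element edge [0, L];
    the quadrature weights give each spring its tributary length."""
    points, weights = np.polynomial.legendre.leggauss(n_springs)
    positions = 0.5 * edge_length * (points + 1.0)
    tributary = 0.5 * edge_length * weights
    return positions, tributary

# E.g., 2 Gaussian springs on a 300 mm edge (the minimum the study found
# sufficient for perfectly elastic cases):
pos, trib = gaussian_spring_positions(300.0, 2)
print(pos, trib)   # spring locations and the edge length each one represents
```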
Procedia PDF Downloads 224
92 Innovating Electronics Engineering for Smart Materials Marketing
Authors: Muhammad Awais Kiani
Abstract:
The field of electronics engineering plays a vital role in the marketing of smart materials. Smart materials are innovative, adaptive materials that can respond to external stimuli, such as temperature, light, or pressure, in order to enhance performance or functionality. As the demand for smart materials continues to grow, it is crucial to understand how electronics engineering can contribute to their marketing strategies. This abstract presents an overview of the role of electronics engineering in the marketing of smart materials. It explores the various ways in which electronics engineering enables the development and integration of smart features within materials, enhancing their marketability. Firstly, electronics engineering facilitates the design and development of sensing and actuating systems for smart materials. These systems enable the detection and response to external stimuli, providing valuable data and feedback to users. By integrating sensors and actuators into materials, their functionality and performance can be significantly enhanced, making them more appealing to potential customers. Secondly, electronics engineering enables the creation of smart materials with wireless communication capabilities. By incorporating wireless technologies such as Bluetooth or Wi-Fi, smart materials can seamlessly interact with other devices, providing real-time data and enabling remote control and monitoring. This connectivity enhances the marketability of smart materials by offering convenience, efficiency, and improved user experience. Furthermore, electronics engineering plays a crucial role in power management for smart materials. Implementing energy-efficient systems and power harvesting techniques ensures that smart materials can operate autonomously for extended periods. This aspect not only increases their market appeal but also reduces the need for constant maintenance or battery replacements, thus enhancing customer satisfaction. Lastly, electronics engineering contributes to the marketing of smart materials through innovative user interfaces and intuitive control mechanisms. By designing user-friendly interfaces and integrating advanced control systems, smart materials become more accessible to a broader range of users. Clear and intuitive controls enhance the user experience and encourage wider adoption of smart materials in various industries. In conclusion, electronics engineering significantly influences the marketing of smart materials by enabling the design of sensing and actuating systems, wireless connectivity, efficient power management, and user-friendly interfaces. The integration of electronics engineering principles enhances the functionality, performance, and marketability of smart materials, making them more adaptable to the growing demand for innovative and connected materials in diverse industries.
Keywords: electronics engineering, smart materials, marketing, power management
Procedia PDF Downloads 57
91 Conflict Resolution in Fuzzy Rule Base Systems Using Temporal Modalities Inference
Authors: Nasser S. Shebka
Abstract:
Fuzzy logic is used in complex adaptive systems where classical tools of knowledge representation are unproductive. Nevertheless, the incorporation of fuzzy logic, as is the case with all artificial intelligence tools, raises inconsistencies and limitations in dealing with increasingly complex systems and rules that apply to real-life situations; this hinders the inference process of such systems and also produces inconsistencies between the inferences generated by the fuzzy rules of complex or imprecise knowledge-based systems. The use of fuzzy logic enhanced the capability of knowledge representation in applications that require fuzzy representation of truth values or similar multi-valued constant parameters derived from multi-valued logic, which set the basis for the three basic t-norms and the connectives based on them; these are continuous functions, and any other continuous t-norm can be described as an ordinal sum of these three basic ones. Some attempts to solve this dilemma altered fuzzy logic by means of non-monotonic logic, which is used for the defeasible inference of expert-system reasoning, for example, to allow inference retraction upon additional data. However, even the introduction of non-monotonic fuzzy reasoning faces a major issue of conflict resolution, for which many principles have been introduced, such as the specificity principle and the weakest-link principle. The aim of our work is to improve the logical representation and functional modelling of AI systems by presenting a method for resolving existing and potential rule conflicts through the representation of temporal modalities within defeasible-inference rule-based systems. Our paper investigates the possibility of resolving fuzzy rule conflicts in a non-monotonic fuzzy reasoning-based system by introducing temporal modalities and Kripke's general weak modal logic operators, in order to expand its knowledge representation capabilities through flexibility in classifying newly generated rules and, hence, resolving potential conflicts between these fuzzy rules. We were able to address this problem by restructuring the inference process of the fuzzy rule-based system. This is achieved by using time-branching temporal logic in combination with restricted first-order logic quantifiers, as well as propositional logic to represent classical temporal modality operators. The resulting findings not only enhance the flexibility of the inference process in complex rule-based systems but also contribute to the fundamental methods of building rule bases in a manner that allows for a wider range of applicable real-life situations, from both a quantitative and a qualitative knowledge representation perspective.
Keywords: fuzzy rule-based systems, fuzzy tense inference, intelligent systems, temporal modalities
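For reference, the three basic continuous t-norms the abstract alludes to (minimum, product, and Łukasiewicz) can be written directly as functions on truth values in [0, 1]; this short sketch is illustrative only and is not part of the proposed temporal-modality machinery.

```python
def t_min(a, b):          # Gödel / minimum t-norm
    return min(a, b)

def t_product(a, b):      # product t-norm
    return a * b

def t_lukasiewicz(a, b):  # Łukasiewicz t-norm
    return max(0.0, a + b - 1.0)

# Conjunction of two fuzzy truth values under each t-norm:
for t in (t_min, t_product, t_lukasiewicz):
    print(t.__name__, t(0.7, 0.6))
```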
Procedia PDF Downloads 90
90 Safety Validation of Black-Box Autonomous Systems: A Multi-Fidelity Reinforcement Learning Approach
Authors: Jared Beard, Ali Baheri
Abstract:
As autonomous systems become more prominent in society, ensuring their safe application becomes increasingly important; consider autonomous cars traveling through a crowded city or robots traversing a warehouse with heavy equipment. Human environments can be complex, with high-dimensional state and action spaces. This gives rise to two problems. One is that analytic solutions may not be possible. The other is that, in simulation-based approaches, searching the entirety of the problem space can be computationally intractable, ruling out formal methods. To overcome this, approximate solutions may seek to find failures or estimate their likelihood of occurrence. One such approach is adaptive stress testing (AST), which uses reinforcement learning to induce failures in the system, on the premise that a learned model can help find new failure scenarios and make better use of simulations. Despite the failures it does find, AST struggles to find particularly sparse failures and is inclined to find solutions similar to those found previously. To help overcome this, multi-fidelity learning can be used to alleviate this overuse of information: information from lower-fidelity simulations can be used to build up samples less expensively and to cover the solution space more effectively, finding a broader set of failures. Recent work in multi-fidelity learning has passed information bidirectionally using "knows what it knows" (KWIK) reinforcement learners to minimize the number of samples in high-fidelity simulators (thereby reducing computation time and load). The contribution of this work, then, is the development of the bidirectional multi-fidelity AST framework. Such an algorithm uses multi-fidelity KWIK learners in an adversarial context to find failure modes. Thus far, a KWIK learner has been used to train an adversary in a grid world to prevent an agent from reaching its goal, demonstrating the utility of KWIK learners in an AST framework. The next step is the implementation of the bidirectional multi-fidelity AST framework described. Testing will be conducted in a grid world containing an agent attempting to reach a goal position and an adversary tasked with intercepting the agent, as demonstrated previously. Fidelities will be modified by adjusting the size of a time step, with higher fidelity effectively allowing for more responsive closed-loop feedback. Results will compare the single-KWIK AST learner with the multi-fidelity algorithm with respect to the number of samples, distinct failure modes found, and the relative effect of learning after a number of trials.
Keywords: multi-fidelity reinforcement learning, multi-fidelity simulation, safety validation, falsification
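To illustrate the adversarial AST idea in a grid world, here is a toy sketch in which an RL adversary learns disturbances that push a goal-seeking agent into a failure cell. It uses plain tabular Q-learning rather than a KWIK learner, and the grid size, rewards, and dynamics are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
SIZE, GOAL, FAIL = 5, (4, 4), (2, 2)      # failure = collision cell
Q = np.zeros((SIZE, SIZE, 4))             # adversary Q-table over push directions
moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def clip(p):
    return (min(max(p[0], 0), SIZE - 1), min(max(p[1], 0), SIZE - 1))

for episode in range(2000):
    pos = (0, 0)
    for step in range(20):
        # Epsilon-greedy choice of disturbance by the adversary.
        a = rng.integers(4) if rng.random() < 0.1 else int(np.argmax(Q[pos]))
        # System agent greedily steps toward the goal; adversary then pushes it.
        sys_step = (np.sign(GOAL[0] - pos[0]), np.sign(GOAL[1] - pos[1]))
        nxt = clip((pos[0] + sys_step[0] + moves[a][0],
                    pos[1] + sys_step[1] + moves[a][1]))
        r = 1.0 if nxt == FAIL else -0.01   # adversary rewarded for inducing failure
        Q[pos][a] += 0.1 * (r + 0.95 * np.max(Q[nxt]) - Q[pos][a])
        pos = nxt
        if pos in (GOAL, FAIL):
            break
```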
Procedia PDF Downloads 155
89 Using Lysosomal Immunogenic Cell Death to Target Breast Cancer via Xanthine Oxidase/Micro-Antibody Fusion Protein
Authors: Iulianna Taritsa, Kuldeep Neote, Eric Fossel
Abstract:
Lysosome-induced immunogenic cell death (LIICD) is a powerful mechanism for targeting cancer cells that kills circulating malignant cells and primes the host's immune cells against future recurrence. Current immunotherapies for cancer are limited in preventing recurrence – a gap that can be bridged by training the immune system to recognize cancer neoantigens. Lysosomal leakage can be induced therapeutically to traffic antigens from dying cells to dendritic cells, which can later present those tumorigenic antigens to T cells. Previous research has shown that oxidative agents administered in the tumor microenvironment can initiate LIICD. We generated a fusion protein between an oxidative agent known as xanthine oxidase (XO) and a micro-antibody specific for EGFR/HER2-sensitive breast tumor cells. The anti-EGFR single-domain antibody fragment is uniquely sourced from llama and is functional without a light chain. These llama micro-antibodies have been shown to penetrate tissues better and to have improved physicochemical stability compared to traditional monoclonal antibodies. We demonstrate that the fusion protein created is stable and can induce early markers of immunogenic cell death in an in vitro human breast cancer cell line (SkBr3). Specifically, we measured overall cell death, as well as surface-expressed calreticulin, extracellular ATP release, and HMGB1 production; these markers are consensus indicators of ICD. Flow cytometry, luminescence assays, and ELISA were used, respectively, to quantify biomarker levels in treated versus untreated cells. We also included a positive control group of SkBr3 cells dosed with doxorubicin (a known inducer of LIICD) and a negative control dosed with cisplatin (a known inducer of cell death, but not of the immunogenic variety). We examined each marker at various time points after the cancer cells were treated with the XO/antibody fusion protein, doxorubicin, and cisplatin. Upregulated biomarkers after treatment with the fusion protein indicate an immunogenic response. We thus show the potential for this fusion protein to induce an anticancer effect paired with an adaptive immune response against EGFR/HER2+ cells. Our research in human cell lines provides evidence for the success of the same therapeutic method in patients and serves as a gateway to developing a new treatment approach against breast cancer.
Keywords: apoptosis, breast cancer, immunogenic cell death, lysosome
Procedia PDF Downloads 198
88 A Prospective Study of a Clinically Significant Anatomical Change in Head and Neck Intensity-Modulated Radiation Therapy Using Transit Electronic Portal Imaging Device Images
Authors: Wilai Masanga, Chirapha Tannanonta, Sangutid Thongsawad, Sasikarn Chamchod, Todsaporn Fuangrod
Abstract:
The major factors in radiotherapy for head and neck (HN) cancers include the patient's anatomical changes and tumour shrinkage. These changes can significantly affect the planned dose distribution and cause the treatment plan to deteriorate. Comparing measured transit EPID images to predicted EPID images using gamma analysis has been clinically implemented to verify dose accuracy as part of an adaptive radiotherapy protocol. However, a global gamma analysis is not sensitive to some critical organ changes, as the entire treatment field is compared. The objective of this feasibility study is to evaluate the dosimetric response to patient anatomical changes during the treatment course in HN IMRT (Head and Neck Intensity-Modulated Radiation Therapy) using a novel comparison method: organ-of-interest gamma analysis, which is more sensitive to specific organ changes. Five randomly selected, replanned HN IMRT patients, whose tumour shrinkage and weight loss critically affected parotid size, were chosen and their transit dosimetry evaluated. A comprehensive physics-based model was used to generate a series of predicted transit EPID images for each gantry angle from the original computed tomography (CT) and replan CT datasets. The patient structures, including the left and right parotids, spinal cord, and planning target volume (PTV56), were projected to the EPID level. The agreement between the transit images generated from the original CT and the replanned CT was quantified using gamma analysis with 3%, 3 mm criteria; moreover, the gamma pass-rate was calculated only within each projected structure. The gamma pass-rates in the right parotid and PTV56 between the predicted transit images of the original CT and the replan CT were 42.8% (± 17.2%) and 54.7% (± 21.5%), respectively. The gamma pass-rates for the other projected organs were greater than 80%. Additionally, the results of the organ-of-interest gamma analysis were compared with 3-dimensional cone-beam computed tomography (3D-CBCT) and the radiation oncologists' rationale for replanning. This showed that registration of 3D-CBCT to the original CT alone does not capture the dosimetric impact of anatomical changes. Using transit EPID images with organ-of-interest gamma analysis can provide additional information for assessing treatment plan suitability.
Keywords: re-plan, anatomical change, transit electronic portal imaging device, EPID, head and neck
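Below is a simplified sketch of an organ-of-interest gamma pass-rate computation (3%/3 mm, global normalization) on toy 2D dose images, restricted to a projected organ mask. The brute-force search and all array shapes are illustrative assumptions; clinical tools use validated, optimized implementations.

```python
import numpy as np

def gamma_pass_rate(ref, ev, mask, pixel_mm=1.0, dd=0.03, dta_mm=3.0):
    """Percentage of masked points with gamma index <= 1 (3%/3 mm by default)."""
    ny, nx = ref.shape
    r = int(np.ceil(dta_mm / pixel_mm))         # search radius in pixels
    norm = dd * ref.max()                       # global dose normalization
    passed, total = 0, 0
    for i, j in zip(*np.nonzero(mask)):         # only points inside the organ
        best = np.inf                           # minimum of gamma-squared
        for di in range(-r, r + 1):
            for dj in range(-r, r + 1):
                ii, jj = i + di, j + dj
                if 0 <= ii < ny and 0 <= jj < nx:
                    dose_term = ((ev[ii, jj] - ref[i, j]) / norm) ** 2
                    dist_term = (di**2 + dj**2) * pixel_mm**2 / dta_mm**2
                    best = min(best, dose_term + dist_term)
        passed += best <= 1.0                   # gamma <= 1 iff gamma^2 <= 1
        total += 1
    return 100.0 * passed / total

ref = np.random.rand(40, 40)
ev = ref + np.random.normal(0, 0.02, ref.shape)
parotid = np.zeros_like(ref, bool)
parotid[10:20, 10:20] = True                    # hypothetical projected organ mask
print(f"parotid gamma pass-rate: {gamma_pass_rate(ref, ev, parotid):.1f}%")
```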
Procedia PDF Downloads 214
87 Strategic Interventions to Combat Socio-economic Impacts of Drought in Thar - A Case Study of Nagarparkar
Authors: Anila Hayat
Abstract:
Pakistan is one of the developing countries least involved in emissions yet among the most vulnerable to environmental change; it is ranked 8th among the countries most affected by climate change on the Climate Risk Index 1992-2011. Pakistan is facing severe water shortages and flooding as a result of changes in rainfall patterns, specifically in the least developed areas such as Tharparkar. Nagarparkar, once an attractive tourist spot in Tharparkar because of its tropical desert climate, has been facing severe drought conditions for the last few decades. This study investigates the present socio-economic situation of local communities, the major impacts of droughts and their underlying causes, and the mitigation strategies currently adopted by local communities. The study uses both secondary (quantitative) and primary (qualitative) methods to understand the impacts on, and explore the causes affecting, the socio-economic life of local communities in the study area. The relevant data were collected through household surveys using structured questionnaires, focus groups, and in-depth interviews with key personnel from local and international NGOs to explore the sensitivity of impacts and adaptation to droughts in the study area. The investigation is limited to four rural communities of union council Pilu of Nagarparkar district, comprising the villages of Bheel, BhojaBhoon, Mohd Rahan Ji Dhani and Yaqub Ji Dhani. The results indicate that drought has caused significant economic and social hardship for local communities, as more than 60% of the population depends on rainfall, which has been disturbed by irregular patterns. The decline in crop yields has forced the local community to migrate to nearby areas in search of livelihood opportunities. Communities have not undertaken appropriate adaptive actions to counteract the adverse effects of drought; they are completely dependent on support from the government and external aid for survival. Respondents also reported that poverty is a major cause of their vulnerability to drought. Population growth, limited livelihood opportunities, the caste system, lack of interest from the government sector, and unawareness shaped their vulnerability to drought and other social issues. Based on the findings of this study, it is recommended that local authorities create awareness of drought hazards and improve the resilience of communities against drought. It is further suggested that water harvesting practices be developed, introduced and implemented at the community level, and that drought-resistant crops be promoted.
Keywords: migration, vulnerability, awareness, drought
Procedia PDF Downloads 13286 An Adaptive Oversampling Technique for Imbalanced Datasets
Authors: Shaukat Ali Shahee, Usha Ananthakumar
Abstract:
A data set exhibits the class imbalance problem when one class has very few examples compared to the other class; this is also referred to as between-class imbalance. Traditional classifiers fail to classify minority class examples correctly due to their bias towards the majority class. Apart from between-class imbalance, within-class imbalance, where classes are composed of different numbers of sub-clusters and these sub-clusters contain different numbers of examples, also deteriorates the performance of the classifier. Many methods have previously been proposed for handling the imbalanced dataset problem. These methods can be classified into four categories: data preprocessing, algorithmic methods, cost-based methods and ensembles of classifiers. Data preprocessing techniques have shown great potential as they attempt to improve the data distribution rather than the classifier. A data preprocessing technique handles class imbalance either by increasing the minority class examples or by decreasing the majority class examples. Decreasing the majority class examples leads to loss of information, and when the minority class has an absolute rarity, removing majority class examples is generally not recommended. Existing methods for handling class imbalance do not address both between-class imbalance and within-class imbalance simultaneously. In this paper, we propose a method that handles between-class imbalance and within-class imbalance simultaneously for binary classification problems. Removing between-class and within-class imbalance simultaneously eliminates the bias of the classifier towards bigger sub-clusters by minimizing the domination of bigger sub-clusters in the total error. The proposed method uses model-based clustering to find the presence of sub-clusters or sub-concepts in the dataset. The number of examples oversampled among the sub-clusters is determined based on the complexity of the sub-clusters. The method also takes into consideration the scatter of the data in the feature space and adaptively copes with unseen test data using a Lowner-John ellipsoid, increasing the accuracy of the classifier. In this study, a neural network is used as the classifier, since it is one classifier in which the total error is minimized, and removing the between-class and within-class imbalance simultaneously helps it give equal weight to all the sub-clusters irrespective of the classes. The proposed method is validated on 9 publicly available data sets and compared with three existing oversampling techniques that rely on the spatial location of minority class examples in the Euclidean feature space. The experimental results show the proposed method to be statistically significantly superior to the other methods in terms of various accuracy measures. The proposed method can thus serve as a good alternative for handling problem domains such as credit scoring, customer churn prediction and financial distress that typically involve imbalanced data sets.Keywords: classification, imbalanced dataset, Lowner-John ellipsoid, model based clustering, oversampling
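A rough sketch of the cluster-aware oversampling idea, assuming a Gaussian mixture model with BIC selection stands in for the model-based clustering step; the size-based allocation rule and Gaussian resampling are simplifications, not the authors' exact method:

```python
# Minimal sketch: cluster the minority class, then give smaller sub-clusters
# a larger share of synthetic examples so no sub-cluster dominates the
# classifier's total error.
import numpy as np
from sklearn.mixture import GaussianMixture

def oversample_minority(X_min, n_target, max_k=4, seed=0):
    rng = np.random.default_rng(seed)
    # Choose the number of sub-clusters by BIC over 1..max_k components.
    models = [GaussianMixture(n_components=k, random_state=seed).fit(X_min)
              for k in range(1, max_k + 1)]
    gmm = min(models, key=lambda m: m.bic(X_min))
    labels = gmm.predict(X_min)
    clusters = [X_min[labels == k] for k in np.unique(labels)]
    # Allocation inversely proportional to sub-cluster size (an assumption;
    # the paper uses sub-cluster complexity).
    inv_sizes = np.array([1.0 / len(c) for c in clusters])
    shares = inv_sizes / inv_sizes.sum()
    n_new = max(n_target - len(X_min), 0)
    synthetic = [X_min]
    for cluster, share in zip(clusters, shares):
        if len(cluster) < 2:
            continue  # too few points to estimate a covariance
        mean = cluster.mean(axis=0)
        cov = np.cov(cluster.T) + 1e-6 * np.eye(X_min.shape[1])
        synthetic.append(rng.multivariate_normal(mean, cov, int(round(n_new * share))))
    return np.vstack(synthetic)
```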
Procedia PDF Downloads 41585 Household Climate-Resilience Index Development for the Health Sector in Tanzania: Use of Demographic and Health Surveys Data Linked with Remote Sensing
Authors: Heribert R. Kaijage, Samuel N. A. Codjoe, Simon H. D. Mamuya, Mangi J. Ezekiel
Abstract:
There is strong evidence that the climate has changed, significantly affecting various sectors including public health. The recommended feasible solution is adopting development trajectories that combine both mitigation and adaptation measures to improve resilience pathways. This approach demands consideration of the complex interactions between climate and social-ecological systems. While other sectors such as agriculture and water have developed climate resilience indices, the public health sector in Tanzania is still lagging behind. The aim of this study was to find out how Demographic and Health Surveys (DHS) linked with Remote Sensing (RS) technology and meteorological information can be used as tools to inform climate change resilient development and evaluation for the health sector. A methodological review was conducted whereby a number of studies were content-analyzed to find appropriate indicators and indices for household climate resilience and approaches for their integration. These indicators were critically reviewed, listed, filtered and their sources determined. Preliminary identification and ranking of indicators were conducted using a participatory approach of pairwise weighting by national stakeholders selected from meetings/conferences on human health and climate change sciences in Tanzania. DHS datasets were retrieved from the MEASURE Evaluation project, processed and critically analyzed for possible climate change indicators. Other sources for indicators of climate change exposure were also identified. For the purpose of preliminary reporting, the operationalization of selected indicators was discussed to produce a methodological approach to be used in a resilience comparative analysis study. It was found that the household climate resilience index depends on the combination of three indices, namely Household Adaptive and Mitigation Capacity (HC), Household Health Sensitivity (HHS) and Household Exposure Status (HES). It was also found that DHS alone cannot support resilience evaluation unless integrated with other data sources, notably flooding data as a measure of vulnerability, remote sensing imagery of the Normalized Difference Vegetation Index (NDVI) and meteorological data (deviation from rainfall patterns). It can be concluded that if these indices retrieved from DHS data sets are computed and scientifically integrated, they can produce a single climate resilience index, and resilience maps could be generated at different spatial and time scales to enhance targeted interventions for climate resilient development and evaluations. However, further studies are needed to test the sensitivity of the index in a resilience comparative analysis among selected regions.Keywords: climate change, resilience, remote sensing, demographic and health surveys
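One way the three sub-indices might be combined into a single household index is sketched below; the equal weights and min-max scaling are assumptions for illustration, not the study's calibrated scheme:

```python
# Illustrative composition of HC, HHS and HES into one resilience score.
import numpy as np

def minmax(x):
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

def resilience_index(hc, hhs, hes, weights=(1 / 3, 1 / 3, 1 / 3)):
    """hc: adaptive/mitigation capacity; hhs: health sensitivity;
    hes: exposure status. Capacity raises resilience; sensitivity and
    exposure lower it, hence the (1 - x) terms."""
    w1, w2, w3 = weights
    return w1 * minmax(hc) + w2 * (1 - minmax(hhs)) + w3 * (1 - minmax(hes))

# Example with three hypothetical households:
print(resilience_index([0.2, 0.5, 0.9], [0.7, 0.4, 0.1], [0.8, 0.5, 0.2]))
```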
Procedia PDF Downloads 16484 The Effect of Acute Muscular Exercise and Training Status on Haematological Indices in Adult Males
Authors: Ibrahim Musa, Mohammed Abdul-Aziz Mabrouk, Yusuf Tanko
Abstract:
Introduction: Long-term physical training affects the performance of athletes, especially females. Soccer, a team sport played on an outdoor field, requires an adequate oxygen transport system for maximal aerobic power during exercise in order to complete 90 minutes of competitive play. Suboptimal haematological status has often been recorded in athletes engaged in intensive physical activity. It may be due to iron depletion caused by hemolysis, or to haemodilution resulting from plasma volume expansion. There is a lack of data regarding the dynamics of red blood cell variables in male football players. We hypothesized that a long competitive season involving frequent matches and intense training could influence red blood cell variables, as a consequence of repeated physical loads, when compared with sedentary individuals. Methods: This cross-sectional study was carried out on 40 adult males (20 athletes and 20 non-athletes) between 18 and 25 years of age. The 20 apparently healthy male non-athletes were taken as the sedentary group, and the 20 male footballers comprised the study group. The university institutional review board (ABUTH/HREC/TRG/36) gave approval for all procedures in accordance with the Declaration of Helsinki. Red blood cell (RBC) concentration, packed cell volume (PCV), and plasma volume were measured in the fasting state and immediately after exercise. Statistical analysis was done using SPSS/win 20.0 for comparisons within and between the groups, using Student's paired and unpaired t-tests respectively. Results: The findings of our study show that, immediately after termination of exercise, the mean RBC counts and PCV decreased significantly (p<0.005), with a significant increase (p<0.005) in plasma volume, when compared with pre-exercise values in both groups. In addition, the post-exercise RBC was significantly higher in the untrained (261.10±8.5) than in the trained (255.20±4.5) group. However, there were no significant differences in the post-exercise hematocrit and plasma volume parameters between the sedentary group and the footballers. Moreover, besides the changes in pre-exercise values among the sedentary group and the football players, the resting red blood cell counts and plasma volume (PV %) were significantly (p < 0.05) higher in the sedentary group (306.30±10.05 x 10^4/mm3; 58.40±0.54%) than in the football players (293.70±4.65 x 10^4/mm3; 55.60±1.18%). On the other hand, the sedentary group exhibited a significantly (p < 0.05) lower PCV (41.60±0.54%) than the football players (44.40±1.18%). Conclusions: It is therefore proposed that the acute football-exercise-induced reduction in RBC and PCV is entirely due to plasma volume expansion and not to red blood cell hemolysis. In addition, training status influenced the haematological indices of male football players differently from the sedentary group at rest, due to adaptive responses. This is novel.Keywords: haematological indices, performance status, sedentary, male football players
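The abstract does not state how the plasma volume shift was derived; a common haematocrit-only estimate is the Van Beaumont equation, sketched below with invented values (not the study's data):

```python
# Van Beaumont (1972) estimate of percent plasma volume change from pre/post
# haematocrit (PCV, %). A fall in PCV after exercise implies PV expansion.
def plasma_volume_change(hct_pre, hct_post):
    return (100.0 / (100.0 - hct_pre)) * 100.0 * (hct_pre - hct_post) / hct_post

# Example: a post-exercise fall in PCV from 44.4% to 42.0% (assumed numbers)
# implies roughly a 10% plasma volume expansion.
print(f"{plasma_volume_change(44.4, 42.0):.1f}%")
```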
Procedia PDF Downloads 25483 A Methodology of Using Fuzzy Logics and Data Analytics to Estimate the Life Cycle Indicators of Solar Photovoltaics
Authors: Thor Alexis Sazon, Alexander Guzman-Urbina, Yasuhiro Fukushima
Abstract:
This study outlines how to develop a surrogate life cycle model based on fuzzy logic using three fuzzy inference methods: (1) the conventional Fuzzy Inference System (FIS), (2) the hybrid system of Data Analytics and Fuzzy Inference (DAFIS), which uses data clustering to define the membership functions, and (3) the Adaptive-Neuro Fuzzy Inference System (ANFIS), a combination of fuzzy inference and an artificial neural network. These methods were demonstrated with a case study in which the Global Warming Potential (GWP) and the Levelized Cost of Energy (LCOE) of solar photovoltaics (PV) were estimated using Solar Irradiation, Module Efficiency, and Performance Ratio as inputs. The effects of using different fuzzy inference types, either Sugeno- or Mamdani-type, and of changing the number of input membership functions on the error between the calibration data and the model-generated outputs were also illustrated. The solution spaces of the three methods were subsequently examined with a sensitivity analysis. ANFIS exhibited the lowest error, while DAFIS gave slightly lower errors than FIS. Increasing the number of input membership functions helped with error reduction in some cases but, at times, had the opposite effect. Sugeno-type models gave errors slightly lower than those of the Mamdani-type. While ANFIS is superior in terms of error minimization, it could generate questionable solutions, e.g. negative GWP values for the solar PV system when the inputs were all at the upper end of their ranges. This shows that the applicability of ANFIS models highly depends on the range of cases on which they were calibrated. FIS and DAFIS generated more intuitive trends in the sensitivity runs. DAFIS demonstrated an optimal design point beyond which increasing the input values no longer improves the GWP and LCOE. In the absence of data that could be used for calibration, conventional FIS provides a knowledge-based model that can be used for prediction. In the PV case study, conventional FIS generated errors only slightly higher than those of DAFIS. The inherent complexity of a life cycle study often hinders its widespread use in the industry and policy-making sectors. While the methodology does not guarantee a more accurate result than those generated by the Life Cycle Methodology, it does provide a relatively simple way of generating knowledge- and data-based estimates that can be used during the initial design of a system.Keywords: solar photovoltaic, fuzzy logic, inference system, artificial neural networks
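To show the inference mechanics only, here is a toy zero-order Sugeno FIS with a single input; the membership shapes, rule constants, and units are invented for illustration and are not the paper's calibrated three-input GWP/LCOE models:

```python
# Toy Sugeno inference: fuzzify the input, fire each rule, and take the
# firing-strength-weighted average of the rule outputs.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def sugeno_gwp(irradiation):
    # Rule 1: IF irradiation is LOW  THEN gwp = 60  (g CO2-eq/kWh, assumed)
    # Rule 2: IF irradiation is HIGH THEN gwp = 30
    w_low = tri(irradiation, 800, 1200, 1600)    # kWh/m2/yr, assumed ranges
    w_high = tri(irradiation, 1200, 1600, 2000)
    weights = np.array([w_low, w_high])
    outputs = np.array([60.0, 30.0])
    return float(np.dot(weights, outputs) / weights.sum())

print(sugeno_gwp(1400.0))  # both rules fire equally -> blends to 45.0
```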
Procedia PDF Downloads 16482 Flexible Design Solutions for Complex Free form Geometries Aimed to Optimize Performances and Resources Consumption
Authors: Vlad Andrei Raducanu, Mariana Lucia Angelescu, Ion Cinca, Vasile Danut Cojocaru, Doina Raducanu
Abstract:
By using smart digital tools such as generative design (GD) and digital fabrication (DF), pressing problems concerning the optimization of resources (materials, energy, time) can be solved, and free-form applications or products can be created. In the new digital technology, materials are active, designed in response to a set of performance requirements, which imposes a total rethinking of old material practices. The article presents the key steps in the design procedure for a free-form architectural object - a column-type object with connections forming an adaptive 3D surface - using the parametric design methodology and exploiting the properties of conventional metallic materials. In parametric design, the form of the created object or space is shaped by varying the parameter values, and the relationships between the forms are described by mathematical equations. Digital parametric design is based on specific procedures, such as shape grammars, Lindenmayer systems (L-systems), cellular automata, genetic algorithms or swarm intelligence, each of which has limitations that make it applicable only in certain cases. The paper presents the stages of the design process and the shape-grammar-type algorithm. The generative design process relies on two basic principles: the modeling principle and the generative principle. The generative method is based on a form-finding process that creates many 3D spatial forms, using an algorithm conceived to apply its generating logic to different input geometries. Once the algorithm is realized, it can be applied repeatedly to generate the geometry for a number of different input surfaces. The generated configurations are then analyzed through a technical or aesthetic selection criterion, and finally the optimal solution is selected. The virtually endless generative capacity of the codes and algorithms used in digital design offers various conceptual possibilities and optimal solutions for the increasing technical and environmental demands of the building industry and architecture. Constructions or spaces generated by parametric design can be specifically tuned in order to meet certain technical or aesthetic requirements. The proposed approach has direct applicability in sustainable architecture, offering important potential economic advantages, a flexible design (which can be changed until the end of the design process) and unique geometric models of high performance.Keywords: parametric design, algorithmic procedures, free-form architectural object, sustainable architecture
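Of the generative procedures listed, Lindenmayer systems are the simplest to sketch: a string-rewriting rule applied repeatedly to an axiom. The axiom and rule below are generic examples, not the grammar the authors used for their column object:

```python
# Minimal L-system rewriter: every symbol with a rule is replaced in parallel
# at each iteration; symbols without a rule are kept unchanged.
def l_system(axiom, rules, iterations):
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Koch-curve-style rule: each forward segment F is replaced by a motif.
print(l_system("F", {"F": "F+F-F-F+F"}, 2))
```

A turtle-graphics interpreter mapping F, + and - to draw/turn commands would then convert the generated string into geometry, which is the usual way such grammars feed a form-finding process.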
Procedia PDF Downloads 37481 An Appraisal of Mitigation and Adaptation Measures under Paris Agreement 2015: Developing Nations' Pie
Authors: Olubisi Friday Oluduro
Abstract:
The Paris Agreement 2015, the result of negotiations under the United Nations Framework Convention on Climate Change (UNFCCC) after the expiration of the Kyoto Protocol, sets a long-term goal of limiting the increase in the global average temperature to well below 2 degrees Celsius above pre-industrial levels, and of pursuing efforts to limit this temperature increase to 1.5 degrees Celsius. An advance on the erstwhile Kyoto Protocol, which set commitments for only a limited number of Parties to reduce their greenhouse gas (GHG) emissions, it includes the goal of increasing the ability to adapt to the adverse impacts of climate change and of making finance flows consistent with a pathway towards low GHG emissions. For it to achieve these goals, the Agreement requires all Parties to undertake efforts towards reaching a global peak in GHG emissions as soon as possible and towards achieving a balance between anthropogenic emissions by sources and removals by sinks in the second half of the twenty-first century. In addition to climate change mitigation, the Agreement aims at enhancing adaptive capacity, strengthening resilience and reducing vulnerability to climate change in different parts of the world. It acknowledges the importance of addressing the loss and damage associated with the adverse effects of climate change. The Agreement also contains comprehensive provisions on the support to be provided to developing countries, including finance, technology transfer and capacity building. To ensure that such support and actions are transparent, the Agreement contains a number of reporting provisions, requiring Parties to choose the efforts and measures that best suit them (Nationally Determined Contributions) and providing a mechanism for assessing progress and increasing global ambition over time through a regular global stocktake. Despite the global reach of the Agreement, it has been fraught with manifold limitations that threaten its very capacity to produce any meaningful result. Some of these obvious limitations were the very cause of the failure of its predecessor, the Kyoto Protocol, such as the non-participation of the United States and the non-payment of funds into the various coffers for appropriate strategic purposes, among others. These have left the developing countries, which are more vulnerable than the developed countries actually responsible for the climate change scourge, even more threatened. The paper examines the mitigation and adaptation measures under the Paris Agreement 2015, appraises the situation since the Agreement was concluded, ascertains whether developing countries have been better or worse off since then, and examines why and how, while projecting a way forward in the present circumstances. It concludes with recommendations towards ameliorating the situation.Keywords: mitigation, adaptation, climate change, Paris agreement 2015, framework
Procedia PDF Downloads 15680 Mirna Expression Profile is Different in Human Amniotic Mesenchymal Stem Cells Isolated from Obese Respect to Normal Weight Women
Authors: Carmela Nardelli, Laura Iaffaldano, Valentina Capobianco, Antonietta Tafuto, Maddalena Ferrigno, Angela Capone, Giuseppe Maria Maruotti, Maddalena Raia, Rosa Di Noto, Luigi Del Vecchio, Pasquale Martinelli, Lucio Pastore, Lucia Sacchetti
Abstract:
Maternal obesity and nutrient excess in utero increase the risk of future metabolic diseases in adult life. The mechanisms underlying this process are probably based on genetic and epigenetic alterations and changes in foetal nutrient supply. In mammals, the placenta is the main interface between foetus and mother; it regulates intrauterine development, modulates adaptive responses to suboptimal in utero conditions, and is also an important source of human amniotic mesenchymal stem cells (hA-MSCs). We previously highlighted a specific microRNA (miRNA) profile in amnion from obese (Ob) pregnant women; here we compared the miRNA expression profiles of hA-MSCs isolated from Ob and control (Co) women, aiming to search for alterations in metabolic pathways that could predispose the newborn to the obese phenotype. Methods: We isolated, at delivery, hA-MSCs from the amnion of 16 Ob- and 7 Co-women with pre-pregnancy body mass indices (mean/SEM) of 40.3/1.8 and 22.4/1.0 kg/m2, respectively. hA-MSCs were phenotyped by flow cytometry. Globally, 384 miRNAs were evaluated by the TaqMan Array Human MicroRNA Panel v 1.0 (Applied Biosystems). Using the TargetScan program, we selected the target genes of the miRNAs differently expressed in Ob- vs Co-hA-MSCs; further, using the KEGG database, we selected the statistically significant biological pathways. Results: The immunophenotype characterization confirmed the mesenchymal origin of the isolated hA-MSCs. A large percentage of the tested miRNAs, about 61.4% (232/378), was expressed in hA-MSCs, whereas 38.6% (146/378) was not. Most of the expressed miRNAs (89.2%, 207/232) did not differ between Ob- and Co-hA-MSCs and were not further investigated. Conversely, 4.8% of the miRNAs (11/232) were higher and 6.0% (14/232) were lower in Ob- vs Co-hA-MSCs. Interestingly, 7/232 miRNAs were obesity-specific, being expressed only in hA-MSCs isolated from obese women. Bioinformatics showed that these miRNAs significantly regulate (P<0.001) genes belonging to several metabolic pathways, i.e. MAPK signalling, actin cytoskeleton, focal adhesion, axon guidance, insulin signalling, etc. Conclusions: Our preliminary data highlight an altered miRNA profile in Ob- vs Co-hA-MSCs and suggest that an epigenetic miRNA-based mechanism of gene regulation could affect pathways involved in placental growth and function, thereby potentially increasing the newborn's risk of metabolic diseases in adult life.Keywords: hA-MSCs, obesity, miRNA, biosystem
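A sketch of the kind of filter typically used to call miRNAs "differentially expressed"; the t-test choice and thresholds below are assumptions for illustration, not the paper's statistical pipeline:

```python
# Flag miRNAs whose mean expression differs between Ob and Co samples by
# both a significance test and a fold-change threshold.
import numpy as np
from scipy import stats

def differential_mirnas(expr_ob, expr_co, alpha=0.05, min_log2_fc=1.0):
    """expr_ob, expr_co: positive expression arrays, shape (n_mirnas, n_samples)."""
    hits = []
    for i in range(expr_ob.shape[0]):
        _, p = stats.ttest_ind(expr_ob[i], expr_co[i])
        log2_fc = np.log2(expr_ob[i].mean() / expr_co[i].mean())
        if p < alpha and abs(log2_fc) >= min_log2_fc:
            hits.append((i, log2_fc, p))
    return hits
```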
Procedia PDF Downloads 52679 An Artificially Intelligent Teaching-Agent to Enhance Learning Interactions in Virtual Settings
Authors: Abdulwakeel B. Raji
Abstract:
This paper introduces the concept of an intelligent virtual learning environment that involves communication between learners and an artificially intelligent teaching agent in an attempt to replicate classroom learning interactions. The benefit of this technology over current e-learning practices is that it creates a virtual classroom where real-time adaptive learning interactions are made possible. This is a move away from the static learning practices currently adopted by e-learning systems. Over the years, artificial intelligence has been applied to various fields, including but not limited to medicine, military applications, psychology and marketing. The purpose of e-learning applications is to ensure users are able to learn outside the classroom, but a major limitation has been the inability to fully replicate classroom interactions between teacher and students. This study used comparative surveys to gain an understanding of current learning practices in Nigerian universities and how these compare with the use of the developed e-learning system. The study was conducted by attending several lectures and noting the interactions between lecturers and students; as an aftermath, software has been developed that deploys an artificially intelligent teaching-agent alongside an e-learning system to enhance the user learning experience and attempt to recreate the learning interactions found in classroom and lecture hall settings. Dialogflow has been used to implement the teaching-agent, developed using JSON, which serves as a virtual teacher. Course content has been created using HTML, CSS, PHP and JavaScript as a web-based application. This technology can run on handheld devices and Google-based home technologies to give learners access to the teaching agent at any time. The technology also implements definite clause grammars and natural language processing to match user inputs and requests against defined rules in order to replicate learning interactions. It covers familiar classroom scenarios such as answering users' questions, asking 'do you understand' at regular intervals and answering subsequent requests, and handling advanced user queries to give feedback at other times. The software uses deep learning techniques to learn user interactions and patterns and subsequently enhance the user learning experience. System testing has been undertaken by undergraduate students in the UK and Nigeria on the course 'Introduction to Database Development'. Test results and feedback from users show that this study and the developed software are a significant improvement on existing e-learning systems. Further experiments are to be run using the software with different students and more course content.Keywords: virtual learning, natural language processing, definite clause grammars, deep learning, artificial intelligence
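To illustrate rule-based matching of user inputs in the spirit of the definite-clause-grammar approach described, here is a toy intent matcher; the patterns and responses are invented examples, not the deployed agent's rules or Dialogflow's API:

```python
# Toy intent matcher: try each (pattern, response template) rule in order
# and fill the template from named capture groups.
import re

RULES = [
    (re.compile(r"\bwhat is (a |an )?(?P<term>[\w\s]+)\?*$", re.I),
     "A {term} is covered in this week's notes; shall I summarise it?"),
    (re.compile(r"\b(yes|no),? i (do not |don't )?understand\b", re.I),
     "Thanks, let's move on to the next topic."),
]

def reply(user_input):
    for pattern, template in RULES:
        m = pattern.search(user_input)
        if m:
            return template.format(**m.groupdict())
    return "Could you rephrase that?"

print(reply("What is a foreign key?"))
```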
Procedia PDF Downloads 13478 Challenges in Environmental Governance: A Case Study of Risk Perceptions of Environmental Agencies Involved in Flood Management in the Hawkesbury-Nepean Region, Australia
Authors: S. Masud, J. Merson, D. F. Robinson
Abstract:
The management of environmental resources requires the engagement of a range of stakeholders, including public/private agencies and different community groups, to implement sustainable conservation practices. A challenge that is often ignored is the analysis of the agencies involved and their power relations. One of the barriers identified is the difference in risk perceptions among the agencies involved, which leads to disjointed efforts in assessing and managing risks. Wood et al. (2012) explain that it is important to have an integrated approach to risk management in which decision makers address stakeholder perspectives; this is critical for an effective risk management policy. This abstract is part of PhD research that looks into barriers to flood management under a changing climate and intends to identify the bottlenecks that create maladaptation. Experiences are drawn from international practice in the UK and examined in the Australian context by exploring flood governance in a highly flood-prone region, the Hawkesbury-Nepean catchment, as a case study. In this research, several aspects of governance and management are explored: (i) the complexities created by the way different agencies are involved in assessing flood risks; (ii) different perceptions of acceptable flood risk levels; (iii) perceptions of community engagement in defining acceptable flood risk levels; (iv) views on a holistic flood risk management approach; and (v) the challenges of a centralised information system. The study concludes that the complexity of managing a large catchment is exacerbated by the differences in the way professionals perceive the problem. This has led to: (a) different standards for acceptable risks; (b) inconsistent attempts to set up a regional-scale flood management plan beyond jurisdictional boundaries; (c) the absence of a regional-scale agency with a licence to share and update information; and (d) a lack of forums for dialogue with insurance companies to ensure an integrated approach to flood management. The research takes the Hawkesbury-Nepean catchment as a case example and draws on literary evidence from around the world. In addition, conclusions were extrapolated from eighteen semi-structured interviews with agencies involved in flood risk management in the Hawkesbury-Nepean catchment of NSW, Australia. The outcome of this research is to provide a better understanding of the complexity of assessing risks against a rapidly changing climate and to contribute towards developing effective risk communication strategies, thus enabling better management of floods and achieving an increased level of support from insurance companies, real-estate agencies, state and regional risk managers and the affected communities.Keywords: adaptive governance, flood management, flood risk communication, stakeholder risk perceptions
Procedia PDF Downloads 28577 Acute Neurophysiological Responses to Resistance Training; Evidence of a Shortened Super Compensation Cycle and Early Neural Adaptations
Authors: Christopher Latella, Ashlee M. Hendy, Dan Vander Westhuizen, Wei-Peng Teo
Abstract:
Introduction: Neural adaptations following resistance training interventions have been widely investigated; however, the evidence regarding the mechanisms of early adaptation is less clear. Understanding the neural responses to an acute resistance training session is pivotal to the prescription of frequency, intensity and volume in applied strength and conditioning practice. Therefore, the primary aim of this study was to investigate the time course of the neurophysiological mechanisms post training against current super compensation theory, and the secondary aim was to examine whether these responses reflect the neural adaptations observed with resistance training interventions. Methods: Participants (N=14) completed a randomised, counterbalanced crossover study comparing control, strength and hypertrophy conditions. The strength condition involved 3 x 5RM leg extensions with 3 min recovery, while the hypertrophy condition involved 3 x 12RM with 60 s recovery. Transcranial magnetic stimulation (TMS) and peripheral nerve stimulation were used to measure the excitability of the central and peripheral neural pathways, and maximal voluntary contraction (MVC) to quantify strength changes. Measures were taken pre, immediately post, 10, 20 and 30 min, and 1, 2, 6, 24, 48, 72 and 96 hrs following training. Results: Significant decreases were observed immediately post and at 10, 20 and 30 min and 1 and 2 hrs for both training groups compared to the control group for force (p < .05), maximal compound wave (p < .005), and silent period (p < .05). A significant increase in corticospinal excitability (p < .005) was observed for both groups. The difference in corticospinal excitability between the strength and hypertrophy groups was near significance, with a large effect (η² = .202). All measures returned to baseline within 6 hrs post training. Discussion: The neurophysiological mechanisms appear to be significantly altered in the 2 hrs post training, returning to homeostasis by 6 hrs. The evidence suggests that the time course of neural recovery post resistance training is 18-40 hours shorter than previous super compensation models predict. The strength and hypertrophy protocols showed similar response profiles, with the current findings suggesting greater post-training corticospinal drive from hypertrophy training, despite previous evidence that strength training requires greater neural input. The increase in corticospinal drive and decrease in inhibition appear to be a compensatory mechanism for the decreases in peripheral nerve excitability and maximal voluntary force output. The changes in corticospinal excitability and inhibition are akin to the adaptive processes observed with training interventions of 4 wks or longer. It appears that the 2 hr recovery period post training is the most influential for priming further neural adaptations with resistance training. Secondly, prescribed resistance sessions can be scheduled closer together than previous super compensation theory suggests for optimal strength gains.Keywords: neural responses, resistance training, super compensation, transcranial magnetic stimulation
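The abstract does not detail its normalisation, but a common convention for such TMS data is to express MEP amplitude relative to the maximal compound wave (Mmax) and then as percent change from baseline at each time point; the sketch below uses invented values, not the study's data:

```python
# Illustrative MEP/Mmax time course: excitability elevated post-exercise and
# returning to baseline by 6 h, mirroring the pattern the abstract reports.
import numpy as np

time_points = ["post", "10min", "20min", "30min", "1h", "2h", "6h"]
mep = np.array([1.9, 1.8, 1.7, 1.6, 1.5, 1.4, 1.2])   # mV, assumed
mmax = np.array([8.0, 8.1, 8.2, 8.4, 8.6, 8.9, 9.5])  # mV, assumed
baseline_ratio = 1.2 / 9.6                             # pre-exercise MEP/Mmax

pct_change = 100 * (mep / mmax - baseline_ratio) / baseline_ratio
for t, pc in zip(time_points, pct_change):
    print(f"{t}: {pc:+.1f}% vs baseline")
```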
Procedia PDF Downloads 283