Search results for: integrated models of reading comprehension
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 10532

62 Workflow Based Inspection of Geometrical Adaptability from 3D CAD Models Considering Production Requirements

Authors: Tobias Huwer, Thomas Bobek, Gunter Spöcker

Abstract:

Driving forces for enhancements in production are trends like digitalization and individualized production. Currently, such developments are restricted to assembly parts; thus, complex freeform surfaces are not addressed in this context. The need for efficient use of resources and near-net-shape production will require individualized production of complex-shaped workpieces. Variations between the nominal model and the actual geometry can lead to changes in operations in computer-aided process planning (CAPP), which must remain manageable for adaptive serial production. In this context, 3D CAD data can be a key to realizing that objective. Along with developments in geometrical adaptation, a preceding inspection method based on CAD data is required to support the process planner by providing objective criteria for decisions about the adaptive manufacturability of workpieces. Nowadays, such decisions depend on the experience-based knowledge of humans (e.g. process planners) and are therefore subjective, leading to variability in workpiece quality and potential failures in production. In this paper, we present an automatic part inspection method, based on design and measurement data, which evaluates the actual geometries of single workpiece preforms. The aim is to automatically determine the suitability of the current shape for further machining and to provide a basis for an objective decision about subsequent adaptive manufacturability. The proposed method is realized by a workflow-based approach, keeping in mind the requirements of industrial applications. Workflows are a well-known method for designing standardized processes. Especially in applications such as the aerospace industry, standardization and certification of processes are important aspects. Function blocks, providing a standardized, event-driven abstraction of algorithms and data exchange, will be used for modeling and execution of inspection workflows. Each analysis step of the inspection, such as positioning of measurement data or checking of geometrical criteria, will be carried out by function blocks. One advantage of this approach is its flexibility to design workflows and to adapt algorithms specific to the application domain. In general, it is checked whether a geometrical adaptation is possible within the specified tolerance range. The development of particular function blocks is predicated on workpiece-specific information, e.g. design data. Furthermore, for different product lifecycle phases, appropriate logic and decision criteria have to be considered. For example, tolerances for geometric deviations differ in type and size between new-part production and repair processes. In addition to function blocks, appropriate referencing systems are important: they need to support exact determination of the position and orientation of the actual geometries to provide a basis for precise analysis. The presented approach provides an inspection methodology for adaptive and part-individual process chains. The analysis of each workpiece results in an inspection protocol and an objective decision about further manufacturability. A representative application domain is the product lifecycle of turbine blades, comprising new-part production and a maintenance process. In both cases, a geometrical adaptation is required to calculate individual production data.
In contrast to existing approaches, the proposed initial inspection method provides information to decide between different potential adaptive machining processes.
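As a rough illustration of how a single inspection function block might look in code, the sketch below compares registered measurement points against nominal CAD points and reports whether the deviation stays within tolerance. It is a minimal, assumption-laden example; the function name, tolerance value, and report fields are hypothetical and not part of the authors' implementation.

```python
import numpy as np

def check_geometric_deviation(measured_pts, nominal_pts, tolerance_mm=0.5):
    """Hypothetical inspection function block: compares aligned measurement
    points against nominal CAD points and reports adaptive manufacturability."""
    deviations = np.linalg.norm(measured_pts - nominal_pts, axis=1)
    return {
        "max_deviation_mm": float(deviations.max()),
        "mean_deviation_mm": float(deviations.mean()),
        "within_tolerance": bool(np.all(deviations <= tolerance_mm)),
    }

# Example: three corresponding point pairs, already registered to a common frame
nominal = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [10.0, 5.0, 0.0]])
measured = nominal + np.array([[0.1, 0.0, 0.0], [0.0, 0.2, 0.0], [0.3, 0.0, 0.1]])
print(check_geometric_deviation(measured, nominal, tolerance_mm=0.5))
```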

Keywords: adaptive, CAx, function blocks, turbomachinery

Procedia PDF Downloads 297
61 Methodology for Temporary Analysis of Production and Logistic Systems on the Basis of Distance Data

Authors: M. Mueller, M. Kuehn, M. Voelker

Abstract:

In small and medium-sized enterprises (SMEs), the challenge is to create a well-grounded and reliable basis for process analysis, optimization and planning due to a lack of data. SMEs have limited access to methods with which they can effectively and efficiently analyse processes and identify cause-and-effect relationships in order to generate the necessary database and derive optimization potential from it. The implementation of digitalization within the framework of Industry 4.0 thus becomes a particular necessity for SMEs. For these reasons, this abstract presents an analysis methodology whose objective is an SME-appropriate approach to efficient, temporarily feasible data collection and evaluation in flexible production and logistics systems as a basis for process analysis and optimization. The overall methodology focuses on retrospective, event-based tracing and analysis of material flow objects. The technological basis consists of Bluetooth Low Energy (BLE)-based transmitters, so-called beacons, and smart mobile devices (SMD), e.g. smartphones, as receivers, between which distance data can be measured and motion profiles derived. The distance is determined using the Received Signal Strength Indicator (RSSI), which is a measure of the signal field strength between transmitter and receiver. The focus is on the development of a software-based methodology for interpreting the relative movements of transmitters and receivers based on distance data. The main research concerns the selection and implementation of pattern recognition methods for automatic process recognition as well as methods for the visualization of relative distance data. Because the database is already categorized by process type, classification methods (e.g. Support Vector Machines) from the field of supervised learning are used. Achieving the necessary data quality requires the selection of suitable methods as well as filters for smoothing the signal variations that occur in the RSSI, the integration of methods for determining correction factors that depend on possible signal interference sources (columns, pallets), and the configuration of the technology used. The parameter settings on which the respective algorithms are based also have a significant influence on the result quality of the classification methods, correction models and methods for visualizing the position profiles. Studies have already shown that the accuracy of classification algorithms can be improved by up to 30% through selected parameter variation; similar potential can be observed with parameter variation of the methods and filters for signal smoothing. Thus, there is increased interest in obtaining detailed results on the influence of parameter and factor combinations on data quality in this area. The overall methodology is realized with a modular software architecture consisting of independent modules for data acquisition, data preparation and data storage. The demonstrator for initialization and data acquisition is available as a mobile Java-based application. The data preparation, including methods for signal smoothing, is Python-based, with the possibility of varying parameter settings and storing them in the database (SQLite). The evaluation is divided into two separate software modules with database connection: automated assignment of defined process classes to distance data using selected classification algorithms, and visualization and reporting via a graphical user interface (GUI).
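For context, the conversion from RSSI to a distance estimate is commonly done with the log-distance path-loss model, and smoothing can be as simple as a moving average. The sketch below is a generic illustration under assumed calibration values (measured power at 1 m and path-loss exponent); it is not the correction model or filter configuration developed in the study.

```python
import numpy as np

def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_exponent=2.0):
    """Estimate transmitter-receiver distance (m) from RSSI with the
    log-distance path-loss model: RSSI = tx_power - 10*n*log10(d)."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

def smooth_rssi(rssi_series, window=5):
    """Simple moving-average filter to damp RSSI fluctuations."""
    kernel = np.ones(window) / window
    return np.convolve(rssi_series, kernel, mode="valid")

raw = np.array([-61, -65, -63, -70, -68, -66, -72, -69], dtype=float)
smoothed = smooth_rssi(raw, window=3)
print([round(rssi_to_distance(r), 2) for r in smoothed])
```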

Keywords: event-based tracing, machine learning, process classification, parameter settings, RSSI, signal smoothing

Procedia PDF Downloads 131
60 Deep-Learning Coupled with Pragmatic Categorization Method to Classify the Urban Environment of the Developing World

Authors: Qianwei Cheng, A. K. M. Mahbubur Rahman, Anis Sarker, Abu Bakar Siddik Nayem, Ovi Paul, Amin Ahsan Ali, M. Ashraful Amin, Ryosuke Shibasaki, Moinul Zaber

Abstract:

Thomas Friedman, in his famous book, argued that the world in this 21st century is flat and will continue to be flatter. This is attributed to rapid globalization and the interdependence of humanity that engendered tremendous in-flow of human migration towards the urban spaces. In order to keep the urban environment sustainable, policy makers need to plan based on extensive analysis of the urban environment. With the advent of high definition satellite images, high resolution data, computational methods such as deep neural network analysis, and hardware capable of high-speed analysis; urban planning is seeing a paradigm shift. Legacy data on urban environments are now being complemented with high-volume, high-frequency data. However, the first step of understanding urban space lies in useful categorization of the space that is usable for data collection, analysis, and visualization. In this paper, we propose a pragmatic categorization method that is readily usable for machine analysis and show applicability of the methodology on a developing world setting. Categorization to plan sustainable urban spaces should encompass the buildings and their surroundings. However, the state-of-the-art is mostly dominated by classification of building structures, building types, etc. and largely represents the developed world. Hence, these methods and models are not sufficient for developing countries such as Bangladesh, where the surrounding environment is crucial for the categorization. Moreover, these categorizations propose small-scale classifications, which give limited information, have poor scalability and are slow to compute in real time. Our proposed method is divided into two steps-categorization and automation. We categorize the urban area in terms of informal and formal spaces and take the surrounding environment into account. 50 km × 50 km Google Earth image of Dhaka, Bangladesh was visually annotated and categorized by an expert and consequently a map was drawn. The categorization is based broadly on two dimensions-the state of urbanization and the architectural form of urban environment. Consequently, the urban space is divided into four categories: 1) highly informal area; 2) moderately informal area; 3) moderately formal area; and 4) highly formal area. In total, sixteen sub-categories were identified. For semantic segmentation and automatic categorization, Google’s DeeplabV3+ model was used. The model uses Atrous convolution operation to analyze different layers of texture and shape. This allows us to enlarge the field of view of the filters to incorporate larger context. Image encompassing 70% of the urban space was used to train the model, and the remaining 30% was used for testing and validation. The model is able to segment with 75% accuracy and 60% Mean Intersection over Union (mIoU). In this paper, we propose a pragmatic categorization method that is readily applicable for automatic use in both developing and developed world context. The method can be augmented for real-time socio-economic comparative analysis among cities. It can be an essential tool for the policy makers to plan future sustainable urban spaces.
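Mean Intersection over Union, the segmentation metric reported above, can be computed per class from ground-truth and predicted label maps. The snippet below is a generic sketch of that metric on toy labels, not the evaluation code used with the DeeplabV3+ model.

```python
import numpy as np

def mean_iou(y_true, y_pred, num_classes):
    """Compute mean Intersection over Union over all classes
    from flattened ground-truth and predicted label arrays."""
    ious = []
    for c in range(num_classes):
        intersection = np.sum((y_true == c) & (y_pred == c))
        union = np.sum((y_true == c) | (y_pred == c))
        if union > 0:
            ious.append(intersection / union)
    return float(np.mean(ious))

# Toy example with four categories (e.g. highly/moderately informal/formal)
gt = np.array([0, 0, 1, 2, 3, 3, 1, 2])
pred = np.array([0, 1, 1, 2, 3, 2, 1, 2])
print(round(mean_iou(gt, pred, num_classes=4), 3))
```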

Keywords: semantic segmentation, urban environment, deep learning, urban building, classification

Procedia PDF Downloads 191
59 Assessment of Natural Flood Management Potential of Sheffield Lakeland to Flood Risks Using GIS: A Case Study of Selected Farms on the Upper Don Catchment

Authors: Samuel Olajide Babawale, Jonathan Bridge

Abstract:

Natural Flood Management (NFM) is promoted as part of sustainable flood management (SFM) in response to climate change adaptation. Stakeholder engagement is central to this approach, and current trends are progressively moving towards a collaborative learning approach where stakeholder participation is perceived as one of the indicators of sustainable development. Within this methodology, participation embraces a diversity of knowledge and values underpinned by a philosophy of empowerment, equity, trust, and learning. To identify barriers to NFM uptake, there is a need for a new understanding of how stakeholder participation could be enhanced to benefit individual and community resilience within SFM. This is crucial in light of climate change threats and scientific reliability concerns. In contributing to this new understanding, this research evaluated the proposed interventions on six (6) UK NFM in a catchment known as the Sheffield Lakeland Partnership Area with reference to the Environment Agency Working with Natural Processes (WWNP) Potentials/Opportunities. Three of the opportunities, namely Run-off Attenuation Potential of 1%, Run-off Attenuation Potential of 3.3% and Riparian Woodland Potential, were modeled. In all the models, the interventions, though they have been proposed or already in place, are not in agreement with the data presented by EA WWNP. Findings show some institutional weaknesses, which are seen to inhibit the development of adequate flood management solutions locally with damaging implications for vulnerable communities. The gap in communication from practitioners poses a challenge to the implementation of real flood mitigating measures that align with the lead agency’s nationally accepted measures which are identified as not feasible by the farm management officers within this context. Findings highlight a dominant top-bottom approach to management with very minimal indication of local interactions. Current WWNP opportunities have been termed as not realistic by the people directly involved in the daily management of the farms, with less emphasis on prevention and mitigation. The targeted approach suggested by the EA WWNP is set against adaptive flood management and community development. The study explores dimensions of participation using the self-reliance and self-help approach to develop a methodology that facilitates reflections of currently institutionalized practices and the need to reshape spaces of interactions to enable empowered and meaningful participation. Stakeholder engagement and resilience planning underpin this research. The findings of the study suggest different agencies have different perspectives on “community participation”. It also shows communities in the case study area appear to be least influential, denied a real chance of discussing their situations and influencing the decisions. This is against the background that the communities are in the most productive regions, contributing massively to national food supplies. The results are discussed concerning practical implications for addressing interagency partnerships and conducting grassroots collaborations that empower local communities and seek solutions to sustainable development challenges. This study takes a critical look into the challenges and progress made locally in sustainable flood risk management and adaptation to climate change by the United Kingdom towards achieving the global 2030 agenda for sustainable development.

Keywords: natural flood management, sustainable flood management, sustainable development, working with natural processes, environment agency, run-off attenuation potential, climate change

Procedia PDF Downloads 72
58 The Healthcare Costs of BMI-Defined Obesity among Adults Who Have Undergone a Medical Procedure in Alberta, Canada

Authors: Sonia Butalia, Huong Luu, Alexis Guigue, Karen J. B. Martins, Khanh Vu, Scott W. Klarenbach

Abstract:

Obesity is associated with significant personal impacts on health and has a substantial economic burden on payers due to increased healthcare use. A contemporary estimate of the healthcare costs associated with obesity at the population level are lacking. This evidence may provide further rationale for weight management strategies. Methods: Adults who underwent a medical procedure between 2012 and 2019 in Alberta, Canada were categorized into the investigational cohort (had body mass index [BMI]-defined class 2 or 3 obesity based on a procedure-associated code) and the control cohort (did not have the BMI procedure-associated code); those who had bariatric surgery were excluded. Characteristics were presented and healthcare costs ($CDN) determined over a 1-year observation period (2019/2020). Logistic regression and a generalized linear model with log link and gamma distribution were used to assess total healthcare costs (comprised of hospitalizations, emergency department visits, ambulatory care visits, physician visits, and outpatient prescription drugs); potential confounders included age, sex, region of residence, and whether the medical procedure was performed within 6-months before the observation period in the partial adjustment, and also the type of procedure performed, socioeconomic status, Charlson Comorbidity Index (CCI), and seven obesity-related health conditions in the full adjustment. Cost ratios and estimated cost differences with 95% confidence intervals (CI) were reported; incremental cost differences within the adjusted models represent referent cases. Results: The investigational cohort (n=220,190) was older (mean age: 53 standard deviation [SD]±17 vs 50 SD±17 years), had more females (71% vs 57%), lived in rural areas to a greater extent (20% vs 14%), experienced a higher overall burden of disease (CCI: 0.6 SD±1.3 vs 0.3 SD±0.9), and were less socioeconomically well-off (material/social deprivation was lower [14%/14%] in the most well-off quintile vs 20%/19%) compared with controls (n=1,955,548). Unadjusted total healthcare costs were estimated to be 1.77-times (95% CI: 1.76, 1.78) higher in the investigational versus control cohort; each healthcare resource contributed to the higher cost ratio. After adjusting for potential confounders, the total healthcare cost ratio decreased, but remained higher in the investigational versus control cohort (partial adjustment: 1.57 [95% CI: 1.57, 1.58]; full adjustment: 1.21 [95% CI: 1.20, 1.21]); each healthcare resource contributed to the higher cost ratio. Among urban-dwelling 50-year old females who previously had non-operative procedures, no procedures performed within 6-months before the observation period, a social deprivation index score of 3, a CCI score of 0.32, and no history of select obesity-related health conditions, the predicted cost difference between those living with and without obesity was $386 (95% CI: $376, $397). Conclusions: If these findings hold for the Canadian population, one would expect an estimated additional $3.0 billion per year in healthcare costs nationally related to BMI-defined obesity (based on an adult obesity rate of 26% and an estimated annual incremental cost of $386 [21%]); incremental costs are higher when obesity-related health conditions are not adjusted for. Results of this study provide additional rationale for investment in interventions that are effective in preventing and treating obesity and its complications.
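As an illustration of the modeling approach described, a generalized linear model with log link and gamma distribution for right-skewed costs, the sketch below fits such a model on simulated stand-in data with statsmodels. The variable names, coefficients, and data are placeholders, not the Alberta administrative data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated stand-in data: annual cost (positive, right-skewed), an obesity
# indicator, and age/sex as confounders. Not the study data.
rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "obesity": rng.integers(0, 2, n),
    "age": rng.normal(52, 17, n),
    "female": rng.integers(0, 2, n),
})
mu = np.exp(7.0 + 0.45 * df["obesity"] + 0.01 * df["age"] + 0.05 * df["female"])
df["cost"] = rng.gamma(shape=2.0, scale=mu / 2.0)

X = sm.add_constant(df[["obesity", "age", "female"]])
fit = sm.GLM(df["cost"], X,
             family=sm.families.Gamma(link=sm.families.links.Log())).fit()
# On a log link, exp(coefficient) is interpretable as a cost ratio
print(np.exp(fit.params["obesity"]), np.exp(fit.conf_int().loc["obesity"]).values)
```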

Keywords: administrative data, body mass index-defined obesity, healthcare cost, real world evidence

Procedia PDF Downloads 108
57 Chain Networks on Internationalization of SMEs: Co-Opetition Strategies in Agrifood Sector

Authors: Emilio Galdeano-Gómez, Juan C. Pérez-Mesa, Laura Piedra-Muñoz, María C. García-Barranco, Jesús Hernández-Rubio

Abstract:

The situation in which firms engage in simultaneous cooperation and competition with each other is a phenomenon known as co-opetition. This scenario has received increasing attention in business economics and management analyses. In the domain of supply chain networks and for small and medium-sized enterprises, SMEs, these strategies are of greater relevance given the complex environment of globalization and competition in open markets. These firms face greater challenges regarding technology and access to specific resources due to their limited capabilities and limited market presence. Consequently, alliances and collaborations with both buyers and suppliers prove to be key elements in overcoming these constraints. However, rivalry and competition are also regarded as major factors in successful internationalization processes, as they are drivers for firms to attain a greater degree of specialization and to improve efficiency, for example enabling them to allocate scarce resources optimally and providing incentives for innovation and entrepreneurship. The present work aims to contribute to the literature on SMEs’ internationalization strategies. The sample is constituted by a panel data of marketing firms from the Andalusian food sector and a multivariate regression analysis is developed, measuring variables of co-opetition and international activity. The hierarchical regression equations method has been followed, thus resulting in three estimated models: the first one excluding the variables indicative of channel type, while the latter two include the international retailer chain and wholesaler variable. The findings show that the combination of several factors leads to a complex scenario of inter-organizational relationships of cooperation and competition. In supply chain management analyses, these relationships tend to be classified as either buyer-supplier (vertical level) or supplier-supplier relationships (horizontal level). Several buyers and suppliers tend to participate in supply chain networks, and in which the form of governance (hierarchical and non-hierarchical) influences cooperation and competition strategies. For instance, due to their market power and/or their closeness to the end consumer, some buyers (e.g. large retailers in food markets) can exert an influence on the selection and interaction of several of their intermediate suppliers, thus endowing certain networks in the supply chain with greater stability. This hierarchical influence may in turn allow these suppliers to develop their capabilities (e.g. specialization) to a greater extent. On the other hand, for those suppliers that are outside these networks, this environment of hierarchy, characterized by a “hub firm” or “channel master”, may provide an incentive for developing their co-opetition relationships. These results prove that the analyzed firms have experienced considerable growth in sales to new foreign markets, mainly in Europe, dealing with large retail chains and wholesalers as main buyers. This supply industry is predominantly made up of numerous SMEs, which has implied a certain disadvantage when dealing with the buyers, as negotiations have traditionally been held on an individual basis and in the face of high competition among suppliers. Over recent years, however, cooperation among these marketing firms has become more common, for example regarding R&D, promotion, scheduling of production and sales.
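The hierarchical regression strategy described above, entering blocks of variables in sequence and comparing the nested models, can be sketched as follows; the variable names and simulated data are illustrative placeholders rather than the study's co-opetition indicators.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative data: export sales explained by co-opetition indicators,
# with channel-type variables added in later blocks.
rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "cooperation": rng.normal(size=n),
    "competition": rng.normal(size=n),
    "retail_chain": rng.integers(0, 2, n),
    "wholesaler": rng.integers(0, 2, n),
})
df["export_sales"] = (0.4 * df["cooperation"] + 0.3 * df["competition"]
                      + 0.5 * df["retail_chain"] + rng.normal(size=n))

m1 = smf.ols("export_sales ~ cooperation + competition", data=df).fit()
m2 = smf.ols("export_sales ~ cooperation + competition + retail_chain", data=df).fit()
m3 = smf.ols("export_sales ~ cooperation + competition + retail_chain + wholesaler",
             data=df).fit()
for name, m in [("block 1", m1), ("block 2", m2), ("block 3", m3)]:
    print(name, "R-squared =", round(m.rsquared, 3))
```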

Keywords: co-opetition networks, international supply chain, marketing agrifood firms, SMEs strategies

Procedia PDF Downloads 79
56 The Optimization of Topical Antineoplastic Therapy Using Controlled Release Systems Based on Amino-functionalized Mesoporous Silica

Authors: Lacramioara Ochiuz, Aurelia Vasile, Iulian Stoleriu, Cristina Ghiciuc, Maria Ignat

Abstract:

Topical administration of chemotherapeutic agents (e.g. carmustine, bexarotene, mechlorethamine) in the local treatment of cutaneous T-cell lymphoma (CTCL) is accompanied by multiple side effects, such as contact hypersensitivity, pruritus, skin atrophy or even secondary malignancies. A known method of reducing the side effects of an anticancer agent is the development of modified drug release systems using drug encapsulation in biocompatible nanoporous inorganic matrices, such as mesoporous MCM-41 silica. Mesoporous MCM-41 silica is characterized by a large specific surface area, high pore volume, uniform porosity, stable dispersion in aqueous medium, excellent biocompatibility, in vivo biodegradability and the capacity to be functionalized with different organic groups. Therefore, MCM-41 is an attractive candidate for a wide range of biomedical applications, such as controlled drug release, bone regeneration, and the immobilization of proteins and enzymes. The main advantage of this material lies in its ability to host a large amount of the active substance in a uniform pore system with size adjustable in the mesoscopic range. Silanol groups allow controlled surface functionalization, leading to control of drug loading and release. This study shows (i) the optimization of amino-grafting of the mesoporous MCM-41 silica matrix by co-condensation during synthesis and by post-synthesis grafting using APTES (3-aminopropyltriethoxysilane); (ii) loading of the therapeutic agent (carmustine) to obtain modified drug release systems; (iii) determination of the profile of in vitro carmustine release from these systems; and (iv) assessment of carmustine release kinetics by fitting to four mathematical models. The obtained powders were characterized in terms of structure, texture, morphology and thermogravimetric analysis. The concentration of the therapeutic agent in the dissolution medium was determined by an HPLC method. In vitro dissolution tests were carried out using an Enhancer cell over a 12-hour interval. Analysis of carmustine release kinetics from the mesoporous systems was made by fitting to the zero-order, first-order, Higuchi and Korsmeyer-Peppas models. Results showed that both types of highly ordered mesoporous silica (amino-grafted by co-condensation or post-synthesis) are thermally stable in aqueous medium. Regarding the degree and efficiency of loading with the therapeutic agent, an increase of around 10% was noticed when the co-condensation method was applied. This result shows that direct co-condensation leads to an even distribution of amino groups on the pore walls, while in post-synthesis grafting many amino groups are concentrated near the pore openings and/or on the external surface. In vitro dissolution tests showed an extended carmustine release (more than 86% m/m) both from systems based on silica functionalized directly by co-condensation and from those functionalized after synthesis. Assessment of carmustine release kinetics revealed diffusion-controlled release from all studied systems, as a result of fitting to the Higuchi model. The results of this study proved that amino-functionalized mesoporous silica may be used as a matrix for optimizing topical anticancer therapy by loading carmustine and developing prolonged-release systems.
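As an illustration of the kinetic fitting step, the sketch below fits illustrative cumulative-release data to the Higuchi and Korsmeyer-Peppas equations with scipy; the time points and release values are made up, not the measured carmustine profiles.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative cumulative release (% released vs. time in hours)
t = np.array([0.5, 1, 2, 4, 6, 8, 10, 12], dtype=float)
q = np.array([18, 26, 37, 52, 64, 73, 80, 86], dtype=float)

def higuchi(t, k_h):
    # Q = k_H * sqrt(t)
    return k_h * np.sqrt(t)

def korsmeyer_peppas(t, k_kp, n):
    # Q = k_KP * t**n
    return k_kp * t ** n

for name, model, p0 in [("Higuchi", higuchi, [20.0]),
                        ("Korsmeyer-Peppas", korsmeyer_peppas, [20.0, 0.5])]:
    popt, _ = curve_fit(model, t, q, p0=p0)
    residuals = q - model(t, *popt)
    r2 = 1 - np.sum(residuals**2) / np.sum((q - q.mean())**2)
    print(name, np.round(popt, 2), "R2 =", round(r2, 3))
```

A Korsmeyer-Peppas release exponent n close to 0.5 is conventionally read as Fickian (diffusion-controlled) release, consistent with the Higuchi-type behaviour reported above.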

Keywords: carmustine, silica, controlled release

Procedia PDF Downloads 264
55 Assessment of Occupational Exposure and Individual Radio-Sensitivity in People Subjected to Ionizing Radiation

Authors: Oksana G. Cherednichenko, Anastasia L. Pilyugina, Sergey N. Lukashenko, Elena G. Gubitskaya

Abstract:

The estimation of accumulated radiation doses in people professionally exposed to ionizing radiation was performed using methods of biological (chromosomal aberration frequency in lymphocytes) and physical (radionuclide analysis in urine, whole-body radiation meter, individual thermoluminescent dosimeters) dosimetry. A group of 84 category "A" employees was investigated after their work in the territory of the former Semipalatinsk test site (Kazakhstan). The dose rate in some funnels exceeds 40 μSv/h. After determination of radionuclides in urine using radiochemical and WBC methods, it was shown that the total effective dose of internal exposure of the personnel did not exceed 0.2 mSv/year, while the acceptable dose limit for staff is 20 mSv/year. The range of external radiation doses measured with individual thermoluminescent dosimeters was 0.3-1.406 µSv. The cytogenetic examination showed that the frequency of chromosomal aberrations in the staff was 4.27±0.22%, which is significantly higher than in people from the non-polluted settlement of Tausugur (0.87±0.1%) (p ≤ 0.01) and in citizens of Almaty (1.6±0.12%) (p ≤ 0.01). Chromosomal-type aberrations accounted for 2.32±0.16%, of which 0.27±0.06% were dicentrics and centric rings. Cytogenetic analysis of group radiosensitivity among the «professionals» by different criteria (age, sex, ethnic group, epidemiological data) revealed no significant differences between the compared values. Using various techniques based on the frequency of dicentrics and centric rings, the average cumulative radiation dose for the group was calculated at 0.084-0.143 Gy. To perform comparative individual dosimetry using physical and biological methods of dose assessment, calibration curves (including our own) and regression equations based on the overall frequency of chromosomal aberrations obtained after irradiation of blood samples with gamma radiation at a dose rate of 0.1 Gy/min were used. Herewith, assuming individual variation of the chromosomal aberration frequency (1-10%), the accumulated radiation dose varied between 0 and 0.3 Gy. The main problem in the interpretation of individual dosimetry results comes down to the differing reactions of individuals to irradiation, i.e. radiosensitivity, which dictates the need for a quantitative definition of this individual reaction and its consideration in the calculation of the received radiation dose. The entire examined cohort was divided into groups based on the received dose and the detected cytogenetic aberrations. Radiosensitive individuals, at the lowest received dose in a year, showed the highest frequency of chromosomal aberrations (5.72%). In contrast, radioresistant individuals showed the lowest frequency of chromosomal aberrations (2.8%). According to the criterion of radiosensitivity, the cohort in our research was distributed as follows: radiosensitive (26.2%), medium radiosensitivity (57.1%), radioresistant (16.7%). Herewith, the dispersion for radioresistant individuals is 2.3; for the group with medium radiosensitivity, 3.3; and for the radiosensitive group, 9. These data indicate the highest variation of the characteristic (reaction to radiation) in the group of radiosensitive individuals. People with medium radiosensitivity show a significant long-term correlation (0.66; n=48, β ≥ 0.999) between the doses estimated from the results of cytogenetic analysis and the external radiation doses obtained with thermoluminescent dosimeters.
Mathematical models taking the professionals' radiosensitivity level into account when estimating the received radiation dose were also proposed.
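Cytogenetic dose estimation of this kind typically inverts a linear-quadratic calibration curve for the yield of dicentrics and centric rings; the sketch below shows that inversion with placeholder coefficients, which are not the calibration curve fitted in this study.

```python
import numpy as np

def dose_from_dicentric_yield(y_obs, c=0.001, alpha=0.02, beta=0.06):
    """Invert a linear-quadratic calibration curve Y = c + alpha*D + beta*D**2
    to estimate the absorbed dose D (Gy) from the observed yield of dicentrics
    plus centric rings per cell. Coefficients here are illustrative placeholders."""
    # Solve beta*D**2 + alpha*D + (c - y_obs) = 0 for the positive root
    disc = alpha**2 - 4 * beta * (c - y_obs)
    if disc < 0:
        return 0.0
    return max(0.0, (-alpha + np.sqrt(disc)) / (2 * beta))

# Example: 0.27% dicentrics and centric rings expressed as a yield per cell
print(round(dose_from_dicentric_yield(0.0027), 3), "Gy")
```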

Keywords: biodosimetry, chromosomal aberrations, ionizing radiation, radiosensitivity

Procedia PDF Downloads 184
54 Green Architecture from the Thawing Arctic: Reconstructing Traditions for Future Resilience

Authors: Nancy Mackin

Abstract:

Historically, architects from Aalto to Gaudi to Wright have looked to the architectural knowledge of long-resident peoples for forms and structural principles specifically adapted to the regional climate, geology, materials availability, and culture. In this research, structures traditionally built by Inuit peoples in a remote region of the Canadian high Arctic provides a folio of architectural ideas that are increasingly relevant during these times of escalating carbon emissions and climate change. ‘Green architecture from the Thawing Arctic’ researches, draws, models, and reconstructs traditional buildings of Inuit (Eskimo) peoples in three remote, often inaccessible Arctic communities. Structures verified in pre-contact oral history and early written history are first recorded in architectural drawings, then modeled and, with the participation of Inuit young people, local scientists, and Elders, reconstructed as emergency shelters. Three full-sized building types are constructed: a driftwood and turf-clad A-frame (spring/summer); a stone/bone/turf house with inwardly spiraling walls and a fan-shaped floor plan (autumn); and a parabolic/catenary arch-shaped dome from willow, turf, and skins (autumn/winter). Each reconstruction is filmed and featured in a short video. Communities found that the reconstructed buildings and the method of involving young people and Elders in the reconstructions have on-going usefulness, as follows: 1) The reconstructions provide emergency shelters, particularly needed as climate change worsens storms, floods, and freeze-thaw cycles and scientists and food harvesters who must work out of the land become stranded more frequently; 2) People from the communities re-learned from their Elders how to use materials from close at hand to construct impromptu shelters; 3) Forms from tradition, such as windbreaks at entrances and using levels to trap warmth within winter buildings, can be adapted and used in modern community buildings and housing; and 4) The project initiates much-needed educational and employment opportunities in the applied sciences (engineering and architecture), construction, and climate change monitoring, all offered in a culturally-responsive way. Elders, architects, scientists, and young people added innovations to the traditions as they worked, thereby suggesting new sustainable, culturally-meaningful building forms and materials combinations that can be used for modern buildings. Adding to the growing interest in bio-mimicry, participants looked at properties of Arctic and subarctic materials such as moss (insulation), shrub bark (waterproofing), and willow withes (parabolic and catenary arched forms). ‘Green Architecture from the Thawing Arctic’ demonstrates the effective, useful architectural oeuvre of a resilient northern people. The research parallels efforts elsewhere in the world to revitalize long-resident peoples’ architectural knowledge, in the interests of designing sustainable buildings that reflect culture, heritage, and identity.

Keywords: architectural culture and identity, climate change, forms from nature, Inuit architecture, locally sourced biodegradable materials, traditional architectural knowledge, traditional Inuit knowledge

Procedia PDF Downloads 522
53 Ethnic Andean Concepts of Health and Illness in the Post-Colombian World and Its Relevance Today

Authors: Elizabeth J. Currie, Fernando Ortega Perez

Abstract:

‘MEDICINE’ is a new project funded under the EC Horizon 2020 Marie Skłodowska-Curie Actions to determine concepts of health and healing from a culturally specific indigenous context, using a framework of interdisciplinary methods that integrates archaeological-historical, ethnographic and modern health sciences approaches. The study will generate new theoretical and methodological approaches to model how peoples survive and adapt their traditional belief systems in a context of alien cultural impacts. In the immediate wake of the conquest of Peru by invading Spanish armies and ideology, native Andeans responded by forming the Taki Onkoy millenarian movement, which rejected European philosophical and ontological teachings, claiming “you make us sick”. The study explores how people’s experience of their world, and their health beliefs within it, is fundamentally shaped by their inherent beliefs about the nature of being and identity in relation to the wider cosmos. Cultural and health belief systems and related rituals or behaviors sustain a people’s sense of identity, wellbeing and integrity. In the event of dislocation and persecution, these may change into devolved forms, which eventually inter-relate with ‘modern’ biomedical systems of health in as yet unidentified ways. The development of new conceptual frameworks that model this process will greatly expand our understanding of how people survive and adapt in response to cultural trauma. It will also demonstrate the continuing role, relevance and use of traditional medicine (TM) in present-day indigenous communities. Studies will first be made of relevant pre-Columbian material culture, and then of early colonial period ethnohistorical texts which document the health beliefs and ritual practices still employed by indigenous Andean societies at the advent of the 17th-century Jesuit campaigns of persecution, the ‘Extirpación de las Idolatrías’. Core beliefs drawn from these baseline studies will then be used to construct a questionnaire about current health beliefs and practices to be taken into the study population of indigenous Quechua peoples in the northern Andean region of Ecuador. Their current systems of knowledge and medicine have evolved into new forms within complex historical contexts of conquest, first by invading Inca armies in the late 15th century and, a generation later, by Spain. A new model of contemporary Andean concepts of health, illness and healing will be developed, demonstrating the way these have changed through time. With this, a ‘policy tool’ will be constructed as a bridging facility into contemporary global scenarios relevant to other Indigenous, First Nations, and migrant peoples, to provide a means through which their traditional health beliefs and current needs may be more appropriately understood and met. This paper presents findings from the first analytical phases of the work, based upon the study of the literature and the archaeological records. The study offers a novel perspective and methods for the development of policies sensitive to indigenous and minority peoples’ health needs.

Keywords: Andean ethnomedicine, Andean health beliefs, health beliefs models, traditional medicine

Procedia PDF Downloads 346
52 IoT Continuous Monitoring Biochemical Oxygen Demand Wastewater Effluent Quality: Machine Learning Algorithms

Authors: Sergio Celaschi, Henrique Canavarro de Alencar, Claaudecir Biazoli

Abstract:

Effluent quality is of the highest priority for compliance with the permit limits of environmental protection agencies and ensures the protection of the local water system. Of the pollutants monitored, biochemical oxygen demand (BOD) poses one of the greatest challenges. This work presents a solution that improves the ability of wastewater treatment plants (WWTPs) to react to different situations and meet treatment goals. Delayed BOD5 results from the laboratory, which take 7 to 8 analysis days, hinder that ability; reducing BOD turnaround time from days to hours is our quest. The solution is based on a system of two BOD bioreactors associated with Digital Twin (DT) and Machine Learning (ML) methodologies via an Internet of Things (IoT) platform to monitor and control a WWTP and support decision making. A DT is a virtual and dynamic replica of a production process. A DT requires the ability to collect and store real-time sensor data related to the operating environment. Furthermore, it integrates and organizes the data on a digital platform and applies analytical models, allowing a deeper understanding of the real process in order to catch anomalies sooner. In our system for continuous monitoring of the BOD suppressed by the effluent treatment process, the DT algorithm for analyzing the data applies ML to a parameterized chemical kinetic model. The continuous BOD monitoring system, capable of providing results in a fraction of the time required by BOD5 analysis, is composed of two thermally isolated batch bioreactors. Each bioreactor contains input/output access for the wastewater sample (influent and effluent), hydraulic conduction tubes, pumps and valves for the batch sample and dilution water, an air supply for dissolved oxygen (DO) saturation, a cooler/heater for sample thermal stability, an optical DO sensor based on fluorescence quenching, pH, ORP, temperature and atmospheric pressure sensors, and a local PLC/CPU with a TCP/IP data transmission interface. The dynamic BOD monitoring range covers 2 mg/L < BOD < 2,000 mg/L. In addition to the BOD monitoring system, there are many other operational WWTP sensors. The CPU data is transmitted to and received from the digital platform, which in turn performs analyses at periodic intervals, aiming to feed the learning process. BOD bulletins and their credibility intervals are made available to web users at 12-hour intervals. The chemical kinetics ML algorithm is composed of a coupled system of four first-order ordinary differential equations for the molar masses of DO, the organic material present in the sample, biomass, and the products (CO₂ and H₂O) of the reaction. This system is solved numerically together with its initial conditions: DO (saturated) and initial products of the kinetic oxidation process, CO₂ = H₂O = 0. The initial values for organic matter and biomass are estimated by minimization of the mean square deviations. A real case of continuous monitoring of BOD wastewater effluent quality is being conducted by deploying an IoT application on a large wastewater purification system located in São Paulo, Brazil.
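To illustrate the kinetic core of the digital twin, the sketch below integrates a coupled system of four first-order ODEs for dissolved oxygen, organic substrate, biomass, and reaction products with scipy; the rate constant, yield coefficient, and initial values are assumptions, not the parameters identified by the study's least-squares estimation.

```python
import numpy as np
from scipy.integrate import solve_ivp

def bod_kinetics(t, y, k=0.25, yield_x=0.4):
    """Illustrative coupled first-order kinetics for dissolved oxygen (DO),
    organic substrate (S), biomass (X) and reaction products (P).
    Rate constant and stoichiometry are assumptions, not the study's model."""
    do, s, x, p = y
    r = k * s                      # first-order consumption of organic matter
    d_s = -r
    d_x = yield_x * r              # part of the substrate becomes biomass
    d_do = -(1 - yield_x) * r      # oxygen consumed by oxidation
    d_p = (1 - yield_x) * r        # CO2 + H2O produced
    return [d_do, d_s, d_x, d_p]

y0 = [9.0, 10.0, 1.0, 0.0]         # saturated DO, initial substrate, biomass, products
sol = solve_ivp(bod_kinetics, (0, 48), y0, t_eval=np.linspace(0, 48, 7))
bod_exerted = y0[0] - sol.y[0]     # oxygen depletion over time ~ exerted BOD
print(np.round(bod_exerted, 2))
```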

Keywords: effluent treatment, biochemical oxygen demand, continuous monitoring, IoT, machine learning

Procedia PDF Downloads 73
51 Organization Structure of Towns and Villages System in County Area Based on Fractal Theory and Gravity Model: A Case Study of Suning, Hebei Province, China

Authors: Liuhui Zhu, Peng Zeng

Abstract:

With rapid development in China, urbanization has entered the transformation and promotion stage, and its direction has shifted to overall regional synergy. China has a large number of towns and villages of comparatively small scale and scattered distribution, which support and provide resources to cities, leading to urban-rural opposition, so it is difficult to achieve common development in a single town or village. In this context, regional development should focus more on towns and villages so that they form a synergetic system and join the regional association with cities. Thus, the paper raises the question of how to effectively organize the towns and villages system to regulate resource allocation and improve the comprehensive value of the regional area. To answer this question, it is necessary to find a suitable research unit and analyze the present situation of its towns and villages system for optimal development. A review of relevant research and theoretical models shows that the county is the most basic administrative unit in China that can directly guide and regulate the development of towns and villages, so the paper takes the county as the research unit. Following the theoretical concept of ‘three structures and one network’, the paper develops a research framework to analyse the present situation of the towns and villages system, covering scale structure, functional structure, spatial structure, and organization network. The analytical methods draw on fractal theory and the gravity model, using statistical and spatial data. The scale structure analysis examines rank-size dimensions and uses the principal component method to calculate the comprehensive scale of towns and villages. The functional structure analysis examines the functional types and industrial development of towns and villages. The spatial structure analysis examines the aggregation dimension, network dimension, and correlation dimension of spatial elements to represent the overall spatial relationships. In terms of the organization network, from the perspectives of entity and non-entity, the paper analyzes the transportation network and the gravitational network. Based on the analysis of the present situation, optimization strategies are proposed in order to achieve a synergetic relationship between towns and villages in the county area. The paper uses Suning county in the Beijing-Tianjin-Hebei region as a case study to apply the research framework and methods and then proposes optimization orientations. The analysis results indicate that: (1) Suning county lacks medium-scale towns to transfer effects from towns to villages. (2) The distribution of gravitational centers is uneven, and the effect of gravity is limited to nearby towns and villages; the gravitational network is incomplete, leaving economic activities scattered and isolated. (3) The overall development of the towns and villages system is immature, remaining at the ‘single heart and multi-core’ stage, and some specific optimization strategies are proposed. This study provides a regional view of the development of towns and villages and consolidates a research framework and methods for the towns and villages system to form an effective synergetic relationship between them, contributing to organizing resources, stimulating endogenous motivation, and forming counter-magnets to support urban-rural integration.
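Two of the quantitative building blocks mentioned above, the gravity model of interaction between settlements and the rank-size (Zipf) exponent of the scale structure, can be sketched as follows; the masses, distances, and exponents are illustrative values, not the Suning county data.

```python
import numpy as np

def gravity_interaction(mass_i, mass_j, distance_km, k=1.0, b=2.0):
    """Gravity-model interaction strength between two settlements:
    T_ij = k * M_i * M_j / d_ij**b (k and b are illustrative values)."""
    return k * mass_i * mass_j / distance_km ** b

# Illustrative comprehensive-scale scores for three towns and their distances
masses = {"town_a": 8.2, "town_b": 3.1, "town_c": 1.4}
dist = {("town_a", "town_b"): 12.0, ("town_a", "town_c"): 25.0, ("town_b", "town_c"): 9.0}
for (i, j), d in dist.items():
    print(i, "-", j, round(gravity_interaction(masses[i], masses[j], d), 3))

# Rank-size (Zipf) exponent from a log-log fit of scale against rank
scales = np.sort(np.array(list(masses.values())))[::-1]
ranks = np.arange(1, len(scales) + 1)
slope, intercept = np.polyfit(np.log(ranks), np.log(scales), 1)
print("rank-size exponent:", round(-slope, 2))
```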

Keywords: towns and villages system, organization structure, county area, fractal theory, gravity model

Procedia PDF Downloads 136
50 A Study on the Relation among Primary Care Professionals Serving Disadvantaged Community, Socioeconomic Status, and Adverse Health Outcome

Authors: Chau-Kuang Chen, Juanita Buford, Colette Davis, Raisha Allen, John Hughes, James Tyus, Dexter Samuels

Abstract:

During the post-Civil War era, the city of Nashville, Tennessee, had the highest mortality rate in the country. The elevated rates of death and disease among ex-slaves were attributable to the unavailability of healthcare. To address the paucity of healthcare services, the College, an institution with the mission of educating minority professionals and serving the underserved population, was established in 1876. This study was designed to assess whether the College has accomplished its mission of serving underserved communities and contributed to the elimination of health disparities in the United States. The study objective was to quantify the impact of socioeconomic status and adverse health outcomes on primary care professionals serving disadvantaged communities, which, in turn, was significantly associated with a health professional shortage score partly designated by the U.S. Department of Health and Human Services. Various statistical methods were used to analyze the alumni data for the years 1975-2013. K-means cluster analysis was utilized to assign individual medical and dental graduates to cluster groups of practice communities (Disadvantaged or Non-disadvantaged Communities). Discriminant analysis was implemented to verify the classification accuracy of the cluster analysis. The independent t-test was performed to detect significant mean differences in clustering and criterion variables between Disadvantaged and Non-disadvantaged Communities, which confirms the “content” validity of the cluster analysis model. The chi-square test was used to assess whether the proportion of cluster groups (Disadvantaged vs. Non-disadvantaged Communities) was consistent with that of practicing specialties (primary care vs. non-primary care). Finally, a partial least squares (PLS) path model was constructed to explore the “construct” validity of the analytics model by providing the magnitude of the effects of socioeconomic status and adverse health outcomes on primary care professionals serving disadvantaged communities. Social ecological theory, along with the statistical models mentioned, was used to establish the relationship between medical and dental graduates (primary care professionals serving disadvantaged communities) and their social environments (socioeconomic status, adverse health outcomes, health professional shortage score). Based on the social ecological framework, it was hypothesized that the impact of socioeconomic status and adverse health outcomes on primary care professionals serving disadvantaged communities could be quantified, and that the association between primary care professionals serving disadvantaged communities and the health professional shortage score could be measured. Adverse health outcomes (adult obesity rate, age-adjusted premature mortality rate, and percent of people diagnosed with diabetes) could be affected by the latent variable, namely socioeconomic status (unemployment rate, poverty rate, percent of children in free lunch programs, and percent of uninsured adults). The study results indicated that approximately 83% (3,192/3,864) of the College’s medical and dental graduates from 1975 to 2013 were practicing in disadvantaged communities. In addition, the PLS path modeling demonstrated that primary care professionals serving disadvantaged communities were significantly associated with socioeconomic status and adverse health outcomes (p < .001). In summary, the majority of medical and dental graduates from the College provide primary care services to disadvantaged communities with low socioeconomic status and high adverse health outcomes, which demonstrates that the College has fulfilled its mission.
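The K-means step described above can be sketched as follows on simulated stand-in indicators of practice communities; the features, cluster count, and values are illustrative and do not reproduce the alumni analysis.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Illustrative practice-community indicators per graduate (not the alumni data):
# e.g. poverty rate, uninsured rate, premature mortality of the practice county.
rng = np.random.default_rng(2)
X = np.vstack([
    rng.normal([22, 18, 450], [4, 3, 60], size=(80, 3)),   # disadvantaged-like
    rng.normal([10,  8, 280], [3, 2, 50], size=(40, 3)),   # non-disadvantaged-like
])
X_std = StandardScaler().fit_transform(X)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X_std)
print("cluster sizes:", np.bincount(km.labels_))
print("cluster centres (standardised):", np.round(km.cluster_centers_, 2))
```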

Keywords: disadvantaged community, K-means cluster analysis, PLS path modeling, primary care

Procedia PDF Downloads 550
49 A Copula-Based Approach for the Assessment of Severity of Illness and Probability of Mortality: An Exploratory Study Applied to Intensive Care Patients

Authors: Ainura Tursunalieva, Irene Hudson

Abstract:

Continuous improvement of both the quality and safety of health care is an important goal in Australia and internationally. The intensive care unit (ICU) receives patients with a wide variety of and severity of illnesses. Accurately identifying patients at risk of developing complications or dying is crucial to increasing healthcare efficiency. Thus, it is essential for clinicians and researchers to have a robust framework capable of evaluating the risk profile of a patient. ICU scoring systems provide such a framework. The Acute Physiology and Chronic Health Evaluation III and the Simplified Acute Physiology Score II are ICU scoring systems frequently used for assessing the severity of acute illness. These scoring systems collect multiple risk factors for each patient including physiological measurements then render the assessment outcomes of individual risk factors into a single numerical value. A higher score is related to a more severe patient condition. Furthermore, the Mortality Probability Model II uses logistic regression based on independent risk factors to predict a patient’s probability of mortality. An important overlooked limitation of SAPS II and MPM II is that they do not, to date, include interaction terms between a patient’s vital signs. This is a prominent oversight as it is likely there is an interplay among vital signs. The co-existence of certain conditions may pose a greater health risk than when these conditions exist independently. One barrier to including such interaction terms in predictive models is the dimensionality issue as it becomes difficult to use variable selection. We propose an innovative scoring system which takes into account a dependence structure among patient’s vital signs, such as systolic and diastolic blood pressures, heart rate, pulse interval, and peripheral oxygen saturation. Copulas will capture the dependence among normally distributed and skewed variables as some of the vital sign distributions are skewed. The estimated dependence parameter will then be incorporated into the traditional scoring systems to adjust the points allocated for the individual vital sign measurements. The same dependence parameter will also be used to create an alternative copula-based model for predicting a patient’s probability of mortality. The new copula-based approach will accommodate not only a patient’s trajectories of vital signs but also the joint dependence probabilities among the vital signs. We hypothesise that this approach will produce more stable assessments and lead to more time efficient and accurate predictions. We will use two data sets: (1) 250 ICU patients admitted once to the Chui Regional Hospital (Kyrgyzstan) and (2) 37 ICU patients’ agitation-sedation profiles collected by the Hunter Medical Research Institute (Australia). Both the traditional scoring approach and our copula-based approach will be evaluated using the Brier score to indicate overall model performance, the concordance (or c) statistic to indicate the discriminative ability (or area under the receiver operating characteristic (ROC) curve), and goodness-of-fit statistics for calibration. We will also report discrimination and calibration values and establish visualization of the copulas and high dimensional regions of risk interrelating two or three vital signs in so-called higher dimensional ROCs.
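One simple way to obtain a dependence parameter of the kind proposed is to fit a Gaussian copula via rank-based pseudo-observations; the sketch below does this for two simulated vital signs and illustrates the general technique only, not the copula family or data used in the study.

```python
import numpy as np
from scipy import stats

def gaussian_copula_rho(x, y):
    """Estimate the Gaussian-copula dependence parameter between two vital
    signs: rank-transform to pseudo-observations, map through the standard
    normal quantile function, then take the Pearson correlation."""
    u = stats.rankdata(x) / (len(x) + 1)
    v = stats.rankdata(y) / (len(y) + 1)
    return float(np.corrcoef(stats.norm.ppf(u), stats.norm.ppf(v))[0, 1])

# Simulated stand-in data: systolic BP (roughly normal) and heart rate (skewed)
rng = np.random.default_rng(3)
sbp = rng.normal(120, 15, 250)
hr = 60 + rng.gamma(shape=2.0, scale=8.0, size=250) + 0.3 * (sbp - 120)
print(round(gaussian_copula_rho(sbp, hr), 3))
```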

Keywords: copula, intensive unit scoring system, ROC curves, vital sign dependence

Procedia PDF Downloads 152
48 Design and Fabrication of AI-Driven Kinetic Facades with Soft Robotics for Optimized Building Energy Performance

Authors: Mohammadreza Kashizadeh, Mohammadamin Hashemi

Abstract:

This paper explores a kinetic building facade designed for optimal energy capture and architectural expression. The system integrates photovoltaic panels with soft robotic actuators for precise solar tracking, resulting in enhanced electricity generation compared to static facades. Driven by the growing interest in dynamic building envelopes, the exploration of facade systems are necessitated. Increased energy generation and regulation of energy flow within buildings are potential benefits offered by integrating photovoltaic (PV) panels as kinetic elements. However, incorporating these technologies into mainstream architecture presents challenges due to the complexity of coordinating multiple systems. To address this, the design leverages soft robotic actuators, known for their compliance, resilience, and ease of integration. Additionally, the project investigates the potential for employing Large Language Models (LLMs) to streamline the design process. The research methodology involved design development, material selection, component fabrication, and system assembly. Grasshopper (GH) was employed within the digital design environment for parametric modeling and scripting logic, and an LLM was experimented with to generate Python code for the creation of a random surface with user-defined parameters. Various techniques, including casting, Three-dimensional 3D printing, and laser cutting, were utilized to fabricate physical components. A modular assembly approach was adopted to facilitate installation and maintenance. A case study focusing on the application of this facade system to an existing library building at Polytechnic University of Milan is presented. The system is divided into sub-frames to optimize solar exposure while maintaining a visually appealing aesthetic. Preliminary structural analyses were conducted using Karamba3D to assess deflection behavior and axial loads within the cable net structure. Additionally, Finite Element (FE) simulations were performed in Abaqus to evaluate the mechanical response of the soft robotic actuators under pneumatic pressure. To validate the design, a physical prototype was created using a mold adapted for a 3D printer's limitations. Casting Silicone Rubber Sil 15 was used for its flexibility and durability. The 3D-printed mold components were assembled, filled with the silicone mixture, and cured. After demolding, nodes and cables were 3D-printed and connected to form the structure, demonstrating the feasibility of the design. This work demonstrates the potential of soft robotics and Artificial Intelligence (AI) for advancements in sustainable building design and construction. The project successfully integrates these technologies to create a dynamic facade system that optimizes energy generation and architectural expression. While limitations exist, this approach paves the way for future advancements in energy-efficient facade design. Continued research efforts will focus on cost reduction, improved system performance, and broader applicability.
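As an illustration of the kind of parametric scripting task delegated to the LLM (generating a random surface with user-defined parameters), the sketch below builds a smooth random point grid in plain Python/NumPy; the function name and parameters are hypothetical, and the project's actual Grasshopper/LLM-generated code may differ.

```python
import numpy as np

def random_surface(n_u=20, n_v=20, width=10.0, depth=10.0, amplitude=1.5, seed=0):
    """Generate a grid of 3D points for a smooth random surface with
    user-defined extent and undulation amplitude (illustrative only)."""
    rng = np.random.default_rng(seed)
    u = np.linspace(0.0, width, n_u)
    v = np.linspace(0.0, depth, n_v)
    uu, vv = np.meshgrid(u, v)
    # Superpose a few random low-frequency sine waves for a smooth relief
    zz = np.zeros_like(uu)
    for _ in range(4):
        fu, fv = rng.uniform(0.1, 0.4, size=2)
        phase = rng.uniform(0, 2 * np.pi)
        zz += amplitude / 4 * np.sin(fu * uu + fv * vv + phase)
    return np.stack([uu, vv, zz], axis=-1)   # shape (n_v, n_u, 3)

pts = random_surface(n_u=30, n_v=30, amplitude=2.0, seed=42)
print(pts.shape, round(float(pts[..., 2].max()), 2))
```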

Keywords: artificial intelligence, energy efficiency, kinetic photovoltaics, pneumatic control, soft robotics, sustainable building

Procedia PDF Downloads 31
47 The Ductile Fracture of Armor Steel Targets Subjected to Ballistic Impact and Perforation: Calibration of Four Damage Criteria

Authors: Imen Asma Mbarek, Alexis Rusinek, Etienne Petit, Guy Sutter, Gautier List

Abstract:

Over the past two decades, the automotive, aerospace and army industries have been paying increasing attention to finite element (FE) numerical simulations of the fracture process of their structures. Thanks to numerical simulations, it is nowadays possible to analyze, safely and at reduced cost, several problems involving costly and dangerous extreme loadings, such as blast or ballistic impact problems. The present paper is concerned with ballistic impact and perforation problems involving ductile fracture of thin armor steel targets. The target fracture process usually depends on various parameters: the projectile nose shape, the target thickness and its mechanical properties, as well as the impact conditions (friction, oblique/normal impact, etc.). In this work, the investigations are concerned with the normal impact of a conical-headed projectile on thin armor steel targets. The main aim is to establish a comparative study of four fracture criteria that are commonly used in fracture process simulations of structures subjected to extreme loadings such as ballistic impact and perforation. Usually, damage initiation results from a complex physical process that occurs at the micromechanical scale. On a macro scale, and according to the following fracture models, the variables on which the fracture depends are mainly the stress triaxiality η, the strain rate, the temperature T, and possibly the Lode angle parameter θ. The four failure criteria are: the critical strain to failure model, the Johnson-Cook model, the Wierzbicki model and the modified Hosford-Coulomb (MHC) model. SEM observations of the fracture surfaces of tension specimens and of armor steel targets impacted at low and high incident velocities show that the fracture of the specimens is ductile. The failure mode of the targets is petalling with crack propagation, and the fracture surfaces are covered with micro-cavities. The parameters of each ductile fracture model have been identified for three armor steels, and the applicability of each criterion was evaluated using experimental investigations coupled with numerical simulations. Two loading paths were investigated in this study, over a wide range of strain rates: quasi-static and intermediate-rate uniaxial tension, and quasi-static and dynamic double shear testing, covering various values of the stress triaxiality η and of the Lode angle parameter θ. All experiments were conducted on specimens of three different armor steels at quasi-static strain rates ranging from 10⁻⁴ to 10⁻¹ s⁻¹ and at three different temperatures ranging from 297 K to 500 K, allowing the influence of temperature on the fracture process to be drawn out. Intermediate-rate tension testing was coupled with dynamic double shear experiments conducted on the Hopkinson tube device, allowing the effect of high strain rate on damage evolution and crack propagation to be identified. The aforementioned fracture criteria are implemented in the FE code ABAQUS via a VUMAT subroutine and are coupled with suitable constitutive relations to obtain reliable simulation results for ballistic impact problems. The calibration of the four damage criteria, as well as a concise evaluation of the applicability of each criterion, is detailed in this work.
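Of the four criteria, the Johnson-Cook fracture model has a compact closed form, with the strain to fracture depending on stress triaxiality, strain rate and temperature; the sketch below evaluates it with placeholder D1-D5 constants, not the calibrated armor-steel parameters reported in this work.

```python
import numpy as np

def johnson_cook_failure_strain(triaxiality, strain_rate, temperature,
                                d1=0.05, d2=3.44, d3=-2.12, d4=0.002, d5=0.61,
                                ref_strain_rate=1.0, t_room=297.0, t_melt=1793.0):
    """Johnson-Cook equivalent strain to fracture as a function of stress
    triaxiality, strain rate and temperature. The D1-D5 constants here are
    illustrative placeholders, not the calibrated armor-steel values."""
    t_star = (temperature - t_room) / (t_melt - t_room)
    rate_term = 1.0 + d4 * np.log(max(strain_rate / ref_strain_rate, 1e-12))
    return (d1 + d2 * np.exp(d3 * triaxiality)) * rate_term * (1.0 + d5 * t_star)

eps_f = johnson_cook_failure_strain(triaxiality=0.33, strain_rate=1e3, temperature=400.0)
print(round(eps_f, 3))
```

In an FE implementation such as the VUMAT mentioned above, damage is typically accumulated per increment as D = Σ Δε_eq / ε_f, and the element is considered failed when D reaches 1.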

Keywords: armor steels, ballistic impact, damage criteria, ductile fracture, SEM

Procedia PDF Downloads 313
46 Deciphering Information Quality: Unraveling the Impact of Information Distortion in the UK Aerospace Supply Chains

Authors: Jing Jin

Abstract:

The incorporation of artificial intelligence (AI) and machine learning (ML) in aircraft manufacturing and aerospace supply chains leads to the generation of a substantial amount of data among various tiers of suppliers and OEMs. Identifying high-quality information is a challenge for decision-makers, since the application of AI/ML models requires access to 'high-quality' information to yield the desired outputs. However, the process of information sharing introduces complexities, including distortion through various communication channels and biases introduced by both human and AI entities. This phenomenon significantly influences the quality of information, impacting decision-makers engaged in configuring supply chain systems. Traditionally, distorted information is categorized as 'low-quality'; however, this study challenges this perception, positing that distorted information that contributes to stakeholder goals can be deemed high-quality within supply chains. The main aim of this study is to identify and evaluate the dimensions of information quality crucial to the UK aerospace supply chain. Guided by a central research question, "What information quality dimensions are considered when defining information quality in the UK aerospace supply chain?", the study delves into the intricate dynamics of information quality in the aerospace industry. Additionally, the research explores the nuanced impact of information distortion on stakeholders' decision-making processes, addressing the question, "How does the information distortion phenomenon influence stakeholders' decisions regarding information quality in the UK aerospace supply chain system?" The study employs a deductive methodology rooted in positivism, utilizing a cross-sectional approach and a mono-method quantitative design, namely a questionnaire survey. Data are systematically collected from diverse tiers of supply chain stakeholders, encompassing end-customers, OEMs, Tier 0.5, Tier 1, and Tier 2 suppliers. Employing robust statistical analysis methods, including mean values, mode values, standard deviation, one-way analysis of variance (ANOVA), and Pearson's correlation analysis, the study interprets and extracts meaningful insights from the gathered data. Initial analyses challenge conventional notions, revealing that information distortion positively influences the definition of information quality, disrupting the established perception of distorted information as inherently low-quality. Further exploration through correlation analysis unveils the varied perspectives of different stakeholder tiers on the impact of information distortion on specific information quality dimensions. For instance, Tier 2 suppliers demonstrate strong positive correlations between information distortion and dimensions like access security, accuracy, interpretability, and timeliness. Conversely, Tier 1 suppliers report a strong negative influence on the security of accessing information and a negligible impact on information timeliness. Tier 0.5 suppliers showcase very strong positive correlations with dimensions like conciseness and completeness, while OEMs exhibit limited interest in considering information distortion within the supply chain. Introducing social network analysis (SNA) provides a structural understanding of the relationships between information distortion and quality dimensions; the moderately high density of the 'information distortion-by-information quality' network underscores the interconnected nature of these factors.
In conclusion, this study offers a nuanced exploration of information quality dimensions in the UK aerospace supply chain, highlighting the significance of individual perspectives across different tiers. The positive influence of information distortion challenges prevailing assumptions, fostering a more nuanced understanding of information's role in the Industry 4.0 landscape.
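A minimal sketch of the tier-wise Pearson correlation analysis described above is given below; the column names, survey scoring and tier labels are assumptions for illustration rather than the actual instrument used in the study.

```python
# Sketch of a tier-wise correlation between perceived information distortion
# and each information-quality dimension (hypothetical column names).
import pandas as pd
from scipy.stats import pearsonr

quality_dims = ["access_security", "accuracy", "interpretability",
                "timeliness", "conciseness", "completeness"]

def tier_correlations(df: pd.DataFrame) -> pd.DataFrame:
    """Pearson r and p-value per tier and per quality dimension."""
    rows = []
    for tier, group in df.groupby("tier"):        # e.g. "OEM", "Tier 0.5", "Tier 1", "Tier 2"
        for dim in quality_dims:
            r, p = pearsonr(group["distortion_score"], group[dim])
            rows.append({"tier": tier, "dimension": dim, "r": r, "p_value": p})
    return pd.DataFrame(rows)

# usage: corr_table = tier_correlations(survey_responses)
```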

Keywords: information distortion, information quality, supply chain configuration, UK aerospace industry

Procedia PDF Downloads 64
45 A Multimodal Discourse Analysis of Gender Representation on Health and Fitness Magazine Cover Pages

Authors: Nashwa Elyamany

Abstract:

In visual cultures, namely that of the United States, media representations are such influential and pervasive reflections of societal norms and expectations that they impact the manner in which both genders view themselves. Health and fitness magazines fall within the realm of visual culture. Since the main goal of communication is to ensure proper dissemination of information so that the target audience grasps the intended messages, it becomes imperative that magazine publishers, editors, advertisers and image producers use the different modes of communication within their reach to convey messages to their readers and viewers. A rapidly growing flow of multimodality floods popular discourse, particularly health and fitness magazine cover pages. The use of well-crafted cover lines and visual images is imbued with agendas, consumerist ideologies and properties capable of effectively conveying implicit and explicit meaning to potential readers and viewers. In essence, the primary goal of this thesis is to interrogate the multi-semiotic operations and manifestations of hegemonic masculinity and femininity in male and female body culture, particularly on the cover pages of the twin American magazines Men's Health and Women's Health, using corpora spanning from 2011 to mid-2016. The researcher explores the semiotic resources that contribute to shaping and legitimizing a new form of postmodern, consumerist, gendered discourse that positions the reader-viewer ideologically. Methodologically, the researcher carries out analysis on the macro and micro levels. On the macro level, the researcher takes a critical stance to illuminate the ideological nature of the multimodal ensemble of the cover pages, and, on the micro level, seeks to put forward new theoretical and methodological routes through which the semiotic choices invested in the media texts can be more objectively scrutinized. On the macro level, a 'themes' analysis is initially conducted to isolate the overarching themes that dominate the fitness discourse on the cover pages under study. It is argued that variation in the frequencies of such themes is indicative, broadly speaking, of which facets of hegemonic masculinity and femininity are infused in the fitness discourse on the cover pages. On the micro level, this research work encompasses three sub-levels of analysis. The researcher follows an SF-MMDA approach, drawing on a trio of analytical frameworks: Halliday's SFG for the verbal analysis; Kress & van Leeuwen's VG for the visual analysis; and CMT in relation to Sperber & Wilson's RT for the pragma-cognitive analysis of multimodal metaphors and metonymies. The data are presented in detailed descriptions in conjunction with frequency tables, ANOVA with alpha = 0.05, and MANOVA in the multiple phases of analysis. Insights and findings from this multi-faceted, social-semiotic analysis are interpreted in light of Cultivation Theory, Self-objectification Theory and the literature to date. Implications for future research include the implementation of a multi-dimensional approach whereby linguistic and visual analytical models are deployed with special regard to cultural variation.

Keywords: gender, hegemony, magazine cover page, multimodal discourse analysis, multimodal metaphor, multimodal metonymy, systemic functional grammar, visual grammar

Procedia PDF Downloads 349
44 A Study of Seismic Design Approaches for Steel Sheet Piles: Hydrodynamic Pressures and Reduction Factors Using CFD and Dynamic Calculations

Authors: Helena Pera, Arcadi Sanmartin, Albert Falques, Rafael Rebolo, Xavier Ametller, Heiko Zillgen, Cecile Prum, Boris Even, Eric Kapornyai

Abstract:

Sheet pile systems can be an attractive solution for harbor and quay design. However, current design methods lead to conservative approaches due to the lack of a specific design basis; for instance, some design steps still rely on pseudo-static approaches, although the problem is dynamic. Under this concern, the study focuses particularly on the definition of hydrodynamic water pressure and on the stability analysis of the sheet pile system under seismic loads. During a seismic event, seawater produces hydrodynamic pressures on structures. Current design methods introduce hydrodynamic forces by means of the Westergaard formulation and Eurocode recommendations, applying a constant hydrodynamic pressure on the front sheet pile during the entire earthquake. As a result, the hydrodynamic load may represent 20% of the total forces produced on the sheet pile. Nonetheless, some studies question that approach. Hence, this study assesses the soil-structure-fluid interaction of sheet piles under seismic action in order to evaluate whether current design strategies overestimate hydrodynamic pressures. For that purpose, the study performs various simulations in Plaxis 2D, a well-known geotechnical software, and in CFD models, which treat the fluid dynamic behaviour. Since neither Plaxis nor CFD can solve a coupled soil-fluid problem, the investigation imposes the sheet pile displacements from Plaxis as input data for the CFD model. This provides hydrodynamic pressures under seismic action that fit the theoretical Westergaard pressures if these are calculated using the acceleration at each moment of the earthquake. Thus, hydrodynamic pressures fluctuate during the seismic action instead of remaining constant, as design recommendations propose. Additionally, these findings show that, due to its instantaneous nature, the hydrodynamic pressure contributes only about 5% of the total load applied on the sheet pile. These results are in line with other studies that use added-mass methods for hydrodynamic pressures. Another important feature in sheet pile design is the assessment of the overall geotechnical stability. This relies on pseudo-static analysis, since the dynamic analysis cannot provide a safety calculation, and the seismic action therefore has to be estimated. One of its relevant factors is the selection of the seismic reduction factor; many studies discuss both its importance and its uncertainties. Moreover, current European standards do not propose a clear statement on this and recommend using a reduction factor equal to 1, which leads to conservative requirements compared with more advanced methods. Under this situation, the study calibrates the seismic reduction factor by fitting results from pseudo-static to dynamic analyses. The investigation concludes that pseudo-static analyses could reduce the seismic action by 40-50%. These results are in line with studies from Japanese and European working groups, and it seems suitable to account for the flexibility of the sheet pile-soil system. Nevertheless, the calibrated reduction factor is subject to the particular conditions of each design case, and further research would contribute to specifying recommendations for selecting reduction factor values in the early stages of design. In conclusion, sheet pile design still has room to improve its methodologies and approaches; with advanced methods such as those presented in this study, designs could offer better seismic solutions.
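For orientation, a minimal sketch of the constant Westergaard-type hydrodynamic pressure used in current design practice is given below; the classical 7/8 coefficient and its integral over the water depth are standard, while the water depth, density and acceleration values are placeholders rather than project data.

```python
import numpy as np

def westergaard_pressure(z, H, a_h, rho_w=1025.0):
    """Approximate Westergaard hydrodynamic pressure [Pa] at depth z below the
    free surface, for water depth H [m] and horizontal acceleration a_h [m/s^2]."""
    return 7.0 / 8.0 * rho_w * a_h * np.sqrt(H * z)

def westergaard_resultant(H, a_h, rho_w=1025.0):
    """Total hydrodynamic thrust per unit wall length [N/m], i.e. the integral
    of the pressure over the water depth: 7/12 * rho_w * a_h * H^2."""
    return 7.0 / 12.0 * rho_w * a_h * H**2

# Example with placeholder values: 10 m water depth, 0.15 g horizontal acceleration.
H, a_h = 10.0, 0.15 * 9.81
z = np.linspace(0.0, H, 50)
p = westergaard_pressure(z, H, a_h)
print(f"Peak pressure at the bed: {p[-1]/1e3:.1f} kPa, "
      f"resultant: {westergaard_resultant(H, a_h)/1e3:.1f} kN/m")
```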

Keywords: computational fluid dynamics, hydrodynamic pressures, pseudo-static analysis, quays, seismic design, steel sheet pile

Procedia PDF Downloads 142
43 Regulatory and Economic Challenges of AI Integration in Cyber Insurance

Authors: Shreyas Kumar, Mili Shangari

Abstract:

Integrating artificial intelligence (AI) in the cyber insurance sector represents a significant advancement, offering the potential to revolutionize risk assessment, fraud detection, and claims processing. However, this integration introduces a range of regulatory and economic challenges that must be addressed to ensure responsible and effective deployment of AI technologies. This paper examines the multifaceted regulatory landscape governing AI in cyber insurance and explores the economic implications of compliance, innovation, and market dynamics. AI's capabilities in processing vast amounts of data and identifying patterns make it an invaluable tool for insurers in managing cyber risks. Yet, the application of AI in this domain is subject to stringent regulatory scrutiny aimed at safeguarding data privacy, ensuring algorithmic transparency, and preventing biases. Regulatory bodies, such as the European Union with its General Data Protection Regulation (GDPR), mandate strict compliance requirements that can significantly impact the deployment of AI systems. These regulations necessitate robust data protection measures, ethical AI practices, and clear accountability frameworks, all of which entail substantial compliance costs for insurers. The economic implications of these regulatory requirements are profound. Insurers must invest heavily in upgrading their IT infrastructure, implementing robust data governance frameworks, and training personnel to handle AI systems ethically and effectively. These investments, while essential for regulatory compliance, can strain financial resources, particularly for smaller insurers, potentially leading to market consolidation. Furthermore, the cost of regulatory compliance can translate into higher premiums for policyholders, affecting the overall affordability and accessibility of cyber insurance. Despite these challenges, the potential economic benefits of AI integration in cyber insurance are significant. AI-enhanced risk assessment models can provide more accurate pricing, reduce the incidence of fraudulent claims, and expedite claims processing, leading to overall cost savings and increased efficiency. These efficiencies can improve the competitiveness of insurers and drive innovation in product offerings. However, balancing these benefits with regulatory compliance is crucial to avoid legal penalties and reputational damage. The paper also explores the potential risks associated with AI integration, such as algorithmic biases that could lead to unfair discrimination in policy underwriting and claims adjudication. Regulatory frameworks need to evolve to address these issues, promoting fairness and transparency in AI applications. Policymakers play a critical role in creating a balanced regulatory environment that fosters innovation while protecting consumer rights and ensuring market stability. In conclusion, the integration of AI in cyber insurance presents both regulatory and economic challenges that require a coordinated approach involving regulators, insurers, and other stakeholders. By navigating these challenges effectively, the industry can harness the transformative potential of AI, driving advancements in risk management and enhancing the resilience of the cyber insurance market. This paper provides insights and recommendations for policymakers and industry leaders to achieve a balanced and sustainable integration of AI technologies in cyber insurance.

Keywords: artificial intelligence (AI), cyber insurance, regulatory compliance, economic impact, risk assessment, fraud detection, cyber liability insurance, risk management, ransomware

Procedia PDF Downloads 33
42 Black-Box-Optimization Approach for High Precision Multi-Axes Forward-Feed Design

Authors: Sebastian Kehne, Alexander Epple, Werner Herfs

Abstract:

A new method for the optimal selection of components for multi-axes forward-feed drive systems is proposed, in which the choice of motors, gear boxes and ball screw drives is optimized. Essential here is the synchronization of the electrical and mechanical frequency behavior of all axes, because even advanced controls (like H∞-controls) can only control a small part of the mechanical modes, namely only those of observable and controllable states whose value can be derived from the positions of external linear length measurement systems and/or rotary encoders on the motor or gear box shafts. Further problems are the unknown processing forces, like cutting forces in machine tools during normal operation, which make the estimation and control via an observer even more difficult. To start with, the open-source Modelica Feed Drive Library, which was developed at the Laboratory for Machine Tools and Production Engineering (WZL), is extended from a one-axis design to a multi-axes design. It is capable of simulating the mechanical, electrical and thermal behavior of permanent magnet synchronous machines with inverters, different gear boxes and ball screw drives in a mechanical system. To keep the calculation time down, analytical equations are used for the field- and torque-producing equivalent circuit, heat dissipation, and mechanical torque at the shaft. As a first step, a small machine tool with a working area of 635 x 315 x 420 mm is taken apart, and the mechanical transfer behavior is measured with an impulse hammer and acceleration sensors. From the frequency transfer functions, a mechanical finite element model is built up, which is reduced by substructure coupling to a mass-damper system that models the most important modes of the axes. The reduced model is implemented with the Modelica Feed Drive Library and validated by further relative measurements between machine table and spindle holder using a piezo actuator and acceleration sensors. In a next step, the choice of possible components from motor catalogues is limited by derived analytical formulas based on well-known metrics for the effective power and torque of the components. The simulation in Modelica is run with different permanent magnet synchronous motors, gear boxes and ball screw drives from different suppliers. To speed up the optimization, different black-box optimization methods (surrogate-based, gradient-based and evolutionary) are tested on the case. The chosen objective is to minimize the integral of the deviations when a step is applied to the position controls of the different axes; small values indicate highly dynamic axes. In each iteration (the evaluation of one set of components), the control variables are adjusted automatically to keep the overshoot below 1%. It is found that the ordering of the components in the optimization problem has a strong impact on the speed of the black-box optimization. An approach for efficient black-box optimization of multi-axes designs is presented in the last part. The authors would like to thank the German Research Foundation DFG for financial support of the project “Optimierung des mechatronischen Entwurfs von mehrachsigen Antriebssystemen (HE 5386/14-1 | 6954/4-1)” (English: Optimization of the Mechatronic Design of Multi-Axes Drive Systems).
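To make the component-selection loop above concrete, the sketch below replaces the linked Modelica simulation and the automatic controller tuning with a deliberately simple toy cost model; the component names and bandwidth figures are invented, and a surrogate-based or evolutionary optimizer would take the place of the exhaustive search once the catalogues become large.

```python
# Toy illustration of the discrete component-selection loop (not the authors' code).
import itertools

# placeholder catalogues: name -> indicative mechanical bandwidth [Hz]
motors      = {"M1": 60.0, "M2": 85.0, "M3": 110.0}
gearboxes   = {"G1": 70.0, "G2": 95.0}
ball_screws = {"B1": 55.0, "B2": 80.0, "B3": 120.0}

def simulate_step_iae(f_motor, f_gear, f_screw):
    """Stand-in for the feed-drive simulation: the slowest component limits the
    achievable closed-loop bandwidth, and the integral of the absolute position
    deviation (IAE) for a reference step is taken as inversely proportional to it."""
    return 1.0 / min(f_motor, f_gear, f_screw)

best = min(itertools.product(motors, gearboxes, ball_screws),
           key=lambda c: simulate_step_iae(motors[c[0]], gearboxes[c[1]], ball_screws[c[2]]))
print("best component set:", best)
```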

Keywords: ball screw drive design, discrete optimization, forward feed drives, gear box design, linear drives, machine tools, motor design, multi-axes design

Procedia PDF Downloads 286
41 Italian Speech Vowels Landmark Detection through the Legacy Tool 'xkl' with Integration of Combined CNNs and RNNs

Authors: Kaleem Kashif, Tayyaba Anam, Yizhi Wu

Abstract:

This paper introduces a methodology for advancing Italian speech vowel landmark detection within the distinctive-feature-based speech recognition domain. The study enhances the legacy tool 'xkl' by integrating combined convolutional neural networks (CNNs) and recurrent neural networks (RNNs) and by incorporating re-assigned spectrogram methodologies, enabling meticulous acoustic analysis. The proposed combined CNN-RNN model demonstrates high precision and robustness in landmark detection. The re-assigned spectrogram fusion within the 'xkl' software particularly enhances the precision of vowel formant estimation, which in turn improves the accuracy of landmark detection and yields a substantial performance gain compared to conventional methods. The proposed model thus emerges as a state-of-the-art solution in the domain of distinctive-feature-based speech recognition systems. On the deep learning side, the combined CNN-RNN architecture is endowed with specialized temporal embeddings, self-attention mechanisms, and positional embeddings. This design allows the model to capture intricate dependencies within Italian speech vowels, rendering it highly adaptable in the distinctive feature domain. Furthermore, the temporal modeling approach employs Bayesian temporal encoding, refining the measurement of inter-landmark intervals. Comparative analysis against state-of-the-art models reveals a substantial improvement in accuracy, highlighting the robustness and efficacy of the proposed methodology. Upon rigorous testing on a database (LaMIT) of speech recorded in a silent room by four native Italian speakers, the landmark detector demonstrates strong performance, achieving a 95% true detection rate and a 10% false detection rate. A majority of missed landmarks were observed in proximity to reduced vowels. These promising results underscore the robust identifiability of landmarks within the speech waveform, establishing the feasibility of employing a landmark detector as a front end in a speech recognition system. The integration of re-assigned spectrogram fusion, CNNs, RNNs, and Bayesian temporal encoding marks a significant advancement in Italian speech vowel landmark detection, combining accuracy and adaptability at the intersection of deep learning and distinctive-feature-based speech recognition. This work contributes to the broader scientific community by presenting a methodologically rigorous framework for enhancing landmark detection accuracy in Italian speech vowels and establishes a foundation for future advancements in speech signal processing, emphasizing the potential of the proposed model in practical applications across various domains requiring robust speech recognition systems.
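A minimal sketch of a combined CNN + RNN frame-level landmark detector of the kind described above is shown below; this is not the authors' implementation, and the input size, channel counts, number of landmark classes and the use of a GRU are assumptions for illustration. The CNN extracts local spectro-temporal features from the (re-assigned) spectrogram, while the bidirectional recurrent layer supplies the longer-range temporal context used to score each frame.

```python
import torch
import torch.nn as nn

class CnnRnnLandmarkDetector(nn.Module):
    def __init__(self, n_mels=128, hidden=128, n_classes=3):
        super().__init__()
        self.cnn = nn.Sequential(                     # local spectro-temporal features
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 1)),                     # pool only along frequency
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 1)),
        )
        self.rnn = nn.GRU(32 * (n_mels // 4), hidden, batch_first=True,
                          bidirectional=True)         # long-range temporal context
        self.head = nn.Linear(2 * hidden, n_classes)  # per-frame landmark scores

    def forward(self, spec):                          # spec: (batch, 1, n_mels, frames)
        f = self.cnn(spec)                            # (batch, 32, n_mels/4, frames)
        b, c, m, t = f.shape
        f = f.permute(0, 3, 1, 2).reshape(b, t, c * m)
        out, _ = self.rnn(f)
        return self.head(out)                         # (batch, frames, n_classes)

# usage: logits = CnnRnnLandmarkDetector()(torch.randn(2, 1, 128, 200))
```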

Keywords: landmark detection, acoustic analysis, convolutional neural network, recurrent neural network

Procedia PDF Downloads 63
40 Optimizing Productivity and Quality through the Establishment of a Learning Management System for an Agency-Based Graduate School

Authors: Maria Corazon Tapang-Lopez, Alyn Joy Dela Cruz Baltazar, Bobby Jones Villanueva Domdom

Abstract:

The requirement for an organization implementing a quality management system to sustain its compliance and its commitment to continuous improvement is demanding, and its offices and units are expected to show high and consistent compliance with the established processes and procedures. The Development Academy of the Philippines (DAP) has been operating under a project management system for which it holds a quality management certification. To further realize its mandate as a think-tank and capacity builder of the government, DAP expanded its operations and started to grant graduate degrees through its Graduate School of Public and Development Management (GSPDM). As the academic arm of the Academy, GSPDM offers graduate degree programs on public management and productivity & quality aligned with the institutional thrusts. For a time, the documented procedures and processes of project management seemed to fit the Graduate School. However, there has been significant growth in the operations of the GSPDM in terms of the graduate programs offered, which directly increases the number of students. There is an apparent need to align the project management system with a more educational system; otherwise, it will no longer be responsive to the developments that are taking place. The School strongly advocates and encourages its students to pursue internal and external improvement to cope with the challenges of providing quality service to their own clients and to the country; if innovation does not take root within GSPDM itself, how will it serve the purpose of "walking the talk"? This research was conducted to assess the flow of the existing internal operations and processes of DAP's project management and of GSPDM's school management, as a basis for developing a system that harmonizes the two into one Learning Management System (LMS). The study documented the existing processes of GSPDM following the project management phases of conceptualization & development, negotiation & contracting, mobilization, implementation, and closure in flow charts of the key activities. The primary sources of information were the different groups involved in the delivery of the graduate programs: the executive, the learning management team, and the administrative support offices. The LMS shall capture the unique and critical processes of the GSPDM as a degree-granting unit of the Academy; it is the harmonized project management and school management system that shall serve as the standard system and procedure for all programs within the GSPDM. The unique processes cover the three main areas of school management (student, curriculum, and faculty), and the required processes of these areas, such as enrolment, course syllabus development, and faculty evaluation, were placed within the appropriate phases of the project management system. Further, the research identifies critical reports and generates manageable documents and records to ensure accurate, consistent and reliable information. The researchers carried out an in-depth review of the DAP-GSPDM mandate, analyzed the various documents, and conducted a series of focus group discussions, together with a comprehensive review of the existing flow chart system and of various models of school management systems. The final output of the research is a work instructions manual that will be presented to the Academy's Quality Management Council and will eventually form an additional scope for ISO certification. The manual shall include documented forms, iterative flow charts and a program Gantt chart, alongside which automated systems will be developed in parallel.

Keywords: productivity, quality, learning management system, agency-based graduate school

Procedia PDF Downloads 319
39 Digital Geological Map of the Loki Crystalline Massif (The Caucasus) and Its Multi-Informative Explanatory Note

Authors: Irakli Gamkrelidze, David Shengelia, Giorgi Chichinadze, Tamara Tsutsunava, Giorgi Beridze, Tamara Tsamalashvili, Ketevan Tedliashvili, Irakli Javakhishvili

Abstract:

The Caucasus is situated between the Eurasian and Africa-Arabian plates and represents a component of the Mediterranean (Alpine-Himalayan) collision belt. The Loki crystalline massif crops out within one of the terranes of the Caucasus, the Baiburt-Sevanian terrane. By the end of 2018, a digital geological map (1:50 000) of the Loki massif had been compiled. The presented map is of great importance for the region, since until now there has been no large-scale geological map reflecting the present standard of geological study of the massif; the existing State Geological Map of the Loki massif is very outdated. The new map, drawn using GIS (Geographic Information System) technology, is loaded with multi-informative details that include: specified contours of geological units and separate tectonic scales, key mineral assemblages and facies of metamorphism, temperature conditions of metamorphism, ages of metamorphic events and of the massif rocks, and genetic-geodynamic types of magmatic rocks. The explanatory note attached to the map covers a broad spectrum of scientific information: it contains a characterization of the geological setting, the composition, and petrogenetic and geodynamic models of the massif formation. To create the geological map of the Loki crystalline massif, appropriate methodologies were applied: sampling of rocks, GIS-based mapping of geological units, microscopic description of the material, compositional analysis of rocks, microprobe analysis of minerals, and a new interpretation of the obtained data. To prepare the digital version of the map, the corresponding activities were carried out, including the creation of a common database, the elaboration of the legend, and the final visualization of the map. The results of the study presented in the explanatory note are given below. The autochthonous gneissose quartz diorites of normal alkalinity and the sub-alkaline gabbro-diorites included in them belong to different phases of magmatism; they represent "igneous" granites corresponding to mixed mantle-crustal type granites. Four tectonic plates of the allochthonous metamorphic complex – Lower Gorastskali, Sapharlo–Lok-Jandari, Moshevani, and Lower Gorastskali – differ from each other in structure and degree of metamorphism. The initial rocks of these plates formed under different geodynamic conditions; during the Early Bretonian orogeny, overthrusting due to tectonic compression assembled them into a thick tectonic sheet. The Lower Gorastskali overthrust sheet is a fragment of an ophiolitic association corresponding to the Paleotethys oceanic crust; the protolith of the ophiolitic complex basites corresponds to the tholeiitic series of basalts. The Sapharlo–Lok-Jandari overthrust sheet consists of metapelites metamorphosed under greenschist-facies conditions of regional metamorphism. The regional metamorphism of the crystalline schists and quartzites of the Moshevani overthrust sheet corresponds to a range from greenschist to hornfels facies. The "mélange" is built of rock fragments and blocks of the above-mentioned overthrust sheets. Sub-alkaline and normal-alkaline post-metamorphic granites of the Loki crystalline massif belong to "igneous" and, more rarely, to "sialic" and "anorogenic" types of granites.

Keywords: digital geological map, 1:50 000 scale, crystalline massif, the Caucasus

Procedia PDF Downloads 172
38 WASH Governance Opportunity for Inspiring Innovation and a Circular Economy in Karnali Province of Nepal

Authors: Nirajan Shrestha

Abstract:

Karnali is one of the most vulnerable provinces in Nepal, facing challenges from climate change, poverty, and natural calamities across its different regions. In recent years, the province has been severely impacted by climate change stresses such as temperature rise around the glacier lakes of the mountainous region and spring-source water shortages, particularly in hilly areas where settlements are located and water sources have depleted from their original levels. As a result, Karnali could face a future without enough water for all. The root causes affecting sustainable safe water supply have long been neglected in rural areas of Nepal, and communities are unfairly burdened with the challenge of keeping water facilities functioning in areas affected by frequent natural disasters, where there is a substantial, well-documented funding gap between the revenues from user payments and the full cost of sustained services. The key importance of a permanent system to support communities in service delivery has so far been underrated. The complexity of water service sustainability as a topic should be simplified to one clear indicator: the functionality rate, which can be expressed as uptime, or the percentage of time that the service is delivered over the total time. For example, a functionality rate of 80% means that the water service is operational 80% of the time, while 20% of the time the system is not functioning; this represents 0.2 multiplied by 365, which equals 73 days every year, or roughly two and a half months without water. This percentage should be widely understood and used in Karnali. All local governments should report their targets and performance in improving it, and there should be a broader discussion about what target is acceptable and what can realistically be achieved. In response to these challenges, the Sustainable WASH for All (SUSWA) project has introduced innovative models and policy formulation strategies in its various working local governments. SUSWA's approach, which delegates rural water supply and sanitation responsibilities to local governments, has been instrumental in addressing these issues. To keep pace with the growing demand, the province has adopted a service support center model, linking local governments with federal authorities to ensure effective service delivery to the communities. By enhancing WASH governance through local government engagement, capacity building and inclusive WASH policy frameworks, there is potential to address WASH gaps while fostering a circular economy. This strategy emphasizes resource recovery, waste minimization and the creation of local employment opportunities. The research highlights key governance mechanisms, innovative practices and policy interventions that can be scaled up across other regions. It also provides recommendations on how to leverage Karnali's unique socio-economic and environmental context and nature-based solutions to inspire innovation and drive sustainable WASH solutions. Key findings suggest that with strong ownership and leadership by local governments, community engagement and appropriate technology, Karnali Province can become a model for integrating WASH governance with the circular economy concept, providing broader lessons for other regions in Nepal.
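The functionality-rate arithmetic used above can be captured in a one-line helper; a minimal sketch follows (the function name and the 365-day year are illustrative choices, not part of the project's reporting format).

```python
def downtime_days(functionality_rate: float, days_per_year: int = 365) -> float:
    """Days per year without water service for a given functionality rate (0-1)."""
    return (1.0 - functionality_rate) * days_per_year

print(downtime_days(0.80))  # 73.0 days, roughly two and a half months without water
```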

Keywords: vulnerable provinces, natural calamities, climate change stress, spring source depletion, resource recovery, governance mechanisms, appropriate technology, community engagement, innovation

Procedia PDF Downloads 14
37 A Case Study Report on Acoustic Impact Assessment and Mitigation of the Hyprob Research Plant

Authors: D. Bianco, A. Sollazzo, M. Barbarino, G. Elia, A. Smoraldi, N. Favaloro

Abstract:

The activities described in the present paper have been conducted in the framework of the HYPROB-New Program, carried out by the Italian Aerospace Research Centre (CIRA) and promoted and funded by the Italian Ministry of University and Research (MIUR) in order to improve the national background on rocket engine systems for space applications. The Program has the strategic objective of improving national system and technology capabilities in the field of liquid rocket engines (LRE) for future space propulsion system applications, with specific regard to LOX/LCH4 technology. The main purpose of the HYPROB program is to design and build a Propulsion Test Facility (HIMP) allowing test activities on liquid thrusters. The development of skills in liquid rocket propulsion can only come through extensive test campaigns; following its mission, CIRA has planned the development of new testing facilities and infrastructures for space propulsion, characterized by adequate size and instrumentation. The IMP test cell is devoted to testing articles representative of small combustion chambers, fed with oxygen and methane, both in liquid and gaseous phase. This article describes the activities that have been carried out for the evaluation of the acoustic impact and its consequent mitigation. The impact of the simulated acoustic disturbance has been evaluated, first, using an approximate method based on experimental data by Baumann and Coney, included in "Noise and Vibration Control Engineering" edited by Vér and Beranek. This methodology, used to evaluate the free-field radiation of a jet in an ideal acoustic medium, analyzes the jet noise in detail and assumes all sources act at the same time. It considers the jet mixing noise, caused by the turbulent mixing of the jet gas and the ambient medium, as the principal radiation source. Empirical models allowing a direct calculation of the Sound Pressure Level are commonly used for rocket noise simulation, and the model named after K. Eldred is probably one of the most exploited in this area. In this paper, an improvement of the Eldred standard model has been used for a detailed investigation of the acoustic impact of the HYPROB facility. This new formulation contains an explicit expression for the acoustic pressure of each equivalent noise source, in terms of amplitude and phase, allowing the investigation of source correlation effects and their propagation through wave equations. In order to enhance the evaluation of the facility's acoustic impact, including an assessment of the mitigation strategies to be set in place, a more advanced simulation campaign has been conducted using both an in-house code for noise propagation and scattering and a commercial code for industrial noise environmental impact, CadnaA. The noise prediction obtained with the revised Eldred-based model has then been used to formulate an empirical/BEM (Boundary Element Method) hybrid approach allowing the evaluation of the barrier mitigation effect at the design stage. This approach has been compared with the analogous empirical/ray-acoustics approach, implemented within CadnaA using a customized definition of sources and directivity factors. The resulting impact evaluation study is reported here, along with the design-level barrier optimization for noise mitigation.
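For context, the usual starting point of Eldred-type empirical rocket-noise models is an estimate of the overall radiated acoustic power as a small fraction of the jet's mechanical power; a minimal sketch is shown below. The acoustic efficiency, thrust and exhaust velocity are placeholder values rather than HYPROB data, and the full methodology further distributes this power over frequency bands and directivity angles along the plume.

```python
import math

def overall_sound_power_level(thrust_N, exhaust_velocity_mps, eta=0.005):
    """Simplified first step of an Eldred-type rocket-noise estimate: a fraction
    eta (acoustic efficiency, assumed ~0.5 % here) of the jet mechanical power
    0.5*F*Ue is radiated as sound; returned as a power level re 1e-12 W."""
    w_mech = 0.5 * thrust_N * exhaust_velocity_mps
    w_acoustic = eta * w_mech
    return 10.0 * math.log10(w_acoustic / 1e-12)

# Placeholder values, not HYPROB data: 30 kN thrust, 2500 m/s exhaust velocity.
print(f"Lw ≈ {overall_sound_power_level(30e3, 2500.0):.1f} dB re 1 pW")
```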

Keywords: acoustic impact, industrial noise, mitigation, rocket noise

Procedia PDF Downloads 146
36 Targeting Matrix Metalloprotease-9 to Reduce Coronary Artery Manifestations of Kawasaki’s Disease

Authors: Mohammadjavad Sotoudeheian, Navid Farahmandian

Abstract:

Kawasaki disease (KD), an acute vasculitis, is the primary cause of acquired pediatric heart disease. In children with prolonged fever, rash, and inflammation of the mucosa, KD must be considered as a clinical diagnosis. There is a persuasive suggestion of immune-mediated damage in the pathophysiologic cascade of KD: for example, the invasion of cytotoxic T-cells supports a viral etiology, and the inflammasome of the innate immune system is a critical component in vasculitis formation in KD. Animal models of KD point to cytokine profiles, such as increased IL-1 and GM-CSF, that cause vascular damage. Elevated expression of CRP and IFN-γ and upregulation of IL-6 and IL-10 production are also described in previous studies. Untreated KD is a critical risk factor for coronary artery disease and myocardial infarction. Vascular damage may encompass amplified T-cell activity. SMAD3 is an essential molecule in down-regulating T-cells and increasing the expression of FoxP3, and it has a critical effect on the differentiation of regulatory T-cells. The imbalance between regulatory T-cells and pro-inflammatory Th17 cells has been studied in acute coronary syndrome during KD. Although lymphocytes and IgA plasma cells are seen at the lesion locations in the damaged coronary artery, the major immune cells in the coronary lesions are monocytes/macrophages and neutrophils. These cells secrete TNF-α and activate matrix metalloprotease (MMP)-9, reducing the integrity of vessels and predisposing patients to aneurysm formation. MMPs can break down components of the extracellular matrix and assist immune cell movement. IVIG, as an effective form of treatment, clarified the role of the immune system, which may target pathogenic antigens and regulate cytokine production. Several reports have revealed that, in the coronary arteries, high expression of MMP-9 in monocytes/macrophages results in pathologic cascades. Curcumin is a potent antioxidant and anti-inflammatory molecule: it decreases the production of reactive oxygen and nitrogen species and inhibits transcription factors like AP-1 and NF-κB. Curcumin also has inhibitory effects on MMPs, especially MMP-9. The upregulation of MMP-9 is an important cellular response; curcumin treatment causes the reverse effect and down-regulates MMP-9 gene expression, which may underlie its anti-inflammatory effect. Curcumin inhibits MMP-9 expression via PKC- and AMPK-dependent pathways in human monocytic cells. Elevated expression and activity of MMP-9 are correlated with advanced vascular lesions. AMPK controls lipid metabolism and oxidation, and protein synthesis, and is also necessary for MMP-9 activity and THP-1 cell adhesion to endothelial cells. Curcumin was shown to inhibit the activation of AMPKα, and Compound C (an AMPK inhibitor) reduces the MMP-9 expression level. Therefore, by inactivating AMPK and PKC, curcumin decreases the MMP-9 level, which results in inhibition of monocyte/macrophage differentiation. Compound C also suppresses the phosphorylation of the three major classes of MAP kinase signaling, suggesting that curcumin may suppress the MMP-9 level by inactivation of MAPK pathways. MAPK cascades are activated to induce the expression of MMP-9, and curcumin inhibits MAPK phosphorylation, which contributes to the down-regulation of MMP-9. This study demonstrates that the inhibitory properties of curcumin over MMP-9 suggest a therapeutic strategy to reduce the risk of coronary artery involvement during KD.

Keywords: MMP-9, coronary artery aneurysm, Kawasaki’s disease, curcumin, AMPK, immune system, NF-κB, MAPK

Procedia PDF Downloads 304
35 Quasi-Photon Monte Carlo on Radiative Heat Transfer: An Importance Sampling and Learning Approach

Authors: Utkarsh A. Mishra, Ankit Bansal

Abstract:

At high temperature, radiative heat transfer is the dominant mode of heat transfer. It is governed by various phenomena such as photon emission, absorption, and scattering. The solution of the governing integro-differential equation of radiative transfer is a complex process, even more so when the effects of the participating medium and wavelength-dependent properties are taken into consideration. Although a generic formulation of such radiative transport problems can be modeled for a wide variety of problems with non-gray, non-diffuse surfaces, there is always a trade-off between simplicity and accuracy. Recently, solutions of complicated mathematical problems with statistical methods based on randomization of naturally occurring phenomena have gained significant importance. Photon bundles with discrete energy can be replicated with random numbers describing the emission, absorption, and scattering processes. Photon Monte Carlo (PMC) is a simple, yet powerful technique to solve radiative transfer problems in complicated geometries with an arbitrary participating medium. The method, on the one hand, increases the accuracy of estimation and, on the other hand, increases the computational cost. The participating media, generally gases such as CO₂, CO, and H₂O, present complex emission and absorption spectra. Modeling the emission/absorption accurately with random numbers requires weighted sampling, as different sections of the spectrum carry different importance. Importance sampling (IS) was implemented to sample random photons of arbitrary wavelength, and the sampled data provided unbiased training of MC estimators for better results. A better replacement for uniform random numbers is deterministic, quasi-random sequences; Halton, Sobol, and Faure low-discrepancy sequences are used in this study. They possess better space-filling performance than the uniform random number generator and give rise to low-variance, stable Quasi-Monte Carlo (QMC) estimators with faster convergence. An optimal supervised learning scheme was further considered to reduce the computational cost of the PMC simulation. A one-dimensional plane-parallel slab problem with participating media was formulated. The history of some randomly sampled photon bundles is recorded to train an Artificial Neural Network (ANN) back-propagation model. The flux was calculated using the standard quasi-PMC and was considered to be the training target. Results obtained with the proposed model for the one-dimensional problem are compared with the exact analytical solution and with the PMC model using the Line-by-Line (LBL) spectral model. The approximate variance obtained was around 3.14%. Results were analyzed with respect to time and the total flux in both cases. A significant reduction in variance, as well as a faster rate of convergence, was observed for the QMC method over the standard PMC method. However, the results obtained with the ANN method showed greater variance (around 25-28%) compared to the other cases. There is great scope for machine learning models to further reduce the computational cost once trained successfully. Multiple ways of selecting the input data, as well as various architectures, will be tried so that the radiative environment can be fully represented to the ANN model; better results can be expected in this largely unexplored domain.
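The variance advantage of low-discrepancy sequences over pseudo-random numbers can be illustrated with a minimal, self-contained sketch; this is not the authors' code, and the integrand is a toy stand-in for the sampled spectral quantity.

```python
# Estimating a simple 1-D integral with pseudo-random vs. scrambled Sobol points.
import numpy as np
from scipy.stats import qmc

f = lambda x: np.exp(-x)                # toy integrand on [0, 1]
exact = 1.0 - np.exp(-1.0)              # exact value of the integral

n = 1024
rng = np.random.default_rng(0)
x_mc  = rng.random(n)                                                     # standard MC
x_qmc = qmc.Sobol(d=1, scramble=True, seed=0).random_base2(m=10).ravel()  # 2**10 Sobol points

print(f"MC  error: {abs(f(x_mc).mean()  - exact):.2e}")
print(f"QMC error: {abs(f(x_qmc).mean() - exact):.2e}")
```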

Keywords: radiative heat transfer, Monte Carlo Method, pseudo-random numbers, low discrepancy sequences, artificial neural networks

Procedia PDF Downloads 223
34 Unknown Groundwater Pollution Source Characterization in Contaminated Mine Sites Using Optimal Monitoring Network Design

Authors: H. K. Esfahani, B. Datta

Abstract:

Groundwater is one of the most important natural resources in many parts of the world; however, it is widely polluted due to human activities. Currently, effective and reliable groundwater management and remediation strategies are obtained using characterization of groundwater pollution sources, where the measured data at monitoring locations are utilized to estimate the unknown pollutant source location and magnitude. However, accurately identifying the characteristics of contaminant sources is a challenging task due to uncertainties in predicting the source flux injection, in the hydro-geological and geochemical parameters, and in the concentration field measurements. Reactive transport of chemical species in contaminated groundwater systems, especially with multiple species, is a complex and highly non-linear geochemical process. Although sufficient concentration measurement data are essential to accurately identify source characteristics, available data are often sparse and limited in quantity. Therefore, this inverse problem of characterizing unknown groundwater pollution sources is often considered ill-posed, complex and non-unique. Different methods have been utilized to identify pollution sources; among these, the linked simulation-optimization approach is an effective method to obtain acceptable results under uncertainties in complex real-life scenarios. With this approach, the numerical flow and contaminant transport simulation models are externally linked to an optimization algorithm, with the objective of minimizing the difference between the measured concentrations and the estimated pollutant concentrations at the observation locations. Concentration measurement data are very important for accurately estimating pollution source properties; therefore, optimal design of the monitoring network is essential to gather adequate measured data at the desired times and locations. Due to budget and physical restrictions, an efficient and effective approach for groundwater pollutant source characterization is to design an optimal monitoring network, especially when only inadequate and arbitrary concentration measurement data are initially available. In this approach, preliminary concentration observation data are utilized for an initial identification of the source location, magnitude and duration of source activity, and these results are then used for monitoring network design. Further, feedback information from the monitoring network is used as input for sequential monitoring network design, to improve the identification of the unknown source characteristics. To design an effective monitoring network of observation wells, optimization and interpolation techniques are used. A simulation model should be utilized to accurately describe the aquifer properties in terms of hydro-geochemical parameters and boundary conditions; however, the simulation of the transport processes becomes complex when the pollutants are chemically reactive. A three-dimensional transient flow and reactive contaminant transport process is considered. The proposed methodology uses HYDROGEOCHEM 5.0 (HGCH) as the simulation model for flow and transport processes with multiple chemically reactive species, and Adaptive Simulated Annealing (ASA) is used as the optimization algorithm in the linked simulation-optimization methodology to identify the unknown source characteristics.
Therefore, the aim of the present study is to develop a methodology to optimally design an effective monitoring network for pollution source characterization with reactive species in polluted aquifers. The performance of the developed methodology will be evaluated for an illustrative polluted aquifer site, for example, an abandoned mine site in Queensland, Australia.
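A conceptual sketch of the linked simulation-optimization loop described above follows; the forward model here is a deliberately simple toy (inverse-distance dilution) standing in for the HYDROGEOCHEM flow and reactive-transport simulation, the optimizer is SciPy's dual annealing rather than ASA, and the well coordinates and concentrations are invented purely for illustration.

```python
import numpy as np
from scipy.optimize import dual_annealing

wells = np.array([[200.0, 100.0], [350.0, 250.0], [500.0, 150.0]])  # observation wells [m]

def simulate(source_x, source_y, flux):
    """Toy forward model: concentration decays with distance from the source."""
    d = np.hypot(wells[:, 0] - source_x, wells[:, 1] - source_y)
    return flux / (1.0 + d)

# synthetic "observed" data generated from a known source (illustration only)
observed = simulate(300.0, 180.0, 50.0)

def misfit(params):
    """Objective: squared difference between simulated and observed concentrations."""
    return np.sum((simulate(*params) - observed) ** 2)

bounds = [(0.0, 600.0), (0.0, 300.0), (1.0, 100.0)]   # x, y, flux search ranges
result = dual_annealing(misfit, bounds, seed=1)
print("estimated source (x, y, flux):", np.round(result.x, 1))
```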

Keywords: monitoring network design, source characterization, chemical reactive transport process, contaminated mine site

Procedia PDF Downloads 231
33 Impacts of School-Wide Positive Behavioral Interventions and Supports on Student Academics, Behavior and Mental Health

Authors: Catherine Bradshaw

Abstract:

Educators often report difficulty managing behavior problems and other mental health concerns that students display at school. These concerns also interfere with the learning process and can create distractions for teachers and other students. As such, schools play an important role in both preventing and intervening with students who experience these types of challenges. A number of models have been proposed to serve as a framework for delivering prevention and early intervention services in schools. One such model is Positive Behavioral Interventions and Supports (PBIS), which has been scaled up to over 26,000 schools in the U.S. and many other countries worldwide. PBIS aims to improve a range of student outcomes through early detection of and intervention related to behavioral and mental health symptoms. PBIS blends and applies social learning, behavioral, and organizational theories to prevent disruptive behavior and enhance the school's organizational health. PBIS focuses on creating and sustaining tier 1 (universal), tier 2 (selective), and tier 3 (individual) systems of support. Most schools using PBIS have focused on the core elements of the tier 1 supports, which include the following critical features: the formation of a PBIS team within the school to lead implementation, and the identification and training of a behavioral support 'coach', who serves as an on-site technical assistance provider. Many of the individuals identified to serve as a PBIS coach are also trained as a school psychologist or guidance counselor; coaches typically have prior PBIS experience and are trained to conduct functional behavioral assessments. The PBIS team also identifies a set of three to five positive behavioral expectations that are implemented for all students and by all staff school-wide (e.g., 'be respectful, responsible, and ready to learn'); these expectations are posted in all settings across the school, including the classroom, cafeteria, playground, etc. All school staff define and teach the school-wide behavioral expectations to all students and review them regularly. PBIS schools also develop or adopt a school-wide system to reward or reinforce students who demonstrate those three to five positive behavioral expectations. Staff and administrators create an agreed-upon system for responding to behavioral violations that includes definitions of what constitutes a classroom-managed versus an office-managed discipline problem. Finally, a formal system is developed to collect, analyze, and use disciplinary data (e.g., office discipline referrals) to inform decision-making. This presentation provides a brief overview of PBIS and reports findings from a series of four U.S.-based longitudinal randomized controlled trials (RCTs) documenting the impacts of PBIS on school climate, discipline problems, bullying, and academic achievement. The four RCTs include 80 elementary, 40 middle, and 58 high schools, and the results indicate a broad range of impacts on multiple student and school-wide outcomes. The session will highlight lessons learned regarding PBIS implementation and scale-up. We also review the ways in which PBIS can help educators and school leaders engage in data-based decision-making and share data with other decision-makers and stakeholders (e.g., students, parents, community members), with the overarching goal of increasing the use of evidence-based programs in schools.

Keywords: positive behavioral interventions and supports, mental health, randomized trials, school-based prevention

Procedia PDF Downloads 229