Search results for: natural language grammar models

88 Geotechnical Challenges for the Use of Sand-sludge Mixtures in Covers for the Rehabilitation of Acid-Generating Mine Sites

Authors: Mamert Mbonimpa, Ousseynou Kanteye, Élysée Tshibangu Ngabu, Rachid Amrou, Abdelkabir Maqsoud, Tikou Belem

Abstract:

The management of mine wastes (waste rocks and tailings) containing sulphide minerals such as pyrite and pyrrhotite represents the main environmental challenge for the mining industry. Indeed, acid mine drainage (AMD) can be generated when these wastes are exposed to water and air. AMD is characterized by low pH and high concentrations of heavy metals, which are toxic to plants, animals, and humans. It affects the quality of the ecosystem through water and soil pollution. Different techniques involving soil materials can be used to control AMD generation, including impermeable covers (compacted clays) and oxygen barriers. The latter group includes covers with capillary barrier effects (CCBE), multilayered covers in which a moisture retention layer plays the role of an oxygen barrier. Once AMD is produced at a mine site, it must be treated so that the final effluent at the mine site complies with regulations and can be discharged into the environment. Active neutralization with lime is one of the treatment methods used. This treatment produces sludge that is usually stored in sedimentation ponds. Other sludge management alternatives have been examined in recent years, including sludge co-disposal with tailings or waste rocks, disposal in underground mine excavations, and storage in technical landfill sites. Considering the ability of AMD neutralization sludge to maintain an alkaline to neutral pH for decades or even centuries, due to the excess alkalinity induced by residual lime within the sludge, valorization of sludge in specific applications could be an interesting management option. If done efficiently, the reuse of sludge could free up storage ponds and thus reduce the environmental impact. It should be noted that mixtures of sludge and soils could potentially constitute usable materials in CCBE for the rehabilitation of acid-generating mine sites, while sludge alone is not suitable for this purpose. The high sludge water content (up to 300%), even after sedimentation, can, however, constitute a geotechnical challenge. Adding lime to the mixtures can reduce the water content and improve the geotechnical properties. The objective of this paper is to investigate the impact of the sludge content (30, 40 and 50%) in sand-sludge mixtures (SSM) on their hydrogeotechnical properties (compaction, shrinkage behaviour, saturated hydraulic conductivity, and water retention curve). The impact of lime addition (dosages from 2% to 6%) on the moisture content, dry density after compaction and saturated hydraulic conductivity of SSM was also investigated. Results showed that adding sludge to sand significantly improves the saturated hydraulic conductivity and water retention capacity, but shrinkage increases with sludge content. The dry density after compaction of lime-treated SSM increases with the lime dosage but remains lower than the optimal dry density of the untreated mixtures. The saturated hydraulic conductivity of lime-treated SSM after 24 hours of curing decreases by three orders of magnitude. Considering the hydrogeotechnical properties obtained with these mixtures, it would be possible to design CCBE whose moisture retention layer is made of SSM. Physical laboratory models confirmed the performance of such CCBE.

Keywords: mine waste, AMD neutralization sludge, sand-sludge mixture, hydrogeotechnical properties, mine site reclamation, CCBE

Procedia PDF Downloads 22
87 Linking the Genetic Signature of Free-Living Soil Diazotrophs with Process Rates under Land Use Conversion in the Amazon Rainforest

Authors: Rachel Danielson, Brendan Bohannan, S.M. Tsai, Kyle Meyer, Jorge L.M. Rodrigues

Abstract:

The Amazon Rainforest is a global diversity hotspot and crucial carbon sink, but approximately 20% of its total extent has been deforested, primarily for the establishment of cattle pasture. Understanding the impact of this large-scale disturbance on soil microbial community composition and activity is crucial for understanding potentially consequential shifts in nutrient or greenhouse gas cycling, as well as adding to the body of knowledge concerning how these complex communities respond to human disturbance. In this study, surface soils (0-10 cm) were collected from three forests and three 45-year-old pastures in Rondonia, Brazil (the Amazon state with the greatest rate of forest destruction) in order to determine the impact of forest conversion on microbial communities involved in nitrogen fixation. Soil chemical and physical parameters were paired with measurements of microbial activity and genetic profiles to determine how community composition and process rates relate to environmental conditions. Measuring both the natural abundance of 15N in total soil N and the incorporation of enriched 15N2 under incubation has revealed that conversion of primary forest to cattle pasture results in a significant increase in the rate of nitrogen fixation by free-living diazotrophs. Quantification of nifH gene copy numbers (a gene encoding an essential subunit of the nitrogenase enzyme) correspondingly reveals a significant increase of genes in pasture compared to forest soils. Additionally, genetic sequencing of both nifH genes and transcripts shows a significant increase in the diversity of the present and metabolically active diazotrophs within the soil community. Levels of both organic and inorganic nitrogen tend to be lower in pastures compared to forests, with ammonium rather than nitrate as the dominant inorganic form. However, no significant or consistent differences in total, extractable, permanganate-oxidizable, or loss-on-ignition carbon are present between the two land-use types. Forest conversion is associated with a 0.5-1.0 unit pH increase, but concentrations of many biologically relevant nutrients such as phosphorus do not increase consistently. Increases in free-living diazotrophic community abundance and activity appear to be related to shifts in carbon to nitrogen pool ratios. Furthermore, there may be an important impact of transient, low molecular weight plant-root-derived organic carbon on free-living diazotroph communities not captured in this study. Preliminary analysis of nitrogenase gene variant composition using NovoSeq metagenomic sequencing indicates that conversion of forest to pasture may significantly enrich vanadium-based nitrogenases. This indication is complemented by a significant decrease in available soil molybdenum. Very little is known about the ecology of diazotrophs utilizing vanadium-based nitrogenases, so further analysis may reveal important environmental conditions favoring their abundance and diversity in soil systems. Taken together, the results of this study indicate a significant change in nitrogen cycling and diazotroph community composition with the conversion of the Amazon Rainforest. This may have important implications for the sustainability of cattle pastures once established, since nitrogen is a crucial nutrient for forage grass productivity.

Keywords: free-living diazotrophs, land use change, metagenomic sequencing, nitrogen fixation

Procedia PDF Downloads 171
86 Angiopermissive Foamed and Fibrillar Scaffolds for Vascular Graft Applications

Authors: Deon Bezuidenhout

Abstract:

Pre-seeding with autologous endothelial cells improves the long-term patency of synthetic vascular grafts to levels obtained with autografts, but is limited to a single centre due to resource, time and other constraints. Spontaneous in vivo endothelialization would obviate the need for pre-seeding, but has been shown to be absent in man due to limited transanastomotic and fallout healing, and the lack of transmural ingrowth due to insufficient porosity. Two types of graft scaffolds with increased interconnected porosity for improved tissue ingrowth and healing are thus proposed and described. Foam-type polyurethane (PU) scaffolds with small, medium and large interconnected pores were made by phase inversion and spherical porogen extraction, with and without additional surface modification with covalently attached heparin and subsequent loading with and delivery of growth factors. Fibrillar scaffolds were made either by standard electrospinning using degradable PU (Degrapol®), or by dual electrospinning using non-degradable PU. The latter process involves sacrificial fibres that are co-spun with structural fibres and subsequently removed to increase porosity and pore size. Degrapol samples were subjected to in vitro degradation, and all scaffold types were evaluated in vivo for tissue ingrowth and vascularization using a rat subcutaneous model. The foam scaffolds were additionally evaluated in a circulatory (rat infrarenal aortic interposition) model that allows for the grafts to be anastomotically and/or ablumenally isolated to discern and determine the endothelialization mode. Foam-type grafts with large (150 µm) pores showed improved subcutaneous healing in terms of vascularization and inflammatory response over smaller pore sizes (60 and 90 µm), and vascularization of the large porosity scaffolds was significantly increased by more than 70% by heparin modification alone, and by 150% to 400% when combined with growth factors. In the circulatory model, extensive transmural endothelialization (95±10% at 12 w) was achieved. Fallout healing was shown to be sporadic and limited in groups that were ablumenally isolated to prevent transmural ingrowth (16±30% wrapped vs. 80±20% control; p<0.002). Heparinization and GF delivery improved both mural vascularization and lumenal endothelialization. Degrapol electrospun scaffolds showed a decrease in molecular mass and corresponding tensile strength over the first 2 weeks, but very little decrease in mass over the 4-week test period. Studies on the effect of tissue ingrowth, with and without concomitant degradation of the scaffolds, are being used to develop material models for finite element modelling. In the case of the dual-spun scaffolds, the PU fibre fraction could be controlled and was shown to vary linearly with porosity (P = −0.18FF + 93.5, r² = 0.91), which in turn showed an inverse linear correlation with tensile strength and elastic modulus (r² > 0.96). Calculated compliance and burst pressures of the scaffolds increased with fibre fraction, and compliances matching the human popliteal artery (5-10 %/100 mmHg) and high burst pressures (> 2000 mmHg) could be achieved. Increasing porosity (76 to 82 and 90%) resulted in increased tissue ingrowth from 33±7 to 77±20 and 98±1% after 28 d. Transmural endothelialization of highly porous foamed grafts is achievable in a circulatory model, and the enhancement of porosity and tissue ingrowth may hold the key to the development of spontaneously endothelializing electrospun grafts.

Keywords: electrospinning, endothelialization, porosity, scaffold, vascular graft

Procedia PDF Downloads 271
85 Workflow Based Inspection of Geometrical Adaptability from 3D CAD Models Considering Production Requirements

Authors: Tobias Huwer, Thomas Bobek, Gunter Spöcker

Abstract:

Driving forces for enhancements in production are trends like digitalization and individualized production. Currently, such developments are restricted to assembly parts. Thus, complex freeform surfaces are not addressed in this context. The need for efficient use of resources and near-net-shape production will require individualized production of complex shaped workpieces. Due to variations between the nominal model and the actual geometry, this can lead to changes in operations in computer-aided process planning (CAPP) to make CAPP manageable for an adaptive serial production. In this context, 3D CAD data can be a key to realizing that objective. Along with developments in geometrical adaptation, a preceding inspection method based on CAD data is required to support the process planner by finding objective criteria to make decisions about the adaptive manufacturability of workpieces. Nowadays, this kind of decision depends on the experience-based knowledge of humans (e.g. process planners) and results in subjective decisions, leading to variability of workpiece quality and potential failure in production. In this paper, we present an automatic part inspection method, based on design and measurement data, which evaluates the actual geometries of single workpiece preforms. The aim is to automatically determine the suitability of the current shape for further machining, and to provide a basis for an objective decision about subsequent adaptive manufacturability. The proposed method is realized by a workflow-based approach, keeping in mind the requirements of industrial applications. Workflows are a well-known design method for standardized processes. Especially in applications like the aerospace industry, standardization and certification of processes are important aspects. Function blocks, providing a standardized, event-driven abstraction to algorithms and data exchange, will be used for modeling and execution of inspection workflows. Each analysis step of the inspection, such as positioning of measurement data or checking of geometrical criteria, will be carried out by function blocks. One advantage of this approach is its flexibility to design workflows and to adapt algorithms specific to the application domain. In general, it will be checked whether a geometrical adaptation is possible within the specified tolerance range. The development of particular function blocks is predicated on workpiece-specific information, e.g. design data. Furthermore, for different product lifecycle phases, appropriate logics and decision criteria have to be considered. For example, tolerances for geometric deviations are different in type and size for new-part production compared to repair processes. In addition to function blocks, appropriate referencing systems are important. They need to support exact determination of the position and orientation of the actual geometries to provide a basis for precise analysis. The presented approach provides an inspection methodology for adaptive and part-individual process chains. The analysis of each workpiece results in an inspection protocol and an objective decision about further manufacturability. A representative application domain is the product lifecycle of turbine blades, comprising new-part production and a maintenance process. In both cases, a geometrical adaptation is required to calculate individual production data. In contrast to existing approaches, the proposed initial inspection method provides information to decide between different potential adaptive machining processes.
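A toy sketch may help picture the function-block idea described above: each inspection step (positioning measurement data, checking geometric criteria) is wrapped as a block that consumes and enriches a shared data record, and a workflow executes the blocks in sequence. Everything below (block names, the tolerance check, Python as the host language) is an illustrative assumption, not the standardized function-block implementation used by the authors.

```python
# Toy sketch of an event-driven, function-block-style inspection workflow.
# Block names, data fields and criteria are illustrative placeholders.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class FunctionBlock:
    name: str
    run: Callable[[Dict], Dict]          # consumes and enriches workflow data


@dataclass
class InspectionWorkflow:
    blocks: List[FunctionBlock] = field(default_factory=list)

    def execute(self, data: Dict) -> Dict:
        for block in self.blocks:        # each block fires when the previous
            data = block.run(data)       # one has finished (simplified "event")
            print(f"[{block.name}] done")
        return data


def align_measurement(data):
    data["aligned"] = True               # placeholder for registration to CAD
    return data


def check_tolerance(data):
    data["adaptable"] = abs(data["deviation_mm"]) <= data["tolerance_mm"]
    return data


wf = InspectionWorkflow([
    FunctionBlock("position measurement data", align_measurement),
    FunctionBlock("check geometric criteria", check_tolerance),
])
result = wf.execute({"deviation_mm": 0.12, "tolerance_mm": 0.2})
print("adaptive machining possible:", result["adaptable"])
```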

Keywords: adaptive, CAx, function blocks, turbomachinery

Procedia PDF Downloads 282
84 Methodology for Temporary Analysis of Production and Logistic Systems on the Basis of Distance Data

Authors: M. Mueller, M. Kuehn, M. Voelker

Abstract:

In small and medium-sized enterprises (SMEs), the challenge is to create a well-grounded and reliable basis for process analysis, optimization and planning due to a lack of data. SMEs have limited access to methods with which they can effectively and efficiently analyse processes and identify cause-and-effect relationships in order to generate the necessary database and derive optimization potential from it. The implementation of digitalization within the framework of Industry 4.0 thus becomes a particular necessity for SMEs. For these reasons, this abstract presents an analysis methodology whose objective is to develop an SME-appropriate methodology for efficient, temporarily feasible data collection and evaluation in flexible production and logistics systems as a basis for process analysis and optimization. The overall methodology focuses on retrospective, event-based tracing and analysis of material flow objects. The technological basis consists of Bluetooth Low Energy (BLE)-based transmitters, so-called beacons, and smart mobile devices (SMD), e.g. smartphones, as receivers, between which distance data can be measured and motion profiles derived. The distance is determined using the Received Signal Strength Indicator (RSSI), which is a measure of the signal field strength between transmitter and receiver. The focus is the development of a software-based methodology for the interpretation of relative movements of transmitters and receivers based on distance data. The main research is on the selection and implementation of pattern recognition methods for automatic process recognition as well as methods for the visualization of relative distance data. Due to an existing categorization of the database regarding process types, classification methods (e.g. Support Vector Machine) from the field of supervised learning are used. The necessary data quality requires the selection of suitable methods as well as filters for smoothing the signal variations of the RSSI, the integration of methods for the determination of correction factors depending on possible signal interference sources (columns, pallets), as well as the configuration of the used technology. The parameter settings on which the respective algorithms are based have a further significant influence on the result quality of the classification methods, correction models and methods for visualizing the position profiles used. The accuracy of classification algorithms can be improved by up to 30% by selected parameter variation; this has already been proven in studies. Similar potentials can be observed with parameter variation of methods and filters for signal smoothing. Thus, there is increased interest in obtaining detailed results on the influence of parameter and factor combinations on data quality in this area. The overall methodology is realized with a modular software architecture consisting of independent modules for data acquisition, data preparation and data storage. The demonstrator for initialization and data acquisition is available as a mobile Java-based application. The data preparation, including methods for signal smoothing, is Python-based, with the possibility to vary parameter settings and to store them in the database (SQLite). The evaluation is divided into two separate software modules with database connection: the automated assignment of defined process classes to distance data using selected classification algorithms, and the visualization as well as reporting in terms of a graphical user interface (GUI).
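As a minimal, hypothetical sketch of the pipeline outlined in the abstract (RSSI smoothing, distance estimation, and supervised process classification with an SVM): the transmit power, path-loss exponent, window size, feature layout and toy data below are illustrative assumptions, not the study's actual parameters or measurements.

```python
# Minimal sketch of an RSSI-based pipeline: smoothing, distance estimation
# via a log-distance path-loss model, and SVM process classification.
# All parameter values and data are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC


def smooth_rssi(rssi, window=5):
    """Moving-average filter to damp RSSI signal variations."""
    kernel = np.ones(window) / window
    return np.convolve(rssi, kernel, mode="same")


def rssi_to_distance(rssi, tx_power=-59.0, n=2.0):
    """Log-distance path-loss model: distance in metres from RSSI (dBm)."""
    return 10 ** ((tx_power - rssi) / (10 * n))


# Toy training data: feature vectors derived from distance profiles,
# labelled with process classes (e.g. "transport" vs "storage").
rng = np.random.default_rng(0)
X_train = rng.normal(size=(40, 10))        # 40 labelled distance profiles
y_train = rng.integers(0, 2, size=40)      # two process classes

clf = SVC(kernel="rbf", C=1.0, gamma="scale")  # parameters would be tuned
clf.fit(X_train, y_train)

raw_rssi = rng.normal(-70, 4, size=10)     # one measured RSSI trace (dBm)
features = rssi_to_distance(smooth_rssi(raw_rssi))
print(clf.predict(features.reshape(1, -1)))
```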

Keywords: event-based tracing, machine learning, process classification, parameter settings, RSSI, signal smoothing

Procedia PDF Downloads 106
83 Cultural Dynamics in Online Consumer Behavior: Exploring Cross-Country Variances in Review Influence

Authors: Eunjung Lee

Abstract:

This research investigates the intricate connection between cultural differences and online consumer behaviors by integrating Hofstede's Cultural Dimensions theory with analysis methodologies such as text mining, data mining, and topic analysis. Our aim is to provide a comprehensive understanding of how national cultural differences influence individuals' behaviors when engaging with online reviews. To ensure the relevance of our investigation, we systematically analyze and interpret the cultural nuances influencing online consumer behaviors, especially in the context of online reviews. By anchoring our research in Hofstede's Cultural Dimensions theory, we seek to offer valuable insights for marketers to tailor their strategies based on the cultural preferences of diverse global consumer bases. In our methodology, we employ advanced text mining techniques to extract insights from a diverse range of online reviews gathered globally for a specific product or service like Netflix. This approach allows us to reveal hidden cultural cues in the language used by consumers from various backgrounds. Complementing text mining, data mining techniques are applied to extract meaningful patterns from online review datasets collected from different countries, aiming to unveil underlying structures and gain a deeper understanding of the impact of cultural differences on online consumer behaviors. The study also integrates topic analysis to identify recurring subjects, sentiments, and opinions within online reviews. Marketers can leverage these insights to inform the development of culturally sensitive strategies, enhance target audience segmentation, and refine messaging approaches aligned with cultural preferences. Anchored in Hofstede's Cultural Dimensions theory, our research employs sophisticated methodologies to delve into the intricate relationship between cultural differences and online consumer behaviors. Applied to specific cultural dimensions, such as individualism vs. collectivism, masculinity vs. femininity, uncertainty avoidance, and long-term vs. short-term orientation, the study uncovers nuanced insights. For example, in exploring individualism vs. collectivism, we examine how reviewers from individualistic cultures prioritize personal experiences while those from collectivistic cultures emphasize communal opinions. Similarly, within masculinity vs. femininity, we investigate whether distinct topics align with cultural notions, such as robust features in masculine cultures and user-friendliness in feminine cultures. Examining information-seeking behaviors under uncertainty avoidance reveals how cultures differ in seeking detailed information or providing succinct reviews based on their comfort with ambiguity. Additionally, in assessing long-term vs. short-term orientation, the research explores how cultural focus on enduring benefits or immediate gratification influences reviews. These concrete examples contribute to the theoretical enhancement of Hofstede's Cultural Dimensions theory, providing a detailed understanding of cultural impacts on online consumer behaviors. As online reviews become increasingly crucial in decision-making, this research not only contributes to the academic understanding of cultural influences but also proposes practical recommendations for enhancing online review systems. Marketers can leverage these findings to design targeted and culturally relevant strategies, ultimately enhancing their global marketing effectiveness and optimizing online review systems for maximum impact.
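A rough sketch of the kind of topic analysis the abstract refers to, using scikit-learn's latent Dirichlet allocation on a handful of invented review snippets; the real corpus, preprocessing and models are not specified in the abstract, so everything below is illustrative only.

```python
# Illustrative sketch of topic extraction from online reviews.
# Review snippets and the number of topics are assumptions for demonstration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

reviews = [
    "My whole family enjoys watching together on weekends",
    "I binge alone, the personal recommendations fit me perfectly",
    "Robust app, never crashes, streams in high quality",
    "Very easy to use, the interface feels friendly and simple",
]

vectorizer = CountVectorizer(stop_words="english")
dtm = vectorizer.fit_transform(reviews)          # document-term matrix

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(dtm)

terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"topic {k}: {top}")
```

In practice the resulting topics would be compared across review sets from different countries and interpreted against the cultural dimensions discussed in the abstract.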

Keywords: comparative analysis, cultural dimensions, marketing intelligence, national culture, online consumer behavior, text mining

Procedia PDF Downloads 23
82 Deep-Learning Coupled with Pragmatic Categorization Method to Classify the Urban Environment of the Developing World

Authors: Qianwei Cheng, A. K. M. Mahbubur Rahman, Anis Sarker, Abu Bakar Siddik Nayem, Ovi Paul, Amin Ahsan Ali, M. Ashraful Amin, Ryosuke Shibasaki, Moinul Zaber

Abstract:

Thomas Friedman, in his famous book, argued that the world in this 21st century is flat and will continue to become flatter. This is attributed to rapid globalization and the interdependence of humanity, which engendered a tremendous in-flow of human migration towards urban spaces. In order to keep the urban environment sustainable, policy makers need to plan based on extensive analysis of the urban environment. With the advent of high-definition satellite images, high-resolution data, computational methods such as deep neural network analysis, and hardware capable of high-speed analysis, urban planning is seeing a paradigm shift. Legacy data on urban environments are now being complemented with high-volume, high-frequency data. However, the first step of understanding urban space lies in useful categorization of the space that is usable for data collection, analysis, and visualization. In this paper, we propose a pragmatic categorization method that is readily usable for machine analysis and show the applicability of the methodology in a developing world setting. Categorization to plan sustainable urban spaces should encompass the buildings and their surroundings. However, the state of the art is mostly dominated by classification of building structures, building types, etc., and largely represents the developed world. Hence, these methods and models are not sufficient for developing countries such as Bangladesh, where the surrounding environment is crucial for the categorization. Moreover, these categorizations propose small-scale classifications, which give limited information, have poor scalability and are slow to compute in real time. Our proposed method is divided into two steps: categorization and automation. We categorize the urban area in terms of informal and formal spaces and take the surrounding environment into account. A 50 km × 50 km Google Earth image of Dhaka, Bangladesh, was visually annotated and categorized by an expert, and consequently a map was drawn. The categorization is based broadly on two dimensions: the state of urbanization and the architectural form of the urban environment. Consequently, the urban space is divided into four categories: 1) highly informal area; 2) moderately informal area; 3) moderately formal area; and 4) highly formal area. In total, sixteen sub-categories were identified. For semantic segmentation and automatic categorization, Google’s DeepLabV3+ model was used. The model uses the atrous convolution operation to analyze different layers of texture and shape. This allows us to enlarge the field of view of the filters to incorporate larger context. Images encompassing 70% of the urban space were used to train the model, and the remaining 30% was used for testing and validation. The model is able to segment with 75% accuracy and 60% Mean Intersection over Union (mIoU). In this paper, we propose a pragmatic categorization method that is readily applicable for automatic use in both developing and developed world contexts. The method can be augmented for real-time socio-economic comparative analysis among cities. It can be an essential tool for policy makers to plan future sustainable urban spaces.
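The reported 60% mean Intersection over Union (mIoU) is computed per class as the intersection over the union of predicted and ground-truth label maps, averaged over classes. Below is a minimal sketch on toy label maps; the actual DeepLabV3+ training and inference on the annotated Dhaka imagery are not reproduced here.

```python
# Minimal sketch of computing mean Intersection over Union (mIoU) for a
# semantic segmentation result. The label maps below are toy data.
import numpy as np


def mean_iou(y_true, y_pred, num_classes):
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(y_true == c, y_pred == c).sum()
        union = np.logical_or(y_true == c, y_pred == c).sum()
        if union > 0:                      # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))


rng = np.random.default_rng(0)
gt = rng.integers(0, 4, size=(64, 64))     # ground-truth map, 4 broad categories
pred = gt.copy()
mask = rng.random((64, 64)) < 0.2          # corrupt ~20% of pixels
pred[mask] = rng.integers(0, 4, size=mask.sum())
print(f"mIoU = {mean_iou(gt, pred, 4):.2f}")
```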

Keywords: semantic segmentation, urban environment, deep learning, urban building, classification

Procedia PDF Downloads 154
81 Nanocarriers Made of Amino Acid Based Biodegradable Polymers: Poly(Ester Amide) and Related Cationic and PEGylating Polymers

Authors: Sophio Kobauri, Temur Kantaria, Nina Kulikova, David Tugushi, Ramaz Katsarava

Abstract:

Polymeric nanoparticle-based drug delivery systems and therapeutics have great potential in the treatment of numerous diseases, because their flexible properties make it possible to modify their structures, compositions and properties. Important characteristics of the polymeric nanoparticles (PNPs) used as drug carriers are high particle stability, high carrier capacity, feasibility of encapsulation of both hydrophilic and hydrophobic drugs, and feasibility of variable routes of administration, including oral application and inhalation; NPs are especially effective for intracellular drug delivery since they penetrate into the cells’ interior through endocytosis. A variety of PNP-based drug delivery systems, including charged and neutral, degradable and non-degradable polymers of both natural and synthetic origin, have been developed. Among this huge variety, biodegradable PNPs, which can be cleared from the body after the fulfillment of their function, can be considered among the most promising. For intracellular uptake it is highly desirable to have positively charged PNPs since they can penetrate deep into cell membranes. For long-lasting circulation of PNPs in the body it is important that they have so-called “stealth coatings” to protect them from attack by the immune system of the organism. One of the effective ways to render the PNPs “invisible” to the immune system is their PEGylation, which represents the attachment of polyethylene glycol (PEG) to the surface of the PNPs. The present work deals with constructing PNPs from amino acid based biodegradable polymers – a regular poly(ester amide) (PEA) composed of sebacic acid, leucine and 1,6-hexandiol (labeled as 8L6), a cationic PEA composed of sebacic acid, arginine and 1,6-hexandiol (labeled as 8R6), and a comb-like co-PEA composed of sebacic acid, malic acid, leucine and 1,6-hexandiol (labeled as PEG-PEA). The PNPs were fabricated using the polymer deposition/solvent displacement (nanoprecipitation) method. The regular PEA 8L6 forms stable, negatively charged (zeta-potential within 2-12 mV) PNPs of desired size (within 150-200 nm) in the presence of various surfactants (Tween 20, Tween 80, Brij 010, etc.). Blending the PEAs 8L6 and 8R6 gave 130-140 nm sized positively charged PNPs having a zeta-potential within +20 ÷ +28 mV depending on the 8L6/8R6 ratio. The PEGylating PEA PEG-PEA was synthesized by interaction of the epoxy-co-PEA [8L6]0,5-[tES-L6]0,5 with mPEG-amine-2000. Stable and positively charged PNPs were fabricated using pure PEG-PEA as a surfactant. A firm anchoring of the PEG-PEA to the 8L6/8R6 based PNPs (owing to a high affinity of the backbones of all three PEAs) provided good stabilization of the NPs. An in vitro biocompatibility study of the new PNPs with four different stable cell lines, A549 (human), U-937 (human), RAW264.7 (murine) and Hepa 1-6 (murine), showed that they are biocompatible. Considering the high stability and cell compatibility of the elaborated PNPs, one can conclude that they are promising for subsequent therapeutic applications. This work was supported by the joint grant from the Science and Technology Center in Ukraine and Shota Rustaveli National Science Foundation of Georgia #6298 “New biodegradable cationic polymers composed of arginine and spermine-versatile biomaterials for various biomedical applications”.

Keywords: biodegradable poly(ester amide)s, cationic poly(ester amide), pegylating poly(ester amide), nanoparticles

Procedia PDF Downloads 103
80 The Healthcare Costs of BMI-Defined Obesity among Adults Who Have Undergone a Medical Procedure in Alberta, Canada

Authors: Sonia Butalia, Huong Luu, Alexis Guigue, Karen J. B. Martins, Khanh Vu, Scott W. Klarenbach

Abstract:

Obesity is associated with significant personal impacts on health and imposes a substantial economic burden on payers due to increased healthcare use. Contemporary estimates of the healthcare costs associated with obesity at the population level are lacking. This evidence may provide further rationale for weight management strategies. Methods: Adults who underwent a medical procedure between 2012 and 2019 in Alberta, Canada were categorized into the investigational cohort (had body mass index [BMI]-defined class 2 or 3 obesity based on a procedure-associated code) and the control cohort (did not have the BMI procedure-associated code); those who had bariatric surgery were excluded. Characteristics were presented and healthcare costs ($CDN) determined over a 1-year observation period (2019/2020). Logistic regression and a generalized linear model with log link and gamma distribution were used to assess total healthcare costs (comprised of hospitalizations, emergency department visits, ambulatory care visits, physician visits, and outpatient prescription drugs); potential confounders included age, sex, region of residence, and whether the medical procedure was performed within 6 months before the observation period in the partial adjustment, and also the type of procedure performed, socioeconomic status, Charlson Comorbidity Index (CCI), and seven obesity-related health conditions in the full adjustment. Cost ratios and estimated cost differences with 95% confidence intervals (CI) were reported; incremental cost differences within the adjusted models represent referent cases. Results: The investigational cohort (n=220,190) was older (mean age: 53 standard deviation [SD]±17 vs 50 SD±17 years), had more females (71% vs 57%), lived in rural areas to a greater extent (20% vs 14%), experienced a higher overall burden of disease (CCI: 0.6 SD±1.3 vs 0.3 SD±0.9), and was less socioeconomically well-off (material/social deprivation was lower [14%/14%] in the most well-off quintile vs 20%/19%) compared with controls (n=1,955,548). Unadjusted total healthcare costs were estimated to be 1.77 times (95% CI: 1.76, 1.78) higher in the investigational versus control cohort; each healthcare resource contributed to the higher cost ratio. After adjusting for potential confounders, the total healthcare cost ratio decreased but remained higher in the investigational versus control cohort (partial adjustment: 1.57 [95% CI: 1.57, 1.58]; full adjustment: 1.21 [95% CI: 1.20, 1.21]); each healthcare resource contributed to the higher cost ratio. Among urban-dwelling 50-year-old females who previously had non-operative procedures, no procedures performed within 6 months before the observation period, a social deprivation index score of 3, a CCI score of 0.32, and no history of select obesity-related health conditions, the predicted cost difference between those living with and without obesity was $386 (95% CI: $376, $397). Conclusions: If these findings hold for the Canadian population, one would expect an estimated additional $3.0 billion per year in healthcare costs nationally related to BMI-defined obesity (based on an adult obesity rate of 26% and an estimated annual incremental cost of $386 [21%]); incremental costs are higher when obesity-related health conditions are not adjusted for. The results of this study provide additional rationale for investment in interventions that are effective in preventing and treating obesity and its complications.
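The cost model described (a generalized linear model with log link and gamma distribution) can be sketched with statsmodels; the simulated data, covariate names and coefficients below are assumptions for illustration, not the study's dataset or estimates.

```python
# Sketch of a gamma GLM with log link for total healthcare costs, the model
# family described in the abstract. Simulated data and names are illustrative.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "obesity": rng.integers(0, 2, n),      # BMI-defined class 2/3 obesity flag
    "age": rng.normal(52, 17, n),
    "female": rng.integers(0, 2, n),
    "cci": rng.poisson(0.5, n),            # Charlson Comorbidity Index
})
mu = np.exp(6.0 + 0.45 * df["obesity"] + 0.01 * df["age"] + 0.2 * df["cci"])
df["cost"] = rng.gamma(shape=2.0, scale=mu / 2.0)   # gamma-distributed costs

model = smf.glm(
    "cost ~ obesity + age + female + cci",
    data=df,
    family=sm.families.Gamma(link=sm.families.links.Log()),
).fit()

# exp(coefficient) gives the adjusted cost ratio for each covariate
print(np.exp(model.params["obesity"]))
```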

Keywords: administrative data, body mass index-defined obesity, healthcare cost, real world evidence

Procedia PDF Downloads 82
79 Chain Networks on Internationalization of SMEs: Co-Opetition Strategies in Agrifood Sector

Authors: Emilio Galdeano-Gómez, Juan C. Pérez-Mesa, Laura Piedra-Muñoz, María C. García-Barranco, Jesús Hernández-Rubio

Abstract:

The situation in which firms engage in simultaneous cooperation and competition with each other is a phenomenon known as co-opetition. This scenario has received increasing attention in business economics and management analyses. In the domain of supply chain networks and for small and medium-sized enterprises (SMEs), these strategies are of greater relevance given the complex environment of globalization and competition in open markets. These firms face greater challenges regarding technology and access to specific resources due to their limited capabilities and limited market presence. Consequently, alliances and collaborations with both buyers and suppliers prove to be key elements in overcoming these constraints. However, rivalry and competition are also regarded as major factors in successful internationalization processes, as they are drivers for firms to attain a greater degree of specialization and to improve efficiency, for example enabling them to allocate scarce resources optimally and providing incentives for innovation and entrepreneurship. The present work aims to contribute to the literature on SMEs’ internationalization strategies. The sample consists of panel data on marketing firms from the Andalusian food sector, and a multivariate regression analysis is developed, measuring variables of co-opetition and international activity. The hierarchical regression equations method has been followed, resulting in three estimated models: the first one excludes the variables indicative of channel type, while the latter two include the international retail chain and wholesaler variables. The findings show that the combination of several factors leads to a complex scenario of inter-organizational relationships of cooperation and competition. In supply chain management analyses, these relationships tend to be classified as either buyer-supplier (vertical level) or supplier-supplier (horizontal level) relationships. Several buyers and suppliers tend to participate in supply chain networks, in which the form of governance (hierarchical and non-hierarchical) influences cooperation and competition strategies. For instance, due to their market power and/or their closeness to the end consumer, some buyers (e.g. large retailers in food markets) can exert an influence on the selection and interaction of several of their intermediate suppliers, thus endowing certain networks in the supply chain with greater stability. This hierarchical influence may in turn allow these suppliers to develop their capabilities (e.g. specialization) to a greater extent. On the other hand, for those suppliers that are outside these networks, this environment of hierarchy, characterized by a “hub firm” or “channel master”, may provide an incentive for developing their co-opetition relationships. These results show that the analyzed firms have experienced considerable growth in sales to new foreign markets, mainly in Europe, dealing with large retail chains and wholesalers as main buyers. This supply industry is predominantly made up of numerous SMEs, which has implied a certain disadvantage when dealing with the buyers, as negotiations have traditionally been held on an individual basis and in the face of high competition among suppliers. Over recent years, however, cooperation among these marketing firms has become more common, for example regarding R&D, promotion, and the scheduling of production and sales.
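A hedged sketch of the hierarchical (nested) regression strategy described above, in which channel-type variables are added stepwise across three models; all variable names, data and model forms are placeholders rather than the authors' specification.

```python
# Sketch of a hierarchical (nested) regression: Model 1 without channel-type
# variables, Models 2-3 adding retail-chain and wholesaler indicators.
# Data and variable names are placeholders for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 200
df = pd.DataFrame({
    "exports": rng.normal(100, 20, n),        # international activity measure
    "cooperation": rng.normal(0, 1, n),       # co-opetition variables
    "competition": rng.normal(0, 1, n),
    "retail_chain": rng.integers(0, 2, n),    # channel-type indicators
    "wholesaler": rng.integers(0, 2, n),
})

specs = [
    "exports ~ cooperation + competition",                              # Model 1
    "exports ~ cooperation + competition + retail_chain",               # Model 2
    "exports ~ cooperation + competition + retail_chain + wholesaler",  # Model 3
]
for i, spec in enumerate(specs, start=1):
    fit = smf.ols(spec, data=df).fit()
    print(f"Model {i}: adj. R^2 = {fit.rsquared_adj:.3f}")
```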

Keywords: co-opetition networks, international supply chain, marketing agrifood firms, SMEs strategies

Procedia PDF Downloads 56
78 Triple Immunotherapy to Overcome Immune Evasion by Tumors in a Melanoma Mouse Model

Authors: Mary-Ann N. Jallad, Dalal F. Jaber, Alexander M. Abdelnoor

Abstract:

Introduction: Current evidence confirms that both innate and adaptive immune systems are capable of recognizing and abolishing malignant cells. The emergence of cancerous tumors in patients is, therefore, an indication that certain cancer cells can resist elimination by the immune system through a process known as “immune evasion”. In fact, cancer cells often exploit regulatory mechanisms to escape immunity. Such mechanisms normally exist to control the immune responses and prohibit exaggerated or autoimmune reactions. Recently, immunotherapies have shown promising yet limited results. Therefore this study investigates several immunotherapeutic combinations and devises a triple immunotherapy which harnesses the innate and acquired immune responses towards the annihilation of malignant cells through overcoming their ability of immune evasion, consequently hampering malignant progression and eliminating established tumors. The aims of the study are to rule out acute/chronic toxic effects of the proposed treatment combinations, to assess the effect of these combinations on tumor growth and survival rates, and to investigate potential mechanisms underlying the phenotypic results through analyzing serum levels of anti-tumor cytokines, angiogenic factors and tumor progression indicator, and the tumor-infiltrating immune-cells populations. Methodology: For toxicity analysis, cancer-free C57BL/6 mice are randomized into 9 groups: Group 1 untreated, group 2 treated with sterile saline (solvent of used treatments), group 3 treated with Monophosphoryl-lipid-A, group 4 with anti-CTLA4-antibodies, group 5 with 1-Methyl-Tryptophan (Indolamine-Dioxygenase-1 inhibitor), group 6 with both MPLA and anti-CTLA4-antibodies, group 7 with both MPLA and 1-MT, group 8 with both anti-CTLA4-antibodies and 1-MT, and group 9 with all three: MPLA, anti-CTLA4-antibodies and 1-MT. Mice are monitored throughout the treatment period and for three following months. At that point, histological sections from their main organs are assessed. For tumor progression and survival analysis, a murine melanoma model is generated by injecting analogous mice with B16F10 melanoma cells. These mice are segregated into the listed nine groups. Their tumor size and survival are monitored. For a depiction of underlying mechanisms, melanoma-bearing mice from each group are sacrificed at several time-points. Sera are tested to assess the levels of Interleukin-12 (IL-12), Vascular-Endothelial-Growth Factor (VEGF), and S100B. Furthermore, tumors are excised for analysis of infiltrated immune cell populations including T-cells, macrophages, natural killer cells and immune-regulatory cells. Results: Toxicity analysis shows that all treated groups present no signs of neither acute nor chronic toxicity. Their appearance and weights were comparable to those of control groups throughout the treatment period and for the following 3 months. Moreover, histological sections from their hearts, kidneys, lungs, and livers were normal. Work is ongoing for completion of the remaining study aims. Conclusion: Toxicity was the major concern for the success of the proposed comprehensive combinational therapy. Data generated so far ruled out any acute or chronic toxic effects. Consequently, ongoing work is quite promising and may significantly contribute to the development of more effective immunotherapeutic strategies for the treatment of cancer patients.

Keywords: cancer immunotherapy, check-point blockade, combination therapy, melanoma

Procedia PDF Downloads 103
77 Molecular Signaling Involved in the 'Benzo(a)Pyrene' Induced Germ Cell DNA Damage and Apoptosis: Possible Protection by Natural Aryl Hydrocarbon Receptor Antagonist and Anti-Tumor Agent

Authors: Kuladip Jana

Abstract:

Benzo(a)pyrene [B(a)P] is an environmental toxicant present mostly in cigarette smoke and car exhaust and an aryl hydrocarbon receptor (AhR) ligand that exerts its toxic effects on both male and female reproductive systems. In this study, the effect of B(a)P at different doses (0.1, 0.25, 0.5, 1 and 5 mg/kg body weight) was studied on the male reproductive system of the rat. A significant decrease in cauda epididymal sperm count and motility, along with the presence of sperm head abnormalities and altered epididymal and testicular histology, was documented following B(a)P treatment. B(a)P treatment resulted in apoptotic sperm cells, as observed by TUNEL and Annexin V-PI assays, with increased ROS and altered sperm mitochondrial membrane potential (ΔΨm), together with a simultaneous decrease in the activity of antioxidant enzymes and GSH status. TUNEL-positive apoptotic cells were also observed in the testis as well as in isolated germ and Leydig cells following B(a)P exposure. Western blot analysis revealed the activation of p38MAPK, cytosolic translocation of cytochrome c, up-regulation of Bax and inducible nitric oxide synthase (iNOS) with cleavage of PARP, and down-regulation of Bcl2 in the testis upon B(a)P treatment. The protein and mRNA levels of key testicular steroidogenesis regulatory proteins like StAR, cytochrome P450 IIA1 (CYPIIA1), 3β HSD and 17β HSD showed a significant decrease in a dose-dependent manner, while an increase in the expression of cytochrome P450 1A1 (CYP1A1), aryl hydrocarbon receptor (AhR), active caspase-9 and caspase-3 was observed following B(a)P exposure. We conclude that exposure to benzo(a)pyrene caused testicular gametogenic and steroidogenic disorders by induction of oxidative stress, inhibition of StAR and other steroidogenic enzymes, activation of p38MAPK, and initiation of caspase-3 mediated germ and Leydig cell apoptosis. The possible protective role of naturally occurring phytochemicals against B(a)P induced testicular toxicity needs immediate consideration. Curcumin and resveratrol separately were found to protect against B(a)P induced germ cell apoptosis, and their combinatorial effect was more significant. Our present study in an isolated testicular germ cell population from adult male Wistar rats highlighted their synergistic protective effect against B(a)P induced germ cell apoptosis. Curcumin-resveratrol co-treatment decreased the expression of pro-apoptotic proteins like cleaved caspase 3, 8 and 9, cleaved PARP, Apaf1, FasL and tBid. Curcumin-resveratrol co-treatment decreased the Bax/Bcl2 ratio and the mitochondria-to-cytosol translocation of cytochrome c and activated the survival protein Akt. Curcumin-resveratrol decreased the expression of p53-dependent apoptotic genes like Fas, FasL, Bax, Bcl2 and Apaf1. Curcumin-resveratrol co-treatment thus prevented B(a)P induced germ cell apoptosis. B(a)P induced testicular ROS generation and oxidative stress were significantly ameliorated with curcumin and resveratrol. Curcumin-resveratrol co-treatment prevented B(a)P induced nuclear translocation of AhR and CYP1A1 production. The combinatorial treatment significantly inhibited B(a)P induced ERK 1/2, p38 MAPK and JNK 1/2 activation. B(a)P treatment increased the expression of p53 and its phosphorylation (p53 ser 15). Curcumin-resveratrol co-treatment significantly decreased the p53 level and its phosphorylation (p53 ser 15). The study concludes that curcumin-resveratrol synergistically modulated MAPKs and p53, prevented oxidative stress, and regulated the expression of pro- and anti-apoptotic proteins as well as the proteins involved in B(a)P metabolism, thus protecting germ cells from B(a)P induced apoptosis.

Keywords: benzo(a)pyrene, germ cell, apoptosis, oxidative stress, resveratrol, curcumin

Procedia PDF Downloads 236
76 The Optimization of Topical Antineoplastic Therapy Using Controlled Release Systems Based on Amino-functionalized Mesoporous Silica

Authors: Lacramioara Ochiuz, Aurelia Vasile, Iulian Stoleriu, Cristina Ghiciuc, Maria Ignat

Abstract:

Topical administration of chemotherapeutic agents (e.g. carmustine, bexarotene, mechlorethamine, etc.) in the local treatment of cutaneous T-cell lymphoma (CTCL) is accompanied by multiple side effects, such as contact hypersensitivity, pruritus, skin atrophy or even secondary malignancies. A known method of reducing the side effects of anticancer agents is the development of modified drug release systems using drug encapsulation in biocompatible nanoporous inorganic matrices, such as mesoporous MCM-41 silica. Mesoporous MCM-41 silica is characterized by a large specific surface area, high pore volume, uniform porosity, stable dispersion in aqueous medium, excellent biocompatibility, in vivo biodegradability and the capacity to be functionalized with different organic groups. Therefore, MCM-41 is an attractive candidate for a wide range of biomedical applications, such as controlled drug release, bone regeneration, protein immobilization, enzymes, etc. The main advantage of this material lies in its ability to host a large amount of the active substance in a uniform pore system with adjustable size in a mesoscopic range. Silanol groups allow controlled surface functionalization, leading to control of drug loading and release. This study shows (i) the amino-grafting optimization of the mesoporous MCM-41 silica matrix by means of co-condensation during synthesis and post-synthesis using APTES (3-aminopropyltriethoxysilane); (ii) loading of the therapeutic agent (carmustine) to obtain modified drug release systems; (iii) determination of the profile of in vitro carmustine release from these systems; (iv) assessment of carmustine release kinetics by fitting to four mathematical models. The obtained powders have been described in terms of structure, texture and morphology, and by thermogravimetric analysis. The concentration of the therapeutic agent in the dissolution medium has been determined by an HPLC method. In vitro dissolution tests have been done using an Enhancer cell over a 12-hour interval. Analysis of carmustine release kinetics from the mesoporous systems was made by fitting to the zero-order, first-order, Higuchi and Korsmeyer-Peppas models, respectively. Results showed that both types of highly ordered mesoporous silica (amino-grafted by co-condensation during synthesis or post-synthesis) are thermally stable in aqueous medium. Regarding the degree and efficiency of loading with the therapeutic agent, an increase of around 10% was noticed when the co-condensation method was applied. This result shows that direct co-condensation leads to an even distribution of amino groups on the pore walls, while in the case of post-synthesis grafting many amino groups are concentrated near the pore opening and/or on the external surface. In vitro dissolution tests showed an extended carmustine release (more than 86% m/m) both from systems based on silica functionalized directly by co-condensation and after synthesis. Assessment of carmustine release kinetics revealed release through diffusion from all studied systems, as a result of fitting to the Higuchi model. The results of this study proved that amino-functionalized mesoporous silica may be used as a matrix for optimizing anti-cancer topical therapy by loading carmustine and developing prolonged-release systems.
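The four release-kinetics models named above can be fitted to cumulative release data by nonlinear least squares and compared by R²; the time points and release fractions below are invented for illustration (the study's HPLC measurements are not reproduced).

```python
# Sketch of fitting the four release-kinetics models named in the abstract
# (zero-order, first-order, Higuchi, Korsmeyer-Peppas) to cumulative release
# data. Time points and release percentages are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([0.5, 1, 2, 4, 6, 8, 10, 12])        # hours
q = np.array([18, 28, 41, 57, 68, 76, 82, 86])    # % released (illustrative)

models = {
    "zero-order":       lambda t, k: k * t,
    "first-order":      lambda t, k: 100 * (1 - np.exp(-k * t)),
    "Higuchi":          lambda t, k: k * np.sqrt(t),
    "Korsmeyer-Peppas": lambda t, k, n: k * t**n,
}

for name, f in models.items():
    popt, _ = curve_fit(f, t, q, maxfev=10000)
    resid = q - f(t, *popt)
    ss_res = np.sum(resid**2)
    ss_tot = np.sum((q - q.mean())**2)
    print(f"{name:16s} R^2 = {1 - ss_res / ss_tot:.3f}")
```

The model with the highest R² (or an information criterion) would then be taken as the dominant release mechanism, e.g. Higuchi for diffusion-controlled release.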

Keywords: carmustine, silica, controlled release

Procedia PDF Downloads 233
75 Holistic Approach to Teaching Mathematics in Secondary School as a Means of Improving Students’ Comprehension of Study Material

Authors: Natalia Podkhodova, Olga Sheremeteva, Mariia Soldaeva

Abstract:

Creating favorable conditions for students’ comprehension of mathematical content is one of the primary problems in teaching mathematics in secondary school. Psychology research has demonstrated that positive comprehension becomes possible when new information becomes part of a student’s subjective experience and when linkages between the attributes of notions and the various ways of presenting them can be established. The fact of comprehension includes the ability to build a working situational model and thus becomes an important means of solving mathematical problems. The article describes the implementation of a holistic approach to teaching mathematics designed to address the primary challenges of such teaching, specifically, the challenge of students’ comprehension. This approach consists of (1) establishing links between the attributes of a notion: the sense, the meaning, and the term; (2) taking into account the components of a student’s subjective experience (emotional and value, contextual, procedural, communicative) during the educational process; (3) establishing links between different ways to present mathematical information; (4) identifying and leveraging the relationships between real, perceptual and conceptual (scientific) mathematical spaces by applying real-life situational modeling. The article describes approaches to the practical use of these foundational concepts. Identifying how the proposed methods and technology influence understanding of material used in teaching mathematics was the research’s primary goal. The research included an experiment in which 256 secondary school students took part: 142 in the experimental group and 114 in the control group. All students in these groups had similar levels of achievement in math and studied math under the same curriculum. In the course of the experiment, comprehension of two topics, 'Derivative' and 'Trigonometric functions', was evaluated. Control group participants were taught using traditional methods. Students in the experimental group were taught using the holistic method: under the teacher’s guidance, they carried out problems designed to establish linkages between a notion's characteristics and to convert information from one mode of presentation to another, as well as problems that required the ability to operate with all modes of presentation. The use of the technology that forms inter-subject notions based on linkages between perceptional, real, and conceptual mathematical spaces proved to be of special interest to the students. Results of the experiment were analyzed by presenting students in each of the groups with a final test on each of the studied topics. The test included problems that required building real situational models. Statistical analysis was used to aggregate test results. The Pearson criterion was used to reveal the statistical significance of results (pass/fail on the modeling test). A significant difference in results was revealed (p < 0.001), which allowed the authors to conclude that students in the experimental group showed better comprehension of mathematical information than those in the control group. Also, it was revealed (using Student’s t-test) that the students of the experimental group reliably solved more problems than those in the control group (p = 0.0001). The results obtained allow us to conclude that increased comprehension and assimilation of the study material took place as a result of applying the implemented methods and techniques.
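As an illustrative sketch of the two statistical checks mentioned (a Pearson chi-square criterion on pass/fail modelling-test results and Student's t-test on the number of problems solved): only the group sizes (142 and 114) come from the abstract; all counts and scores below are invented.

```python
# Sketch of the two tests mentioned in the abstract: Pearson's chi-square on
# pass/fail modelling-test outcomes and Student's t-test on problems solved.
# All counts and scores are invented; only group sizes match the abstract.
import numpy as np
from scipy import stats

# pass / fail counts on the final modelling test (rows: experimental, control)
table = np.array([[118, 24],
                  [ 62, 52]])
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi-square = {chi2:.1f}, p = {p:.4g}")

# number of problems solved per student in each group
rng = np.random.default_rng(3)
experimental = rng.normal(7.5, 1.5, 142)
control = rng.normal(6.8, 1.5, 114)
t, p = stats.ttest_ind(experimental, control)
print(f"t = {t:.2f}, p = {p:.4g}")
```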

Keywords: comprehension of mathematical content, holistic approach to teaching mathematics in secondary school, subjective experience, technology of the formation of inter-subject notions

Procedia PDF Downloads 159
74 Assessment of Occupational Exposure and Individual Radio-Sensitivity in People Subjected to Ionizing Radiation

Authors: Oksana G. Cherednichenko, Anastasia L. Pilyugina, Sergey N. Lukashenko, Elena G. Gubitskaya

Abstract:

The estimation of accumulated radiation doses in people professionally exposed to ionizing radiation was performed using methods of biological (chromosomal aberration frequency in lymphocytes) and physical (radionuclide analysis in urine, whole-body radiation meter, individual thermoluminescent dosimeters) dosimetry. A group of 84 category "A" employees was investigated after their work in the territory of the former Semipalatinsk test site (Kazakhstan). The dose rate in some funnels exceeds 40 μSv/h. After radionuclide determination in urine using radiochemical and WBC methods, it was shown that the total effective dose of internal exposure of personnel did not exceed 0.2 mSv/year, while the acceptable dose limit for staff is 20 mSv/year. The range of external radiation doses measured with individual thermoluminescent dosimeters was 0.3-1.406 µSv. The cytogenetic examination showed that the chromosomal aberration frequency in staff was 4.27±0.22%, which is significantly higher than in people from the non-polluted settlement Tausugur (0.87±0.1%) (р ≤ 0.01) and citizens of Almaty (1.6±0.12%) (р ≤ 0.01). Chromosomal-type aberrations accounted for 2.32±0.16%, of which 0.27±0.06% were dicentrics and centric rings. Cytogenetic analysis of group radiosensitivity among the «professionals» by different characteristics (age, sex, ethnic group, epidemiological data) revealed no significant differences between the compared values. Using various techniques based on the frequency of dicentrics and centric rings, the average cumulative radiation dose for the group was calculated to be 0.084-0.143 Gy. To perform comparative individual dosimetry using physical and biological methods of dose assessment, calibration curves (including our own) and regression equations based on the general frequency of chromosomal aberrations obtained after irradiation of blood samples by gamma-radiation at a dose rate of 0.1 Gy/min were used. Herewith, assuming an individual variation of chromosomal aberration frequency of 1-10%, the accumulated radiation dose varied from 0 to 0.3 Gy. The main problem in the interpretation of individual dosimetry results comes down to the different reactions of individuals to irradiation, i.e. radiosensitivity, which dictates the need for a quantitative definition of this individual reaction and its consideration in the calculation of the received radiation dose. The entire examined contingent was assigned to groups based on the received dose and the detected cytogenetic aberrations. Radiosensitive individuals, at the lowest received dose in a year, showed the highest frequency of chromosomal aberrations (5.72%). In contrast, radioresistant individuals showed the lowest frequency of chromosomal aberrations (2.8%). The cohort was distributed according to the criterion of radiosensitivity as follows: radiosensitive (26.2%), medium radiosensitivity (57.1%), radioresistant (16.7%). Herewith, the dispersion for radioresistant individuals is 2.3; for the group with medium radiosensitivity, 3.3; and for the radiosensitive group, 9. These data indicate the highest variation of the characteristic (reaction to radiation exposure) in the group of radiosensitive individuals. People with medium radiosensitivity show a significant long-term correlation (0.66; n=48, β ≥ 0.999) between the dose values determined from the results of cytogenetic analysis and the external radiation dose obtained with the help of thermoluminescent dosimeters. Mathematical models of radiation dose assessment that take into account the professionals' radiosensitivity level were proposed.
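Dose reconstruction from dicentric-plus-centric-ring yields is commonly done by inverting a linear-quadratic calibration curve Y = C + αD + βD²; the sketch below inverts such a curve for an observed aberration yield. The coefficients and the observed yield are illustrative assumptions, since the study's own calibration curves and regression equations are not given in the abstract.

```python
# Sketch of dose reconstruction from a chromosomal-aberration yield using a
# linear-quadratic calibration curve Y = C + alpha*D + beta*D**2, the kind of
# curve the abstract refers to. Coefficients and the observed yield are
# illustrative, not the study's own calibration data.
import numpy as np

C, alpha, beta = 0.001, 0.02, 0.06   # aberrations/cell, /Gy, /Gy^2 (assumed)


def dose_from_yield(y):
    """Invert Y = C + alpha*D + beta*D^2 for the non-negative root D."""
    disc = alpha**2 + 4 * beta * (y - C)
    return (-alpha + np.sqrt(disc)) / (2 * beta)


observed_yield = 0.0027              # e.g. 2.7 dicentrics+rings per 1000 cells
print(f"estimated dose ~ {dose_from_yield(observed_yield):.3f} Gy")
```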

Keywords: biodosimetry, chromosomal aberrations, ionizing radiation, radiosensitivity

Procedia PDF Downloads 156
73 Green Architecture from the Thawing Arctic: Reconstructing Traditions for Future Resilience

Authors: Nancy Mackin

Abstract:

Historically, architects from Aalto to Gaudi to Wright have looked to the architectural knowledge of long-resident peoples for forms and structural principles specifically adapted to the regional climate, geology, materials availability, and culture. In this research, structures traditionally built by Inuit peoples in a remote region of the Canadian high Arctic provide a folio of architectural ideas that are increasingly relevant during these times of escalating carbon emissions and climate change. ‘Green architecture from the Thawing Arctic’ researches, draws, models, and reconstructs traditional buildings of Inuit (Eskimo) peoples in three remote, often inaccessible Arctic communities. Structures verified in pre-contact oral history and early written history are first recorded in architectural drawings, then modeled and, with the participation of Inuit young people, local scientists, and Elders, reconstructed as emergency shelters. Three full-sized building types are constructed: a driftwood and turf-clad A-frame (spring/summer); a stone/bone/turf house with inwardly spiraling walls and a fan-shaped floor plan (autumn); and a parabolic/catenary arch-shaped dome from willow, turf, and skins (autumn/winter). Each reconstruction is filmed and featured in a short video. Communities found that the reconstructed buildings and the method of involving young people and Elders in the reconstructions have on-going usefulness, as follows: 1) The reconstructions provide emergency shelters, particularly needed as climate change worsens storms, floods, and freeze-thaw cycles, and as scientists and food harvesters who must work out on the land become stranded more frequently; 2) People from the communities re-learned from their Elders how to use materials from close at hand to construct impromptu shelters; 3) Forms from tradition, such as windbreaks at entrances and changes in floor level to trap warmth within winter buildings, can be adapted and used in modern community buildings and housing; and 4) The project initiates much-needed educational and employment opportunities in the applied sciences (engineering and architecture), construction, and climate change monitoring, all offered in a culturally responsive way. Elders, architects, scientists, and young people added innovations to the traditions as they worked, thereby suggesting new sustainable, culturally meaningful building forms and materials combinations that can be used for modern buildings. Adding to the growing interest in biomimicry, participants looked at properties of Arctic and subarctic materials such as moss (insulation), shrub bark (waterproofing), and willow withes (parabolic and catenary arched forms). ‘Green Architecture from the Thawing Arctic’ demonstrates the effective, useful architectural oeuvre of a resilient northern people. The research parallels efforts elsewhere in the world to revitalize long-resident peoples’ architectural knowledge, in the interests of designing sustainable buildings that reflect culture, heritage, and identity.

Keywords: architectural culture and identity, climate change, forms from nature, Inuit architecture, locally sourced biodegradable materials, traditional architectural knowledge, traditional Inuit knowledge

Procedia PDF Downloads 499
72 Ethnic Andean Concepts of Health and Illness in the Post-Colombian World and Its Relevance Today

Authors: Elizabeth J. Currie, Fernando Ortega Perez

Abstract:

‘MEDICINE’ is a new project funded under the EC Horizon 2020 Marie Skłodowska-Curie Actions to determine concepts of health and healing from a culturally specific indigenous context, using a framework of interdisciplinary methods that integrates archaeological-historical, ethnographic, and modern health sciences approaches. The study will generate new theoretical and methodological approaches to model how peoples survive and adapt their traditional belief systems in a context of alien cultural impacts. In the immediate wake of the conquest of Peru by invading Spanish armies and ideology, native Andeans responded by forming the Taki Onkoy millenarian movement, which rejected European philosophical and ontological teachings, claiming “you make us sick”. The study explores how people’s experience of their world, and their health beliefs within it, is fundamentally shaped by their inherent beliefs about the nature of being and identity in relation to the wider cosmos. Cultural and health belief systems and related rituals or behaviors sustain a people’s sense of identity, wellbeing, and integrity. In the event of dislocation and persecution, these may change into devolved forms, which eventually inter-relate with ‘modern’ biomedical systems of health in as yet unidentified ways. The development of new conceptual frameworks that model this process will greatly expand our understanding of how people survive and adapt in response to cultural trauma. It will also demonstrate the continuing role, relevance, and use of traditional medicine (TM) in present-day indigenous communities. Studies will first be made of relevant pre-Columbian material culture, and then of early colonial period ethnohistorical texts which document the health beliefs and ritual practices still employed by indigenous Andean societies at the advent of the 17th century Jesuit campaigns of persecution, the ‘Extirpación de las Idolatrías’. Core beliefs drawn from these baseline studies will then be used to construct a questionnaire about current health beliefs and practices to be taken into the study population of indigenous Quechua peoples in the northern Andean region of Ecuador. Their current systems of knowledge and medicine have evolved into new forms within the complex historical contexts of conquest by invading Inca armies in the late 15th century, followed a generation later by Spain. A new model will be developed of contemporary Andean concepts of health, illness, and healing, demonstrating the way these have changed through time. With this, a ‘policy tool’ will be constructed as a bridging facility into contemporary global scenarios relevant to other Indigenous, First Nations, and migrant peoples, to provide a means through which their traditional health beliefs and current needs may be more appropriately understood and met. This paper presents findings from the first analytical phases of the work, based upon the study of the literature and the archaeological record. The study offers a novel perspective and methods for the development of policies sensitive to indigenous and minority peoples’ health needs.

Keywords: Andean ethnomedicine, Andean health beliefs, health beliefs models, traditional medicine

Procedia PDF Downloads 327
71 IoT Continuous Monitoring Biochemical Oxygen Demand Wastewater Effluent Quality: Machine Learning Algorithms

Authors: Sergio Celaschi, Henrique Canavarro de Alencar, Claaudecir Biazoli

Abstract:

Effluent quality is of the highest priority for compliance with the permit limits of environmental protection agencies and ensures the protection of the local water system. Of the pollutants monitored, biochemical oxygen demand (BOD) poses one of the greatest challenges: BOD5 results from the laboratory take 7 to 8 days of analysis, which hinders a wastewater treatment plant's (WWTP's) ability to react to different situations and meet treatment goals. Reducing BOD turnaround time from days to hours is our goal. The proposed solution is based on a system of two BOD bioreactors combined with Digital Twin (DT) and Machine Learning (ML) methodologies via an Internet of Things (IoT) platform to monitor and control a WWTP and support decision making. A DT is a virtual and dynamic replica of a production process. It requires the ability to collect and store real-time sensor data related to the operating environment; it then integrates and organizes the data on a digital platform and applies analytical models, allowing a deeper understanding of the real process so that anomalies are caught sooner. In our system for continuous monitoring of the BOD removed by the effluent treatment process, the DT algorithm for analyzing the data uses ML on a parameterized chemical kinetic model. The continuous BOD monitoring system, capable of providing results in a fraction of the time required by BOD5 analysis, is composed of two thermally isolated batch bioreactors. Each bioreactor contains input/output access for the wastewater sample (influent and effluent); hydraulic conduction tubes, pumps, and valves for the batch sample and dilution water; an air supply for dissolved oxygen (DO) saturation; a cooler/heater for sample thermal stability; an optical DO sensor based on fluorescence quenching; pH, ORP, temperature, and atmospheric pressure sensors; and a local PLC/CPU with a TCP/IP data transmission interface. The dynamic BOD monitoring range covers 2 mg/L < BOD < 2,000 mg/L. In addition to the BOD monitoring system, there are many other operational WWTP sensors. The CPU data is transmitted to and received from the digital platform, which in turn performs analyses at periodic intervals to feed the learning process. BOD bulletins and their credibility intervals are made available to web users at 12-hour intervals. The chemical kinetics ML algorithm is composed of a coupled system of four first-order ordinary differential equations for the molar masses of DO, the organic material present in the sample, biomass, and the reaction products (CO₂ and H₂O). This system is solved numerically from its initial conditions: DO saturated and the initial products of the kinetic oxidation process set to zero (CO₂ = H₂O = 0). The initial values for organic matter and biomass are estimated by minimizing the mean square deviations. A real case of continuous monitoring of BOD wastewater effluent quality is being conducted by deploying an IoT application at a large wastewater purification system located in S. Paulo, Brazil.
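As a minimal sketch of the chemical-kinetics core described above, and assuming a simple first-order rate structure (the authors' parameterization is not given), the following code integrates a coupled system of four ordinary differential equations for dissolved oxygen (DO), organic matter, biomass, and reaction products, and reads the cumulative DO consumed as an estimate of BOD; the rate constants and initial conditions are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

def bod_kinetics(t, y, k_s, k_x, yield_x):
    """Illustrative first-order kinetic sketch: DO and organic matter S are
    consumed, biomass X grows, and products P (CO2 + H2O) accumulate."""
    do, s, x, p = y
    r = k_s * s * x              # oxidation rate driven by substrate and biomass
    ddo = -r                     # DO consumed by the oxidation
    ds = -r
    dx = yield_x * r - k_x * x   # biomass growth minus decay
    dp = (1.0 - yield_x) * r
    return [ddo, ds, dx, dp]

# Illustrative initial conditions: saturated DO (mg/L), organic matter, biomass, no products
y0 = [8.5, 150.0, 5.0, 0.0]
sol = solve_ivp(bod_kinetics, (0.0, 12.0), y0, args=(0.0005, 0.01, 0.4),
                dense_output=True, max_step=0.1)

do_consumed = y0[0] - sol.y[0]   # cumulative oxygen demand over time (mg/L)
print(f"Estimated BOD after 12 h of incubation: {do_consumed[-1]:.1f} mg/L")
```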

Keywords: effluent treatment, biochemical oxygen demand, continuous monitoring, IoT, machine learning

Procedia PDF Downloads 50
70 Organization Structure of Towns and Villages System in County Area Based on Fractal Theory and Gravity Model: A Case Study of Suning, Hebei Province, China

Authors: Liuhui Zhu, Peng Zeng

Abstract:

With the rapid development in China, urbanization has entered a stage of transformation and promotion, and its direction has shifted to overall regional synergy. China has a large number of towns and villages of comparatively small scale and scattered distribution, which have long supported and provided resources to cities, leading to urban-rural opposition, so it is difficult to achieve common development in a single town or village. In this context, regional development should focus more on towns and villages so that they form a synergetic system and join the regional association with cities. Thus, the paper raises the question of how to effectively organize the towns and villages system to regulate resource allocation and improve the comprehensive value of the regional area. To answer the question, it is necessary to find a suitable research unit and analyze the present situation of its towns and villages system for optimal development. Drawing on relevant research and theoretical models, the county is the most basic administrative unit in China that can directly guide and regulate the development of towns and villages, so the paper takes the county as the research unit. Following the theoretical concept of ‘three structures and one network’, the paper constructs a research framework to analyze the present situation of the towns and villages system, covering scale structure, functional structure, spatial structure, and organization network. The analytical methods draw on fractal theory and the gravity model, using statistical and spatial data. The scale structure analysis examines rank-size dimensions and uses the principal component method to calculate the comprehensive scale of towns and villages. The functional structure analysis examines the functional types and industrial development of towns and villages. The spatial structure analysis examines the aggregation dimension, network dimension, and correlation dimension of spatial elements to represent the overall spatial relationships. In terms of the organization network, from the perspectives of entity and non-entity networks, the paper analyzes the transportation network and the gravitational network. Based on the analysis of the present situation, optimization strategies are proposed in order to achieve a synergetic relationship between towns and villages in the county area. The paper uses Suning county in the Beijing-Tianjin-Hebei region as a case study to apply the research framework and methods and then proposes optimization orientations. The analysis results indicate that: (1) Suning county lacks medium-scale towns to transfer effects from towns to villages. (2) The distribution of gravitational centers is uneven, and the effect of gravity is limited to nearby towns and villages; the gravitational network is incomplete, leaving economic activities scattered and isolated. (3) The overall development of the towns and villages system is immature, remaining at the ‘single heart and multi-core’ stage, and some specific optimization strategies are proposed. This study provides a regional view for the development of towns and villages and establishes a research framework and methods for the towns and villages system, aiming to form an effective synergetic relationship between them, organize resources, stimulate endogenous motivation, and form counter-magnets to support urban-rural integration.
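A minimal sketch of the gravity-model and rank-size steps named above, assuming hypothetical town scores and distances rather than Suning data: it computes pairwise interaction strengths G_ij = k·M_i·M_j/d_ij² from comprehensive scale scores and fits a rank-size exponent from a log-log regression.

```python
import numpy as np

# Hypothetical comprehensive scale scores (e.g., a principal-component composite)
scale = {"TownA": 8.2, "TownB": 5.1, "TownC": 3.4, "TownD": 1.9}
# Hypothetical distances between towns (km)
dist = {("TownA", "TownB"): 12.0, ("TownA", "TownC"): 20.0, ("TownA", "TownD"): 27.0,
        ("TownB", "TownC"): 9.0, ("TownB", "TownD"): 18.0, ("TownC", "TownD"): 11.0}

def gravity(m_i, m_j, d_ij, k=1.0):
    """Gravity-model interaction strength G_ij = k * M_i * M_j / d_ij**2."""
    return k * m_i * m_j / d_ij**2

links = {pair: gravity(scale[pair[0]], scale[pair[1]], d) for pair, d in dist.items()}
strongest = max(links, key=links.get)
print("Strongest gravitational link:", strongest, round(links[strongest], 3))

# Rank-size (Zipf) exponent: slope of log(size) against log(rank)
sizes = np.sort(np.array(list(scale.values())))[::-1]
ranks = np.arange(1, len(sizes) + 1)
slope, _ = np.polyfit(np.log(ranks), np.log(sizes), 1)
print(f"Rank-size exponent q = {-slope:.2f}")
```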

Keywords: towns and villages system, organization structure, county area, fractal theory, gravity model

Procedia PDF Downloads 116
69 Hidden Wild Edible Agaric Wealth in North West India: Diversity and Domestication Studies

Authors: Munruchi Kaur

Abstract:

Agarics are the fruiting bodies of fungi belonging to the phylum Basidiomycota, class Agaricomycetes. The north-western parts of India, which comprise the mighty Himalayas decorated with snow-capped mountains, forested areas, grasslands, and the Gangetic plains, with altitudes varying between 196 m and 3600 m, have a huge potential for naturally growing wild agarics. These mushrooms grow lavishly in the wet, humid weather conditions that prevail in these parts of India during the monsoon, which arrives in early June and continues up to mid-October. In this area, diverse mixed vegetation is present, represented by coniferous and angiospermic trees, shrubs, herbs, epiphytes, parasites, climbers, etc. The vegetation, topography, and climate of this area are quite favorable for the growth of agarics. Cedrus deodara, Pinus longifolia, P. roxburghii, P. wallichiana, Abies pindrow, A. spectabilis, Picea smithiana, Taxus sp., Rhododendron sp. and Quercus sp. occur in pure formations, as scattered patches, or as mixed forests, whereas the Gangetic plains are dominated by angiospermic trees and shrubs, which commonly occur along roadsides, in conserved areas, or as avenue plantations; common amongst these are Shorea robusta, Dalbergia sissoo, Melia azadirachta, Acacia sp., Ficus benghalensis, Eucalyptus sp. and Butea monosperma. These agarics can be categorized on the basis of the habitat in which they grow: they are usually foliicolous, lignicolous, humicolous, coprophilous, or termitophilous. A number of fungal forays were undertaken to different parts of North West India from time to time during the monsoon season with the aim of deciphering the agaric diversity of this part of India. Along with collecting the various agarics from diverse habitats, ethnomycological data were also collected by interacting with the local inhabitants of those areas. Based upon the ethnomycological data collected over the years, cataloguing of the edible and inedible agarics has been done, and cultures of potential edible agarics were raised with the aim of domesticating these selected taxa. With the aim of reducing the local pressure on these natural resources, a low-cost technology was developed to make cultivation available to the public. As a result, 104 taxa were found edible, such as Amanita hemibapha var. ochracea, A. chepangiana, A. banningiana, A. vaginata, Agrocybe parasitica, Agaricus bisporus, A. andrewii, A. campestris var. campestris, A. silvicola, A. subrutilescens, A. bernardii, A. abruptibulbus, A. fuscovelatus, A. brunnescens, A. augustus, A. silvaticus, A. arvensis, Volvariella bakeri, V. terastia, V. bombycina, V. diplasia, Psathyrella candolleana, Volvopluteus gloiocephalus, Russula cyanoxantha, R. atropurpurea, R. aurea, Clitocybe gibba, Lentinus transitus, L. kashmirinus, L. crinitus, L. ligrinus, Lactarius rubrilacteus, Pleurotus sapidus, Pluteus subcervinus, Macrocybe gigantea, etc. Cultures of various taxa, viz. Pleurotus sajor-caju, Macrocybe gigantea, Pluteus petasatus and Lentinus tigrinus, were raised, and a proper protocol for the domestication of Pleurotus sajor-caju, Macrocybe gigantea, and Lentinus tigrinus has been developed using locally available agro-wastes.

Keywords: Agaric, culture, domestication, edible

Procedia PDF Downloads 47
68 A Study on the Relation among Primary Care Professionals Serving Disadvantaged Community, Socioeconomic Status, and Adverse Health Outcome

Authors: Chau-Kuang Chen, Juanita Buford, Colette Davis, Raisha Allen, John Hughes, James Tyus, Dexter Samuels

Abstract:

During the post-Civil War era, the city of Nashville, Tennessee, had the highest mortality rate in the country. The elevated death and disease among ex-slaves were attributable to the unavailability of healthcare. To address the paucity of healthcare services, the College, an institution with the mission of educating minority professionals and serving the underserved population, was established in 1876. This study was designed to assess whether the College has accomplished its mission of serving underserved communities and contributed to the elimination of health disparities in the United States. The study objective was to quantify the impact of socioeconomic status and adverse health outcomes on primary care professionals serving disadvantaged communities, which, in turn, was significantly associated with a health professional shortage score partly designated by the U.S. Department of Health and Human Services. Various statistical methods were used to analyze the alumni data for the years 1975–2013. K-means cluster analysis was utilized to classify individual medical and dental graduates into cluster groups of practice communities (Disadvantaged or Non-disadvantaged Communities). Discriminant analysis was implemented to verify the classification accuracy of the cluster analysis. The independent t-test was performed to detect significant mean differences in the clustering and criterion variables between Disadvantaged and Non-disadvantaged Communities, which confirms the “content” validity of the cluster analysis model. The chi-square test was used to assess whether the proportions of the cluster groups (Disadvantaged vs. Non-disadvantaged Communities) were consistent with those of the practicing specialties (primary care vs. non-primary care). Finally, the partial least squares (PLS) path model was constructed to explore the “construct” validity of the analytics model by providing the magnitudes of the effects of socioeconomic status and adverse health outcomes on primary care professionals serving disadvantaged communities. Social ecological theory, along with the statistical models mentioned, was used to establish the relationship between medical and dental graduates (primary care professionals serving disadvantaged communities) and their social environments (socioeconomic status, adverse health outcomes, health professional shortage score). Based on the social ecological framework, it was hypothesized that the impact of socioeconomic status and adverse health outcomes on primary care professionals serving disadvantaged communities could be quantified, and that primary care professionals serving disadvantaged communities could be related to a health professional shortage score. Adverse health outcomes (adult obesity rate, age-adjusted premature mortality rate, and percent of people diagnosed with diabetes) could be affected by the latent variable socioeconomic status (unemployment rate, poverty rate, percent of children in free lunch programs, and percent of uninsured adults). The study results indicated that approximately 83% (3,192/3,864) of the College’s medical and dental graduates from 1975 to 2013 were practicing in disadvantaged communities. In addition, the PLS path modeling demonstrated that primary care professionals serving disadvantaged communities were significantly associated with socioeconomic status and adverse health outcomes (p < .001).
In summary, the majority of medical and dental graduates from the College provide primary care services to disadvantaged communities with low socioeconomic status and high adverse health outcomes, which demonstrates that the College has fulfilled its mission.
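A minimal sketch of the K-means clustering step, assuming hypothetical community-level indicators (poverty rate, uninsured adults, premature mortality) rather than the study's actual variables; it assigns graduates' practice communities to two clusters and reports the share falling in the cluster labeled disadvantaged.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical indicators for each graduate's practice community:
# [poverty rate %, uninsured adults %, premature mortality per 100k]
X = np.array([
    [24.0, 21.0, 480.0],
    [9.0, 8.0, 290.0],
    [27.5, 19.0, 510.0],
    [11.0, 10.0, 300.0],
    [22.0, 18.0, 455.0],
    [7.5, 6.0, 260.0],
])

# Two clusters: disadvantaged vs. non-disadvantaged practice communities
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
labels = km.labels_

# Label the cluster with the higher mean poverty rate as "disadvantaged"
disadvantaged = int(np.argmax([X[labels == c, 0].mean() for c in (0, 1)]))
share = np.mean(labels == disadvantaged)
print(f"Share of graduates assigned to disadvantaged communities: {share:.0%}")
```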

Keywords: disadvantaged community, K-means cluster analysis, PLS path modeling, primary care

Procedia PDF Downloads 527
67 A Copula-Based Approach for the Assessment of Severity of Illness and Probability of Mortality: An Exploratory Study Applied to Intensive Care Patients

Authors: Ainura Tursunalieva, Irene Hudson

Abstract:

Continuous improvement of both the quality and safety of health care is an important goal in Australia and internationally. The intensive care unit (ICU) receives patients with a wide variety and severity of illnesses. Accurately identifying patients at risk of developing complications or dying is crucial to increasing healthcare efficiency. Thus, it is essential for clinicians and researchers to have a robust framework capable of evaluating the risk profile of a patient. ICU scoring systems provide such a framework. The Acute Physiology and Chronic Health Evaluation III and the Simplified Acute Physiology Score II (SAPS II) are ICU scoring systems frequently used for assessing the severity of acute illness. These scoring systems collect multiple risk factors for each patient, including physiological measurements, and then render the assessment outcomes of the individual risk factors into a single numerical value. A higher score is related to a more severe patient condition. Furthermore, the Mortality Probability Model II (MPM II) uses logistic regression based on independent risk factors to predict a patient’s probability of mortality. An important overlooked limitation of SAPS II and MPM II is that they do not, to date, include interaction terms between a patient’s vital signs. This is a prominent oversight, as it is likely there is an interplay among vital signs; the co-existence of certain conditions may pose a greater health risk than when these conditions exist independently. One barrier to including such interaction terms in predictive models is the dimensionality issue, as variable selection becomes difficult. We propose an innovative scoring system which takes into account the dependence structure among a patient’s vital signs, such as systolic and diastolic blood pressures, heart rate, pulse interval, and peripheral oxygen saturation. Copulas will capture the dependence among normally distributed and skewed variables, as some of the vital sign distributions are skewed. The estimated dependence parameter will then be incorporated into the traditional scoring systems to adjust the points allocated to the individual vital sign measurements. The same dependence parameter will also be used to create an alternative copula-based model for predicting a patient’s probability of mortality. The new copula-based approach will accommodate not only a patient’s trajectories of vital signs but also the joint dependence probabilities among the vital signs. We hypothesise that this approach will produce more stable assessments and lead to more time-efficient and accurate predictions. We will use two data sets: (1) 250 ICU patients admitted once to the Chui Regional Hospital (Kyrgyzstan) and (2) 37 ICU patients’ agitation-sedation profiles collected by the Hunter Medical Research Institute (Australia). Both the traditional scoring approach and our copula-based approach will be evaluated using the Brier score to indicate overall model performance, the concordance (or c) statistic to indicate discriminative ability (the area under the receiver operating characteristic (ROC) curve), and goodness-of-fit statistics for calibration. We will also report discrimination and calibration values and develop visualizations of the copulas and of high-dimensional regions of risk interrelating two or three vital signs in so-called higher-dimensional ROCs.
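A minimal sketch of the copula idea under a Gaussian-copula assumption: two skewed, dependent vital signs (hypothetical data, not the Chui Regional Hospital or Hunter Medical Research Institute sets) are mapped to uniform margins by ranks and then to normal scores, and the correlation of the normal scores is taken as the copula dependence parameter that could adjust the allocated points.

```python
import numpy as np
from scipy.stats import norm, rankdata

rng = np.random.default_rng(0)
# Hypothetical paired vital-sign measurements:
# systolic blood pressure (mmHg) and heart rate (beats/min)
sbp = rng.normal(120, 18, 200)
hr = 70 + 0.3 * (sbp - 120) + rng.gamma(2.0, 4.0, 200)   # skewed and dependent on SBP

def gaussian_copula_rho(x, y):
    """Estimate a Gaussian-copula dependence parameter between two margins.
    Margins are mapped to uniforms by ranks, then to normal scores, so skewed
    vital-sign distributions are handled without assuming normality."""
    u = rankdata(x) / (len(x) + 1.0)
    v = rankdata(y) / (len(y) + 1.0)
    return np.corrcoef(norm.ppf(u), norm.ppf(v))[0, 1]

rho = gaussian_copula_rho(sbp, hr)
print(f"Copula dependence parameter (SBP vs. heart rate): {rho:.2f}")
# This rho could then weight the points allocated to jointly abnormal vital signs.
```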

Keywords: copula, intensive care unit scoring system, ROC curves, vital sign dependence

Procedia PDF Downloads 131
66 Evaluation of Functional Properties of Protein Hydrolysate from the Fresh Water Mussel Lamellidens marginalis for Nutraceutical Therapy

Authors: Jana Chakrabarti, Madhushrita Das, Ankhi Haldar, Roshni Chatterjee, Tanmoy Dey, Pubali Dhar

Abstract:

High incidences of protein-energy malnutrition as a consequence of low protein intake are quite prevalent among children in developing countries. Thus, prevention of under-nutrition has emerged as a critical challenge for India’s development planners in recent times. The increase in population over the last decade has led to greater pressure on the existing animal protein sources. But these resources are currently declining due to persistent drought, diseases, natural disasters, the high cost of feed, and the low productivity of local breeds, and this decline in productivity is most evident in some developing countries. So the need of the hour is to search for efficient utilization of unconventional, low-cost animal protein resources. Molluscs, as a group, are regarded as an under-exploited source of health-benefit molecules. Bivalvia is the second largest class of the phylum Mollusca. Annual harvests of bivalves for human consumption represent about 5% by weight of the total world harvest of aquatic resources. The freshwater mussel Lamellidens marginalis is widely distributed in ponds and large bodies of perennial water in the Indian sub-continent and is well accepted as food all over India. Moreover, ethno-medicinal uses of the flesh of Lamellidens among rural people to treat hypertension have been documented. The present investigation thus attempts to evaluate the potential of Lamellidens marginalis as a functional food. Mussels were collected from freshwater ponds and brought to the laboratory two days before experimentation for acclimatization to laboratory conditions. Shells were removed and the flesh was preserved at -20°C until analysis. Tissue homogenate was prepared for proximate studies. Fatty acid and amino acid compositions were analyzed. Vitamin, mineral, and heavy metal contents were also studied. Mussel protein hydrolysate was prepared using Alcalase 2.4 L, and the degree of hydrolysis was evaluated to analyze its functional properties. Ferric Reducing Antioxidant Power (FRAP) and DPPH antioxidant assays were performed. Anti-hypertensive property was evaluated by an Angiotensin Converting Enzyme (ACE) inhibition assay. Proximate analysis indicates that mussel meat contains moderate amounts of protein (8.30±0.67%), carbohydrate (8.01±0.38%) and reducing sugar (4.75±0.07%), but a low amount of fat (1.02±0.20%). Moisture content is quite high, but ash content is very low. Phospholipid content is significantly high (19.43%). The lipid fraction contains substantial amounts of eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA), which have proven prophylactic value. Trace elements are present in substantial amounts. A comparative study of proximate nutrients between Labeo rohita, Lamellidens and cow’s milk indicates that mussel meat can be used as a complementary food source. Functionality analyses of the protein hydrolysate show increases in fat absorption, emulsification, foaming capacity and protein solubility. Progressive antioxidant and anti-hypertensive properties have also been documented. Lamellidens marginalis can thus be regarded as a functional food source, as it may combine effectively with other food components to provide essential elements to the body. Moreover, mussel protein hydrolysate provides opportunities for use in various food formulations and pharmaceuticals. The observations presented herein should be viewed as a prelude to what the future holds.

Keywords: functional food, functional properties, Lamellidens marginalis, protein hydrolysate

Procedia PDF Downloads 398
65 Computational, Human, and Material Modalities: An Augmented Reality Workflow for Building form Found Textile Structures

Authors: James Forren

Abstract:

This research paper details a recent demonstrator project in which digitally form-found textile structures were built by human craftspersons wearing augmented reality (AR) head-worn displays (HWDs). The project utilized a wet-state natural fiber / cementitious matrix composite to generate minimal-bending shapes in tension which, when cured and rotated, performed as minimal-bending compression members. The significance of the project is that it synthesizes computational structural simulations with visually guided handcraft production. Computational and physical form-finding methods with textiles are well characterized in the development of architectural form. One difficulty, however, is physically building computer simulations, which often requires complicated digital fabrication workflows. However, AR HWDs have been used to build complex digital forms from bricks, wood, plastic, and steel without digital fabrication devices. These projects utilize, instead, the tacit-knowledge motor schema of the human craftsperson. Computational simulations offer unprecedented speed and performance in solving complex structural problems. Human craftspersons possess highly efficient, complex spatial reasoning motor schemas. And textiles offer efficient form-generating possibilities for individual structural members and overall structural forms. This project proposes that the synthesis of these three modalities of structural problem-solving – computational, human, and material – may not only develop efficient structural form but offer further creative potentialities when the respective intelligence of each modality is productively leveraged. The project methodology pertains to its three modalities of production: 1) computational, 2) human, and 3) material. A proprietary three-dimensional graphic statics simulator generated a three-legged arch as a wireframe model. This wireframe was discretized into nine modules, three modules per leg. Each module was modeled as a woven matrix of one-inch diameter chords. And each woven matrix was transmitted to a holographic engine running on HWDs. Craftspersons wearing the HWDs then wove wet cementitious chords within a simple falsework frame to match the minimal-bending form displayed in front of them. Once the woven components cured, they were demounted from the frame. The components were then assembled into a full structure using the holographically displayed computational model as a guide. The assembled structure was approximately eighteen feet in diameter and ten feet in height and matched the holographic model to under an inch of tolerance. The construction validated the computational simulation of the minimal-bending form, as it was dimensionally stable for a ten-day period, after which it was disassembled. The demonstrator illustrated the facility with which a computationally derived, structurally stable form could be achieved by the holographically guided, complex three-dimensional motor schema of the human craftsperson. However, the workflow traveled unidirectionally from computer to human to material, failing to fully leverage the intelligence of each modality. Subsequent research – a workshop testing human interaction with a physics-engine simulation of string networks, and research on the use of HWDs to capture hand gestures in weaving – seeks to develop further interactivity with rope and chord towards a bi-directional workflow within full-scale building environments.
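The wet-state tension members rely on the hanging-chain principle: a chain in pure tension, once inverted, acts as a minimal-bending compression arch. The sketch below is a two-dimensional simplification (not the proprietary three-dimensional graphic statics simulator used in the project) that solves for a catenary of illustrative span and sag.

```python
import numpy as np
from scipy.optimize import brentq

def catenary(span, sag, n=50):
    """Hanging-chain (pure tension) profile y = a*cosh(x/a) - a over a given span;
    inverted, the same geometry acts as a minimal-bending compression arch."""
    # Solve a*cosh(span/(2a)) - a = sag for the catenary parameter a
    f = lambda a: a * np.cosh(span / (2.0 * a)) - a - sag
    a = brentq(f, 0.5, 100.0)
    x = np.linspace(-span / 2.0, span / 2.0, n)
    y = a * np.cosh(x / a) - a
    return x, sag - y   # invert so the crown sits at the top

x, y = catenary(span=5.5, sag=3.0)   # illustrative dimensions, metres
print(f"Crown height of inverted arch: {y.max():.2f} m")
```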

Keywords: augmented reality, cementitious composites, computational form finding, textile structures

Procedia PDF Downloads 149
64 Effect of Degree of Phosphorylation on Electrospinning and In vitro Cell Behavior of Phosphorylated Polymers as Biomimetic Materials for Tissue Engineering Applications

Authors: Pallab Datta, Jyotirmoy Chatterjee, Santanu Dhara

Abstract:

Over the past few years, phosphorus-containing polymers have received widespread attention for applications such as high-performance optical fibers, flame-retardant materials, drug delivery, and tissue engineering. Being pentavalent, phosphorus can exist in different chemical environments in these polymers, which increases their versatility. In human biochemistry, phosphorus-based compounds exert their functions both in soluble and insoluble form, occurring as inorganic or as organophosphorus compounds. Specifically, in the case of biomacromolecules, phosphates are critical for the functions of DNA, ATP, phosphoproteins, phospholipids, phosphoglycans, and several coenzymes. Inspired by the role of phosphorus in functional biomacromolecules, the design and synthesis of biomimetic materials are thus carried out by several authors to study macromolecular function or as substitutes in clinical tissue regeneration conditions. In addition, many regulatory signals of the body are controlled by phosphorylation of key proteins present either in the form of growth factors or matrix-bound scaffold proteins. This inspires work on the synthesis of phospho-peptidomimetic amino acids for understanding key signaling pathways, and this is extended to obtain molecules with potentially useful biological properties. Apart from the above applications, phosphate groups bound to polymer backbones have also been demonstrated to improve the function of osteoblast cells and augment the performance of bone grafts. Despite the advantages of phosphate grafting, however, there is limited understanding of the effect of the degree of phosphorylation on macromolecular physicochemical and/or biological properties. Such investigations are necessary to effectively translate knowledge of macromolecular biochemistry into relevant clinical products, since they directly influence the processability of these polymers into suitable scaffold structures and control the subsequent biological response. Amongst the various techniques for fabrication of biomimetic scaffolds, nanofibrous scaffolds fabricated by the electrospinning technique offer some special advantages in resembling the attributes of the natural extracellular matrix. Understanding changes in the physico-chemical properties of polymers as a function of phosphorylation is therefore going to be crucial in the development of nanofiber scaffolds based on phosphorylated polymers. The aim of the present work is to investigate the effect of phosphorus grafting on the electrospinning behavior of polymers, with the aim of obtaining biomaterials for bone regeneration applications. For this purpose, phosphorylated derivatives of two polymers with widely different electrospinning behaviors were selected as starting materials. Poly(vinyl alcohol) is a conveniently electrospinnable polymer at different conditions and concentrations. On the other hand, electrospinning of chitosan-backbone-based polymers has been viewed as a critical challenge. The phosphorylated derivatives of these polymers were synthesized and characterized, and the electrospinning behavior of various solutions containing these derivatives was compared with the electrospinning of pure poly(vinyl alcohol). In PVA, phosphorylation adversely impacted electrospinnability, while in NMPC, higher phosphate content widened the concentration range for nanofiber formation. Culture of MG-63 cells on the electrospun nanofibers revealed that the degree of phosphate modification of a polymer significantly improves cell adhesion or osteoblast function of the cultured cells.
It is concluded that improvement of cell response parameters of nanofiber scaffolds can be attained as a function of controlled degree of phosphate grafting in polymeric biomaterials with implications for bone tissue engineering applications.

Keywords: bone regeneration, chitosan, electrospinning, phosphorylation

Procedia PDF Downloads 199
63 The Ductile Fracture of Armor Steel Targets Subjected to Ballistic Impact and Perforation: Calibration of Four Damage Criteria

Authors: Imen Asma Mbarek, Alexis Rusinek, Etienne Petit, Guy Sutter, Gautier List

Abstract:

Over the past two decades, the automotive, aerospace, and army industries have been paying increasing attention to finite element (FE) numerical simulations of the fracture process of their structures. Thanks to numerical simulations, it is nowadays possible to analyze safely and at reduced cost several problems involving costly and dangerous extreme loadings, such as blast or ballistic impact problems. The present paper is concerned with ballistic impact and perforation problems involving ductile fracture of thin armor steel targets. The target fracture process usually depends on various parameters: the projectile nose shape, the target thickness and its mechanical properties, as well as the impact conditions (friction, oblique/normal impact...). In this work, the investigations concern the normal impact of a conical-nosed projectile on thin armor steel targets. The main aim is to establish a comparative study of four fracture criteria that are commonly used in simulations of the fracture process of structures subjected to extreme loadings such as ballistic impact and perforation. Usually, damage initiation results from a complex physical process that occurs at the micromechanical scale. On the macro scale, and according to the following fracture models, the variables on which the fracture depends are mainly the stress triaxiality η, the strain rate, the temperature T, and possibly the Lode angle parameter θ. The four failure criteria are: the critical strain to failure model, the Johnson-Cook model, the Wierzbicki model, and the Modified Hosford-Coulomb (MHC) model. SEM observations of the fracture surfaces of tension specimens and of armor steel targets impacted at low and high incident velocities show that the fracture of the specimens is ductile. The failure mode of the targets is petalling with crack propagation, and the fracture surfaces are covered with micro-cavities. The parameters of each ductile fracture model have been identified for three armor steels, and the applicability of each criterion was evaluated using experimental investigations coupled with numerical simulations. Two loading paths were investigated in this study over a wide range of strain rates: quasi-static and intermediate uniaxial tension, and quasi-static and dynamic double shear testing, covering various values of the stress triaxiality η and of the Lode angle parameter θ. All experiments were conducted on three different armor steel specimens under quasi-static strain rates ranging from 10⁻⁴ to 10⁻¹ s⁻¹ and at three different temperatures ranging from 297 K to 500 K, allowing the influence of temperature on the fracture process to be assessed. Intermediate tension testing was coupled with dynamic double shear experiments conducted on the Hopkinson tube device, allowing the effect of high strain rate on the damage evolution and crack propagation to be observed. The aforementioned fracture criteria are implemented into the FE code ABAQUS via a VUMAT subroutine and coupled with suitable constitutive relations to obtain reliable simulations of ballistic impact problems. The calibration of the four damage criteria, as well as a concise evaluation of the applicability of each criterion, is detailed in this work.
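Of the four criteria, the Johnson-Cook failure model has a compact closed form, and the sketch below evaluates it with illustrative parameters (not the calibrated armor-steel values reported in this work): the failure strain depends on stress triaxiality, dimensionless strain rate, and homologous temperature, and damage accumulates along the strain path until it reaches unity.

```python
import numpy as np

def jc_failure_strain(eta, eps_rate, T, d, eps0=1.0, T_room=297.0, T_melt=1800.0):
    """Johnson-Cook failure strain:
    eps_f = [d1 + d2*exp(d3*eta)] * [1 + d4*ln(eps_rate/eps0)] * [1 + d5*T*]
    with T* the homologous temperature. d = (d1..d5) are illustrative here."""
    d1, d2, d3, d4, d5 = d
    t_star = (T - T_room) / (T_melt - T_room)
    return (d1 + d2 * np.exp(d3 * eta)) * (1.0 + d4 * np.log(eps_rate / eps0)) * (1.0 + d5 * t_star)

def accumulate_damage(strain_increments, eta, eps_rate, T, d):
    """Damage D = sum(d_eps / eps_f); fracture is predicted when D reaches 1."""
    eps_f = jc_failure_strain(eta, eps_rate, T, d)
    return np.cumsum(np.asarray(strain_increments) / eps_f)

# Illustrative parameters, not the calibrated armor-steel values from this study
d_params = (0.05, 3.44, -2.12, 0.002, 0.61)
D = accumulate_damage([0.01] * 60, eta=0.33, eps_rate=1e3, T=350.0, d=d_params)
print(f"Damage after the strain path: {D[-1]:.2f} (fracture when D >= 1)")
```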

Keywords: armor steels, ballistic impact, damage criteria, ductile fracture, SEM

Procedia PDF Downloads 294
62 Deciphering Information Quality: Unraveling the Impact of Information Distortion in the UK Aerospace Supply Chains

Authors: Jing Jin

Abstract:

The incorporation of artificial intelligence (AI) and machine learning (ML) in aircraft manufacturing and aerospace supply chains leads to the generation of a substantial amount of data among various tiers of suppliers and OEMs. Identifying high-quality information challenges decision-makers. The application of AI/ML models necessitates access to 'high-quality' information to yield the desired outputs. However, the process of information sharing introduces complexities, including distortion through various communication channels and biases introduced by both human and AI entities. This phenomenon significantly influences the quality of information, impacting decision-makers engaged in configuring supply chain systems. Traditionally, distorted information is categorized as 'low-quality'; however, this study challenges this perception, positing that distorted information that contributes to stakeholder goals can be deemed high-quality within supply chains. The main aim of this study is to identify and evaluate the dimensions of information quality crucial to the UK aerospace supply chain. Guided by a central research question, "What information quality dimensions are considered when defining information quality in the UK aerospace supply chain?", the study delves into the intricate dynamics of information quality in the aerospace industry. Additionally, the research explores the nuanced impact of information distortion on stakeholders' decision-making processes, addressing the question, "How does the information distortion phenomenon influence stakeholders’ decisions regarding information quality in the UK aerospace supply chain system?" This study employs deductive methodologies rooted in positivism, utilizing a cross-sectional approach and a mono-quantitative method, a questionnaire survey. Data is systematically collected from diverse tiers of supply chain stakeholders, encompassing end-customers, OEMs, Tier 0.5, Tier 1, and Tier 2 suppliers. Employing robust statistical data analysis methods, including mean values, mode values, standard deviation, one-way analysis of variance (ANOVA), and Pearson’s correlation analysis, the study interprets and extracts meaningful insights from the gathered data. Initial analyses challenge conventional notions, revealing that information distortion positively influences the definition of information quality, disrupting the established perception of distorted information as inherently low-quality. Further exploration through correlation analysis unveils the varied perspectives of different stakeholder tiers on the impact of information distortion on specific information quality dimensions. For instance, Tier 2 suppliers demonstrate strong positive correlations between information distortion and dimensions like access security, accuracy, interpretability, and timeliness. Conversely, Tier 1 suppliers emphasise strong negative influences on the security of accessing information and negligible impact on information timeliness. Tier 0.5 suppliers showcase very strong positive correlations with dimensions like conciseness and completeness, while OEMs exhibit limited interest in considering information distortion within the supply chain. Introducing social network analysis (SNA) provides a structural understanding of the relationships between information distortion and quality dimensions. The moderately high density of the ‘information distortion-by-information quality’ network underscores the interconnected nature of these factors.
In conclusion, this study offers a nuanced exploration of information quality dimensions in the UK aerospace supply chain, highlighting the significance of individual perspectives across different tiers. The positive influence of information distortion challenges prevailing assumptions, fostering a more nuanced understanding of information's role in the Industry 4.0 landscape.
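A minimal sketch of two of the quantitative steps named above, using hypothetical Likert-scale responses for a single supplier tier: Pearson correlations between an information-distortion score and individual quality dimensions, and the density of the resulting 'distortion-by-quality' network taken as significant links over possible links.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical Likert-scale survey responses from one supplier tier
distortion = np.array([4, 5, 3, 4, 2, 5, 4, 3, 5, 4])
quality = {
    "access_security": np.array([4, 5, 3, 4, 2, 5, 4, 3, 4, 4]),
    "accuracy":        np.array([3, 5, 3, 4, 2, 4, 4, 2, 5, 4]),
    "timeliness":      np.array([2, 3, 4, 3, 4, 2, 3, 4, 2, 3]),
}

significant_links = 0
for dim, scores in quality.items():
    r, p = pearsonr(distortion, scores)
    print(f"{dim}: r = {r:+.2f}, p = {p:.3f}")
    if p < 0.05:
        significant_links += 1

# Density of the distortion-by-quality network:
# observed significant links over the maximum possible links (one distortion node x 3 dimensions)
density = significant_links / len(quality)
print(f"Network density: {density:.2f}")
```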

Keywords: information distortion, information quality, supply chain configuration, UK aerospace industry

Procedia PDF Downloads 36
61 Laboratory and Numerical Hydraulic Modelling of Annular Pipe Electrocoagulation Reactors

Authors: Alejandra Martin-Dominguez, Javier Canto-Rios, Velitchko Tzatchkov

Abstract:

Electrocoagulation is a water treatment technology that consists of generating coagulant species in situ by electrolytic oxidation of sacrificial anode materials triggered by an electric current. It removes suspended solids, heavy metals, emulsified oils, bacteria, colloidal solids and particles, soluble inorganic pollutants and other contaminants from water, offering an alternative to the use of metal salts or polymers and polyelectrolyte addition for breaking stable emulsions and suspensions. The method essentially consists of passing the water being treated through pairs of consumable conductive metal plates in parallel, which act as monopolar electrodes, commonly known as ‘sacrificial electrodes’. Physicochemical, electrochemical and hydraulic processes are involved in the efficiency of this type of treatment. While the physicochemical and electrochemical aspects of the technology have been extensively studied, little is known about the influence of the hydraulics. However, the hydraulic process is fundamental for the reactions that take place at the electrode boundary layers and for the coagulant mixing. Electrocoagulation reactors can be open (with a free water surface) or closed (pressurized). Independently of the type of reactor, hydraulic head loss is an important factor in its design. The present work focuses on the study of the total hydraulic head loss and the flow velocity and pressure distribution in electrocoagulation reactors with single or multiple concentric annular cross sections. An analysis of the head loss produced by hydraulic wall shear friction and accessories (minor head losses) is presented and compared to the head loss measured on a semi-pilot-scale laboratory model for different flow rates through the reactor. The tests included laminar, transitional and turbulent flow. The observed head loss was also compared to the head loss predicted by several known conceptual theoretical and empirical equations specific to flow in concentric annular pipes. Four single concentric annular cross-section reactor configurations and one multiple concentric annular cross-section configuration were studied. The theoretical head loss was higher than that observed in the laboratory model in some of the tests and lower in others, depending also on the value assumed for the wall roughness. Most of the theoretical models assume that the fluid elements in all annular sections have the same velocity and that the flow is steady, uniform and one-dimensional, with the same pressure and velocity profiles in all reactor sections. To check the validity of such assumptions, a computational fluid dynamics (CFD) model of the concentric annular pipe reactor was implemented using the ANSYS Fluent software, demonstrating that the pressure and flow velocity distributions inside the reactor are actually not uniform. Based on the analysis, the equations that best predict the head loss in single and multiple annular sections were identified. Other factors that may impact the head loss, such as the generation of coagulants and gases during the electrochemical reaction, the accumulation of hydroxides inside the reactor, and the change of the electrode material with time, are also discussed. The results can be used as tools for the design and scale-up of electrocoagulation reactors, to be integrated into new or existing water treatment plants.
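For the wall-friction component, one of the conceptual equations commonly compared against such measurements is the Darcy-Weisbach relation evaluated with the hydraulic diameter of the annulus, D_h = D_o − D_i. The sketch below applies it with the laminar 64/Re friction factor and the Swamee-Jain approximation for turbulent flow; the reactor dimensions and flow rate are illustrative, not those of the laboratory model.

```python
import numpy as np

def annular_head_loss(q, d_outer, d_inner, length, roughness=1.5e-6,
                      nu=1.0e-6, g=9.81):
    """Darcy-Weisbach head loss in a concentric annulus using the hydraulic
    diameter D_h = D_o - D_i. Laminar: f = 64/Re; turbulent: Swamee-Jain.
    Dimensions in metres, flow rate q in m^3/s."""
    area = np.pi / 4.0 * (d_outer**2 - d_inner**2)
    d_h = d_outer - d_inner
    v = q / area
    re = v * d_h / nu
    if re < 2300.0:
        f = 64.0 / re
    else:
        f = 0.25 / np.log10(roughness / (3.7 * d_h) + 5.74 / re**0.9) ** 2
    return f * (length / d_h) * v**2 / (2.0 * g)

# Illustrative single-annulus reactor: 100 mm outer, 60 mm inner, 1.2 m long
hl = annular_head_loss(q=2.0e-4, d_outer=0.10, d_inner=0.06, length=1.2)
print(f"Friction head loss: {hl * 1000:.2f} mm of water column")
```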

Keywords: electrocoagulation reactors, hydraulic head loss, concentric annular pipes, computational fluid dynamics model

Procedia PDF Downloads 201
60 A Study of Seismic Design Approaches for Steel Sheet Piles: Hydrodynamic Pressures and Reduction Factors Using CFD and Dynamic Calculations

Authors: Helena Pera, Arcadi Sanmartin, Albert Falques, Rafael Rebolo, Xavier Ametller, Heiko Zillgen, Cecile Prum, Boris Even, Eric Kapornyai

Abstract:

Sheet pile systems can be an interesting solution for harbor or quay designs. However, current design methods lead to conservative approaches due to the lack of a specific basis of design. For instance, some design features still rely on pseudo-static approaches, although the problem is dynamic. With this in mind, the study focuses particularly on the definition of hydrodynamic water pressure and on the stability analysis of sheet pile systems under seismic loads. During a seismic event, seawater produces hydrodynamic pressures on structures. Currently, design methods introduce hydrodynamic forces by means of the Westergaard formulation and Eurocode recommendations. They apply a constant hydrodynamic pressure on the front sheet pile during the entire earthquake. As a result, the hydrodynamic load may represent 20% of the total forces produced on the sheet pile. Nonetheless, some studies question that approach. Hence, this study assesses the soil-structure-fluid interaction of sheet piles under seismic action in order to evaluate whether current design strategies overestimate hydrodynamic pressures. For that purpose, this study performs various simulations with Plaxis 2D, a well-known geotechnical software, and with CFD models, which treat fluid dynamic behaviour. Since neither Plaxis nor CFD can resolve a coupled soil-fluid problem, the investigation imposes sheet pile displacements from Plaxis as input data for the CFD model. The CFD model then provides hydrodynamic pressures under seismic action, which fit the theoretical Westergaard pressures if these are calculated using the acceleration at each moment of the earthquake. Thus, hydrodynamic pressures fluctuate during the seismic action instead of remaining constant, as design recommendations propose. Additionally, these findings show that the hydrodynamic pressure contributes about 5% of the total load applied on the sheet pile, due to its instantaneous nature. These results are in line with other studies that use added-mass methods for hydrodynamic pressures. Another important feature in sheet pile design is the assessment of the overall geotechnical stability. This relies on pseudo-static analysis, since the dynamic analysis cannot provide a safety calculation, and it therefore requires an estimate of the seismic action. One of its relevant factors is the selection of the seismic reduction factor. A large number of studies discuss its importance as well as its uncertainties. Moreover, current European standards do not propose a clear statement on this and recommend using a reduction factor equal to 1. This leads to conservative requirements when compared with more advanced methods. In this situation, the study calibrates the seismic reduction factor by fitting results from pseudo-static to dynamic analyses. The investigation concludes that pseudo-static analyses could reduce the seismic action by 40-50%. These results are in line with some studies from Japanese and European working groups. In addition, it seems suitable to account for the flexibility of the sheet pile-soil system. Nevertheless, the calibrated reduction factor is subject to the particular conditions of each design case. Further research would contribute to specifying recommendations for selecting reduction factor values in the early stages of design. In conclusion, sheet pile design still has room for improving its design methodologies and approaches. Consequently, designs could propose better seismic solutions thanks to advanced methods such as the findings of this study.
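The Westergaard pressure that current methods apply can be written out directly. The sketch below compares the constant design pressure (based on a peak seismic coefficient) with the instantaneous pressure obtained by evaluating the same formula at each moment of an illustrative acceleration history, which is the fluctuating behaviour the coupled Plaxis-CFD simulations reproduced; the water depth, seismic coefficient, and acceleration history are assumptions.

```python
import numpy as np

RHO_W, G = 1000.0, 9.81   # water density (kg/m^3), gravity (m/s^2)

def westergaard_pressure(z, h_water, k_h):
    """Westergaard hydrodynamic pressure at depth z (m) on a rigid vertical wall,
    p = (7/8) * k_h * rho_w * g * sqrt(H * z), with k_h the seismic coefficient."""
    return 0.875 * k_h * RHO_W * G * np.sqrt(h_water * z)

H = 12.0                       # illustrative retained water depth (m)
z = np.linspace(0.0, H, 25)

# Constant design pressure based on the peak seismic coefficient
p_design = westergaard_pressure(z, H, k_h=0.25)

# Instantaneous pressure for an illustrative acceleration history
t = np.linspace(0.0, 10.0, 500)
k_h_t = 0.25 * np.sin(2.0 * np.pi * 1.2 * t)          # fluctuating seismic coefficient
p_toe_t = westergaard_pressure(H, H, np.abs(k_h_t))   # pressure at the sheet pile toe

print(f"Constant design pressure at toe: {p_design[-1] / 1000:.1f} kPa")
print(f"Time-averaged instantaneous pressure at toe: {p_toe_t.mean() / 1000:.1f} kPa")
```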

Keywords: computational fluid dynamics, hydrodynamic pressures, pseudo-static analysis, quays, seismic design, steel sheet pile

Procedia PDF Downloads 122
59 Parallel Opportunity for Water Conservation and Habitat Formation on Regulated Streams through Formation of Thermal Stratification in River Pools

Authors: Todd H. Buxton, Yong G. Lai

Abstract:

Temperature management in regulated rivers can involve significant expenditures of water to meet the cold-water requirements of species in summer. For this purpose, flows released from Lewiston Dam on the Trinity River in Northern California are 12.7 cms with temperatures around 11°C from July through September to provide adult spring Chinook with cold water to hold in deep pools and mature until spawning in fall. The releases are more than double the flow, and around 10°C colder, than the natural conditions before the dam was built. The high, cold releases provide springers the habitat they require but may suppress the stream food base and limit future populations of salmon by reducing juvenile fish size and survival to adulthood, via the positive relationship between the two. Field and modeling research was undertaken to explore whether lowering summer releases from Lewiston Dam may promote thermal stratification in river pools so that both the cold-water needs of adult salmon and the warmer-water requirements of other organisms in the stream biome may be met. For this investigation, a three-dimensional (3D) computational fluid dynamics (CFD) model was developed and validated with field measurements in two deep pools on the Trinity River. Modeling and field observations were then used to identify the flows and temperatures that may form and maintain thermal stratification under different meteorological conditions. Under low flows, a pool was found to be well mixed and thermally homogeneous until temperatures began to stratify shortly after sunrise. Stratification then strengthened through the day until shading from trees and mountains cooled the inlet flow and decayed the thermal gradient, which collapsed shortly before sunset and returned the pool to a well-mixed state. This diurnal process of stratification formation and destruction was closely predicted by the 3D CFD model. Both the model and field observations indicate that thermal stratification maintained the coldest temperatures of the day at ≥2 m depth in a pool and provided water that was around 8°C warmer in the upper 2 m of the pool. Results further indicate that the stratified pool under low flows provided almost the same daily average temperatures as when flows were an order of magnitude higher and stratification was prevented, indicating that significant water savings may be realized in regulated streams while also providing the diversity in water temperatures the ecosystem requires. With confidence in the 3D CFD model, the model is now being applied to a dozen pools in the Trinity River to understand how pool bathymetry influences thermal stratification under variable flows and diurnal temperature variations. This knowledge will be used to expand the results to 52 pools in a 64 km reach below Lewiston Dam that meet the depth criterion (≥2 m) for spring Chinook holding. From this, rating curves will be developed to relate discharge to the volume of pool habitat that provides springers the temperature (<15.6°C daily average), velocity (0.15 to 0.4 m/s) and depths that accommodate the escapement target for spring Chinook (6,000 adults) under the maximum fish densities measured in other streams (3.1 m³/fish) during the holding time of year (May through August).
Flow releases that meet these goals will be evaluated for water savings relative to the current flow regime and their influence on indicator species, including the Foothill Yellow-Legged Frog, and aspects of the stream biome that support salmon populations, including macroinvertebrate production and juvenile Chinook growth rates.
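The escapement target and density criterion quoted above translate directly into a required holding volume, and a rating curve can then be screened against it. The sketch below performs that arithmetic; the discharge-to-volume entries are hypothetical placeholders, not Trinity River results.

```python
# Required spring Chinook holding volume from the criteria quoted above
escapement_target = 6000        # adult spring Chinook
density = 3.1                   # m^3 of suitable pool volume per fish
required_volume = escapement_target * density
print(f"Required holding volume: {required_volume:,.0f} m^3")   # 18,600 m^3

# Hypothetical rating-curve entries: discharge (cms) -> qualifying pool volume (m^3)
# (volume meeting the <15.6 C daily average, 0.15-0.4 m/s, >=2 m depth criteria)
rating = {3.0: 12500.0, 6.0: 17800.0, 8.5: 19400.0, 12.7: 21000.0}
adequate = [q for q, v in sorted(rating.items()) if v >= required_volume]
print("Lowest hypothetical release meeting the holding-volume target:",
      f"{adequate[0]} cms" if adequate else "none")
```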

Keywords: 3D CFD modeling, flow regulation, thermal stratification, Chinook salmon, foothill yellow-legged frogs, water management

Procedia PDF Downloads 39