Search results for: Laurent series
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2632

232 Safety Assessment of Traditional Ready-to-Eat Meat Products Vended at Retail Outlets in Kebbi and Sokoto States, Nigeria

Authors: M. I. Ribah, M. Jibir, Y. A. Bashar, S. S. Manga

Abstract:

Food safety is a significant and growing public health problem worldwide, and particularly in a developing country such as Nigeria, since food-borne diseases are important contributors to the huge burden of human sickness and death. In Nigeria, traditional ready-to-eat meat products (RTE-MPs) such as balangu, tsire and guru, and dried meat products such as kilishi, dambun nama and banda, were reported to be highly appreciated because of their eating qualities. The consumption of these products was considered safe due to the treatments usually involved in their production process. However, during processing and handling, the products can be contaminated by pathogens that cause food poisoning. Therefore, a hazard identification for pathogenic bacteria on some traditional RTE-MPs was conducted in Kebbi and Sokoto States, Nigeria. A total of 116 RTE-MP samples (balangu, 38; kilishi, 39; tsire, 39) were obtained from retail outlets and analyzed using standard cultural microbiological procedures in general and selective enrichment media to isolate the target pathogens. A six-fold serial dilution was prepared, and colonies were counted using the pour-plate method. Serial dilutions were selected based on the prepared pre-labeled Petri dishes for each sample. A volume of 10-12 ml of molten nutrient agar cooled to 42-45°C was poured into each Petri dish, and 1 ml from each of the 10², 10⁴ and 10⁶ dilutions of every sample was poured onto a pre-labeled Petri plate, after which colonies were counted. The isolated pathogens were identified and confirmed after a series of biochemical tests. Frequencies and percentages were used to describe the presence of pathogens. The General Linear Model was used to analyze data on pathogen presence according to RTE-MP type, and means were separated using the Tukey test at the 0.05 significance level. Of the 116 RTE-MP samples collected, 35 (30.17%) were found to be contaminated with the tested pathogens. Prevalence results showed that Escherichia coli, Salmonella and Staphylococcus aureus were present in the samples. The mean total bacterial count was 23.82×10⁶ cfu/g. The frequencies of the individual pathogens isolated were Staphylococcus aureus 18 (15.51%), Escherichia coli 12 (10.34%) and Salmonella 5 (4.31%). Also, among the RTE-MPs tested, the total bacterial counts differed significantly (P < 0.05), with 1.81, 2.41 and 2.9×10⁴ cfu/g for tsire, kilishi and balangu, respectively. The study concluded that the presence of pathogenic bacteria in balangu could pose grave health risks to consumers, and hence recommended good manufacturing practices in the production of balangu to improve the products’ safety.
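
The colony-count arithmetic described above can be illustrated with a short sketch. The plate count and dilution used below are hypothetical and are not the study's data; the formula is the standard pour-plate estimate (colonies × dilution factor per 1 ml plated), and prevalence is simply positives divided by total samples.

```python
# Minimal sketch of pour-plate CFU/g arithmetic and prevalence
# (hypothetical counts, not the study's data).

def cfu_per_gram(colonies: int, dilution_factor: float, volume_ml: float = 1.0) -> float:
    """Estimate CFU per gram of sample from a single countable plate."""
    return colonies * dilution_factor / volume_ml

# Hypothetical plate read at the 10^4 dilution of one sample:
print(f"{cfu_per_gram(colonies=238, dilution_factor=1e4):.2e} CFU/g")   # 2.38e+06 CFU/g

# Prevalence of a pathogen across the sample set (e.g. 18 positives out of 116 samples):
positives, samples = 18, 116
print(f"{positives / samples * 100:.1f} %")                             # ~15.5 %
```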

Keywords: ready-to-eat meat products, retail outlets, public health, safety assessment

Procedia PDF Downloads 103
231 Synthesis, Molecular Modeling and Study of 2-Substituted-4-(Benzo[D][1,3]Dioxol-5-Yl)-6-Phenylpyridazin-3(2H)-One Derivatives as Potential Analgesic and Anti-Inflammatory Agents

Authors: Jyoti Singh, Ranju Bansal

Abstract:

Fighting pain and inflammation is a common problem faced by physicians dealing with a wide variety of diseases. Since ancient times, nonsteroidal anti-inflammatory drugs (NSAIDs) and opioids have been the cornerstone of treatment; however, the usefulness of both classes is limited by severe side effects. NSAIDs, which are mainly used to treat mild to moderate inflammatory pain, induce gastric irritation and nephrotoxicity, whereas opioids show an array of adverse reactions such as respiratory depression, sedation, and constipation. Moreover, repeated administration of these drugs induces tolerance to the analgesic effects and physical dependence. The subsequent discovery of selective COX-2 inhibitors (coxibs) suggested safety without ulcerogenic side effects; however, long-term use of these drugs resulted in kidney and hepatic toxicity along with an increased risk of secondary cardiovascular effects. The basic approaches towards inflammation and pain treatment are constantly changing, and researchers are continuously trying to develop safer and more effective anti-inflammatory drug candidates for the treatment of different inflammatory conditions such as osteoarthritis, rheumatoid arthritis, ankylosing spondylitis, psoriasis and multiple sclerosis. Synthetic pyridazin-3(2H)-ones constitute an important scaffold for drug discovery. Structure-activity relationship (SAR) studies on pyridazinones have shown that attachment of a lactam at N-2 of the pyridazinone ring through a methylene spacer significantly increases the anti-inflammatory and analgesic properties of the derivatives, and further introduction of a heterocyclic ring at the lactam nitrogen improves the biological activities. Keeping these SAR studies in mind, a new series of compounds was synthesized as shown in Scheme 1 and investigated for anti-inflammatory, analgesic and anti-platelet activities, together with docking studies. The structures of the newly synthesized compounds were established by various spectroscopic techniques. All the synthesized pyridazinone derivatives exhibited potent anti-inflammatory and analgesic activity. The homoveratryl-substituted derivative possessed the highest anti-inflammatory and analgesic activity, displaying 73.60% inhibition of edema at 40 mg/kg with no ulcerogenic activity when compared to the standard drug indomethacin. Moreover, the 2-substituted-4-(benzo[d][1,3]dioxol-5-yl)-6-phenylpyridazin-3(2H)-one derivatives did not produce significant changes in bleeding time and emerged as safe agents. Molecular docking studies also showed good binding interactions at the active site of the cyclooxygenase-2 (hCOX-2) enzyme.
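
The anti-inflammatory figure quoted above (73.60% inhibition of edema at 40 mg/kg) is conventionally derived from the mean edema volumes of a control and a treated group. The sketch below illustrates only that standard calculation; the edema model and all volumes are invented for illustration and are not the study's measurements.

```python
# Illustrative percent-inhibition-of-edema calculation (invented values, not study data).
# % inhibition = (mean edema of control group - mean edema of treated group) / control * 100

def percent_inhibition(control_edema_ml, treated_edema_ml):
    control_mean = sum(control_edema_ml) / len(control_edema_ml)
    treated_mean = sum(treated_edema_ml) / len(treated_edema_ml)
    return (control_mean - treated_mean) / control_mean * 100.0

control = [0.82, 0.79, 0.85, 0.80]   # hypothetical edema volumes (ml), vehicle group
treated = [0.22, 0.20, 0.23, 0.21]   # hypothetical volumes, test compound at 40 mg/kg
print(f"{percent_inhibition(control, treated):.1f} % inhibition")   # ~73.6 %
```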

Keywords: anti-inflammatory, analgesic, pyridazin-3(2H)-one, selective COX-2 inhibitors

Procedia PDF Downloads 174
230 MXene Mediated Layered 2D-3D-2D g-C3N4@WO3@Ti3C2 Multijunctional Heterostructure with Enhanced Photoelectrochemical and Photocatalytic Properties

Authors: Lekgowa Collen Makola, Cecil Naphtaly Moro Ouma, Sharon Moeno, Langelihle Dlamini

Abstract:

In recent years, advances in the field of nanotechnology have produced new strategies to address energy and environmental issues. Among the developing technologies, visible-light-driven photocatalysis is regarded as a sustainable approach for energy production and environmental detoxification, where transition metal oxides (TMOs) and metal-free carbon-based semiconductors such as graphitic carbon nitride (g-C₃N₄) have shown notable potential. Herein, a g-C₃N₄@WO₃@Ti₃C₂Tx three-component multijunction photocatalyst was fabricated via facile ultrasonic-assisted self-assembly, followed by calcination to facilitate extensive integration of the materials. A series of g-C₃N₄@WO₃@Ti₃C₂Tx composites with different Ti₃C₂ loadings was prepared and denoted 1-CWT, 3-CWT, 5-CWT and 7-CWT, corresponding to 1, 3, 5 and 7 wt%, respectively. Systematic characterization using spectroscopic and microscopic techniques was employed to validate the successful preparation of the photocatalysts. Enhanced optoelectronic and photoelectrochemical properties were observed for the g-C₃N₄@WO₃@Ti₃C₂ heterostructure with respect to the individual materials. Photoluminescence spectra and Nyquist plots showed restrained recombination rates and improved photocarrier conductivities, respectively, which was credited to the synergistic coupling effect and the presence of highly conductive Ti₃C₂ MXene. The strong interfacial contact formed upon preparation of the composite was confirmed using XPS. Multiple charge transfer mechanisms, coupling a Z-scheme and a Schottky junction mediated by Ti₃C₂ MXene, were proposed for the g-C₃N₄@WO₃@Ti₃C₂ composite. Bode phase plots showed improved charge-carrier lifetimes upon formation of the multijunction photocatalyst. Moreover, the transient photocurrent density of 7-CWT is 40 and seven (7) times higher than that of g-C₃N₄ and WO₃, respectively. Unlike the traditional Z-scheme, the ternary heterostructure possesses interfaces through the metallic 2D Ti₃C₂ MXene, which provide charge transfer channels for efficient photocarrier transfer, with a carrier concentration (ND) of 17.49×10²¹ cm⁻³ and a 4.86% photo-to-chemical conversion efficiency. The as-prepared ternary g-C₃N₄@WO₃@Ti₃C₂Tx exhibited excellent photoelectrochemical properties with preserved redox band potentials able to facilitate efficient photo-oxidation and photo-reduction reactions. The fabricated multijunction photocatalyst shows potential for use in an extensive range of photocatalytic processes, viz. the production of valuable hydrocarbons from CO₂, the production of H₂, and the degradation of a plethora of pollutants from wastewater.
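
The carrier concentration (ND) quoted above is commonly extracted from capacitance-potential (Mott-Schottky) data; the abstract does not state the extraction route, so the sketch below is only an assumed illustration of that standard formula, with invented slope and permittivity values.

```python
# Hedged sketch: extracting donor density N_D from a Mott-Schottky slope.
# This is an assumed analysis route; the abstract reports N_D but not how it was obtained.
# Mott-Schottky: 1/C^2 = (2 / (e * eps_r * eps0 * N_D)) * (E - E_fb - kT/e)
# so N_D = 2 / (e * eps_r * eps0 * slope), with C taken per unit area (F/cm^2).

E_CHARGE = 1.602e-19          # C
EPS0 = 8.854e-14              # F/cm

def donor_density(slope_cm4_per_F2_V: float, eps_r: float) -> float:
    """Donor density in cm^-3 from the slope of 1/C^2 versus potential."""
    return 2.0 / (E_CHARGE * eps_r * EPS0 * slope_cm4_per_F2_V)

# Invented inputs purely for illustration (not the paper's fitted values):
print(f"{donor_density(slope_cm4_per_F2_V=1.5e13, eps_r=10.0):.2e} cm^-3")
```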

Keywords: photocatalysis, Z-scheme, multijunction heterostructure, Ti₃C₂ MXene, g-C₃N₄

Procedia PDF Downloads 85
229 Assumption of Cognitive Goals in Science Learning

Authors: Mihail Calalb

Abstract:

The aim of this research is to identify ways of achieving sustainable conceptual understanding in science lessons. For this purpose, a set of teaching and learning strategies belonging to the theory of visible teaching and learning (VTL) is studied. As a result, a new didactic approach named "learning by being" is proposed, and its correlation with the educational paradigms currently existing in the science teaching domain is analysed. In the context of VTL, the author describes the main strategies of "learning by being", such as guided self-scaffolding, structuring of information, and recurrent use of previous knowledge or help seeking. Due to the synergy of these learning strategies applied simultaneously in class, the impact factor of learning by being on the cognitive achievement of students is up to 93% (the benchmark level is 40%, obtained when an experienced teacher applies the same conventional strategy consistently over two academic years). The key idea in "learning by being" is the student's assumption of cognitive goals. From this perspective, the article discusses the role of the student's personal learning effort within several teaching strategies employed in VTL. The research results emphasize that three mandatory student-related moments are present in each constructivist teaching approach: a) the student's personal learning effort, b) student-teacher mutual feedback and c) metacognition. Thus, a successful educational strategy will aim to involve students in the class process as deeply as possible, so that they not only know the learning objectives but also assume them. In this way, we arrive at the ownership of cognitive goals, or students' deep intrinsic motivation. A series of approaches is inherent to students' ownership of cognitive goals: independent research (with an impact factor on cognitive achievement equal to 83%, according to the results of VTL); knowledge of success criteria (impact factor 113%); and the ability to reveal similarities and patterns (impact factor 132%). Although it is generally accepted that the school is a public service, it does not belong to the entertainment industry, and in most cases education declared to be student-centered actually hides the central role of the teacher. Even if there is a proliferation of constructivist concepts, mainly at the level of science education research, we have to underline that conventional or frontal teaching will never disappear. Research results show that no modern method can replace an experienced teacher with strong pedagogical content knowledge. Such a teacher will inspire and motivate his or her students to love and learn physics. The teacher is precisely the condensation point of an efficient didactic strategy, be it constructivist or conventional. In this way, we could speak about "hybridized teaching", in which both the student and the teacher have their share of responsibility. In conclusion, the core of the "learning by being" approach is guided learning effort, which corresponds to the notion of the teacher-student harmonic oscillator, in which both guidance from the teacher and the student's effort are equally important.

Keywords: conceptual understanding, learning by being, ownership of cognitive goals, science learning

Procedia PDF Downloads 144
228 Ensemble Machine Learning Approach for Estimating Missing Data from CO₂ Time Series

Authors: Atbin Mahabbati, Jason Beringer, Matthias Leopold

Abstract:

To address the global challenges of climate and environmental change, there is a need to quantify and reduce uncertainties in environmental data, including observations of carbon, water, and energy. Global eddy covariance flux tower networks (FLUXNET) and their regional counterparts (i.e., OzFlux, AmeriFlux, China Flux, etc.) were established in the late 1990s and early 2000s to address this demand. Despite the capability of eddy covariance to validate process modelling analyses, field surveys and remote sensing assessments, there are serious concerns regarding the challenges associated with the technique, e.g. data gaps and uncertainties. To address these concerns, this research developed an ensemble model to fill the data gaps in CO₂ flux, avoiding the limitations of using a single algorithm and therefore providing lower error and reduced uncertainty in the gap-filling process. In this study, data from five towers in the OzFlux network (Alice Springs Mulga, Calperum, Gingin, Howard Springs and Tumbarumba) during 2013 were used to develop an ensemble machine learning model, using five feedforward neural networks (FFNN) with different structures combined with an eXtreme Gradient Boosting (XGB) algorithm. The former, the FFNNs, provided the primary estimations in the first layer, while the latter, XGB, used the outputs of the first layer as its input to provide the final estimations of CO₂ flux. The introduced model showed slight superiority over each single FFNN and over XGB used individually, with overall RMSE values of 2.64, 2.91, and 3.54 g C m⁻² yr⁻¹, respectively (3.54 provided by the best FFNN). The most significant improvement occurred in the estimation of the extreme diurnal values (during midday and sunrise), as well as nocturnal estimations, which are generally considered among the most challenging parts of CO₂ flux gap-filling. The towers, as well as seasonality, showed different levels of sensitivity to the improvements provided by the ensemble model. For instance, Tumbarumba showed more sensitivity than Calperum, where the differences between the ensemble model on the one hand and the FFNNs and XGB on the other were the smallest of all five sites. Besides, the performance difference between the ensemble model and its individual components was more pronounced during the warm season (Jan, Feb, Mar, Oct, Nov, and Dec) than during the cold season (Apr, May, Jun, Jul, Aug, and Sep), due to the higher photosynthetic activity of plants, which leads to a larger range of CO₂ exchange. In conclusion, the introduced ensemble model slightly improved the accuracy of CO₂ flux gap-filling and the robustness of the estimates. Therefore, ensemble machine learning models are potentially capable of improving data estimation and regression outcomes when there seems to be no more room for improvement using a single algorithm.
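
As a rough illustration of the two-layer architecture described above (five differently structured FFNNs whose out-of-fold outputs feed an XGBoost meta-learner), the sketch below uses scikit-learn and xgboost on synthetic stand-in data; the layer sizes, hyperparameters and predictor set are placeholders, not the study's configuration.

```python
# Sketch of the described two-layer ensemble: five FFNNs (first layer) whose
# out-of-fold predictions feed an XGBoost meta-learner (second layer).
# Hyperparameters, layer sizes and predictors are illustrative placeholders.
import numpy as np
from sklearn.ensemble import StackingRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from xgboost import XGBRegressor

def build_ensemble() -> StackingRegressor:
    ffnns = [
        (f"ffnn_{i}", MLPRegressor(hidden_layer_sizes=sizes, max_iter=2000, random_state=i))
        for i, sizes in enumerate([(32,), (64,), (32, 16), (64, 32), (128, 64)])
    ]
    return StackingRegressor(
        estimators=ffnns,
        final_estimator=XGBRegressor(n_estimators=300, learning_rate=0.05),
        cv=5,                      # out-of-fold first-layer predictions
    )

# X: meteorological drivers (e.g. radiation, temperature, VPD); y: observed CO2 flux.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))
y = X @ rng.normal(size=6) + 0.1 * rng.normal(size=1000)   # synthetic stand-in data

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = build_ensemble().fit(X_tr, y_tr)
rmse = np.sqrt(mean_squared_error(y_te, model.predict(X_te)))
print(f"RMSE on held-out data: {rmse:.3f}")
```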

Keywords: carbon flux, Eddy covariance, extreme gradient boosting, gap-filling comparison, hybrid model, OzFlux network

Procedia PDF Downloads 110
227 Laparoscopic Resection Shows Comparable Outcomes to Open Thoracotomy for Thoracoabdominal Neuroblastomas: A Meta-Analysis and Systematic Review

Authors: Peter J. Fusco, Dave M. Mathew, Chris Mathew, Kenneth H. Levy, Kathryn S. Varghese, Stephanie Salazar-Restrepo, Serena M. Mathew, Sofia Khaja, Eamon Vega, Mia Polizzi, Alyssa Mullane, Adham Ahmed

Abstract:

Background: Laparoscopic (LS) removal of neuroblastomas in children has been reported to offer favorable outcomes compared to the conventional open thoracotomy (OT) procedure. Critical perioperative measures such as blood loss, operative time, length of stay, and time to postoperative chemotherapy have all favored laparoscopic resection over its more invasive counterpart. Herein, a pairwise meta-analysis was performed comparing perioperative outcomes between LS and OT in thoracoabdominal neuroblastoma cases. Methods: A comprehensive literature search was performed on the PubMed, Ovid EMBASE, and Scopus databases to identify studies comparing the outcomes of pediatric patients with thoracoabdominal neuroblastomas undergoing resection via OT or LS. After deduplication, 4,227 studies were identified and subjected to initial title screening with exclusion and inclusion criteria to ensure relevance. When studies contained overlapping cohorts, only the larger series was included. Primary outcomes were estimated blood loss (EBL), hospital length of stay (LOS), and mortality, while secondary outcomes were tumor recurrence, post-operative complications, and operation length. The “meta” and “metafor” packages in R, version 4.0.2, were used to pool risk ratios (RR) or standardized mean differences (SMD), together with their 95% confidence intervals, in a random-effects model via the Mantel-Haenszel method. Heterogeneity between studies was assessed using the I² statistic, while publication bias was assessed via funnel plot. Results: The pooled analysis included 209 patients from 5 studies (141 OT, 68 LS). Of the included studies, 2 originated from the United States, 1 from Toronto, 1 from China, and 1 from a Japanese center. Mean age in the study cohorts ranged from 2.4 to 5.3 years, with female patients comprising between 30.8% and 50% of the study populations. No statistically significant difference was found between the two groups for LOS (SMD -1.02; p=0.083), mortality (RR 0.30; p=0.251), recurrence (RR 0.31; p=0.162), post-operative complications (RR 0.73; p=0.732), or operation length (SMD -0.07; p=0.648). Of note, LS appeared to be protective in the analysis of EBL, although this did not reach statistical significance (SMD -0.4174; p=0.051). Conclusion: Despite promising literature assessing LS removal of pediatric neuroblastomas, the results showed it to be non-superior to OT for any of the explored perioperative outcomes. Given the limited comparative data on the subject, randomized trials are necessary to strengthen the conclusions reached.
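
The pooling itself was carried out in R with the 'meta'/'metafor' packages via the Mantel-Haenszel method. As a self-contained illustration of random-effects pooling of risk ratios, the sketch below instead uses the inverse-variance DerSimonian-Laird estimator, a related but different technique, on invented 2x2 counts rather than the reviewed studies.

```python
# Illustrative random-effects pooling of risk ratios (DerSimonian-Laird,
# inverse-variance) -- a stand-in for the R 'meta'/'metafor' workflow;
# the review itself pooled via Mantel-Haenszel. Counts below are invented.
import math

# (events_A, n_A, events_B, n_B) per study -- hypothetical 2x2 data
studies = [(3, 30, 6, 60), (2, 25, 5, 45), (1, 20, 3, 36)]

y, v = [], []                                    # log risk ratios and their variances
for a, n1, c, n2 in studies:
    rr = (a / n1) / (c / n2)
    y.append(math.log(rr))
    v.append(1 / a - 1 / n1 + 1 / c - 1 / n2)    # variance of log(RR)

w = [1 / vi for vi in v]                         # fixed-effect weights
y_fe = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
Q = sum(wi * (yi - y_fe) ** 2 for wi, yi in zip(w, y))
k = len(studies)
c_ = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - (k - 1)) / c_)              # between-study variance
w_re = [1 / (vi + tau2) for vi in v]             # random-effects weights
pooled = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
se = math.sqrt(1 / sum(w_re))
i2 = max(0.0, (Q - (k - 1)) / Q) * 100 if Q > 0 else 0.0

print(f"pooled RR = {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(pooled - 1.96 * se):.2f}-{math.exp(pooled + 1.96 * se):.2f}), "
      f"I^2 = {i2:.0f}%")
```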

Keywords: laparoscopy, neuroblastoma, thoracoabdominal, thoracotomy

Procedia PDF Downloads 104
226 Privacy Rights of Children in the Social Media Sphere: The Benefits and Challenges Under the EU and US Legislative Framework

Authors: Anna Citterbergova

Abstract:

This study explores the safeguards and guarantees for children’s personal data protection under the current EU and US legislative frameworks, namely the GDPR (2018) and COPPA (2000). Considering that children spend much of their free time online, one cannot overlook the negative side effects that may be associated with online participation, which may put children’s wellbeing and their fundamental rights at risk. Whether the current legislative framework governing the responsibilities of internet service providers (ISPs) provides adequate safeguards and guarantees for children’s personal data protection has been an evolving debate both in the US and in the EU. From a children’s rights perspective, processors of personal data have certain obligations that must meet international human rights principles (e.g. the CRC, the ECHR), which require taking into account the best interest of the child. Accordingly, the need to protect children’s privacy online remains strong and relevant as the number and importance of social media platforms in human life expand. At the same time, the landscape of the internet is rapidly evolving, and commercial interests are taking a more targeted approach to seeking children’s data. Therefore, it is essential to continually evaluate the evolving, newly adopted market policies of ISPs that may exploit gaps in the current letter of the law. Previous studies in the field have already pointed out that both the GDPR and COPPA may, in theory, be insufficient to protect children’s personal data. Focusing on social media platforms, this study uses the doctrinal-descriptive method to identify the mechanisms enshrined in the GDPR and COPPA designed to protect children’s personal data. In its second part, the study includes a data-gathering phase involving the national data protection authorities responsible for monitoring and supervising the GDPR in relation to children’s personal data protection, which monitor the enforcement of the data protection rules throughout the European Union and contribute to their consistent application. This gathered primary-source data will later be used to outline the series of benefits and challenges to children’s personal data protection faced by these institutions, and to support an analysis that aims to suggest if and/or how ISPs can be held accountable while striking a fair balance between commercial rights and the right to protection of children’s personal data. The preliminary results can be divided into two categories: first, conclusions from the doctrinal-descriptive part of the study; second, specific cases and situations from the practice of national data protection authorities. While concrete conclusions can already be presented for the first part, the second part is currently still in the data-gathering phase. The result of this research is a comprehensive analysis of the safeguards and guarantees for children’s personal data protection under the current EU and US legislative frameworks, based on a doctrinal-descriptive approach and original empirical data.

Keywords: personal data of children, personal data protection, GDPR, COPPA, ISPs, social media

Procedia PDF Downloads 64
225 Residual Plastic Deformation Capacity in Reinforced Concrete Beams Subjected to Drop Weight Impact Test

Authors: Morgan Johansson, Joosef Leppanen, Mathias Flansbjer, Fabio Lozano, Josef Makdesi

Abstract:

Concrete is commonly used for protective structures, and how impact loading affects different types of concrete structures is an important issue. Often, knowledge gained from static loading is also used in the design of impulse-loaded structures. A large plastic deformation capacity is essential to obtain large energy absorption in an impulse-loaded structure. However, the structural response of an impact-loaded concrete beam may be very different from that of a statically loaded beam. Consequently, the plastic deformation capacity and failure modes of the concrete structure can differ under dynamic loads, and hence it is not certain that observations obtained from static loading are also valid for dynamic loading. The aim of this paper is to investigate the residual plastic deformation capacity of reinforced concrete beams subjected to drop weight impact tests. A test series consisting of 18 simply supported beams (0.1 x 0.1 x 1.18 m, ρs = 0.7%) with a span length of 1.0 m and subjected to a point load at the beam mid-point was carried out. 2 x 6 beams were first subjected to drop weight impact tests and thereafter statically tested until failure. The drop weight had a mass of 10 kg and was dropped from 2.5 m or 5.0 m. During the impact tests, a high-speed camera was used at 5,000 fps, and for the static tests a camera was used at 0.5 fps. Digital image correlation (DIC) analyses were conducted, and from these the velocities of the beam and the drop weight, as well as the deformations and crack propagation of the beam, were effectively measured. Additionally, for the static tests, the applied load and midspan deformation were measured. The load-deformation relations for the beams subjected to impact load were compared with those of 6 reference beams subjected to static loading only. The crack patterns obtained were compared using DIC, and it was concluded that the resulting crack formation depended strongly on the test method used. For the static tests, only bending cracks occurred. For the impact-loaded beams, though, distinctive diagonal shear cracks also formed below the zone of impact, and narrower shear cracks were observed in the region halfway to the support. Furthermore, due to wave propagation effects, bending cracks developed in the upper part of the beam during initial loading. The results showed that the plastic deformation capacity increased for beams subjected to drop weight impact tests from the high drop height of 5.0 m. For beams subjected to an impact from the low drop height of 2.5 m, though, the plastic deformation capacity was of the same order of magnitude as for the statically loaded reference beams. The beams tested were designed to fail in bending when subjected to a static load. However, among the impact-tested beams, one beam exhibited a shear failure at a significantly reduced load level when it was tested statically, indicating that there might be a risk of reduced residual load capacity for impact-loaded structures.
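
Since the loading is defined by a 10 kg weight released from 2.5 m or 5.0 m, the nominal impact velocity and kinetic energy follow from free fall; the sketch below assumes frictionless free fall, so the DIC-measured impact velocities in the tests may be somewhat lower.

```python
# Nominal impact velocity and kinetic energy of the drop weight, assuming
# frictionless free fall (the DIC-measured velocities may be somewhat lower).
import math

G = 9.81  # m/s^2

def impact(mass_kg: float, drop_height_m: float) -> tuple[float, float]:
    """Return (impact velocity in m/s, kinetic energy in J) for a free-falling mass."""
    v = math.sqrt(2 * G * drop_height_m)
    return v, 0.5 * mass_kg * v ** 2        # equals m*g*h

for h in (2.5, 5.0):
    v, e = impact(10.0, h)
    print(f"h = {h:.1f} m: v ~ {v:.2f} m/s, E ~ {e:.0f} J")
```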

Keywords: digital image correlation (DIC), drop weight impact, experiments, plastic deformation capacity, reinforced concrete

Procedia PDF Downloads 124
224 Tracing a Timber Breakthrough: A Qualitative Study of the Introduction of Cross-Laminated-Timber to the Student Housing Market in Norway

Authors: Marius Nygaard, Ona Flindall

Abstract:

The Palisaden student housing project was completed in August 2013 and was, with its eight floors, Norway’s tallest timber building at the time of completion. It was the first time cross-laminated timber (CLT) had been utilized at this scale in Norway. The project was the result of a concerted effort by a newly formed management company to establish CLT as a sustainable and financially competitive alternative to conventional steel and concrete systems. The introduction of CLT onto the student housing market proved so successful that by 2017 more than 4000 individual student residences will have been built using the same model of development and construction. The aim of this paper is to identify the key factors that enabled this breakthrough for CLT. It is based on an in-depth study of a series of housing projects and of the role of the management company that both instigated and enabled this shift of CLT from the margin to the mainstream. Specifically, it will look at how a new building system was integrated into a marketing strategy that identified a market potential within the existing structure of the construction industry and within the economic restrictions inherent to student housing in Norway. It will show how a key player established a project model that changed both the patterns of cooperation and the information basis for decisions. Based on qualitative semi-structured interviews with managers, contractors and the interdisciplinary teams of consultants (architects, structural engineers, acoustical experts etc.), this paper will trace the introduction, expansion and evolution of CLT-based building systems in the student housing market. It will show how the project management firm’s position in the value chain enabled it to function as a liaison both between contractor and client and between contractor and producer, a position that allowed it to improve the flow of information. This ensured that CLT was handled on equal terms with other structural solutions in the project specifications, enabling realistic pricing and risk evaluation. Secondly, this paper will describe and discuss how the project management firm established and interacted with a growing network of contractors, architects and engineers to pool expertise and broaden the knowledge base across Norway’s regional markets. Finally, it will examine the role of the client, the building typology, and the industrial and technological factors in achieving this breakthrough for CLT in the construction industry. This paper gives an in-depth view of the progression of a single case rather than a broad description of the state of the art of large-scale timber building in Norway. However, this type of study may offer insights that are important to the understanding not only of specific markets but also of how new technologies should be introduced in large and well-established industries.

Keywords: cross-laminated-timber (CLT), industry breakthrough, student housing, timber market

Procedia PDF Downloads 198
223 Integration of Gravity and Seismic Methods in the Geometric Characterization of a Dune Reservoir: Case of the Zouaraa Basin, NW Tunisia

Authors: Marwa Djebbi, Hakim Gabtni

Abstract:

Gravity is a continuously advancing method that has become a mature technology for geological studies. It is increasingly used to complement and constrain traditional seismic data, and even as the only tool for obtaining information on the subsurface. In fact, in some regions the seismic data, if available, are of poor quality and hard to interpret, as is the case for the current study area. The Nefza zone is part of the Tellian fold-and-thrust belt domain in the northwest of Tunisia. It is essentially made of a pile of allochthonous units resulting from a major Neogene tectonic event. Its tectonic and stratigraphic development has always been subject to controversy. Considering the geological and hydrogeological importance of this area, a detailed interdisciplinary study has been conducted integrating geology, seismic and gravity techniques. The interpretation of the gravity data allowed the delimitation of the dune reservoir and the identification of the regional lineaments contouring the area. It revealed the presence of three gravity lows corresponding to the Zouara and Ouchtata dunes, separated by a positive gravity axis following the Ain Allega-Aroub Er Roumane trend. The Bouguer gravity map illustrated the compartmentalization of the Zouara dune into two depressions separated by a NW-SE anomaly trend. This configuration was confirmed by the vertical derivative map, which showed the individualization of two depressions with slightly different anomaly values. The horizontal gravity gradient magnitude was computed in order to determine the different geological features present in the studied area. The latter indicated the presence of NE-SW parallel folds following the major Atlasic direction; NW-SE and E-W trends were also identified. The maxima tracing confirmed this picture through the presence of NE-SW faults, mainly the Ghardimaou-Cap Serrat accident. The poor quality of the available seismic sections and the absence of borehole data in the region, except for a few hydraulic wells that have been drilled and that show the heterogeneity of the dune substratum, required gravity modeling of this challenging area in order to characterize the geometry of the dune reservoir and to determine the different stratigraphic series underneath these deposits. For more detailed and accurate results, the scale of the study will be reduced in coming research, and a more precise method, the 4D microgravity survey, will be elaborated. This approach is considered an extension of the gravity method, with time as its fourth dimension. It will allow continuous and repeated monitoring of fluid movement in the subsurface at the microgal (µGal) scale. The gravity effect results from the monthly variation of the dynamic groundwater level, which correlates with rainfall during different periods.
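
The horizontal gravity gradient magnitude mentioned above is simply the modulus of the horizontal derivatives of the gridded anomaly; the sketch below computes it on a synthetic Bouguer grid with an assumed spacing, not on the survey data.

```python
# Minimal sketch of the horizontal gravity gradient magnitude (HGM) on a gridded
# Bouguer anomaly: HGM = sqrt((dg/dx)^2 + (dg/dy)^2). Synthetic field, not survey data.
import numpy as np

dx = dy = 500.0                                  # grid spacing in metres (assumed)
x, y = np.meshgrid(np.arange(0, 20000, dx), np.arange(0, 20000, dy))

# Synthetic Bouguer anomaly (mGal): a NE-SW trending low mimicking a dune depression
bouguer = -5.0 * np.exp(-(((x - y) / 6000.0) ** 2)) + 0.0002 * x

dg_dy, dg_dx = np.gradient(bouguer, dy, dx)      # axis 0 varies along y, axis 1 along x
hgm = np.hypot(dg_dx, dg_dy)                     # mGal per metre

# Gradient maxima align with the steepest flanks of the anomaly, i.e. likely contacts/faults
print("max HGM [mGal/m]:", float(hgm.max()))
```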

Keywords: 3D gravity modeling, dune reservoir, heterogeneous substratum, seismic interpretation

Procedia PDF Downloads 269
222 Large-Scale Production of High-Performance Fiber-Metal-Laminates by Prepreg-Press-Technology

Authors: Christian Lauter, Corin Reuter, Shuang Wu, Thomas Troester

Abstract:

Lightweight construction has become more and more important over the last decades in several applications, e.g. in the automotive and aircraft sectors. This is the result of economic and ecological constraints on the one hand and increasing safety and comfort requirements on the other. In the field of lightweight design, different approaches are used, depending on the specific requirements of the technical system. The use of continuous carbon fiber reinforced plastics (CFRP) offers the largest weight saving potential, sometimes more than 50% compared to conventional metal constructions. However, industrial applications are very limited because of the cost-intensive manufacturing of the fibers and the production technologies. Other disadvantages of pure CFRP structures concern quality control and damage resistance. One approach to meeting these challenges is hybrid materials, in which CFRP and sheet metal are combined at the material level. This opens up new opportunities for innovative process routes. Hybrid lightweight design results in lower costs due to optimized material utilization and the possibility of integrating the structures into the existing production processes of automobile manufacturers. Recent and current research has pointed out the advantages of two-layered hybrid materials, i.e. the possibility of realizing structures with tailored mechanical properties or of dividing the curing cycle of the epoxy resin into two steps. Current research work at the Chair for Automotive Lightweight Design (LiA) at Paderborn University focuses on production processes for fiber-metal-laminates. The aim of this work is the development and qualification of a large-scale production process for high-performance fiber-metal-laminates (FML) for industrial applications in the automotive or aircraft sector. For this purpose, the prepreg-press-technology is used, in which pre-impregnated carbon fibers and sheet metals are formed and cured in a closed, heated mold. The investigations focus, e.g., on the realization of short process chains and cycle times, on the reduction of time-consuming manual process steps, and on the reduction of material costs. This paper first gives an overview of the main steps of the production process. Afterwards, experimental results are discussed. This part concentrates on the influence of different process parameters on the mechanical properties and the laminate quality, and on the identification of process limits. Finally, the advantages of this technology compared to conventional FML production processes and other lightweight design approaches are outlined.

Keywords: composite material, fiber-metal-laminate, lightweight construction, prepreg-press-technology, large-series production

Procedia PDF Downloads 213
221 Model Tests on Geogrid-Reinforced Sand-Filled Embankments with a Cover Layer under Cyclic Loading

Authors: Ma Yuan, Zhang Mengxi, Akbar Javadi, Chen Longqing

Abstract:

In a sand-filled embankment with a cover layer, the outside of the sand core is covered with tipping clay modified with lime, and a geotextile is placed between the sand fill and the clay. The fill is usually river sand, and the improved clay protects the sand core against rainwater erosion. Sand-filled embankments with cover layers involve practical problems such as high fill heights, construction restrictions, and steep slopes. Reinforcement can be applied to the sand-filled embankment with a cover layer to solve complicated problems such as irregular settlement caused by poor stability of the embankment. At present, research on sand-filled embankments with cover layers mainly focuses on sand properties, construction technology, and slope stability; experimental studies are few, and the deformation characteristics and stability of reinforced sand-filled embankments need further study. In addition, experimental research that considers cyclic loading is relatively rare. A subgrade structure of geogrid-reinforced sand-filled embankment with a cover layer is proposed here. The mechanical characteristics, deformation properties, reinforcement behavior and ultimate bearing capacity of the embankment structure under cyclic loading were studied. In this structure, the geogrids in the sand and in the tipping soil pass through the geotextile, which is arranged continuously in sections, so that the geogrids can cross horizontally. The Unsaturated/Saturated Soil Triaxial Test System of Geotechnical Consulting and Testing Systems (GCTS), USA, was modified to form the loading device for this test, and a strain collector was used to measure the deformation and earth pressure of the embankment. A series of cyclic loading model tests was conducted on the geogrid-reinforced sand-filled embankment with a cover layer for different numbers of reinforcement layers, reinforcement lengths and cover layer thicknesses. The settlement of the embankment, the normal cumulative deformation of the slope and the earth pressure were studied under these different conditions. Besides the cyclic loading model tests, model experiments on embankments subjected to cyclic-static loading were carried out to analyze the ultimate bearing capacity under different loads. The experimental results showed that the vertical cumulative settlement under long-term cyclic loading increases as the number of reinforcement layers, the length of the reinforcement and the thickness of the tipping soil decrease. These three factors also influence the reduction of the normal deformation of the embankment slope. The earth pressure around the loading point is significantly affected by placing geogrid in the model embankment. After cyclic loading, the decline in the ultimate bearing capacity of the reinforced embankment is effectively reduced, in contrast to the unreinforced embankment.

Keywords: cyclic load, geogrid, reinforcement behavior, cumulative deformation, earth pressure

Procedia PDF Downloads 89
220 Finite Element Analysis of the Anaconda Device: Efficiently Predicting the Location and Shape of a Deployed Stent

Authors: Faidon Kyriakou, William Dempster, David Nash

Abstract:

Abdominal aortic aneurysm (AAA) is a major life-threatening pathology for which modern approaches reduce the need for open surgery through the use of stenting. The success of stenting, though, is sometimes jeopardized by the final position of the stent graft inside the human artery, which may result in migration, endoleaks or blood flow occlusion. Herein, a finite element (FE) model of the commercial medical device Anaconda™ (Vascutek, Terumo) has been developed and validated in order to create a numerical tool able to provide useful clinical insight before the surgical procedure takes place. The Anaconda™ device consists of a series of NiTi rings sewn onto woven polyester fabric, a structure that, despite its column stiffness, is flexible enough to be used in very tortuous geometries. For the purposes of this study, an FE model of the device was built in Abaqus® (version 6.13-2) using a combination of beam, shell and surface elements; this choice of building blocks was made to keep the computational cost to a minimum. The validation of the numerical model was performed by comparing the deployed position of a full stent graft device inside a constructed AAA with a duplicate set-up in Abaqus®. Specifically, an AAA geometry was built in CAD software and included regions of both high and low tortuosity. Subsequently, the CAD model was 3D-printed into a transparent aneurysm, and a stent was deployed in the lab following the steps of the clinical procedure. Images in the frontal and sagittal planes of the experiment allowed comparison with the results of the numerical model. By overlapping the experimental and computational images, the mean and maximum distances between the rings of the two models were measured in the longitudinal and transverse directions, and a 5 mm upper bound was set as a limit commonly used by clinicians when working with simulations. The two models showed very good agreement in their spatial positioning, especially in the less tortuous regions. As a result, and despite the inherent uncertainties of a surgical procedure, the FE model gives confidence that the final position of the stent graft, when deployed in vivo, can be predicted with significant accuracy. Moreover, the numerical model ran in just a few hours, an encouraging result for applications in the clinical routine. In conclusion, the efficient modelling of a complicated structure that combines thin scaffolding and fabric has been demonstrated to be feasible. Furthermore, the capability to predict the location of each stent ring, as well as the global shape of the graft, has been shown. This can allow surgeons to better plan their procedures and medical device manufacturers to optimize their designs. The current model can further be used as a starting point for patient-specific CFD analysis.
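
A minimal sketch of the validation arithmetic described above (mean and maximum ring-to-ring deviation checked against the 5 mm clinical bound) is given below; the ring coordinates are invented placeholders, not the experimental or simulated positions.

```python
# Sketch of the ring-position comparison: for each NiTi ring, the deviation between
# experimental and simulated positions is computed and checked against the 5 mm bound.
# Coordinates are invented placeholders (mm), not the actual measurements.
import numpy as np

rings_experiment = np.array([[0.0, 10.0], [1.2, 25.1], [2.9, 40.3], [5.1, 55.6]])
rings_simulation = np.array([[0.4, 10.8], [1.0, 26.0], [3.5, 41.1], [4.6, 56.9]])

deviation = np.linalg.norm(rings_experiment - rings_simulation, axis=1)  # per-ring distance
print(f"mean deviation: {deviation.mean():.2f} mm, max: {deviation.max():.2f} mm")
print("within 5 mm bound:", bool((deviation <= 5.0).all()))
```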

Keywords: AAA, efficiency, finite element analysis, stent deployment

Procedia PDF Downloads 167
219 A Comparative Study of Motion Events Encoding in English and Italian

Authors: Alfonsina Buoniconto

Abstract:

The aim of this study is to investigate the degree of cross-linguistic and intra-linguistic variation in the encoding of motion events (MEs) in English and Italian, these being typologically different languages that both show signs of disobedience to their respective types. The traditional typological classification of ME encoding distributes languages into two macro-types, based on the preferred locus for the expression of Path, the main ME component (other components being Figure, Ground and Manner), characterized by conceptual and structural prominence. According to this model, satellite-framed (SF) languages typically express Path information in verb-dependent items called satellites (e.g. preverbs and verb particles), with main verbs encoding Manner of motion; whereas verb-framed (VF) languages tend to include Path information within the verbal locus, leaving Manner to adjuncts. Although this dichotomy is valid overall, languages do not always behave according to their typical classification patterns. English, for example, is usually ascribed to the SF type due to its rich inventory of postverbal particles and phrasal verbs used to express spatial relations (i.e. the cat climbed down the tree); nevertheless, it is not uncommon to find constructions such as the fog descended slowly, which are typical of the VF type. Conversely, Italian is usually described as VF (cf. Paolo uscì di corsa ‘Paolo went out running’), yet SF constructions like corse via in lacrime ‘She ran away in tears’ are also frequent. This paper will try to demonstrate that such typological overlapping is due to the fact that the semantic units making up MEs are distributed among several loci of the sentence (not only verbs and satellites), thus giving rise to a number of different constructions stemming from convergent factors. Indeed, the linguistic expression of motion events depends not only on the typological nature of languages in the traditional sense, but also on a series of morphological, lexical, and syntactic resources, as well as on inferential, discursive, usage-related, and cultural factors that make semantic information more or less accessible, frequent, and easy to process. Hence, rather than describing English and Italian in dichotomic terms, this study focuses on the investigation of cross-linguistic and intra-linguistic variation in the use of all the strategies made available by each linguistic system to express motion. Evidence for these assumptions is provided by parallel corpus analysis. The sample texts are taken from two contemporary Italian novels and their respective English translations. The 400 motion occurrences selected (200 in English and 200 in Italian) were scanned according to the MODEG (an acronym for Motion Decoding Grid) methodology, which grants data comparability through the indexation and retrieval of combined morphosyntactic and semantic information at different levels of detail.

Keywords: construction typology, motion event encoding, parallel corpora, satellite-framed vs. verb-framed type

Procedia PDF Downloads 233
218 The Role of Corporate Social Responsibility in Poverty Reduction

Authors: M. Verde, G. Falzarano

Abstract:

The paper examines the connection between corporate social responsibility (CSR), the capability approach and poverty reduction; in particular, local employment development (LED) by way of CSR initiatives. The joint action of LED and CSR results in a win-win situation, not only for the enterprises but also for all the stakeholders involved; in this regard, subsidiarity and coordination between national and regional/local authorities are central to a socially oriented market economy. In the first section, CSR is analysed on the basis of its social function in the fight against poverty, understood as 'capabilities deprivation'. In the central part, attention is focused on the relationship between CSR and LED, and hence on the role of enterprises in fostering capability development (employment). In the last part, the potential solutions are presented, stressing the possible combinations. The benchmark is the enterprise as an economic and a social institution: business should not be equated merely with profit; more attention should be paid to its sustainable impact and social contribution. In which way could this be possible? The answer is CSR. The impact of CSR on poverty reduction is still little explored. Companies help to reduce poverty through economic contribution, human rights and social inclusion; hence, business becomes an 'agent of development' in the fight against 'inequality'. The starting point is the pyramid of social responsibility, where ethical and philanthropic responsibilities involve programmes and actions aimed at the personal development of individuals, improving human standards of living in all forms, including poverty, when people do not have a choice between different 'life options', ranging from level of education to employment. At this point, CSR comes into play and works on two dimensions, poverty reduction and poverty prevention, by means of a series of initiatives: first of all, job creation and the reduction of precarious work. Empowerment of local actors, financial support and the combination of top-down and bottom-up initiatives are some of CSR's areas of activity. Several positive effects occur at the level of individual education, access to capital, individual health status, empowerment of youth and women, and access to social networks, and it was observed that these effects depend on the type of CSR strategy. Indeed, CSR programmes should take into account fundamental criteria, such as transparency, information about benefits, a coordination unit among institutions and clearer guidelines. In this way, the advantages for corporate reputation and for the community translate, inter alia, into better job matching on the labour market. It is important to underline that success depends on the specific measures adopted in the areas in question, adapting them to local needs in light of general principles and indices; therefore, the concrete commitment of all the stakeholders involved is decisive in order to achieve the goals. The enterprise would thus represent a concrete contribution to the pursuit of sustainable development and to the dissemination of social and well-being awareness.

Keywords: capability approach, local employment development, poverty, social inclusion

Procedia PDF Downloads 107
217 Neural Network Based Control Algorithm for Inhabitable Spaces Applying Emotional Domotics

Authors: Sergio A. Navarro Tuch, Martin Rogelio Bustamante Bello, Leopoldo Julian Lechuga Lopez

Abstract:

In recent years, Mexico’s population has seen a rise in negative physiological and mental states. Two main consequences of this problem are deficient work performance and high levels of stress, generating an important impact on a person’s physical, mental and emotional health. Several approaches, such as the use of audiovisual stimuli to induce emotions and modify a person’s emotional state, can be applied in an effort to decrease these negative effects. Using different non-invasive physiological sensors, such as EEG, luminosity and face recognition, we gather information on the subject’s current emotional state. In a controlled environment, a subject is shown a series of selected images from the International Affective Picture System (IAPS) in order to induce a specific set of emotions and obtain information from the sensors. The raw data obtained are statistically analyzed in order to filter only the specific groups of information that relate to the subject’s emotions and to the current values of the physical variables in the controlled environment, such as luminosity, RGB light color, temperature, oxygen level and noise. Finally, a neural network based control algorithm is fed the data obtained in order to close the feedback loop and automate the modification of the environment variables and the audiovisual content shown, so that these changes can positively alter the subject’s emotional state. During the research, it was found that the light color was directly related to the type of impact generated by the audiovisual content on the subject’s emotional state. Red illumination increased the impact of violent images, while green illumination, along with relaxing images, decreased the subject’s levels of anxiety. Specific differences between men and women were found as to which types of images generated a greater impact in either gender. The population sample was mainly constituted by college students, whose data analysis showed a decreased sensitivity to violence towards humans. Despite the early stage of the control algorithm, the results obtained from the population sample give a better insight into the possibilities of emotional domotics and the applications that can be created to improve performance in people’s lives. The objective of this research is to create a positive impact through the application of technology to everyday activities; nonetheless, an ethical problem arises, since this technology can also be applied to control a person’s emotions and shift their decision making.
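
To make the feedback loop concrete, the sketch below maps a few sensor-derived features to environment set-points with a small feedforward network; the feature layout, the set-points, the synthetic mapping and the training data are all hypothetical and are not the authors' implementation.

```python
# Hedged sketch of the feedback idea: a small neural network maps sensor-derived
# features (e.g. EEG band power, ambient luminosity, a face-expression score) to
# environment set-points (RGB light colour, temperature). Features, targets and
# training data are hypothetical, not the authors' implementation.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)
# features: [eeg_alpha, eeg_beta, luminosity_lux_scaled, face_valence]
X = rng.uniform(0, 1, size=(500, 4))
# targets: [red, green, blue, temperature_C] -- placeholder relationships only
Y = np.column_stack([
    0.8 * X[:, 1],                 # higher beta (stress proxy) -> adjust red channel
    0.6 * X[:, 0] + 0.2,           # higher alpha (relaxation proxy) -> greener light
    0.3 * np.ones(len(X)),         # fixed blue level in this toy mapping
    22 + 2 * X[:, 3],              # valence nudges the temperature set-point
])

controller = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=3000, random_state=0)
controller.fit(X, Y)

current_reading = [[0.7, 0.2, 0.4, 0.6]]                  # one hypothetical sensor sample
print(np.round(controller.predict(current_reading), 2))   # suggested [R, G, B, temp]
```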

Keywords: data analysis, emotional domotics, performance improvement, neural network

Procedia PDF Downloads 118
216 Structure and Properties of Intermetallic NiAl-Based Coatings Produced by Magnetron Sputtering Technique

Authors: Tatiana S. Ogneva

Abstract:

Aluminum- and nickel-based intermetallic compounds have attracted the attention of the scientific community as promising materials for heat-resistant and wear-resistant coatings in manufacturing areas such as microelectronics, aircraft and rocket building, and the chemical industry. Magnetron sputtering makes it possible to coat materials without the formation of a liquid phase and improves the mechanical and functional properties of nickel aluminides due to the possibility of nanoscale structure formation. The purpose of the study is the investigation of the structure and properties of intermetallic coatings produced by the magnetron sputtering technique. A feature of this work is the use of composite sputtering targets, which consisted of two semicircular sectors of cp-Ni and cp-Al. Plates of alumina, silicon, titanium and steel alloys were used as substrates. To assess the effect of sputtering conditions on the structure of the intermetallic coatings, a series of samples was produced and studied in detail using scanning and transmission electron microscopy and X-ray diffraction. In addition, nanohardness and scratch tests were carried out. The varied parameters were the distance from the substrate to the target, and the duration and power of the sputtering. The thickness of the obtained intermetallic coatings varied from 0.05 to 0.5 mm depending on the sputtering conditions. The X-ray diffraction data indicated that the formation of intermetallic compounds occurred during sputtering without additional heat treatment. Sputtering at a distance of no less than 120 mm led to the formation of the NiAl phase. Increasing the magnetron power from 300 to 900 W increased the heterogeneity of the phase composition and promoted the appearance of the intermetallic phases NiAl, Ni₂Al₃, NiAl₃, and Al under the aluminum side, and NiAl, Ni₃Al, and Ni under the nickel side of the target. A similar trend was observed when the sputtering distance was changed from 100 to 60 mm. The change in phase composition correlates with the change in the atomic composition of the coatings. Scanning electron microscopy revealed that the coatings have a nanoscale grain structure, where the substrate material and the distance from the substrate to the magnetron have a significant effect on the structure formation process. The grain size ranges from 10 to 83 nm and depends not only on the sputtering modes but also on the substrate material. The nanostructure of the material influences the level of the mechanical properties. The highest nanohardness of the coatings, deposited for 30 minutes on metallic substrates at a distance of 100 mm, reached 12 GPa. It was shown that the nanohardness depends on the grain size of the intermetallic compound. Scratch tests of the coatings showed a high level of adhesion of the coating to the substrate, without any delamination or cracking. The results of the study showed that magnetron sputtering of composite targets consisting of nickel and aluminum semicircles makes it possible to form intermetallic coatings with good mechanical properties directly during sputtering, without additional heat treatment.

Keywords: intermetallic coatings, magnetron sputtering, mechanical properties, structure

Procedia PDF Downloads 99
215 A Unified Model for Longshore Sediment Transport Rate Estimation

Authors: Aleksandra Dudkowska, Gabriela Gic-Grusza

Abstract:

Wind-wave-induced sediment transport is an important multidimensional and multiscale dynamic process affecting coastal seabed changes and coastline evolution. Knowledge of the sediment transport rate is important for solving many environmental and geotechnical issues. There are many types of sediment transport models, but none of them is widely accepted, because the process is not fully defined. Another problem is the lack of sufficient measurement data to verify the proposed hypotheses. Different types of models exist for longshore sediment transport (LST, which is discussed in this work) and for cross-shore transport, which relate to different time and space scales of the processes. There are models describing bed-load transport (discussed in this work), suspended transport and total sediment transport. LST models use, among other things, information about (i) the flow velocity near the bottom, which in the case of wave-current interaction in the coastal zone is a separate problem, and (ii) the critical bed shear stress, which strongly depends on the type of sediment and becomes more complicated in the case of heterogeneous sediment. Moreover, the LST rate strongly depends on the local environmental conditions. To organize existing knowledge, a series of sediment transport model intercomparisons was carried out as part of the project “Development of a predictive model of morphodynamic changes in the coastal zone”. Four classical one-grid-point models were studied and intercompared over a wide range of bottom shear stress conditions, corresponding to wind-wave conditions appropriate for the coastal zone in Polish marine areas. The set of models comprises classical theories that assume a simplified influence of turbulence on sediment transport (Du Boys, Meyer-Peter & Müller, Ribberink, Engelund & Hansen). It turned out that the estimated values of instantaneous longshore mass sediment transport are in general agreement with earlier studies and measurements conducted in the area of interest. However, none of the formulas really stands out from the rest as being particularly suitable for the test location over the whole analyzed flow velocity range. Therefore, based on the models discussed, a new unified formula for longshore sediment transport rate estimation is introduced, which constitutes the main original result of this study. The sediment transport rate is calculated from the bed shear stress and the critical bed shear stress. The dependence on environmental conditions is expressed by one coefficient (in the form of a constant or a function); thus, the model presented can be quite easily adjusted to local conditions. The importance of each model parameter for specific velocity ranges is discussed. Moreover, it is shown that the near-bottom flow velocity is the main determinant of longshore bed-load transport in storm conditions. Thus, the accuracy of the results depends less on the sediment transport model itself and more on the appropriate modeling of the near-bottom velocities.
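
The unified formula is described above only in general terms (transport rate from the bed shear stress, the critical bed shear stress and one site-dependent coefficient). As a hedged illustration of this family of excess-shear relations, the sketch below implements the classical Meyer-Peter & Müller form, one of the four intercompared models, with placeholder sediment and flow parameters.

```python
# Illustrative excess-shear bedload relation of the Meyer-Peter & Muller type
# (one of the four classical models intercompared); the unified formula in the
# paper has a similar structure, with its coefficient adjusted to local conditions.
# Sediment and flow parameters below are placeholders.
RHO_W, RHO_S, G = 1000.0, 2650.0, 9.81   # water and sediment density (kg/m^3), gravity (m/s^2)

def shields_parameter(tau_b: float, d50: float) -> float:
    """Dimensionless bed shear stress (Shields parameter)."""
    return tau_b / ((RHO_S - RHO_W) * G * d50)

def bedload_rate(tau_b: float, d50: float, theta_cr: float = 0.047, coeff: float = 8.0) -> float:
    """Volumetric bedload transport rate per unit width, q_b [m^2/s].

    Phi = coeff * max(theta - theta_cr, 0)**1.5 ;  q_b = Phi * sqrt((s - 1) * g * d50**3)
    """
    theta = shields_parameter(tau_b, d50)
    phi = coeff * max(theta - theta_cr, 0.0) ** 1.5
    return phi * ((RHO_S / RHO_W - 1.0) * G * d50 ** 3) ** 0.5

# Placeholder storm-condition bed shear stress (Pa) and medium sand d50 (m):
print(f"q_b = {bedload_rate(tau_b=2.5, d50=0.0003):.2e} m^2/s")
```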

Keywords: bedload transport, longshore sediment transport, sediment transport models, coastal zone

Procedia PDF Downloads 366
214 A Conceptual Framework of Integrated Evaluation Methodology for Aquaculture Lakes

Authors: Robby Y. Tallar, Nikodemus L., Yuri S., Jian P. Suen

Abstract:

Research on ecological water resources management addresses a wide range of questions and today appears to be one branch of science that can strongly contribute to the study of complexity (physical, biological, ecological, socio-economic, environmental, and other aspects). Much of the existing literature on the different facets of these studies is technical and targeted at specific users. This study offers a combination of all these aspects in an evaluation methodology for aquaculture lakes, with a paradigm that refers to hierarchical theory and to the effects of the specific spatial arrangement of an object within a space or local area. The process of developing the conceptual framework therefore represents a more integrated and readily applicable concept derived from grounded theory. A design of an integrated evaluation methodology for aquaculture lakes is presented. The method is based on the identification of a series of attributes that describe the status of aquaculture lakes using indicators from an aquaculture water quality index (AWQI), an aesthetic aquaculture lake index (AALI) and a rapid appraisal for fisheries index (RAPFISH). The preliminary preparation is accomplished as follows: first, the study area is characterized at different spatial scales; second, an inventory of core data is compiled, such as the city master plan, water quality reports from the environmental agency, and related government regulations; third, a ground-checking survey is completed to validate the on-site condition of the study area. Finally, to design the integrated evaluation methodology for aquaculture lakes, we integrated and developed a rating score system called the Integrated Aquaculture Lake Index (IALI). The development of the IALI reflects a compromise among all aspects and responds to the need for concise information about the current status of aquaculture lakes through a comprehensive approach. The IALI was elaborated as a decision-aid tool for stakeholders to evaluate the impact and contribution of anthropogenic activities on the aquaculture lake environment. The conclusion is that, while there is no denying that aquaculture lakes are under great threat from the pressure of increasing human activities, no evaluation methodology for aquaculture lakes can succeed simply by seeking to preserve a pristine condition. The IALI developed in this work can be used as an effective, low-cost evaluation methodology for aquaculture lakes in developing countries, because it emphasizes simplicity and understandability, as it must communicate to decision makers and experts alike. Moreover, stakeholders need to be helped to perceive their lakes so that sites can be accepted and valued by local people. For the development of such lake sites, accessibility and the planning designation of the site are of decisive importance: local people want to know whether the lake condition is safe and whether the lake can be used.
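As a purely illustrative reading of the scoring step, the sketch below aggregates the three sub-indices named in the abstract (AWQI, AALI, RAPFISH) into a single IALI score; the 0-100 scales, the weights (which in practice might come from AHP, listed in the keywords) and the class boundaries are assumptions, not the authors' actual scheme.

```python
# Minimal sketch, not the authors' scoring scheme: an Integrated Aquaculture
# Lake Index (IALI) built as a weighted aggregate of the three sub-indices
# named in the abstract (AWQI, AALI, RAPFISH). The 0-100 scales, weights and
# class bands below are illustrative assumptions only.

def iali_score(awqi, aali, rapfish, weights=(0.5, 0.2, 0.3)):
    """Combine three sub-indices (each assumed on a 0-100 scale)."""
    w_awqi, w_aali, w_rapfish = weights
    return w_awqi * awqi + w_aali * aali + w_rapfish * rapfish

def iali_class(score):
    """Map the aggregate score to a qualitative status class (assumed bands)."""
    if score >= 75:
        return "good"
    if score >= 50:
        return "moderate"
    return "poor"

score = iali_score(awqi=68, aali=55, rapfish=72)
print(score, iali_class(score))   # 66.6 moderate
```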

Keywords: aesthetic value, AHP, aquaculture lakes, integrated lakes, RAPFISH

Procedia PDF Downloads 210
213 Strength Evaluation by Finite Element Analysis of Mesoscale Concrete Models Developed from CT Scan Images of Concrete Cube

Authors: Nirjhar Dhang, S. Vinay Kumar

Abstract:

Concrete is a non-homogeneous mix of coarse aggregates, sand, cement, air voids and the interfacial transition zone (ITZ) around aggregates. Adopting these complex structures and material properties in numerical simulation leads to a better understanding and design of concrete. In this work, mesoscale models of concrete have been prepared from X-ray computed tomography (CT) images. These images are converted into computer models and numerically simulated using commercially available finite element software. The mesoscale models are simulated under a compressive displacement, and the effect of the shape and distribution of aggregates, continuous and discrete ITZ thickness, voids, and variation of mortar strength has been investigated. The CT scan of a concrete cube consists of a series of two-dimensional slices; a total of 49 slices is obtained from a 150 mm cube, giving a slice interval of approximately 3 mm. The cube can be CT-scanned in a non-destructive manner and later tested in compression in a universal testing machine (UTM) to find its strength. The image processing and the extraction of mortar and aggregates from the CT slices are performed by programming in Python. The digital colour image consists of red, green and blue (RGB) pixels; the RGB image is converted to a black-and-white (BW) image, and the mesoscale constituents are identified by their grayscale values between 0 and 255. A pixel matrix is created for modeling the mortar, aggregates, and ITZ. Pixels are normalized to a 0-9 scale reflecting relative strength: zero is assigned to voids, 4-6 to mortar and 7-9 to aggregates, while values between 1-3 identify the boundary between aggregates and mortar. In the next step, triangular or quadrilateral elements for plane stress or plane strain models are generated depending on the option selected, and the material properties, boundary conditions, and analysis scheme are specified in this module. Responses such as displacement, stresses, and damage are evaluated by importing the input file into ABAQUS. The simulation evaluates the compressive strengths of the 49 slices of the cube. The model is meshed with more than sixty thousand elements. The effects of the shape and distribution of aggregates, the inclusion of voids and the variation of ITZ layer thickness on the load-carrying capacity, stress-strain response and strain localization of concrete have been studied. The plane strain condition carried more load than the plane stress condition due to confinement. The CT scan technique can be used to obtain slices from concrete cores taken from an actual structure, and digital image processing can be used to determine the shape and content of aggregates in concrete; these results may be further compared with test results of concrete cores and used as an important tool for strength evaluation of concrete.
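The slice-labelling step described above can be pictured with the short Python sketch below; the grayscale thresholds are assumed values, since the paper's exact 0-255 cut-offs are not given, while the 0-9 phase bands (0 voids, 1-3 boundary/ITZ, 4-6 mortar, 7-9 aggregates) follow the abstract.

```python
# Sketch of the slice-labelling step, with assumed grayscale thresholds.
# Each pixel of a CT slice is mapped to the 0-9 scale described above:
# 0 = void, 1-3 = ITZ/boundary, 4-6 = mortar, 7-9 = aggregate.
import numpy as np

def label_slice(gray, void_max=30, itz_max=70, mortar_max=160):
    """Map an 8-bit grayscale slice (2-D uint8 array) to 0-9 phase labels."""
    labels = np.zeros(gray.shape, dtype=np.uint8)
    labels[gray <= void_max] = 0                          # voids
    labels[(gray > void_max) & (gray <= itz_max)] = 2     # ITZ band (1-3)
    labels[(gray > itz_max) & (gray <= mortar_max)] = 5   # mortar band (4-6)
    labels[gray > mortar_max] = 8                         # aggregate band (7-9)
    return labels

# Tiny synthetic array standing in for one of the 49 CT slices
demo = np.array([[10, 60, 150], [200, 90, 20]], dtype=np.uint8)
print(label_slice(demo))
```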

Keywords: concrete, image processing, plane strain, interfacial transition zone

Procedia PDF Downloads 217
212 An Analysis of Teacher Knowledge of Recognizing and Addressing the Needs of Traumatized Students

Authors: Tiffany Hollis

Abstract:

Childhood trauma is well documented in mental health research, yet has received little attention in urban schools. Child trauma affects brain development and impacts cognitive, emotional, and behavioral functioning. When educators understand that some behaviors that appear aggressive in nature might be the result of a hidden diagnosis of trauma, learning can take place, and the child can thrive in the classroom setting. Traumatized children, however, do not fit neatly into any single ‘box.’ Although many children enter school each day carrying with them the experience of exposure to violence in the home, the symptoms of their trauma can be multifaceted and complex, requiring individualized therapeutic attention. The purpose of this study was to examine how prepared educators are to address the unique challenges facing children who experience trauma. Given the vast number of traumatized children in our society, it is evident that our education system must investigate ways to create an optimal learning environment that accounts for trauma, addresses its impact on cognitive and behavioral development, and facilitates mental and emotional health and well-being. The researcher describes the trauma-informed knowledge, attitudes, dispositions, and skills of induction-level teachers in a diverse middle school. The data for this study were collected through interviews with teachers in the induction phase (the first three years of their teaching career). The study findings paint a clear picture of how ill-prepared educators are to address the needs of students who have experienced trauma, and they point to the need for a professional development workshop, or series of workshops, that trains teachers to recognize and respond to those needs. The study shows that teachers often lack the skills to meet the needs of students who have experienced trauma. Traumatized children regularly carry a heavy weight on their shoulders; they may feel that the world is filled with unresponsive, threatening adults and peers. Despite this, supportive interventions can provide traumatized children with places to go that are safe, stimulating, and even fun. Schools offer an environment that potentially meets these requirements by creating safe spaces where students can feel at ease and have fun while also learning through stimulating educational activities. This study highlights the lack of preparedness of educators to address the academic, behavioral, and cognitive needs of students who have experienced trauma. The findings have implications for the creation of a professional development workshop on how to recognize and address the needs of students who have experienced some type of trauma, and for future research focused on specific interventions that enable the creation of optimal learning environments where students who have experienced trauma, and indeed all students, can succeed regardless of their life experiences.

Keywords: educator preparation, induction educators, professional development, trauma-informed

Procedia PDF Downloads 100
211 Comparison between Two Software Packages GSTARS4 and HEC-6 about Prediction of the Sedimentation Amount in Dam Reservoirs and to Estimate Its Efficient Life Time in the South of Iran

Authors: Fatemeh Faramarzi, Hosein Mahjoob

Abstract:

Building dams on rivers to utilize water resources disturbs the hydrodynamic equilibrium and results in all or part of the sediment carried by the water being deposited in the dam reservoir. This phenomenon also has significant impacts on the water and sediment flow regime and, in the long term, can cause morphological changes in the environment surrounding the river, reducing the useful life of the reservoir and threatening sustainable development through inefficient management of water resources. In the past, empirical methods were used to predict the sedimentation amount in dam reservoirs and to estimate their efficient lifetime, but recently mathematical and computational models have been widely used as a suitable tool in reservoir sedimentation studies; these models usually solve the governing equations using the finite element method. This study compares the results from two software packages, GSTARS4 and HEC-6, in predicting the sedimentation amount in the Dez Dam, southern Iran. Each model provides a one-dimensional, steady-state simulation of sediment deposition and erosion by solving the equations of momentum, flow and sediment continuity, and sediment transport. GSTARS4 (Generalized Sediment Transport Model for Alluvial River Simulation), which is based on a one-dimensional mathematical model that simulates bed changes in both the longitudinal and transverse directions by using flow tubes in a quasi-two-dimensional scheme, was used to calibrate a 47-year period and to forecast the next 47 years of sedimentation in the Dez Dam. The dam is among the highest in the world (203 m high), irrigates more than 125,000 hectares of downstream land and plays a major role in flood control in the region. The input data, including geometry and hydraulic and sedimentary data, cover the years 1955 to 2003 on a daily basis. To predict future river discharge, the time series data were assumed to repeat after 47 years. The results obtained were very satisfactory in the delta region, where the output from GSTARS4 was almost identical to the hydrographic profile surveyed in 2003. In the Dez reservoir, however, because of its length (65 km) and large storage volume, vertical currents are dominant, causing the calculations by the above-mentioned method to be inaccurate. To solve this problem, the empirical reduction method was used to calculate the sedimentation in the downstream area, which gave very good results. Thus, by combining these two methods, a very suitable sedimentation model for the Dez Dam over the study period can be obtained; the study also demonstrated that the outputs of both methods are essentially the same.
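The forecast-input assumption ("the time series data were assumed to repeat after 47 years") can be sketched as follows in Python; the discharge values and the pandas-based layout are stand-ins, not the study's data.

```python
# Minimal sketch of the forecast-input assumption described above: the
# observed daily discharge record is simply repeated to provide the next
# 47 years of inflow for the sediment model. Values are placeholders.
import pandas as pd

observed = pd.Series(
    [310.0, 295.5, 402.1],                      # stand-in daily discharges, m^3/s
    index=pd.date_range("1955-01-01", periods=3, freq="D"),
    name="discharge",
)

def repeat_record(series, shift_years=47):
    """Append a copy of the record shifted forward by `shift_years` years."""
    future = series.copy()
    future.index = future.index + pd.DateOffset(years=shift_years)
    return pd.concat([series, future])

print(repeat_record(observed).tail())
```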

Keywords: Dez Dam, prediction, sedimentation, water resources, computational models, finite element method, GSTARS4, HEC-6

Procedia PDF Downloads 285
210 High Efficiency Double-Band Printed Rectenna Model for Energy Harvesting

Authors: Rakelane A. Mendes, Sandro T. M. Goncalves, Raphaella L. R. Silva

Abstract:

The concepts of energy harvesting and wireless energy transfer have been widely discussed in recent times. There are several ways to create autonomous systems for collecting ambient energy, such as solar, vibratory, thermal, electromagnetic and radiofrequency (RF) sources, among others; in the case of RF it is possible to collect up to 100 μW/cm². To collect and/or transfer energy in RF systems, a device called a rectenna is used, which is defined as the junction of an antenna and a rectifier circuit. The rectenna presented in this work is resonant at the frequencies of 1.8 GHz and 2.45 GHz. The 1.8 GHz band is part of the GSM/LTE band. GSM (Global System for Mobile Communication) is a mobile telephony frequency band, also called second-generation (2G) mobile networks; it standardized mobile telephony worldwide and was originally developed for voice traffic. LTE (Long Term Evolution), or fourth generation (4G), emerged to meet the demand for wireless access to services such as Internet access, online games, VoIP and video conferencing. The 2.45 GHz frequency is part of the ISM (Industrial, Scientific and Medical) band, which is internationally reserved for industrial, scientific and medical development with no need for licensing; its only restrictions relate to maximum transmitted power and bandwidth, which must be kept within certain limits (in Brazil the band is 2.4-2.4835 GHz). The rectenna presented in this work was designed to achieve an efficiency above 50% for an input power of -15 dBm. In wireless energy-harvesting systems the signal power is very low and varies greatly, which is why this ultra-low input power was chosen. The rectenna was built on the low-cost FR4 (flame-retardant) substrate. The selected antenna is a microstrip antenna consisting of a meandered dipole, which was optimized using the CST Studio software; it has high efficiency, high gain and high directivity. Gain describes how efficiently an antenna captures the signals transmitted by another antenna and/or station, while directivity describes how well an antenna captures energy from a given direction. The rectifier circuit has a series topology and was optimized using Keysight's ADS software. The rectifier is the most complex part of the rectenna, since it includes the diode, a non-linear component. The chosen diode is the Schottky diode SMS 7630, which presents a low barrier voltage (between 135-240 mV) and a wider band compared to other types of diodes, attributes that make it well suited to this type of application. The rectifier circuit also uses an inductor and a capacitor, which form its input and output filters: the inductor reduces the effect of dispersion on the efficiency of the rectifier circuit, while the capacitor removes the AC component and smooths the rectified output signal.
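As a rough illustration of the stated design target (efficiency above 50% at -15 dBm), the sketch below computes RF-to-DC conversion efficiency from an incident power in dBm and a hypothetical rectified DC voltage across a hypothetical load; the voltage and load values are assumptions, not measurements from the paper.

```python
# Back-of-the-envelope sketch (not the authors' measurement procedure): the
# RF-to-DC conversion efficiency of a rectenna for a -15 dBm input, computed
# from an assumed rectified DC voltage across an assumed load resistor.

def dbm_to_watts(p_dbm):
    """Convert a power level in dBm to watts."""
    return 10 ** (p_dbm / 10.0) / 1000.0

def rectenna_efficiency(p_in_dbm, v_dc, r_load):
    """Efficiency = DC power delivered to the load / incident RF power."""
    p_in = dbm_to_watts(p_in_dbm)        # W
    p_dc = v_dc ** 2 / r_load            # W
    return p_dc / p_in

# -15 dBm input (about 31.6 uW); 40 mV across 100 ohm gives roughly 50.6 %
print(rectenna_efficiency(p_in_dbm=-15, v_dc=0.040, r_load=100.0))
```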

Keywords: dipole antenna, double-band, high efficiency, rectenna

Procedia PDF Downloads 94
209 Coastal Modelling Studies for Jumeirah First Beach Stabilization

Authors: Zongyan Yang, Gagan K. Jena, Sankar B. Karanam, Noora M. A. Hokal

Abstract:

Jumeirah First beach, a 1.5 km segment of coastline, is one of the popular public beaches in Dubai, UAE. The stability of the beach has been affected by several coastal development projects, including The World, Island 2 and La Mer. A comprehensive stabilization scheme comprising two composite groynes (of lengths 90 m and 125 m), modification of the northern breakwater of Jumeirah Fishing Harbour and beach re-nourishment was implemented by Dubai Municipality in 2012. However, the performance of the implemented scheme has been compromised by the La Mer project (built in 2016), which modified the wave climate at Jumeirah First beach. The objective of the coastal modelling studies is to establish a design basis for further beach stabilization scheme(s). Comprehensive coastal modelling studies were conducted to establish the nearshore wave climate, equilibrium beach orientations and stable beach plan forms, and based on their outcomes a recommendation was made to extend the composite groynes to stabilize Jumeirah First beach. Wave transformation was performed following an interpolation approach, using wave transformation matrices derived from simulations of the possible range of wave conditions in the region. The Dubai coastal wave model was developed with MIKE21 SW. The offshore wave conditions were determined from PERGOS wave data at four offshore locations, taking spatial variation into consideration. The lateral boundary conditions corresponding to the offshore conditions, at the Dubai/Abu Dhabi and Dubai/Sharjah borders, were derived with the LitDrift 1D wave transformation module. The Dubai coastal wave model was calibrated against wave records at monitoring stations operated by Dubai Municipality, and the wave transformation matrix approach was validated against nearshore wave measurements at a Dubai Municipality monitoring station in the vicinity of Jumeirah First beach. A typical one-year wave time series was transformed to seven locations in front of the beach to account for the variation in wave conditions caused by adjacent and offshore developments. Equilibrium beach orientations were estimated with LitDrift by finding the beach orientations with null annual littoral transport at the seven selected locations. The littoral transport results were compared with beach erosion/accretion quantities estimated from the beach monitoring program (twice a year, including bathymetric and topographical surveys). An innovative integral method was developed to outline the stable beach plan forms from the estimated equilibrium beach orientations, with a predetermined minimum beach width. The optimal lengths of the composite groyne extensions were recommended based on the stable beach plan forms.
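The "null annual littoral transport" criterion used to find equilibrium beach orientations can be illustrated with the root-finding sketch below; the net-transport function is a toy stand-in for the LitDrift output driven by the transformed wave time series, and all numbers are assumptions.

```python
# Conceptual sketch of the equilibrium-orientation criterion: the equilibrium
# shoreline orientation is the angle at which the net annual littoral drift
# changes sign. The transport function below is a toy placeholder for the
# LitDrift output at one of the seven nearshore locations.
import math
from scipy.optimize import brentq

def net_annual_transport(orientation_deg):
    """Hypothetical net annual drift [m^3/yr] vs. shoreline orientation."""
    # positive = drift in one alongshore direction, negative = the other
    return 45000.0 * math.sin(math.radians(orientation_deg - 52.0))

# Equilibrium orientation: root of the net-transport curve within a bracket
eq_orientation = brentq(net_annual_transport, 20.0, 80.0)
print(round(eq_orientation, 1))   # ~52.0 degrees for this toy function
```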

Keywords: composite groyne, equilibrium beach orientation, stable beach plan form, wave transformation matrix

Procedia PDF Downloads 217
208 Preparation, Solid State Characterization of Etraverine Co-Crystals with Improved Solubility for the Treatment of Human Immunodeficiency Virus

Authors: B. S. Muddukrishna, Karthik Aithal, Aravind Pai

Abstract:

Introduction: The preparation of binary cocrystals of Etravirine (ETR) using tartaric acid (TAR) as a coformer was the main focus of this study. Etravirine is a Class IV drug as per the BCS classification system. Methods: Cocrystals were prepared by the slow evaporation technique. A total of 500 mg of an ETR:TAR mixture was weighed in a 1:1 molar ratio (371.72 mg of ETR and 128.27 mg of TAR). A saturated solution of Etravirine was prepared in an acetone:methanol (50:50) mixture in which the tartaric acid was dissolved by sonication, and the solution was then stirred with a magnetic stirrer until the solvent evaporated. A Shimadzu FTIR-8300 system was used to acquire the FTIR spectra of the prepared cocrystals, a Shimadzu thermal analyzer was used for the DSC measurements, and an X-ray diffractometer was used to obtain the X-ray powder diffraction pattern. The shake-flask method was used to determine the equilibrium dynamic solubility of pure ETR, the physical mixture and the cocrystals, with USP buffer (pH 6.8) containing 1% Tween 80 as the medium. The pure drug, the physical mixture and the optimized cocrystal of ETR were accurately weighed in amounts sufficient to maintain sink conditions and filled into hard gelatine capsules (size 4). Dissolution was carried out in an Electrolab tablet dissolution tester with the basket apparatus at a rotational speed of 50 rpm, using USP phosphate buffer (900 mL, pH 6.8, 37 °C) + 1% Tween 80 as the medium. The analysis was performed on a Shimadzu LC-10 series chromatographic system with a PDA detector. A Hypersil BDS C18 (150 mm × 4.6 mm × 5 µm) column was used for separation, with a mobile phase comprising a mixture of acetonitrile and 20 mM phosphate buffer, pH 3.2, in the ratio 60:40 v/v. The flow rate was 1.0 mL/min, the column temperature was set to 30 °C, and detection was carried out at 304 nm for ETR. Results and discussion: The cocrystals were subjected to various solid-state characterizations, and the results confirmed cocrystal formation. The C=O stretching vibration (1741 cm⁻¹) of tartaric acid disappeared in the cocrystal, and the broadening of the primary amine peak indicates hydrogen bond formation. The difference in melting point between the cocrystals and pure Etravirine (265 °C) indicates an interaction between the drug and the coformer, as the first-order transformation, i.e. the melting endotherm, disappeared. The difference in the 2θ values of the pure drug and the cocrystals likewise indicates an interaction between the drug and the coformer. Dynamic solubility and dissolution studies, conducted by the shake-flask method and USP apparatus 1, respectively, showed a 3.6-fold increase in dynamic solubility, and the in-vitro dissolution study showed a four-fold increase in solubility for the ETR:TAR (1:1) cocrystals. The ETR:TAR (1:1) cocrystals thus show improved solubility and dissolution compared to the pure drug, as clearly demonstrated by the solid-state characterization and dissolution studies.
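As a quick arithmetic check of the reported 1:1 molar-ratio weighing, the sketch below splits 500 mg between ETR and TAR using nominal molecular weights (assumed here as roughly 435.28 g/mol for etravirine and 150.09 g/mol for tartaric acid); it reproduces masses close to the 371.72 mg and 128.27 mg quoted above.

```python
# Worked check of the 1:1 molar-ratio weighing reported above, using assumed
# nominal molecular weights for etravirine and tartaric acid.
MW_ETR = 435.28   # g/mol, etravirine (nominal, assumed)
MW_TAR = 150.09   # g/mol, tartaric acid (nominal, assumed)

def masses_for_1_to_1(total_mg):
    """Split a total mass into a 1:1 molar ratio of ETR and TAR."""
    frac_etr = MW_ETR / (MW_ETR + MW_TAR)
    m_etr = total_mg * frac_etr
    return m_etr, total_mg - m_etr

m_etr, m_tar = masses_for_1_to_1(500.0)
print(round(m_etr, 2), round(m_tar, 2))   # ~371.8 and ~128.2 mg
```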

Keywords: dynamic solubility, Etraverine, in vitro dissolution, slurry method

Procedia PDF Downloads 313
207 Biocompatibility Assessment of Different Origin Barrier Membranes for Guided Bone Regeneration

Authors: Antonio Munar-Frau, Sascha Klismoch, Manfred Schmolz, Federico Hernandez-Alfaro, Jordi Caballe-Serrano

Abstract:

Introduction: The biocompatibility of biomaterials has been proposed as one of the main criteria for treatment success. For guided bone regeneration (GBR), barrier membranes present a challenge given the number of origins and modifications of these materials. The biologic response to biomaterials is orchestrated by a series of events leading to the integration or rejection of the biomaterial, posing questions such as whether a longer-lasting occlusive property may trigger an inflammatory reaction. Whole blood cultures are one way to study the immune response to drugs or biomaterials during the first 24-48 hours. The aim of this study is to determine the early immune response to barrier membranes of different origins and chemical modifications. Materials & Methods: Five widely used barrier membranes were included in this study: acellular dermal matrix (AlloDerm, LifeCell®), porcine peritoneum (BioGide, Geistlich Pharma®), porcine pericardium (Jason, Botiss Biomaterials GmbH®), porcine cross-linked collagen (Ossix Plus, Datum Dental®) and d-PTFE (Cytoplast TXT, Osteogenics Biomedical®). Blood samples were taken from three healthy donors and incubated with samples of the different barrier membranes for 24 hours. After the incubation period, serum samples were obtained and analyzed by means of biocompatibility assays covering 42 markers. Results: In the early stage of the inflammatory response, the acellular dermal matrix, porcine peritoneum and porcine cross-linked collagen showed similar patterns of cytokine expression, with strong expression of ENA-78. Porcine pericardium and d-PTFE presented similar cytokine activation, especially for MMP-3 and MMP-9, with lower expression of other cytokines. In the later immune response, MCP-1 and IL-15 were evident for the porcine peritoneum and acellular dermal matrix, while porcine pericardium, porcine cross-linked collagen and d-PTFE presented high expression of IL-16 and lower expression of other cytokines. Different behaviors were therefore observed at the earlier and later stages of the inflammatory process. The inflammatory expression of barrier membranes does not differ only by origin; variables such as the treatment of the collagen and polymers may also have a great impact on the cytokine expression of the studied membranes during inflammation. Conclusions: Surface treatments and modifications might affect the biocompatibility of the membranes, as different cytokine expression was evident depending on the origin of the biomaterial. This study is only a first brushstroke regarding the biocompatibility of these materials, as it is one of the pioneering studies of ex vivo barrier membrane assays. Further studies on surface modification are needed to clarify the open questions of barrier membrane science.

Keywords: biomaterials, bone regeneration, biocompatibility, inflammation

Procedia PDF Downloads 134
206 Field Performance of Cement Treated Bases as a Reflective Crack Mitigation Technique for Flexible Pavements

Authors: Mohammad R. Bhuyan, Mohammad J. Khattak

Abstract:

Deterioration of flexible pavements due to crack reflection from the soil-cement base layer is a major concern around the globe, as reflective cracks significantly diminish pavement service life. Highway agencies have struggled for decades to prevent or mitigate these cracks in order to extend pavement service lives. The root cause of reflective cracking is the shrinkage cracking that occurs in soil-cement bases during the cement hydration process, and the primary factor driving shrinkage is the cement content of the soil-cement mixture. With increasing cement content, the soil-cement base gains the strength and durability needed to withstand traffic loads, but at the same time higher cement content creates more shrinkage and hence more reflective cracks in the pavement. Historically, various US states have used soil-cement bases for constructing flexible pavements. The State of Louisiana (USA) has used 8 to 10 percent cement content to manufacture soil-cement bases; such traditional bases yield a 2.0 MPa (300 psi) 7-day compressive strength and are termed the cement stabilized design (CSD). As these CSD bases generate significant reflective cracking, another soil-cement base design with 4 to 6 percent cement content, called the cement treated design (CTD), has been used, which yields a 1.0 MPa (150 psi) 7-day compressive strength. The reduced cement content in CTD bases is expected to minimize shrinkage cracking and thus increase pavement service lives. Hence, this research study evaluates the long-term field performance of CTD bases relative to CSD bases in flexible pavements. The Pavement Management System of the State of Louisiana was used to select flexible pavement projects with CSD and CTD bases that had good historical records and time-series distress performance data; the state collects roughness and distress data for each 1/10th-mile section every two years. In total, 120 CSD and CTD projects were analyzed in this research, with data for more than 145 miles (CTD) and 175 miles (CSD) of roadway accepted for performance evaluation and benefit-cost analyses. Here, the service life extension and the area based on distress performance were considered as benefits. It was found that CTD bases added 1 to 5 years of pavement service life based on transverse cracking compared to CSD bases, while the service lives based on longitudinal and alligator cracking, rutting and the roughness index remained the same. Hence, CTD bases provide some service life extension (2.6 years, on average) for the controlling distress, transverse cracking, while being less expensive due to their lower cement content. Consequently, CTD bases are 20% more cost-effective than traditional CSD bases when both are compared by the net benefit-cost ratio obtained from all distress types.
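The benefit-cost comparison described above can be sketched in the simplest possible terms as below; the service-life and cost figures are hypothetical placeholders chosen only to show how a net benefit-cost contrast between CTD and CSD bases would be formed, and they are not values from the study.

```python
# Illustrative-only sketch of a benefit-cost comparison; the unit costs and
# service-life figures below are hypothetical placeholders, not study values.

def benefit_cost_ratio(service_life_yr, base_cost_per_mile):
    """Benefit proxy: service life (years) delivered per unit base cost."""
    return service_life_yr / base_cost_per_mile

# CSD: higher cement content, higher cost, earlier transverse cracking
bcr_csd = benefit_cost_ratio(service_life_yr=12.0, base_cost_per_mile=100_000.0)
# CTD: lower cement content, lower cost, ~2.6 extra years on average
bcr_ctd = benefit_cost_ratio(service_life_yr=14.6, base_cost_per_mile=95_000.0)

print(round(bcr_ctd / bcr_csd - 1.0, 2))   # relative advantage of CTD
```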

Keywords: cement treated base, cement stabilized base, reflective cracking, service life, flexible pavement

Procedia PDF Downloads 138
205 Integrating Multiple Types of Value in Natural Capital Accounting Systems: Environmental Value Functions

Authors: Pirta Palola, Richard Bailey, Lisa Wedding

Abstract:

Societies and economies worldwide fundamentally depend on natural capital. Alarmingly, natural capital assets are quickly depreciating, posing an existential challenge for humanity. The development of robust natural capital accounting systems is essential for transitioning towards sustainable economic systems and ensuring sound management of capital assets. However, the accurate, equitable and comprehensive estimation of natural capital asset stocks and their accounting values still faces multiple challenges. In particular, the representation of socio-cultural values held by groups or communities has arguably been limited, as, to date, the valuation of natural capital assets has primarily been based on monetary valuation methods and assumptions of individual rationality. People relate to and value the natural environment in multiple ways, and no single valuation method can provide a sufficiently comprehensive image of the range of values associated with the environment. Indeed, calls have been made to improve the representation of multiple types of value (instrumental, intrinsic, and relational) and diverse ontological and epistemological perspectives in environmental valuation. This study addresses this need by establishing a novel valuation framework, Environmental Value Functions (EVF), that allows for the integration of multiple types of value in natural capital accounting systems. The EVF framework is based on the estimation and application of value functions, each of which describes the relationship between the value and quantity (or quality) of an ecosystem component of interest. In this framework, values are estimated in terms of change relative to the current level instead of calculating absolute values. Furthermore, EVF was developed to also support non-marginalist conceptualizations of value: it is likely that some environmental values cannot be conceptualized in terms of marginal changes. For example, ecological resilience value may, in some cases, be best understood as a binary: it either exists (1) or is lost (0). In such cases, a logistic value function may be used as the discriminator. Uncertainty in the value function parameterization can be considered through, for example, Monte Carlo sampling analysis. The use of EVF is illustrated with two conceptual examples. For the first time, EVF offers a clear framework and concrete methodology for the representation of multiple types of value in natural capital accounting systems, simultaneously enabling 1) the complementary use and integration of multiple valuation methods (monetary and non-monetary); 2) the synthesis of information from diverse knowledge systems; 3) the recognition of value incommensurability; 4) marginalist and non-marginalist value analysis. Furthermore, with this advancement, the coupling of EVF and ecosystem modeling can offer novel insights into the study of spatial-temporal dynamics in natural capital asset values. For example, value time series can be produced, allowing for the prediction and analysis of volatility, long-term trends, and temporal trade-offs. This approach can provide essential information to help guide the transition to a sustainable economy.
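A minimal sketch of one Environmental Value Function of the logistic form mentioned above, with Monte Carlo sampling over its parameters to represent uncertainty, is given below; the parameter priors and the quantity scale are illustrative assumptions rather than values from the study.

```python
# Minimal sketch: a logistic (binary-like) value function, as mentioned in the
# abstract, with Monte Carlo sampling over its parameters to represent
# uncertainty. Parameter ranges are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(seed=1)

def logistic_value(quantity, midpoint, steepness):
    """Value (0-1) of an asset as a function of its quantity/quality level."""
    return 1.0 / (1.0 + np.exp(-steepness * (quantity - midpoint)))

def monte_carlo_value(quantity, n=10_000):
    """Distribution of value under uncertain midpoint and steepness."""
    midpoints = rng.normal(loc=50.0, scale=5.0, size=n)   # assumed prior
    steepness = rng.uniform(0.2, 1.0, size=n)             # assumed prior
    return logistic_value(quantity, midpoints, steepness)

values = monte_carlo_value(quantity=55.0)
print(values.mean(), np.percentile(values, [5, 95]))
```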

Keywords: economics of biodiversity, environmental valuation, natural capital, value function

Procedia PDF Downloads 158
204 Regional Dynamics of Innovation and Entrepreneurship in the Optics and Photonics Industry

Authors: Mustafa İlhan Akbaş, Özlem Garibay, Ivan Garibay

Abstract:

The economic entities in innovation ecosystems form various industry clusters, in which they compete and cooperate to survive and grow. Within a successful and stable industry cluster, the entities acquire different roles that complement each other in the system. Universities and research centers are widely accepted to have a critical role in these systems for the creation and development of innovations; however, the real effect of research institutions on regional economic growth is difficult to assess. In this paper, we present our approach for identifying the impact of research activities on regional entrepreneurship for a specific high-tech industry: optics and photonics. Optics and photonics has been defined as an enabling industry, which combines high-tech photonics technology with the developing optics industry. The recent literature suggests that the growth of optics and photonics firms depends on three important factors: embedded regional specializations in the labor market, the research and development infrastructure, and a dynamic small-firm network capable of absorbing new technologies, products and processes. Therefore, the role of each factor and the dynamics among them must be understood to identify the requirements of entrepreneurship activities in the optics and photonics industry. Our approach makes three main contributions. Recent studies show that innovation in the optics and photonics industry is mostly located around metropolitan areas, and other studies mention the importance of research center locations and universities in the regional development of the industry; however, these studies are mostly limited to the number of patents received within a short period of time or to limited survey results. The first contribution of our approach is therefore a comprehensive analysis of the state and recent history of photonics and optics research in the US. For this purpose, both the research centers specialized in optics and photonics and the related research groups in various departments of institutions (e.g. Electrical Engineering, Materials Science) are identified, and a geographical study of their locations is presented. The second contribution is an analysis of regional entrepreneurship activities in optics and photonics in recent years. We use the membership data of the International Society for Optics and Photonics (SPIE) and the regional photonics clusters to identify the optics and photonics companies in the US; the profiles and activities of these companies are then gathered by extracting and integrating related data from the National Establishment Time Series (NETS) database, the ES-202 database and data sets from the regional photonics clusters. The number of start-ups, their employee numbers and their sales are examples of the data extracted for the industry. Our third contribution is the use of the collected data to investigate the impact of research institutions on regional optics and photonics industry growth and entrepreneurship. In this analysis, the regional and period-specific conditions of the overall market are taken into consideration while discovering and quantifying the statistical correlations.
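The data-integration step described above might look roughly like the pandas sketch below; the metro areas, counts and column names are placeholders, and the merge-then-correlate pattern is only an illustration of how research capacity and start-up activity could be related, not the authors' actual analysis.

```python
# Hypothetical sketch of the data-integration step: company-level records from
# different sources (e.g. SPIE membership, NETS, ES-202) are aggregated and
# merged by region, then the correlation between local research capacity and
# start-up activity is computed. All names and values are placeholders.
import pandas as pd

research = pd.DataFrame({
    "metro_area": ["Orlando", "Tucson", "Rochester"],
    "research_centers": [4, 6, 5],
})
startups = pd.DataFrame({
    "metro_area": ["Orlando", "Tucson", "Rochester"],
    "photonics_startups": [31, 42, 38],
    "employment": [2400, 3100, 2900],
})

merged = research.merge(startups, on="metro_area")
print(merged[["research_centers", "photonics_startups"]].corr(method="pearson"))
```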

Keywords: entrepreneurship, industrial clusters, optics, photonics, emerging industries, research centers

Procedia PDF Downloads 386
203 A Protocol of Procedures and Interventions to Accelerate Post-Earthquake Reconstruction

Authors: Maria Angela Bedini, Fabio Bronzini

Abstract:

Italian post-earthquake experiences, positive and negative, are conditioned by long timescales and structural bureaucratic constraints, partly motivated by the attempt to contain mafia infiltration and corruption. The transition from the operational phase of the emergency to the planning phase of the reconstruction project is thus hampered by a series of inefficiencies and delays that are incompatible with the need for rapid recovery of the territories in crisis. Intervening in areas affected by seismic events means, at the same time, associating the reconstruction plan with an urban and territorial rehabilitation project based on strategies and tools in which prevention and safety play a leading role in the regeneration of the territories in crisis and the return of the population. On the contrary, the earthquakes that have taken place in Italy have further deprived the affected territories of the minimum requirements for habitability, in terms of accessibility and services, accentuating a depopulation process already underway before the earthquake. The objective of this work is to address, with implementing and programmatic tools, the procedures and strategies to be put in place, today and in the future, in Italy and abroad, to face the challenge of reconstructing activities, sociality, services and risk mitigation: a protocol of operational intentions and fixed points, open to continuous updating and implementation. The methodology followed is a synthetic comparison between the different Italian post-earthquake experiences, based on facts rather than intentions, to highlight elements of excellence or, on the contrary, of damage. The main results can be summarized in technical comparison cards on good and bad practices. With this comparison, we intend to make a concrete contribution to the reconstruction process, certainly not limited to the reconstruction of buildings but giving priority to primary social and economic needs. In this context, the strategic urban and territorial instrument recently applied in Italy, the SUM (Minimal Urban Structure), and the strategic monitoring process become dynamic tools for supporting reconstruction. The conclusions establish, point by point, a protocol of interventions and the priorities for integrated, multisectoral and multicultural socio-economic strategies, and highlight the innovative aspect of an 'inversion' of priorities in the reconstruction process, favoring the take-off of social and economic 'accelerator' interventions and a more up-to-date system of coexistence with risks. In this perspective, reconstruction as a necessary response to a calamitous event can and must become a unique opportunity to raise the level of protection from risks and to rehabilitate and develop the most fragile places in Italy and abroad.

Keywords: an operational protocol for reconstruction, operational priorities for coexistence with seismic risk, social and economic interventions accelerators of building reconstruction, the difficult post-earthquake reconstruction in Italy

Procedia PDF Downloads 99