Search results for: advanced conversion technologies
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2154

414 Children’s Perception of Conversational Agents and Their Attention When Learning from Dialogic TV

Authors: Katherine Karayianis

Abstract:

Children with Attention Deficit Hyperactivity Disorder (ADHD) have trouble learning in traditional classrooms. These children miss out on important developmental opportunities in school, which leads to challenges starting in early childhood, and these problems persist throughout their adult lives. Despite receiving supplemental support in school, children with ADHD still perform below their non-ADHD peers. Thus, there is a great need to find better ways of facilitating learning in children with ADHD. Evidence has shown that children with ADHD learn best through interactive engagement, but this is not always possible in schools, given classroom constraints and the large student-to-teacher ratio. Redesigning classrooms may not be feasible, so informal learning opportunities provide a possible alternative. One popular informal learning opportunity is educational TV shows like Sesame Street. These types of educational shows can teach children foundational skills taught in pre-K and early elementary school. One downside to these shows is the lack of interactive dialogue between the TV characters and the child viewers. Pseudo-interaction is often deployed, but the benefits are limited if the characters can neither understand nor contingently respond to the child. AI technology has become extremely advanced and is now popular in many electronic devices that both children and adults have access to. AI has been successfully used to create interactive dialogue in children's educational TV shows, and results show that this enhances children's learning and engagement, especially when children perceive the character as a reliable teacher. Children with ADHD, whose minds may otherwise wander, are especially likely to benefit from this type of interactive technology, possibly to a greater extent depending on their perception of the animated dialogic agent.
To investigate this issue, I have begun examining the moderating role of inattention in children's learning from an educational TV show with different types of dialogic interactions. Preliminary results have shown that when character interactions are neither immediate nor accurate, children who are more easily distracted have greater difficulty learning from the show, but contingent interactions with a TV character seem to buffer these negative effects of distractibility by keeping the child engaged. To extend this line of work, the moderating role of the child's perception of the dialogic agent as a reliable teacher will be examined in the association between children's attention and the type of dialogic interaction in the TV show. As such, the current study will investigate this moderated moderation.
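Statistically, the "moderated moderation" described above amounts to a three-way interaction in a regression model. A minimal, purely illustrative sketch follows; all variable names and coefficient values are invented for exposition, not taken from the study:

```python
# Hypothetical three-way interaction ("moderated moderation") model:
#   Y = b0 + b1*X + b2*W + b3*Z + b4*X*W + b5*X*Z + b6*W*Z + b7*X*W*Z
# X = contingency of the character's dialogue, W = child's inattention,
# Z = perceived reliability of the character as a teacher.

def predict_learning(b, x, w, z):
    """Predicted learning outcome under the three-way interaction model."""
    return (b[0] + b[1]*x + b[2]*w + b[3]*z
            + b[4]*x*w + b[5]*x*z + b[6]*w*z + b[7]*x*w*z)

def contingency_effect(b, w, z):
    """Simple slope of learning on contingency at given moderator values:
    dY/dX = b1 + b4*W + b5*Z + b7*W*Z."""
    return b[1] + b[4]*w + b[5]*z + b[7]*w*z

# Illustrative coefficients: contingent dialogue helps overall (b1 > 0) and
# helps distractible children even more (b4 > 0), i.e. it "buffers" the
# negative main effect of inattention (b2 < 0).
b = [0.0, 0.5, -0.4, 0.1, 0.3, 0.1, 0.0, 0.2]

learned = predict_learning(b, x=1.0, w=1.0, z=1.0)
low_inattention = contingency_effect(b, w=0.0, z=1.0)
high_inattention = contingency_effect(b, w=1.0, z=1.0)
```

Under these made-up coefficients the benefit of contingent dialogue is larger for highly inattentive children, which is the buffering pattern the preliminary results describe; the proposed study asks whether the perceived-reliability moderator Z further shapes that pattern.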

Keywords: attention, dialogic TV, informal learning, educational TV, perception of teacher

Procedia PDF Downloads 51
413 Nanostructured Multi-Responsive Coatings for Tuning Surface Properties

Authors: Suzanne Giasson, Alberto Guerron

Abstract:

Stimuli-responsive polymer coatings can be used as functional elements in nanotechnologies, such as valves in microfluidic devices, as membranes in biomedical engineering, as substrates for the culture of biological tissues or in developing nanomaterials for targeted therapies in different diseases. However, such coatings usually suffer from major shortcomings, such as a lack of selectivity and poor environmental stability. The study will present multi-responsive hierarchical and hybrid polymer-based coatings aiming to overcome some of these limitations. Hierarchical polymer coatings, consisting of two-dimensional arrays of thermo-responsive cationic PNIPAM-based microgels and surface-functionalized with non-responsive or pH-responsive polymers, were covalently grafted to substrates to tune the surface chemistry and the elasticity of the surface independently using different stimuli. The characteristic dimensions (i.e., layer thickness) and surface properties (i.e., adhesion, friction) of the microgel coatings were assessed using the Surface Forces Apparatus. The ability to independently control the swelling and surface properties using temperature and pH as triggers was investigated for microgels in aqueous suspension and microgels immobilized on substrates. Polymer chain grafting did not impede the ability of cationic PNIPAM microgels to undergo a volume phase transition above the VPTT, either in suspension or immobilized on a substrate. Due to the presence of amino groups throughout the entirety of the microgel polymer network, the swelling behavior was also pH dependent. However, the thermo-responsive swelling was more significant than the pH-triggered one. The microgels functionalized with PEG exhibited the most promising behavior. Indeed, the thermo-triggered swelling of microgel-co-PEG did not give rise to changes in the microgel surface properties (i.e., surface potential and adhesion) within a wide range of pH values.
It was possible for the immobilized microgel-co-PEG to undergo a volume transition (swelling/shrinking) with no change in adhesion, suggesting that the surface of the thermal-responsive microgels remains rather hydrophilic above the VPTT. This work confirms the possibility of tuning the swelling behavior of microgels without changing the adhesive properties. Responsive surfaces whose swelling properties can be reversibly and externally altered over space and time regardless of the surface chemistry are very innovative and will enable revolutionary advances in technologies, particularly in biomedical surface engineering and microfluidics, where advanced assembly of functional components is increasingly required.

Keywords: responsive materials, polymers, surfaces, cell culture

Procedia PDF Downloads 51
412 Advancing Microstructure Evolution in Tungsten Through Rolling in Laser Powder Bed Fusion

Authors: Narges Shayesteh Moghaddam

Abstract:

Tungsten (W), a refractory metal known for its remarkably high melting temperature, offers tremendous potential for use in challenging environments prevalent in sectors such as space exploration, defense, and nuclear industries. Additive manufacturing, especially the Laser Powder-Bed Fusion (LPBF) technique, emerges as a beneficial method for fabricating tungsten parts. This technique enables the production of intricate components while simultaneously reducing production lead times and associated costs. However, the inherent brittleness of tungsten and its tendency to crack under high-temperature conditions pose significant challenges to the manufacturing process. Our research primarily focuses on the process of rolling tungsten parts in a layer-by-layer manner in LPBF and the subsequent changes in microstructure. Our objective is not only to identify the alterations in the microstructure but also to assess their implications on the physical properties and performance of the fabricated tungsten parts. To examine these aspects, we conducted an extensive series of experiments that included the fabrication of tungsten samples through LPBF and subsequent characterization using advanced materials analysis techniques. These investigations allowed us to scrutinize shifts in various microstructural features, including, but not limited to, grain size and grain boundaries occurring during the rolling process. The results of our study provide crucial insights into how specific factors, such as plastic deformation occurring during the rolling process, influence the microstructural characteristics of the fabricated parts. This information is vital as it provides a foundation for understanding how the parameters of the layer-by-layer rolling process affect the final tungsten parts. Our research significantly broadens the current understanding of microstructural evolution in tungsten parts produced via the layer-by-layer rolling process in LPBF. 
The insights obtained will play a pivotal role in refining and optimizing manufacturing parameters, thus improving the mechanical properties of tungsten parts and, therefore, enhancing their performance. Furthermore, these findings will contribute to the advancement of manufacturing techniques, facilitating the wider application of tungsten parts in various high-demand sectors. Through these advancements, this research represents a significant step towards harnessing the full potential of tungsten in high-temperature and high-stress applications.

Keywords: additive manufacturing, rolling, tungsten, refractory materials

Procedia PDF Downloads 67
411 Tumour-Associated Tissue Eosinophilia as a Prognosticator in Oral Squamous Cell Carcinoma

Authors: Karen Boaz, C. R. Charan

Abstract:

Background: The infiltration of tumour stroma by eosinophils, Tumor-Associated Tissue Eosinophilia (TATE), is known to modulate the progression of Oral Squamous Cell Carcinoma (OSCC). Eosinophils have direct tumoricidal activity through the release of cytotoxic proteins; indirectly, they enhance permeability into tumor cells, enabling penetration of tumoricidal cytokines. Eosinophils may also promote tumor angiogenesis by producing several angiogenic factors. Identification of eosinophils in the inflammatory stroma has proven to be an important prognosticator in cancers of the mouth, oesophagus, larynx, pharynx, breast, lung, and intestine. Therefore, the study aimed to correlate TATE with clinical and histopathological variables and blood eosinophil count to assess the role of TATE as a prognosticator in OSCC. Methods: Seventy-two biopsy-proven cases of OSCC formed the study cohort. Blood eosinophil counts and TNM stage were obtained from the medical records. Tissue sections (5 µm thick) were stained with Haematoxylin and Eosin. The eosinophils were quantified at the invasive tumour front (ITF) in 10 high-power fields (40x magnification) with an ocular grid. Bryne's grading of the ITF was also performed. A subset of thirty cases was also assessed for association of TATE with recurrence, involvement of lymph nodes, and surgical margins. Results: 1) No statistically significant correlation was found between TATE and TNM stage, blood eosinophil counts, or most parameters of Bryne's grading system. 2) An intense degree of TATE was significantly associated with the absence of distant metastasis, increased lympho-plasmacytic response, and increased survival (disease-free and overall) of OSCC patients. 3) In the subset of 30 cases, tissue eosinophil counts were higher in cases with lymph node involvement, decreased survival, without margin involvement, and in cases that did not recur.
Conclusion: While the role of eosinophils in mediating immune responses seems ambiguous as eosinophils support cell-mediated tumour immunity in early stages while inhibiting the same in advanced stages, TATE may be used as a surrogate marker for determination of prognosis in oral squamous cell carcinoma.

Keywords: tumour-associated tissue eosinophilia, oral squamous cell carcinoma, prognosticator, tumoral immunity

Procedia PDF Downloads 223
410 Optimizing Residential Housing Renovation Strategies at Territorial Scale: A Data Driven Approach and Insights from the French Context

Authors: Rit M., Girard R., Villot J., Thorel M.

Abstract:

In a scenario of extensive residential housing renovation, stakeholders need models that support decision-making through a deep understanding of the existing building stock and accurate energy demand simulations. To address this need, we have modified an optimization model using open data that enables the study of renovation strategies at both territorial and national scales. This approach provides (1) a definition of a strategy to simplify decision trees from theoretical combinations, (2) input to decision makers on real-world renovation constraints, (3) more reliable identification of energy-saving measures (changes in technology or behaviour), and (4) identification of discrepancies between currently planned and actually achieved strategies. The main contribution of the studies described in this document is the geographic scale: all residential buildings in the areas of interest were modeled and simulated using national data (geometries and attributes). These buildings were then renovated, when necessary, in accordance with the environmental objectives, taking into account the constraints applicable to each territory (number of renovations per year) or at the national level (renovation of thermal deficiencies (Energy Performance Certificates F and G)). This differs from traditional approaches that focus only on a few buildings or archetypes. This model can also be used to analyze the evolution of a building stock as a whole, as it can take into account both the construction of new buildings and their demolition or sale. Using specific case studies of French territories, this paper highlights a significant discrepancy between the strategies currently advocated by decision-makers and those proposed by our optimization model. This discrepancy is particularly evident in critical metrics such as the relationship between the number of renovations per year and achievable climate targets, or between the financial support currently available to households and the remaining costs.
In addition, users are free to seek optimizations for their building stock across a range of different metrics (e.g., financial, energy, environmental, or life cycle analysis). These results are a clear call to re-evaluate existing renovation strategies and take a more nuanced and customized approach. As the climate crisis moves inexorably forward, harnessing the potential of advanced technologies and data-driven methodologies is imperative.
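As a toy illustration of the kind of optimization involved, the following pure-Python brute-force sketch picks one renovation package per building so that a territorial energy-savings target is met at minimum cost under a yearly renovation cap. The real model is a MILP solved over an entire building stock; every building, package, cost, and constraint below is invented:

```python
# Toy stand-in for the renovation MILP: choose one package per building,
# meet a savings target, respect a renovations-per-year cap, minimize cost.
# All data are illustrative assumptions, not values from the study.
from itertools import product

buildings = {
    # building: [(package, cost in EUR, savings in kWh/m2.yr), ...]
    "A": [("none", 0, 0), ("walls", 12_000, 45), ("deep", 30_000, 110)],
    "B": [("none", 0, 0), ("windows", 8_000, 25), ("deep", 25_000, 90)],
    "C": [("none", 0, 0), ("roof", 10_000, 35), ("deep", 28_000, 100)],
}
TARGET_SAVINGS = 150   # total savings required across the stock
MAX_RENOVATIONS = 2    # territorial constraint: renovations per year

def optimise(buildings, target, cap):
    """Exhaustive search over package combinations (a MILP solver would
    handle the realistic, territory-scale version of this problem)."""
    best = None
    names = list(buildings)
    for combo in product(*(buildings[n] for n in names)):
        cost = sum(c for _, c, _ in combo)
        savings = sum(s for _, _, s in combo)
        n_renov = sum(1 for m, _, _ in combo if m != "none")
        if savings >= target and n_renov <= cap:
            if best is None or cost < best[0]:
                best = (cost, dict(zip(names, (m for m, _, _ in combo))))
    return best

cost, plan = optimise(buildings, TARGET_SAVINGS, MAX_RENOVATIONS)
```

The brute-force search is only viable for a handful of buildings; the point is the structure of the decision (binary package choices, a target constraint, a capacity constraint, a cost objective), which is exactly what a MILP formulation encodes at scale.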

Keywords: residential housing renovation, MILP, energy demand simulations, data-driven methodology

Procedia PDF Downloads 47
409 The Use of Optical-Radar Remotely-Sensed Data for Characterizing Geomorphic, Structural and Hydrologic Features and Modeling Groundwater Prospective Zones in Arid Zones

Authors: Mohamed Abdelkareem

Abstract:

Remote sensing data have contributed to predicting prospective areas of water resources. Integration of microwave and multispectral data along with climatic, hydrologic, and geological data has been used here. In this article, Sentinel-2, Landsat-8 Operational Land Imager (OLI), Shuttle Radar Topography Mission (SRTM), Tropical Rainfall Measuring Mission (TRMM), and Advanced Land Observing Satellite (ALOS) Phased Array Type L-band Synthetic Aperture Radar (PALSAR) data were utilized to identify the geological, hydrologic, and structural features of Wadi Asyuti, which represents a defunct tributary of the Nile basin in the eastern Sahara. The image transformation of Sentinel-2 and Landsat-8 data allowed characterizing the different varieties of rock units. Integration of microwave remotely-sensed data and GIS techniques provided information on the physical characteristics of catchments and rainfall zones that play a crucial role in mapping groundwater prospective zones. Fused Landsat-8 OLI and ALOS/PALSAR data improved the detection of structural elements that are difficult to reveal using optical data alone. Lineament extraction and interpretation indicated that the area is clearly shaped by a NE-SW graben that is cut by a NW-SE trend. Such structures allowed the accumulation of thick sediments in the downstream area. Processing of recent OLI data acquired on March 15, 2014, verified the flood potential maps and offered the opportunity to extract the extent of the flooding zone of the recent flash flood event (March 9, 2014), as well as revealing infiltration characteristics. Several layers, including geology, slope, topography, drainage density, lineament density, soil characteristics, rainfall, and morphometric characteristics, were combined after assigning a weight to each using a GIS-based knowledge-driven approach.
The results revealed that the predicted groundwater potential zones (GPZs) can be arranged into six distinctive groups, depending on their probability for groundwater, namely very low, low, moderate, high, very high, and excellent. Field and well data validated the delineated zones.
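The knowledge-driven weighting step can be sketched as a simple weighted overlay. In this illustrative fragment the layer names follow the abstract, but the weights, per-cell scores, and class boundaries are invented; real work would operate on georeferenced rasters (e.g. via GDAL):

```python
# Weighted overlay sketch: each thematic layer scores every cell 0-10;
# the groundwater potential is the weighted sum, then binned into the six
# classes named in the abstract. Weights and scores are assumptions.

layers = {
    "slope":             [2, 6, 9],   # three example raster cells
    "drainage_density":  [4, 7, 8],
    "lineament_density": [3, 5, 9],
    "rainfall":          [5, 6, 7],
}
weights = {"slope": 0.2, "drainage_density": 0.3,
           "lineament_density": 0.3, "rainfall": 0.2}  # sums to 1.0

def groundwater_potential(layers, weights):
    n_cells = len(next(iter(layers.values())))
    return [sum(weights[k] * layers[k][i] for k in layers)
            for i in range(n_cells)]

def classify(score):
    # Illustrative bin edges for the six classes used in the study.
    bounds = [(2, "very low"), (4, "low"), (5, "moderate"),
              (6, "high"), (8, "very high")]
    for bound, label in bounds:
        if score < bound:
            return label
    return "excellent"

gpz = groundwater_potential(layers, weights)
classes = [classify(s) for s in gpz]
```

A cell that scores well on every layer lands in the "excellent" class, mirroring how the study ranks zones by groundwater probability.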

Keywords: GIS, remote sensing, groundwater, Egypt

Procedia PDF Downloads 77
408 The Influence of Cognitive Load in the Acquisition of Words through Sentence or Essay Writing

Authors: Breno Barreto Silva, Agnieszka Otwinowska, Katarzyna Kutylowska

Abstract:

Research comparing lexical learning following the writing of sentences and longer texts with keywords is limited and contradictory. One possibility is that the recursivity of writing may enhance processing and increase lexical learning; another possibility is that the higher cognitive load of complex-text writing (e.g., essays), at least when timed, may hinder the learning of words. In our study, we selected 2 sets of 10 academic keywords matched for part of speech, length (number of characters), frequency (SUBTLEXus), and concreteness, and we asked 90 L1-Polish advanced-level English majors to use the keywords when writing sentences, timed essays (60 minutes), or untimed essays. First, all participants wrote a timed Control essay (60 minutes) without keywords. Then different groups produced Timed essays (60 minutes; n=33), Untimed essays (n=24), or Sentences (n=33) using the two sets of glossed keywords (counterbalanced). The comparability of the participants in the three groups was ensured by matching them for proficiency in English (LexTALE) and for a few measures derived from the control essay: VocD (assessing productive lexical diversity), normed errors (assessing productive accuracy), words per minute (assessing productive written fluency), and holistic scores (assessing overall quality of production). We measured lexical learning (depth and breadth) via an adapted Vocabulary Knowledge Scale (VKS) and a free association test. Cognitive load was measured in the three essays (Control, Timed, Untimed) using the normed number of errors and holistic scores (TOEFL criteria). The number of errors and essay scores were obtained from two raters (interrater reliability: Pearson's r = .78–.91). Generalized linear mixed models showed no difference in the breadth and depth of keyword knowledge after writing Sentences, Timed essays, and Untimed essays.
The task-based measurements showed that Control and Timed essays had similar holistic scores, but that Untimed essays were of higher quality than Timed essays. Untimed essays were also the most accurate, and Timed essays the most error-prone. In conclusion, using keywords in Timed, but not Untimed, essays increased cognitive load, leading to more errors and lower quality. Still, writing sentences and essays yielded similar lexical learning, and differences in cognitive load between Timed and Untimed essays did not affect lexical acquisition.
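The interrater reliability reported above is a Pearson correlation between the two raters' scores. A self-contained sketch of that check follows; the rating values are invented, and a real analysis would typically call an existing routine such as scipy.stats.pearsonr:

```python
# Pearson's r between two raters' scores (e.g. normed error counts per
# essay). The ratings below are made-up illustration data.

def pearson_r(x, y):
    """Sample Pearson correlation coefficient of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

rater1 = [3, 5, 2, 8, 6, 4]   # hypothetical scores from rater 1
rater2 = [2, 6, 3, 7, 6, 5]   # hypothetical scores from rater 2

r = pearson_r(rater1, rater2)
```

Values in the .78 to .91 range, as reported in the abstract, are conventionally read as acceptable-to-strong agreement between raters.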

Keywords: learning academic words, writing essays, cognitive load, English as an L2

Procedia PDF Downloads 50
407 An Autonomous Space Debris-Removal System for Effective Space Missions

Authors: Shriya Chawla, Vinayak Malhotra

Abstract:

Space exploration has noted an exponential rise in the past two decades. The world has started probing alternatives for efficient and resourceful sustenance along with the utilization of advanced technology, viz., satellites, on earth. Space propulsion forms the core of space exploration. Of all the issues encountered, space debris has increasingly threatened space exploration and propulsion. Decades of activity have left disastrous space debris fragments orbiting the earth at speeds of several kilometres per second. Debris is well known as a potential source of damage to future missions, with immense possible losses of resources and human life, and large sums of money are invested in active research on it. Appreciable work has been done in the past on active space debris-removal technologies such as the harpoon, net, and drag sail. The primary emphasis is laid on confined removal. Recently, the RemoveDEBRIS spacecraft was used to demonstrate debris capture in orbit. Airbus designed and planned the debris-catching net experiment aboard the spacecraft, which represents the largest payload deployed from the space station. However, the magnitude of the issue suggests that active space debris-removal technologies, such as harpoons and nets, still would not be enough, necessitating a better and more operative space debris removal system. Techniques based on diverting the path of debris or the spacecraft to avert damage have seen minimal use owing to limited predictions. The present work focuses on an active hybrid space debris removal system. The work is motivated by the need for safer and more efficient space missions. The specific objectives of the work are 1) to thoroughly analyse the existing and conventional debris removal techniques, their working, effectiveness, and limitations under varying conditions, and 2) to understand the role of key controlling parameters in the coupled operation of debris capturing and removal.
The system represents the utilization of the latest autonomous technology available with an adaptable structural design for operations under varying conditions. The design covers advantages of most of the existing technologies while removing the disadvantages. The system is likely to enhance the probability of effective space debris removal. At present, systematic theoretical study is being carried out to thoroughly observe the effects of pseudo-random debris occurrences and to originate an optimal design with much better features and control.

Keywords: space exploration, debris removal, spacecraft, space accidents

Procedia PDF Downloads 140
406 Variables, Annotation, and Metadata Schemas for Early Modern Greek

Authors: Eleni Karantzola, Athanasios Karasimos, Vasiliki Makri, Ioanna Skouvara

Abstract:

Historical linguistics unveils the historical depth of languages and traces variation and change by analyzing linguistic variables over time. This field of linguistics usually deals with a closed data set that can only be expanded by the (re)discovery of previously unknown manuscripts or editions. In some cases, it is possible to use (almost) the entire closed corpus of a language for research, as is the case with the Thesaurus Linguae Graecae digital library for Ancient Greek, which contains most of the extant ancient Greek literature. However, concerning ‘dynamic’ periods when the production and circulation of texts in printed as well as manuscript form have not been fully mapped, representative samples and corpora of texts are needed. Such material and tools are utterly lacking for Early Modern Greek (16th-18th c.). In this study, the principles of the creation of EMoGReC, a pilot representative corpus of Early Modern Greek (16th-18th c.) are presented. Its design follows the fundamental principles of historical corpora. The selection of texts aims to create a representative and balanced corpus that gives insight into diachronic, diatopic and diaphasic variation. The pilot sample includes data derived from fully machine-readable vernacular texts, which belong to 4-5 different textual genres and come from different geographical areas. We develop a hierarchical linguistic annotation scheme, further customized to fit the characteristics of our text corpus. Regarding variables and their variants, we use as a point of departure the bundle of twenty-four features (or categories of features) for prose demotic texts of the 16th c. Tags are introduced bearing the variants [+old/archaic] or [+novel/vernacular]. On the other hand, further phenomena that are underway (cf. The Cambridge Grammar of Medieval and Early Modern Greek) are selected for tagging. 
The annotated texts are enriched with metalinguistic and sociolinguistic metadata to provide a testbed for the development of the first comprehensive set of tools for the Greek language of that period. Based on a relational management system with interconnection of data, annotations, and their metadata, the EMoGReC database aspires to join a state-of-the-art technological ecosystem for the research of observed language variation and change using advanced computational approaches.
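The variant tagging described above can be sketched as a lookup from a linguistic variable to a variant form and its tag. The variable name and the Greek endings below are hypothetical examples for illustration, not items from the actual EMoGReC scheme:

```python
# Sketch of [+old/archaic] vs [+novel/vernacular] variant tagging.
# The variable inventory below is an invented example.

VARIANTS = {
    # variable        form : tag
    "3pl_ending": {"-ουσι": "+old/archaic", "-ουν": "+novel/vernacular"},
}

def tag_token(variable, form):
    """Return an annotation record for one occurrence of a variable."""
    tags = VARIANTS.get(variable, {})
    return {"variable": variable, "form": form,
            "tag": tags.get(form, "unknown")}

ann = tag_token("3pl_ending", "-ουν")
```

Storing annotations as records like this, keyed by variable and form, is what allows the relational database described above to link each occurrence to its metalinguistic and sociolinguistic metadata.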

Keywords: early modern Greek, variation and change, representative corpus, diachronic variables

Procedia PDF Downloads 37
405 Modeling the Effects of Leachate-Impacted Groundwater on the Water Quality of a Large Tidal River

Authors: Emery Coppola Jr., Marwan Sadat, Il Kim, Diane Trube, Richard Kurisko

Abstract:

Contamination sites like landfills often pose significant risks to receptors like surface water bodies. Surface water bodies are often a source of recreation, including fishing and swimming, which not only enhances their value but also serves as a direct exposure pathway to humans, increasing their need for protection from water quality degradation. In this paper, a case study presents the potential effects of leachate-impacted groundwater from a large closed sanitary landfill on the surface water quality of the nearby Raritan River, situated in New Jersey. The study, performed over a two year period, included in-depth field evaluation of both the groundwater and surface water systems, and was supplemented by computer modeling. The analysis required delineation of a representative average daily groundwater discharge from the Landfill shoreline into the large, highly tidal Raritan River, with a corresponding estimate of daily mass loading of potential contaminants of concern. The average daily groundwater discharge into the river was estimated from a high-resolution water level study and a 24-hour constant-rate aquifer pumping test. The significant tidal effects induced on groundwater levels during the aquifer pumping test were filtered out using an advanced algorithm, from which aquifer parameter values were estimated using conventional curve match techniques. The estimated hydraulic conductivity values obtained from individual observation wells closely agree with tidally-derived values for the same wells. Numerous models were developed and used to simulate groundwater contaminant transport and surface water quality impacts. MODFLOW with MT3DMS was used to simulate the transport of potential contaminants of concern from the down-gradient edge of the Landfill to the Raritan River shoreline. A surface water dispersion model based upon a bathymetric and flow study of the river was used to simulate the contaminant concentrations over space within the river. 
The modeling results helped demonstrate that because of natural attenuation, the Landfill does not have a measurable impact on the river, which was confirmed by an extensive surface water quality study.
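The daily mass-loading estimate at the heart of such an analysis is simply discharge times concentration with a unit conversion. A back-of-the-envelope sketch follows; the discharge and concentration values are invented, not taken from the study:

```python
# Daily contaminant mass loading from groundwater discharge:
#   loading [kg/d] = Q [m3/d] * C [mg/L] * 1000 L/m3 * 1e-6 kg/mg
# The numbers below are illustrative assumptions.

def daily_mass_loading(discharge_m3_per_day, conc_mg_per_l):
    """Mass loading in kg/day from discharge (m3/day) and concentration (mg/L)."""
    return discharge_m3_per_day * conc_mg_per_l * 1000 * 1e-6

# e.g. 500 m3/day of leachate-impacted groundwater at 2.5 mg/L of a
# hypothetical contaminant of concern:
load_kg_day = daily_mass_loading(500, 2.5)
```

This loading is the source term that the MODFLOW/MT3DMS transport model and the river dispersion model then attenuate and dilute over space.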

Keywords: groundwater flow and contaminant transport modeling, groundwater/surface water interaction, landfill leachate, surface water quality modeling

Procedia PDF Downloads 242
404 Corpora in Secondary Schools Training Courses for English as a Foreign Language Teachers

Authors: Francesca Perri

Abstract:

This paper describes a proposal for a teachers' training course focused on the introduction of corpora into the EFL didactics (English as a foreign language) of some Italian secondary schools. The training course is conceived as part of a TEDD participant's five-month internship. TEDD (Technologies for Education: diversity and devices) is an advanced course held by the Department of Engineering and Information Technology at the University of Trento, Italy. Its main aim is to train a selected, heterogeneous group of graduates to engage with the complex interdependence between education and technology in modern society. The educational approach draws on the coexistence of various theories, such as socio-constructivism, constructionism, project-based learning, and connectivism. The TEDD educational model is the main reference for the design of a training course for EFL teachers, drawing on the digitalization of didactics and the creation of interactive learning materials for L2 intermediate students. The training course lasts ten hours, organized into five sessions. In the first part (first and second sessions), a series of guided and semi-guided activities leads participants to familiarize themselves with corpora through the use of a digital tools kit. Then, during the second part, participants are specifically involved in the realization of an ML (Mistakes Laboratory), where they create, develop, and share digital activities based on corpora according to their teaching goals, supported by the digital facilitator. The training course takes place in an ICT laboratory where the teachers work either individually or in pairs at a computer with a wi-fi connection, while the digital facilitator shares inputs, materials, and digital assistance simultaneously on a whiteboard and on a digital platform where participants interact and work together both synchronously and asynchronously.
The adoption of good ICT practices is a fundamental step in promoting the introduction and use of Corpus Linguistics in EFL teaching and learning: dealing with corpora not only promotes L2 learners' critical thinking and guided exploration, as opposed to unstructured browsing for ready-made translations or language usage samples, but also entails becoming confident with digital tools and activities. The paper will explain the rationale, limits, and resources of the pedagogical approach adopted to engage EFL teachers with the use of corpora in their didactics through the promotion of digital practices.

Keywords: digital didactics, education, language learning, teacher training

Procedia PDF Downloads 131
403 Ethnic Xenophobia as Symbolic Politics: An Explanation of Anti-Migrant Activity from Brussels to Beirut

Authors: Annamarie Rannou, Horace Bartilow

Abstract:

Global concerns about xenophobic activity are on the rise across developed and developing countries. And yet, social science scholarship has almost exclusively examined xenophobia as a prejudice of advanced Western nations. This research argues that the fields of study related to xenophobia must be re-conceptualized within a framework of ethnicity in order to level the playing field for cross-regional inquiry. This study develops a new concept of ethnic xenophobia and integrates existing explanations of anti-migrant expression into theories of ethnic threat. We argue specifically that political elites convert economic, political, and social threats at the national level into ethnic xenophobic activity in order to gain or maintain political advantage among their native selectorate. We expand on Stuart Kaufman's theory of symbolic politics to underscore the methods of mobilization used against migrants and the power of elite discourse in moments of national crises. An original dataset is used to examine over 35,000 cases of ethnic xenophobic activity targeting refugees. Wordscores software is used to develop a unique measure of anti-migrant elite rhetoric, which captures the symbolic discourse of elites in their mobilization of ethnic xenophobic activism. We use a Structural Equation Model (SEM) to test the causal pathways of the theory across seventy-two developed and developing countries from 1990 to 2016. A framework of Most Different Systems Design (MDSD) is also applied to two pairs of developed-developing country cases: Kenya and the Netherlands, and Lebanon and the United States. This study sheds light on an underrepresented area of comparative research in migration studies. It shows that the causal elements of anti-migrant activity are far more similar across regions than existing research suggests, which has major implications for policy makers, practitioners, and academics in the fields of migration protection and advocacy.
It speaks directly to the mobilization of myths surrounding refugees, in particular, and the nationalization of narratives of migration that may be neutralized by the development of deeper associational relationships between natives and migrants.

Keywords: refugees, ethnicity, symbolic politics, elites, migration, comparative politics

Procedia PDF Downloads 126
402 Estimates of Freshwater Content from ICESat-2 Derived Dynamic Ocean Topography

Authors: Adan Valdez, Shawn Gallaher, James Morison, Jordan Aragon

Abstract:

Global climate change has impacted atmospheric temperatures, contributing to rising sea levels, decreasing sea ice, and increased freshening of high-latitude oceans. This freshening has increased stratification, inhibiting local mixing and nutrient transport and modifying regional circulations in polar oceans. In recent years, the Western Arctic has seen an increase in freshwater volume at an average rate of 397 ± 116 km³/year. The majority of the freshwater volume resides in the Beaufort Gyre surface lens, driven by anticyclonic wind forcing, sea ice melt, and Arctic river runoff. The total climatological freshwater content is typically defined as water with salinity below 34.8. The near-isothermal nature of Arctic seawater and non-linearities in the equation of state for near-freezing waters result in a salinity-driven pycnocline, as opposed to the temperature-driven density structure seen at lower latitudes. In this study, we investigate the relationship between freshwater content and remotely sensed dynamic ocean topography (DOT). In-situ measurements of freshwater content are useful in providing information on the freshening rate of the Beaufort Gyre; however, their collection is costly and time consuming. DOT derived from NASA's Advanced Topographic Laser Altimeter System (ATLAS) and freshwater content derived from Airborne Expendable CTDs (AXCTDs) are used to develop a linear regression model. In-situ data for the regression model are collected across the 150° West meridian, which typically defines the centerline of the Beaufort Gyre. Two freshwater content models are determined by integrating the freshwater volume between the surface and an isopycnal corresponding to reference salinities of 28.7 and 34.8, which correspond to the winter pycnocline and the total climatological freshwater content, respectively.
Using each model, we determine the strength of the linear relationship between freshwater content and satellite-derived DOT. The results of this modeling study could provide a future predictive capability for freshwater volume changes in the Beaufort-Chukchi Sea without in-situ measurements. Successful employment of ICESat-2's DOT approximation of freshwater content could reduce reliance on field deployment platforms to characterize physical ocean properties.
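The two quantities being regressed can be sketched in code: freshwater content is the integral of (S_ref − S)/S_ref over the fresh layer, which is then fitted linearly against DOT. This is an illustrative Python sketch, not the study's code; the function names, the trapezoidal integration, and any profile used with it are assumptions.

```python
import numpy as np

S_REF = 34.8  # reference salinity for total climatological freshwater content

def freshwater_content(depth_m, salinity, s_ref=S_REF):
    """Freshwater content (metres) of a profile: integral of
    (s_ref - S)/s_ref from the surface down to the depth where
    salinity first reaches s_ref (trapezoidal rule)."""
    depth_m = np.asarray(depth_m, dtype=float)
    salinity = np.asarray(salinity, dtype=float)
    mask = salinity <= s_ref                       # keep the fresh layer only
    frac = (s_ref - salinity[mask]) / s_ref
    z = depth_m[mask]
    return float(0.5 * np.sum((frac[1:] + frac[:-1]) * np.diff(z)))

def fit_fwc_vs_dot(dot_m, fwc_m):
    """Least-squares line FWC = a * DOT + b across stations."""
    a, b = np.polyfit(dot_m, fwc_m, deg=1)
    return a, b
```

Given station pairs of AXCTD-derived freshwater content and ATLAS-derived DOT, the slope and intercept of `fit_fwc_vs_dot` would constitute the predictive model described above.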

Keywords: ICESat-2, dynamic ocean topography, freshwater content, Beaufort Gyre

Procedia PDF Downloads 58
401 Parents and Stakeholders’ Perspectives on Early Reading Intervention Implemented as a Curriculum for Children with Learning Disabilities

Authors: Bander Mohayya Alotaibi

Abstract:

Valuable partnerships between parents and teachers can foster positive and effective interactions between home and school, helping these stakeholders share information and resources regarding student academics through ongoing interaction. Such partnerships build a solid foundation for both families and schools to help children succeed. Parental involvement can be an effective tool that changes homes and communities, not just school systems. Seeking parents' and stakeholders' attitudes toward learning and learners can help schools design a curriculum; this information can then be used to improve the academic performance of students, especially in low-performing schools. Designing curriculum may involve conflicts and brings additional educational expectations to all sides. There is a lack of research targeting parents' attitudes toward specific concepts in curriculum content, and more research is needed on the perspectives that parents of children with learning disabilities (LD) have regarding early reading curricula. This study therefore examined parents' and stakeholders' perspectives on early reading intervention implemented as a curriculum for children with LD through quantitative research. Its purpose is to understand stakeholders' and parents' perspectives on key concepts and essential early reading skills that impact the design of a curriculum serving as an intervention for early struggling readers with LD. Those concepts include phonics, phonological awareness, and reading fluency, as well as strategies used at home by parents. A survey instrument was used to gather the data. Participants were stakeholders, including parents of children with learning disabilities, recruited through 29 schools and districts of a metropolitan area in the northern part of Saudi Arabia.
Data were collected by distributing a paper-and-pen survey to schools. The psychometric properties of the instrument were evaluated for validity and reliability: face validity, content validity, and construct validity, including an exploratory factor analysis, were used to shape and re-evaluate the structure of the instrument. Multivariate analysis of variance (MANOVA) was used to find differences between the variables. The study reports the perspectives of stakeholders toward reading strategies, phonics, phonological awareness, and reading fluency; suggestions and limitations are also discussed.

Keywords: stakeholders, learning disability, early reading, perspectives, parents, intervention, curriculum

Procedia PDF Downloads 129
400 Predicting the Effect of Vibro Stone Column Installation on Performance of Reinforced Foundations

Authors: K. Al Ammari, B. G. Clarke

Abstract:

Soil improvement using vibro stone column techniques consists of two main parts: (1) the installed load-bearing columns of well-compacted, coarse-grained material and (2) the improvement of the surrounding soil due to vibro compaction. Extensive research has been carried out over the last 20 years to understand the improvement in composite foundation performance due to the second part. Nevertheless, few of these studies have tried to quantify some of the key design parameters, namely the changes in the stiffness and stress state of the treated soil, or have considered these parameters in the design and calculation process. Consequently, empirical and conservative design methods are still being used by ground improvement companies, with a significant variety of results in engineering practice. A two-dimensional finite element study was performed using PLAXIS 2D AE to develop an axisymmetric model of a single stone column reinforced foundation and quantify the effect of the vibro installation of this column in soft saturated clay. Settlement and bearing performance were studied as an essential part of the design and calculation of the stone column foundation. Particular attention was paid to the large deformation in the soft clay around the installed column caused by the lateral expansion, so the updated-mesh advanced option was used in the analysis. Different degrees of stone column lateral expansion were simulated and numerically analyzed, and the changes in stress state, stiffness, settlement performance, and bearing capacity were quantified. It was found that applying radial expansion produces a horizontal stress in the soft clay mass that gradually decreases as the distance from the stone column axis increases. The excess pore pressure due to undrained conditions starts to dissipate immediately after the column installation is finished, allowing the horizontal stress to relax.
Changes in the coefficient of lateral earth pressure K*, which is very important in representing the stress state, and the new stiffness distribution in the reinforced clay mass were estimated. More encouragingly, the results showed that increasing the expansion during column installation has a noticeable effect on improving the bearing capacity and reducing the settlement of the reinforced ground. A design method should therefore include this significant effect of the applied lateral displacement during stone column installation in simulation and numerical analysis.

Keywords: bearing capacity, design, installation, numerical analysis, settlement, stone column

Procedia PDF Downloads 358
399 Model Reference Adaptive Approach for Power System Stabilizer for Damping of Power Oscillations

Authors: Jožef Ritonja, Bojan Grčar, Boštjan Polajžer

Abstract:

In recent years, electricity trade between neighboring countries has become increasingly intense. Increasing power transmission over long distances has resulted in an increase in the oscillations of the transmitted power. Damping of these oscillations could be achieved by reconfiguring the network or replacing generators, but such solutions are not economically reasonable. The only cost-effective way to improve the damping of power oscillations is to use power system stabilizers. A power system stabilizer is part of the synchronous generator control system; it acts through the semiconductor excitation system connected to the rotor field winding to increase the damping of the power system. The majority of synchronous generators are equipped with conventional power system stabilizers with fixed parameters, whose control structure and tuning procedure are based on linear control theory. Conventional power system stabilizers are simple to realize, but they provide insufficient damping improvement across the entire range of operating conditions. This is why advanced control theories are used to develop better power system stabilizers. In this paper, adaptive control theory for power system stabilizer design and synthesis is studied, with the focus on the model reference adaptive control approach. The control signal, which ensures that the controlled plant output follows the reference model output, is generated by the adaptive algorithm. Adaptive gains are obtained as a combination of a "proportional" term and an "integral" term extended with a σ-term, introduced to avoid divergence of the integral gains. The necessary condition for asymptotic tracking is derived by means of hyperstability theory.
The benefits of the proposed model reference adaptive power system stabilizer were evaluated as objectively as possible by means of theoretical analysis, numerical simulations, and laboratory realizations. Damping of the synchronous generator oscillations was investigated over the entire operating range. The obtained results show improved damping across the entire operating area and an increase in power system stability. The results of this work will support the development of a model reference power system stabilizer able to replace conventional stabilizers in power systems.
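The σ-modified gain update described above can be illustrated on a toy scalar plant. This is a hedged sketch of direct model reference adaptive control, not the authors' stabilizer: only the σ-modified "integral" part of the gain update is shown, and the plant, reference model, and all numerical values are invented for illustration.

```python
def simulate_mrac(a=1.0, b=1.0, a_m=-2.0, b_m=2.0,
                  gamma=2.0, sigma=0.01, r=1.0,
                  dt=1e-3, t_end=30.0):
    """Direct MRAC for the scalar plant  x' = a*x + b*u  tracking the
    reference model  x_m' = a_m*x_m + b_m*r.  Gain update law:
        th' = -gamma * e * phi - sigma * th
    where e = x - x_m and phi is the matching regressor (x or r).
    Returns (final tracking error, th_x, th_r)."""
    x = x_m = 0.0
    th_x = th_r = 0.0                      # adaptive gains, start at zero
    for _ in range(int(t_end / dt)):
        u = th_x * x + th_r * r            # control law
        e = x - x_m                        # tracking error
        # the sigma-term leaks the gains toward zero, preventing divergence
        th_x += dt * (-gamma * e * x - sigma * th_x)
        th_r += dt * (-gamma * e * r - sigma * th_r)
        x += dt * (a * x + b * u)          # unstable open-loop plant (a > 0)
        x_m += dt * (a_m * x_m + b_m * r)  # stable reference model
    return x - x_m, th_x, th_r
```

The σ-term introduces a small steady-state tracking bias in exchange for bounded gains, which is the trade-off the abstract alludes to.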

Keywords: power system, stability, oscillations, power system stabilizer, model reference adaptive control

Procedia PDF Downloads 114
398 Phytoremediation Alternative for Landfill Leachate Sludges Doña Juana Bogotá D.C. Colombia Treatment

Authors: Pinzón Uribe Luis Felipe, Chávez Porras Álvaro, Ruge Castellanos Liliana Constanza

Abstract:

According to global data, solid waste management receives low economic investment in underdeveloped countries, the main factors being limited access to advanced technologies for proper operation and, at the same time, limited technical development. It has been evidenced that communities have a distorted perception of the role of legalized final waste destinations, or landfills, influenced primarily by their physical characteristics and the information the media provide about them, as well as their mistaken association with open dumps. One of the major problems in these landfills is the management of leachate sludge from treatment plants, as its composition (physical, chemical, and biological) is highly contaminating for the natural environment when improperly handled and disposed of. This is the case of the Doña Juana Landfill (RSDJ) in Bogotá, Colombia, considered among the largest in South America, where management problems have persisted for decades since its creation, shaping the concept that society has acquired about this form of waste disposal and improper leachate handling. Within this research, phytoremediation alternatives were evaluated using plants able to accumulate the heavy metals contained in the sludge, allowing the resulting sludge to be used as a seal in the final landfill cover within a restoration process. This offers an option to solve the landscape contamination problem, as well as to address community perceptions and the conflicts the landfill generates. For the project, chemical assays were performed on the leachate sludge to characterize metals such as chromium (Cr), lead (Pb), arsenic (As), and mercury (Hg), in order to compare the amounts in the biosolids with the provisions of USEPA 40 CFR 503. The evaluations showed concentrations of 102.2 mg/kg of Cr, 0.49 mg/kg of Pb, 0.390 mg/kg of As, and 0.104 mg/kg of Hg, all below the standard limits.
A literature review was developed on native plant species suitable for an alternative phytoremediation process and capable of accumulating these metals. It concluded that, among them, Vetiveria zizanioides, Eichhornia crassipes, and Limnobium laevigatum, owing to their hyperaccumulative characteristics in leaves, stems, and roots, may allow the reduction of these toxic elements in the environment, improving the outlook for disposal.

Keywords: health, landfill leachate sludge, heavy metals, phytoremediation

Procedia PDF Downloads 307
397 Market Solvency Capital Requirement Minimization: How Non-linear Solvers Provide Portfolios Complying with Solvency II Regulation

Authors: Abraham Castellanos, Christophe Durville, Sophie Echenim

Abstract:

In this article, a portfolio optimization problem is solved in a Solvency II context: it illustrates how advanced optimization techniques can help tackle complex operational pain points around the monitoring, control, and stability of the Solvency Capital Requirement (SCR). The market SCR of a portfolio is calculated as a combination of SCR sub-modules. These sub-modules are the results of stress tests on interest rate, equity, property, credit, and FX factors, as well as concentration on counterparties. The market SCR is non-convex and non-differentiable, which does not make it a natural candidate as an optimization criterion. In the SCR formulation, correlations between sub-modules are fixed, whereas risk-driven portfolio allocation is usually driven by the dynamics of the actual correlations. Implementing a portfolio construction approach that is efficient from both a regulatory and an economic standpoint is not straightforward. Moreover, the challenge for insurance portfolio managers is not only to achieve a minimal SCR, reducing non-invested capital, but also to ensure the stability of the SCR. Some optimizations have already been performed in the literature by simplifying the standard formula into a quadratic function, but to our knowledge this is the first time the standard formula of the market SCR is used directly in an optimization problem. Two solvers are combined: a bundle algorithm for convex non-differentiable problems, and a BFGS (Broyden-Fletcher-Goldfarb-Shanno)-SQP (Sequential Quadratic Programming) algorithm to cope with non-convex cases. A market SCR minimization is then performed with historical data. This approach results in a significant reduction of the capital requirement compared to a classical Markowitz approach based on historical volatility.
A comparative analysis of different optimization models (equi-risk-contribution portfolio, minimum-volatility portfolio, and minimum value-at-risk portfolio) is performed, and the impact of these strategies on risk measures, including the market SCR and its sub-modules, is evaluated. A lack of diversification of the market SCR is observed, especially for equities. This was expected, since the market SCR strongly penalizes this type of financial instrument. It is shown that this direct effect of the regulation can be attenuated by implementing constraints in the optimization process or by minimizing the market SCR together with the historical volatility, proving the interest of a portfolio construction approach that can incorporate such features. The results are further explained by the market SCR modelling.
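The "combination of SCR sub-modules" referred to above is, in the standard formula, an aggregation through a fixed correlation matrix, SCR_mkt = sqrt(sᵀ C s). A minimal sketch of that aggregation follows; the matrix values approximate the standard-formula market correlations but should be treated as illustrative, since the official parameters also depend on the direction of the interest-rate shock.

```python
import numpy as np

# Illustrative fixed correlation matrix between market sub-modules,
# in the order: interest, equity, property, spread, currency, concentration.
CORR = np.array([
    [1.00, 0.50, 0.50, 0.50, 0.25, 0.00],
    [0.50, 1.00, 0.75, 0.75, 0.25, 0.00],
    [0.50, 0.75, 1.00, 0.50, 0.25, 0.00],
    [0.50, 0.75, 0.50, 1.00, 0.25, 0.00],
    [0.25, 0.25, 0.25, 0.25, 1.00, 0.00],
    [0.00, 0.00, 0.00, 0.00, 0.00, 1.00],
])

def market_scr(sub_scr):
    """Aggregate sub-module SCRs into the market SCR: sqrt(s' * Corr * s)."""
    s = np.asarray(sub_scr, dtype=float)
    return float(np.sqrt(s @ CORR @ s))
```

Because the correlations are below one, the aggregate is smaller than the plain sum of the sub-module charges; it is this fixed-correlation diversification, combined with the stress-test structure of each sub-module, that makes the criterion awkward for standard smooth optimizers.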

Keywords: financial risk, numerical optimization, portfolio management, solvency capital requirement

Procedia PDF Downloads 97
396 Hierarchical Zeolites as Catalysts for Cyclohexene Epoxidation Reactions

Authors: Agnieszka Feliczak-Guzik, Paulina Szczyglewska, Izabela Nowak

Abstract:

Catalyst-assisted oxidation reactions are among the key reactions exploited by various industries; they yield essential compounds and intermediates such as alcohols, epoxides, aldehydes, ketones, and organic acids. Researchers are devoting more and more attention to developing active and selective materials for catalytic reactions such as cyclohexene epoxidation, which yields 1,2-epoxycyclohexane and 1,2-diols as the main products. These compounds are widely used as intermediates in the perfume industry and in synthesizing drugs and lubricants. Hence, our research aimed to use hierarchical zeolites modified with transition metal ions, e.g., Nb, V, and Ta, in the epoxidation of cyclohexene under microwave heating. Hierarchical zeolites are materials with secondary porosity, mainly in the mesoporous range, compared to microporous zeolites. Materials based on two commercial zeolites, with Faujasite (FAU) and Zeolite Socony Mobil-5 (ZSM-5) structures, were synthesized and characterized by techniques such as X-ray diffraction (XRD), transmission electron microscopy (TEM), scanning electron microscopy (SEM), and low-temperature nitrogen adsorption/desorption isotherms. The materials obtained were then used in a cyclohexene epoxidation reaction carried out as follows: catalyst (0.02 g), cyclohexene (0.1 cm³), acetonitrile (5 cm³), and hydrogen peroxide (0.085 cm³) were placed in a suitable glass reaction vessel with a magnetic stirrer inside a microwave reactor. Reactions were carried out at 45 °C for 6 h, with samples taken every 1 h. The reaction mixtures were filtered to separate the liquid products from the solid catalyst and then transferred to 1.5 cm³ vials for chromatographic analysis.
The characterization techniques (XRD and low-temperature nitrogen adsorption/desorption isotherms) confirmed that additional secondary porosity was obtained while the structure of the commercial zeolite was preserved. The activity results for the niobium-modified hierarchical catalyst in the cyclohexene epoxidation reaction show a cyclohexene conversion of about 70% after 6 h. The main product of the reaction was cyclohexane-1,2-diol (selectivity > 80%); adipic acid, cyclohexanol, cyclohex-2-en-1-one, and 1,2-epoxycyclohexane were also obtained. Furthermore, in a blank test, no cyclohexene conversion was observed after 6 h of reaction. Acknowledgments: the work was carried out within the project "Advanced biocomposites for tomorrow's economy BIOG-NET," funded by the Foundation for Polish Science from the European Regional Development Fund (POIR.04.04.00-00-1792/18-00).
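The conversion and selectivity figures quoted above follow the usual definitions from chromatographic data. A minimal sketch of those definitions (generic formulas, not the authors' analysis pipeline):

```python
def conversion(moles_in, moles_out):
    """Fraction of the substrate (here cyclohexene) consumed."""
    return (moles_in - moles_out) / moles_in

def selectivity(product_moles, all_product_moles):
    """Share of a given product among all products formed."""
    return product_moles / sum(all_product_moles)
```

For example, 0.7 mol of substrate consumed out of 1.0 mol gives 70% conversion, and a diol amounting to 0.56 mol out of 0.70 mol of total products gives 80% selectivity, consistent with the figures reported.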

Keywords: epoxidation, oxidation reactions, hierarchical zeolites, synthesis

Procedia PDF Downloads 54
395 Childhood Cataract: A Socio-Clinical Study at a Public Sector Tertiary Eye Care Centre in India

Authors: Deepak Jugran, Rajesh Gill

Abstract:

Purpose: To study the demographic, sociological, gender, and clinical profile of children presenting with childhood cataract at a public sector tertiary eye care centre in India. Methodology: The study is retrospective and uses hospital-based data available with the Central Registration Department of PGIMER, Chandigarh. The majority of childhood cataract cases in the region are reported at this hospital, although not every case of childhood cataract reaches PGIMER. Nevertheless, this is pioneering research in India, covering five years of data on childhood cataract patients who visited the Advanced Eye Centre, PGIMER, Chandigarh, from 1.1.2015 to 31.12.2019. SPSS version 23 was used for all statistical calculations. Results: A total of 354 children presented with childhood cataract from 1.1.2015 to 31.12.2019; 248 (70%) were male and 106 (30%) were female. In spite of two flagship programmes for the eradication of cataract, the National Programme for Control of Blindness (NPCB) and Ayushman Bharat (PM-JAY), no children received any financial assistance from either programme. Nearly 99% of these children belonged to poor families, and in most of these families the mothers were housewives and not employed. These interim results will soon be conveyed to the Government of India so that a suitable mechanism can be evolved to address this pertinent issue. Further, the disproportionate ratio of male to female children in this study is an area of concern, as we do not know whether the prevalence of childhood cataract is lower in female children or whether they are not being presented to hospital on time by their families. Conclusion: The World Health Organization (WHO) has categorized childhood blindness resulting from cataract as a priority area and urged all member countries to develop institutionalized mechanisms for its early detection, diagnosis, and management.
Childhood cataract is an emerging and major cause of preventable and avoidable childhood blindness, especially in low- and middle-income countries. In their formative years, children require a sound physical, mental, and emotional state; in the absence of any one of these, their future growth can be severely dented. A recent estimate suggests that India could suffer an economic loss of US$12 billion (Rs. 88,000 crore) due to blindness, and almost 35% of cases of blindness are preventable and avoidable if detected at an early age. Besides reporting these results to policy makers, synchronized efforts are needed for the early detection and management of avoidable causes of childhood blindness such as childhood cataract.

Keywords: childhood blindness, cataract, WHO, NPCB

Procedia PDF Downloads 87
394 Simons, Ehrlichs and the Case for Polycentricity – Why Growth-Enthusiasts and Growth-Sceptics Must Embrace Polycentricity

Authors: Justus Enninga

Abstract:

Enthusiasts and skeptics of economic growth have little in common in their preferences for institutional arrangements that solve ecological conflicts. This paper argues that agreement between the two opposing schools can be found in the Bloomington School's concept of polycentricity. Growth-enthusiasts, referred to here as Simons after the economist Julian Simon, and growth-skeptics, named Ehrlichs after the ecologist Paul R. Ehrlich, both profit from a governance structure in which many officials and decision structures are assigned limited and relatively autonomous prerogatives to determine, enforce, and alter legal relationships. The paper advances this argument in four steps. First, it clarifies what Simons and Ehrlichs mean when they talk about growth and what the arguments for and against growth-enhancing or degrowth policies are on each side. Second, the paper advances the concept of polycentricity as first introduced by Michael Polanyi and later refined for the study of governance by the Bloomington School of institutional analysis around the Nobel Prize laureate Elinor Ostrom. The Bloomington School defines polycentricity as a non-hierarchical, institutional, and cultural framework that makes possible the coexistence of multiple centers of decision making with different objectives and values, setting the stage for an evolutionary competition between the complementary ideas and methods of those different decision centers. In the third and fourth parts, it is shown how the concept of polycentricity is of crucial importance to growth-enthusiasts and growth-skeptics alike. The shorter third part reviews the literature on growth-enhancing policies and argues that large parts of it already accept that polycentric forms of governance, such as markets, the rule of law, and federalism, are an important contributor to economic growth.
Part four delves into the more nuanced question of how a stagnant steady-state economy or even an economy that de-grows will still find polycentric governance desirable. While the majority of degrowth proposals follow a top-down approach by requiring direct governmental control, a contrasting bottom-up approach is advanced. A decentralized, polycentric approach is desirable because it allows for the utilization of tacit information dispersed in society and an institutionalized discovery process for new solutions to the problem of ecological collective action – no matter whether you belong to the Simons or Ehrlichs in a green political economy.

Keywords: degrowth, green political theory, polycentricity, institutional robustness

Procedia PDF Downloads 154
393 Long-Term Economic-Ecological Assessment of Optimal Local Heat-Generating Technologies for the German Unrefurbished Residential Building Stock on the Quarter Level

Authors: M. A. Spielmann, L. Schebek

Abstract:

In order to reach the German government's long-term national climate goals for the building sector, substantial energetic measures have to be executed. Historically, those measures were primarily efficiency measures applied to building shells. Advanced technologies for the on-site generation of heat (or other types of energy) are often not feasible at the small spatial scale of a single building; the present approach therefore uses the spatially larger dimension of a quarter. The main focus of this paper is the long-term economic-ecological assessment of available decentralized heat-generating technologies (CHP power plants and electrical heat pumps) at the quarter level for German unrefurbished residential buildings. Three elements have to be described methodologically: i) the quarter approach, ii) the economic assessment, and iii) the ecological assessment. The quarter approach is used to enable synergies and scaling effects beyond those of a single building. The present study uses generic quarters differentiated according to the parameters most significant for their heat demand, the core differentiation being the construction period of the buildings. The economic assessment, as the second crucial element, is structured as follows: full costs are quantified for each technology combination and quarter; investment costs are analyzed on an annual basis and modeled with the acquisition of debt, assuming annuity loans. Consequently, for each generic quarter, an optimal technology combination for decentralized heat generation is provided in each year of the period considered (2016-2050). The ecological assessment performs a life cycle assessment for each technology combination and each quarter, with GWP 100 as the impact category. The technology combinations for heat production can therefore be compared against each other concerning their long-term climatic impacts.
The core results of the approach fall into an economic and an ecological dimension. With annual resolution, the investment and running costs of different energetic technology combinations are quantified, and for each quarter an optimal technology combination for local heat supply and/or energetic refurbishment of the buildings within the quarter is provided. Coherently with the economic assessment, the climatic impacts of the technology combinations are quantified and compared against each other.
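The annuity-loan modeling of investment costs mentioned above rests on the standard annuity formula, A = P·r / (1 − (1 + r)⁻ⁿ), which converts a one-off investment P into a constant annual payment A over n years at interest rate r. A minimal sketch, with illustrative numbers only:

```python
def annuity_payment(principal, rate, years):
    """Constant annual payment of an annuity loan:
    A = P * r / (1 - (1 + r)**-n); falls back to straight
    division when the interest rate is zero."""
    if rate == 0:
        return principal / years
    return principal * rate / (1 - (1 + rate) ** -years)
```

For example, financing a 1,000 EUR heat-pump investment over 20 years at 5% yields an annual payment of roughly 80.24 EUR, which is the figure that would enter each year of the full-cost comparison.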

Keywords: building sector, economic-ecological assessment, heat, LCA, quarter level

Procedia PDF Downloads 205
392 Processes and Application of Casting Simulation and Its Software’s

Authors: Surinder Pal, Ajay Gupta, Johny Khajuria

Abstract:

Casting simulation helps visualize mold filling and casting solidification; predict related defects like cold shuts, shrinkage porosity, and hard spots; and optimize the casting design to achieve the desired quality with high yield. Flow and solidification of molten metals are, however, very complex phenomena that are difficult to simulate correctly by conventional computational techniques, especially when the part geometry is intricate and the required inputs (like thermo-physical properties and heat transfer coefficients) are not available. Simulation software is based on modeling a real phenomenon with a set of mathematical formulas. It is, essentially, a program that allows the user to observe an operation through simulation without actually performing that operation. Simulation software is widely used to design equipment so that the final product will be as close to design specifications as possible without expensive in-process modification. Simulation software with real-time response is often used in gaming, but it also has important industrial applications. When the penalty for improper operation is costly, as for airplane pilots, nuclear power plant operators, or chemical plant operators, a mockup of the actual control panel is connected to a real-time simulation of the physical response, giving valuable training experience without fear of a disastrous outcome. Each casting simulation package has its own strengths; MAGMA, for example, is regarded as best suited for crack simulation. The latest-generation software AutoCAST, developed at IIT Bombay, provides a host of functions to support method engineers, including part thickness visualization, core design, multi-cavity mold design with common gating and feeding, application of various feed aids (feeder sleeves, chills, padding, etc.), simulation of mold filling and casting solidification, automatic optimization of feeders and gating driven by the desired quality level, and what-if cost analysis.
IIT Bombay has developed a set of applications for the foundry industry to improve casting yield and quality. Casting simulation is a fast and efficient process-design tool, the result of more than 20 years of collaboration with major industrial partners and academic institutions around the world. In this paper, the process of casting simulation is studied.

Keywords: casting simulation software, simulation techniques, casting simulation, processes

Procedia PDF Downloads 453
391 Synthesis and Characterization of Highly Oriented Bismuth Oxyiodide Thin Films for the Photocatalytical Degradation of Pharmaceuticals Compounds in Water

Authors: Juan C. Duran-Alvarez, Daniel Mejia, Rodolfo Zanella

Abstract:

Heterogeneous photocatalysis is a promising method to achieve the complete degradation and mineralization of organic pollutants in water via their exhaustive oxidation. In order to take this advanced oxidation process towards sustainability, it is necessary to reduce its energy consumption, namely that of the light sources and the post-treatment operations. For this, the synthesis of new nanostructures of low-band-gap semiconductors in the form of thin films is in continuous development. In this work, thin films of the low-band-gap semiconductor bismuth oxyiodide (BiOI) were synthesized via the Successive Ionic Layer Adsorption and Reaction (SILAR) method. Bi(NO3)3 and KI solutions were prepared, and glass supports were immersed in each solution under strictly controlled immersion rates and times. Synthesis was performed at room temperature, with a washing step prior to each immersion. Thin films with an average thickness below 100 nm were obtained after a cycle of 30 immersions, as determined by AFM and profilometry measurements. Cubic BiOI nanocrystals with an average size of 17 nm and a strong orientation along the (001) plane were observed by XRD. In order to optimize the synthesis method, several Bi/I ratios were tested, namely 1/1, 1/5, 1/10, 1/20, and 1/50. The highest crystallinity of the BiOI films was observed when the 1/5 ratio was used; non-stoichiometric conditions also resulted in the highest uniformity of the thin layers. PVP was used as an additive to improve the adherence of the BiOI thin films to the support, with the addition of 0.1 mg/mL of PVP during the washing step giving the highest adherence. In photocatalysis tests, a degradation rate of the antibiotic ciprofloxacin as high as 75% was achieved using visible light (380 to 700 nm) irradiation for 5 h in batch tests.
Mineralization of the antibiotic was also observed, although to a lesser extent: ~30% of the total organic carbon was removed after 5 h of visible light irradiation. Some ciprofloxacin by-products were identified throughout the reaction, and some of these molecules displayed residual antibiotic activity. In conclusion, it is possible to obtain highly oriented BiOI thin films under ambient conditions via the SILAR method. Non-stoichiometric conditions and the PVP additive are necessary to increase the crystallinity and adherence of the films, which are photocatalytically active for the removal of recalcitrant organic pollutants under visible light irradiation.
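As a back-of-the-envelope check on these figures, the apparent rate constants can be sketched assuming pseudo-first-order kinetics (a common assumption for photocatalytic degradation; the abstract does not state a kinetic model):

```python
import math

def apparent_rate_constant(conversion: float, time_h: float) -> float:
    """Apparent pseudo-first-order rate constant k (h^-1) from the
    fractional conversion reached after time_h hours:
    C/C0 = exp(-k t)  =>  k = -ln(1 - conversion) / t."""
    return -math.log(1.0 - conversion) / time_h

# Figures from the abstract: 75% ciprofloxacin degradation and
# ~30% TOC removal (mineralization) after 5 h of irradiation.
k_degradation = apparent_rate_constant(0.75, 5.0)
k_mineralization = apparent_rate_constant(0.30, 5.0)
```

Under this assumption, degradation proceeds roughly four times faster than mineralization, consistent with the accumulation of by-products noted above.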

Keywords: bismuth oxyhalides, photocatalysis, thin films, water treatment

Procedia PDF Downloads 98
390 Advanced Separation Process of Hazardous Plastics and Metals from End-Of-Life Vehicles Shredder Residue by Nanoparticle Froth Flotation

Authors: Srinivasa Reddy Mallampati, Min Hee Park, Soo Mim Cho, Sung Hyeon Yoon

Abstract:

One of the issues in promoting End of Life Vehicle (ELV) recycling is technology for the appropriate treatment of automotive shredder residue (ASR). Owing to its high heterogeneity and variable composition (plastics (23–41%), rubber/elastomers (9–21%), metals (6–13%), glass (10–20%), dust (soil/sand), etc.), ASR can be classified as 'hazardous waste' on the basis of the presence of heavy metals (HMs), PCBs, BFRs, mineral oils, etc. Considering their relevant concentrations, these metals and plastics should be properly recovered for recycling purposes before ASR residues are disposed of. Brominated flame retardant additives in ABS/HIPS and PVC may generate dioxins and furans at elevated temperatures. Moreover, these BFR additives present in plastic materials may leach into the environment during landfilling operations. Thermal processing of ASR removes some of the organic material but concentrates the heavy metals and POPs present in the residues. In the present study, Fe/Ca/CaO nanoparticle-assisted ozone treatment was found to selectively hydrophilize the surface of ABS/HIPS and PVC plastics, enhancing their wettability and thereby promoting their separation from the other ASR plastics by froth flotation. The water contact angles of ABS, HIPS, and PVC in ASR decreased by about 18.7°, 18.3°, and 17.9°, respectively. Under froth flotation at 50 rpm, about 99.5% of both the ABS and the HIPS in ASR samples sank, resulting in purities of 98% and 99%, respectively. Furthermore, at 150 rpm, 100% of the PVC was separated in the settled fraction, with 98% purity. Total recovery of the non-ABS/HIPS and non-PVC plastics reached nearly 100% in the floating fraction. This process improved the quality of recycled ASR plastics by removing surface contaminants or impurities. Further, a hybrid process of ball-milling with Fe/Ca/CaO nanoparticles followed by froth flotation was established for the recovery of HMs from ASR.
After ball-milling with Fe/Ca/CaO nanoparticle additives, the flotation efficiency increased to about 55 wt%, and HM recovery also increased to about 90% for the 0.25 mm size fraction of ASR. Coating with Fe/Ca/CaO nanoparticles, combined with subsequent microbubble froth flotation, allowed the air bubbles to attach firmly to the HMs. SEM–EDS maps showed that significant amounts of HMs were present on the surface of the floating ASR fraction. This result, along with the low HM concentration in the settled fraction, was confirmed by elemental spectra and semi-quantitative SEM–EDS analysis. The developed hybrid process for the preferential separation of hazardous plastics and metals from ASR is simple, highly efficient, and sustainable.
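The reported purities and recoveries follow from simple mass balances over the sink and float fractions; a minimal sketch (the masses below are hypothetical, chosen only to mirror the reported ~99.5% recovery at ~99% purity, not taken from the paper's raw data):

```python
def recovery_and_purity(target_in_fraction: float,
                        other_in_fraction: float,
                        target_total: float):
    """Recovery: share of all target material reporting to the fraction.
    Purity (grade): share of the fraction that is target material.
    Masses in any consistent unit (e.g. grams)."""
    recovery = target_in_fraction / target_total
    purity = target_in_fraction / (target_in_fraction + other_in_fraction)
    return recovery, purity

# Hypothetical sink-fraction masses for a HIPS separation run.
rec, pur = recovery_and_purity(target_in_fraction=99.5,
                               other_in_fraction=1.0,
                               target_total=100.0)
```

Both figures are needed together: a high recovery with low purity (or vice versa) would not support the separation claims made here.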

Keywords: end of life vehicles shredder residue, hazardous plastics, nanoparticle froth flotation, separation process

Procedia PDF Downloads 258
389 The Various Bodies of a Person and How to Cleanse Them Spiritually

Authors: J. B. Athavale, Sean Clarke

Abstract:

Introduction: According to ancient Indian scriptures, a person's consciousness includes the physical body, the vital energy sheath (Pranshakti), the mental body (which includes one's feelings and emotions), the intellectual body (which refers to one's decision-making ability), and the Soul (which is the God Principle that resides in every person). Apart from the physical body, all the other aspects are subtle in nature. In today's world, much attention is given to one's physical appearance and intellectual prowess. While attention to mental health has improved, its complete nature is not understood, and in many cultures mental ill health is considered taboo and looked down upon. Regarding the spiritual well-being of a person, our spiritual research has shown that people's understanding and efforts are mostly lacking and superficial, as they do not conform to Universal Spiritual Principles. Also, true well-being occurs only when all the bodies are healthy. Methodology: The spiritual research team at the University has found that the spiritual aspect of a person's life affects the physical, psychological, and intellectual bodies, resulting in ill health. Cleansing these bodies at a spiritual level is essential to regain well-being. Using Aura and Energy Scanners and advanced sixth sense, we studied what causes spiritual impurity in the various bodies and how to cleanse them. We measured the spiritual vibrations of a person and how they are affected by various daily activities. For example, we studied the difference in a person's aura before and after applying chemical-based makeup vs. natural makeup. Key Findings: From the various spiritual research experiments we conducted, we found that:
• All our actions and thoughts affect our various bodies and have the potential to change the aura for the better or worse.
• When there is an increase in negative vibrations around a person, negative energies from the subtle dimension are more likely to affect that person.
• As a person's spiritual level increases, the positivity in their aura also increases, and it is much easier to cleanse the various bodies spiritually.
• Spiritual practice is like a general spiritual tonic that increases the positivity in one's aura; the benefit is that it leads to mental stability and intellectual clarity.
• Spiritual healing remedies augment any spiritual practice to obtain a faster healing effect.
Conclusion: Taking care of oneself spiritually has a positive halo effect on all one's bodies. Spiritual cleansing is required regularly if one wants to attain a state of well-being. Spiritual practice and spiritual healing lead to spiritual growth, stability of mind, and less stress and fewer reactions. Spiritually purer people affect the environment positively, and there is less unrest and more harmony between man and nature.

Keywords: body, spirituality, cleansing, consciousness

Procedia PDF Downloads 55
388 Criminal Justice Debt Cause-Lawyering: An Analysis of Reform Strategies

Authors: Samuel Holder

Abstract:

Mass incarceration in the United States is a human rights issue, not merely a civil rights problem. It is a human rights problem not only because the United States has a high rate of incarceration, but more importantly because of who is jailed, for what purpose they are jailed and, ultimately, the manner in which they are jailed. To sustain the scale of the criminal justice system, one of the darker policies involves a multi-tiered strategy of fee and fine collection, targeting, usually, the most vulnerable and poor, many of whom run into the law via small offenses that do not rise to the level of felonies. This paper advances the notion that this debt-collection-to-incarceration pipeline is tantamount to a modern-day debtors' prison system. This article seeks to confront the thorny issue of incarceration via criminal justice debt from a human rights and cause-lawyering position. It will argue for a two-pronged cause-lawyering strategy: the first prong focused on traditional litigation on constitutional grounds, and the second an advocacy approach rooted in grassroots campaigns, designed to shift the normative operation and understanding of the rights of marginalized and racialized offenders. Ultimately, the argument suggests that this approach will be effective in combatting the (often highly privatized) criminal justice debt system and bring the roles of 'incapacitation, rehabilitation, deterrence, and retribution' back into the criminal justice legal conversation. Part I contextualizes and historicizes the role of fees, penalties, and fines in American criminal justice. Part II examines the emergence of private industry in the criminal justice system and its role in the acceleration of profit-driven criminal justice debt collection and incarceration.
Part III addresses the failures of federal and state law and legislation in combatting predatory incarceration and debt collection in the criminal justice system, particularly as waged against the indigent and/or ethnically or racially marginalized. Part IV examines the potential for traditional cause-lawyering litigation on constitutional grounds, using case studies across contexts for illustration. Finally, Part V reviews the radical cause-lawyer's role in the normative struggle to redefine the rights of prisoners and of the marginalized (and racialized) as they intersect at the crossroads of criminal justice debt. The paper concludes with recommendations for litigation and advocacy, drawing on the hypotheses advanced and informed by case studies from a variety of national and international jurisdictions.

Keywords: cause-lawyering, criminal justice debt, human rights, judicial fees

Procedia PDF Downloads 146
387 Postoperative Radiotherapy in Cancers of the Larynx: Experience of the Emir Abdelkader Cancer Center of Oran, about 89 Cases

Authors: Taleb Lotfi, Benarbia Maheidine, Allam Hamza, Boutira Fatima, Boukerche Abdelbaki

Abstract:

Introduction and purpose of the study: This is a retrospective single-center study with an analytical aim to determine the prognostic factors for relapse in patients treated with radiotherapy after total laryngectomy with lymph node dissection for laryngeal cancer at the Emir Abdelkader cancer center in Oran (Algeria). Material and methods: During the study period from January 2014 to December 2018, eighty-nine patients (n=89) with squamous cell carcinoma of the larynx were treated with postoperative radiotherapy. Relapse-free survival was studied in univariate analysis according to pre-treatment criteria using Kaplan-Meier survival curves. We performed a univariate analysis to identify relapse factors; statistically significant factors were then studied in multivariate analysis according to the Cox model. Results and statistical analysis: The average age was 62.7 years (40-86 years). It was squamous cell carcinoma in all cases. Postoperatively, the tumor was classified as pT3 or pT4 in 93.3% of patients. Histological lymph node involvement was found in 36 cases (40.4%), with capsular rupture in 39% of cases, while the surgical excision margins were microscopically infiltrated in 11 patients (12.3%). Chemotherapy concomitant with radiotherapy was used in 67.4% of patients. With a median follow-up of 57 months (23 to 104 months), the probabilities of five-year relapse-free survival and overall survival were 71.2% and 72.4%, respectively. The factors correlated with a high risk of relapse were locally advanced tumor stage pT4 (p=0.001), tumor site in the case of subglottic extension (p=0.0003), infiltrated surgical margins R1 (p=0.001), and lymph node involvement (p=0.002), particularly in the event of lymph node capsular rupture (p=0.0003), as well as the time between surgery and adjuvant radiotherapy (p=0.001).
However, in the subgroup analysis, the major prognostic factors for disease-free survival were subglottic tumor extension (p=0.001) and the time from surgery to adjuvant radiotherapy (p=0.005). Conclusion: Combined surgery and postoperative radiation therapy are effective treatment modalities in the management of laryngeal cancer. Close cooperation of the entire cervicofacial oncology team is essential, as expressed during multidisciplinary consultation meetings, together with the need to respect the time interval between surgery and radiotherapy.
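The univariate survival analysis described above rests on the Kaplan-Meier estimator. A minimal sketch on toy data follows; the cohort values below are illustrative only, not the Oran series:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate S(t) at each distinct event time.
    times: follow-up in months; events: 1 = relapse observed, 0 = censored.
    At each event time t: S(t) *= 1 - d_t / n_t, where d_t is the number
    of events at t and n_t the number still at risk just before t."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv, curve = 1.0, []
    for t in sorted(set(times)):
        at_t = [e for tt, e in data if tt == t]
        deaths = sum(at_t)
        if deaths:
            surv *= 1.0 - deaths / n_at_risk
            curve.append((t, surv))
        n_at_risk -= len(at_t)  # events and censorings both leave the risk set
    return curve

# Toy cohort of (months of follow-up, relapse flag).
data = [(12, 1), (20, 0), (26, 1), (40, 0), (57, 1), (60, 0)]
times, events = zip(*data)
curve = kaplan_meier(times, events)
```

Censored patients reduce the risk set without triggering a drop in the curve, which is what distinguishes this estimator from a naive fraction-surviving calculation.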

Keywords: laryngeal cancer, laryngectomy, postoperative radiotherapy, survival

Procedia PDF Downloads 82
386 Alveolar Ridge Preservation in Post-extraction Sockets Using Concentrated Growth Factors: A Split-Mouth, Randomized, Controlled Clinical Trial

Authors: Sadam Elayah

Abstract:

Background: One of the most critical competencies in advanced dentistry is alveolar ridge preservation after exodontia. The aim of this clinical trial was to assess the impact of autologous concentrated growth factor (CGF) as a socket-filling material and its ridge preservation properties following lower third molar extraction. Materials and Methods: A total of 60 sides of 30 participants who had completely symmetrical bilateral impacted lower third molars were enrolled. The short-term outcome variables were wound healing, swelling, and pain, clinically assessed at different time intervals (1st, 3rd, and 7th days), while the long-term outcome variables were bone height and width, bone density, and socket surface area in the coronal section. Cone beam computed tomography images were obtained immediately after surgery and three months after surgery to assess changes over time. Randomization was achieved using opaque, sealed envelopes. Follow-up data were compared to baseline using paired and unpaired t-tests. Results: The wound healing index was significantly better on the test sides (P=0.001). Regarding facial swelling, the test sides had significantly lower values than the control sides, particularly on the 1st (1.01±.57 vs 1.55±.56) and 3rd days (1.42±0.8 vs 2.63±1.2) postoperatively; nonetheless, the swelling disappeared by the 7th day on both sides. The visual analog scale pain scores did not differ significantly between the sides on the 1st day; meanwhile, pain scores were significantly lower on the test sides compared with the control sides on the 3rd (P=0.001) and 7th days (P˂0.001) postoperatively. Regarding long-term outcomes, CGF sites had higher values in height and width compared to control sites (buccal wall 32.9±3.5 vs 29.4±4.3 mm, lingual wall 25.4±3.5 vs 23.1±4 mm, and alveolar bone width 21.07±1.55 vs 19.53±1.90 mm, respectively).
Bone density showed significantly higher values at CGF sites than at control sites (coronal half 200±127.3 vs -84.1±121.3, apical half 406.5±103 vs 64.2±158.6, respectively). There was also a significant difference between the sites in the reduction of periodontal pockets. Conclusion: CGF application following surgical extraction provides an easy, low-cost, and efficient option for alveolar ridge preservation. Thus, dentists may consider using CGF during dental extractions, particularly when alveolar ridge preservation is required.
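The split-mouth comparisons above pair each subject's test side with their own control side, which calls for a paired t statistic. A minimal sketch on hypothetical day-3 swelling scores (illustrative values, not the trial's raw data):

```python
import math

def paired_t(control, test):
    """Paired t statistic for split-mouth data: each subject contributes
    one control-side and one test-side measurement.
    t = mean(d) / (sd(d) / sqrt(n)), with d_i = test_i - control_i."""
    n = len(control)
    d = [t_val - c for c, t_val in zip(control, test)]
    mean_d = sum(d) / n
    var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)  # sample variance
    return mean_d / math.sqrt(var_d / n)

# Hypothetical day-3 swelling scores for six subjects.
control = [2.5, 2.8, 2.4, 2.7, 2.6, 2.8]
cgf     = [1.4, 1.5, 1.3, 1.5, 1.4, 1.4]
t_stat = paired_t(control, cgf)  # strongly negative: CGF sides swell less
```

Pairing removes between-subject variability, which is why a split-mouth design can reach significance with only 30 participants.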

Keywords: platelet, extraction, impacted teeth, alveolar ridge, regeneration, CGF

Procedia PDF Downloads 46
385 Nanoparticles Modification by Grafting Strategies for the Development of Hybrid Nanocomposites

Authors: Irati Barandiaran, Xabier Velasco-Iza, Galder Kortaberria

Abstract:

Hybrid inorganic/organic nanostructured materials based on block copolymers are of considerable interest in the field of nanotechnology, as these nanocomposites combine the properties of the polymer matrix with the unique properties of the added nanoparticles. The use of block copolymers as templates offers the opportunity to control the size and distribution of the inorganic nanoparticles. This research is focused on the surface modification of inorganic nanoparticles to reach a good interface between nanoparticles and polymer matrices, which hinders nanoparticle aggregation. The aim of this work is to obtain a good and selective dispersion of Fe3O4 magnetic nanoparticles in different block copolymers such as poly(styrene-b-methyl methacrylate) (PS-b-PMMA), poly(styrene-b-ε-caprolactone) (PS-b-PCL), poly(isoprene-b-methyl methacrylate) (PI-b-PMMA), or poly(styrene-b-butadiene-b-methyl methacrylate) (SBM) by using different grafting strategies. Fe3O4 magnetic nanoparticles were surface-modified with polymer or block copolymer brushes following different grafting methods (grafting to, grafting from, and grafting through) to achieve a selective location of the nanoparticles in the desired domains of the block copolymers. The morphology of the fabricated hybrid nanocomposites was studied by atomic force microscopy (AFM), and different annealing methods were used with the aim of reaching well-ordered nanostructured composites. Additionally, the nanoparticle amount was varied in order to investigate the effect of nanoparticle content on the morphology of the block copolymer. Different characterization methods are nowadays used to investigate the magnetic properties of nanometer-scale electronic devices; in particular, two techniques were used to characterize the synthesized nanocomposites.
First, magnetic force microscopy (MFM) was used to qualitatively investigate the magnetic properties, since this technique allows magnetic domains on the sample surface to be distinguished. Second, magnetic characterization was performed by vibrating sample magnetometry and superconducting quantum interference device (SQUID) magnetometry. These measurements demonstrated that the magnetic properties of the nanoparticles were transferred to the nanocomposites, which exhibit superparamagnetic behavior similar to that of maghemite nanoparticles at room temperature. The obtained advanced nanostructured materials could find applications in the field of dye-sensitized solar cells and electronic nanodevices.
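Room-temperature superparamagnetic behavior of the kind reported here is conventionally described by the Langevin function. A minimal sketch follows; the particle moment and saturation magnetization used in any real fit would be sample-specific values not given in this abstract:

```python
import math

def langevin_magnetization(H, Ms, mu, T):
    """Superparamagnetic magnetization curve via the Langevin function:
    M(H) = Ms * (coth(x) - 1/x), with x = mu * mu0 * H / (kB * T).
    H: applied field (A/m), Ms: saturation magnetization (A/m),
    mu: single-particle moment (A m^2), T: temperature (K)."""
    MU0 = 4e-7 * math.pi          # vacuum permeability (T m/A)
    KB = 1.380649e-23             # Boltzmann constant (J/K)
    x = mu * MU0 * H / (KB * T)
    if abs(x) < 1e-6:             # small-field limit: L(x) ~ x/3
        return Ms * x / 3.0
    return Ms * (1.0 / math.tanh(x) - 1.0 / x)
```

The signature of superparamagnetism captured by this model is an anhysteretic, S-shaped M(H) curve: zero remanence at H = 0 and saturation toward Ms at high field, which is what VSM/SQUID loops of such nanocomposites typically show at room temperature.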

Keywords: atomic force microscopy, block copolymers, grafting techniques, iron oxide nanoparticles

Procedia PDF Downloads 240