Search results for: written description requirement
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2791


31 Green Architecture from the Thawing Arctic: Reconstructing Traditions for Future Resilience

Authors: Nancy Mackin

Abstract:

Historically, architects from Aalto to Gaudi to Wright have looked to the architectural knowledge of long-resident peoples for forms and structural principles specifically adapted to the regional climate, geology, materials availability, and culture. In this research, structures traditionally built by Inuit peoples in a remote region of the Canadian high Arctic provide a folio of architectural ideas that are increasingly relevant in these times of escalating carbon emissions and climate change. ‘Green Architecture from the Thawing Arctic’ researches, draws, models, and reconstructs traditional buildings of Inuit (Eskimo) peoples in three remote, often inaccessible Arctic communities. Structures verified in pre-contact oral history and early written history are first recorded in architectural drawings, then modeled and, with the participation of Inuit young people, local scientists, and Elders, reconstructed as emergency shelters. Three full-sized building types are constructed: a driftwood and turf-clad A-frame (spring/summer); a stone/bone/turf house with inwardly spiraling walls and a fan-shaped floor plan (autumn); and a parabolic/catenary arch-shaped dome of willow, turf, and skins (autumn/winter). Each reconstruction is filmed and featured in a short video. 
Communities found that the reconstructed buildings, and the method of involving young people and Elders in the reconstructions, have ongoing usefulness, as follows: 1) The reconstructions provide emergency shelters, particularly needed as climate change worsens storms, floods, and freeze-thaw cycles, and scientists and food harvesters who must work out on the land become stranded more frequently; 2) People from the communities re-learned from their Elders how to use materials from close at hand to construct impromptu shelters; 3) Forms from tradition, such as windbreaks at entrances and using levels to trap warmth within winter buildings, can be adapted and used in modern community buildings and housing; and 4) The project initiates much-needed educational and employment opportunities in the applied sciences (engineering and architecture), construction, and climate change monitoring, all offered in a culturally responsive way. Elders, architects, scientists, and young people added innovations to the traditions as they worked, thereby suggesting new sustainable, culturally meaningful building forms and materials combinations that can be used for modern buildings. Adding to the growing interest in bio-mimicry, participants looked at properties of Arctic and subarctic materials such as moss (insulation), shrub bark (waterproofing), and willow withes (parabolic and catenary arched forms). ‘Green Architecture from the Thawing Arctic’ demonstrates the effective, useful architectural oeuvre of a resilient northern people. The research parallels efforts elsewhere in the world to revitalize long-resident peoples’ architectural knowledge, in the interests of designing sustainable buildings that reflect culture, heritage, and identity.

Keywords: architectural culture and identity, climate change, forms from nature, Inuit architecture, locally sourced biodegradable materials, traditional architectural knowledge, traditional Inuit knowledge

Procedia PDF Downloads 493
30 Extension of Moral Agency to Artificial Agents

Authors: Sofia Quaglia, Carmine Di Martino, Brendan Tierney

Abstract:

Artificial Intelligence (A.I.) permeates many aspects of modern life, from the Machine Learning algorithms predicting stocks on Wall Street to the killing of belligerents and innocents alike on the battlefield. Moreover, the end goal is to create autonomous A.I., meaning that humans will be absent from the decision-making process. The question comes naturally: when an A.I. does something wrong, when its behavior is harmful to the community and its actions go against the law, who is to be held responsible? This research in A.I. and Robot Ethics focuses mainly on Robot Rights, and its ultimate objective is to answer the questions: (i) What is the function of rights? (ii) Who is a right holder, what is personhood, and what requirements are needed to be a moral agent (and therefore accountable)? (iii) Can an A.I. be a moral agent (the ontological requirements)? And finally, (iv) ought it to be one (the ethical implications)? To answer these questions, the project was carried out as a collaboration between the School of Computer Science at the Technological University Dublin, which oversaw the technical aspects of the work, and the Department of Philosophy at the University of Milan, which supervised the philosophical framework and argumentation. Firstly, it was found that all rights are positive and based on consensus; they change with time based on circumstances. Their function is to protect the social fabric and avoid dangerous situations. The same goes for the requirements considered necessary for moral agency: they are not absolute and are, in fact, constantly redesigned. Hence, the next logical step was to identify which requirements are regarded as fundamental in real-world judicial systems, comparing them to those used in philosophy. 
Autonomy, free will, intentionality, consciousness, and responsibility were identified as the requirements for moral agency. The work went on to build a symmetrical comparison between personhood and A.I. to let the ontological differences between the two emerge. Each requirement is introduced, explained through the most relevant theories of contemporary philosophy, and observed in its manifestation in A.I. Finally, after completing the philosophical and technical analysis, conclusions were drawn. As the research questions underline, there are two issues regarding the assignment of moral agency to artificial agents: first, whether all the ontological requirements are present; and second, present or not, whether an A.I. ought to be considered an artificial moral agent. From an ontological point of view, it is very hard to prove that an A.I. could be autonomous, free, intentional, conscious, and responsible. The philosophical accounts are often highly theoretical and inconclusive, making it difficult to detect these requirements at an experimental level of demonstration. From an ethical point of view, however, it makes sense to consider some A.I. as artificial moral agents, and hence responsible for their own actions. When artificial agents are considered responsible, norms already existing in our judicial system can be applied to them, such as removing them from society and re-educating them in order to re-introduce them to society; this is in line with how the highest-profile correctional facilities ought to work. Noticeably, this is a provisional conclusion, and research must continue further. Nevertheless, the strength of the presented argument lies in its immediate applicability to real-world scenarios. To return to the aforementioned incidents involving the killing of innocents: when this thesis is applied, it is possible to hold an A.I. accountable and responsible for its actions. This entails removing it from society by virtue of its unusability, re-programming it and, only when it is properly functioning, successfully re-introducing it.

Keywords: artificial agency, correctional system, ethics, natural agency, responsibility

Procedia PDF Downloads 156
29 Effect of a Nutritional Supplement Containing Euterpe oleracea Mart., Inulin, Phaseolus vulgaris and Caralluma fimbriata in Persons with Metabolic Syndrome

Authors: Eduardo Cabrera-Rode, Janet Rodriguez, Aimee Alvarez, Ragmila Echevarria, Antonio D. Reyes, Ileana Cubas-Duenas, Silvia E. Turcios, Oscar Diaz-Diaz

Abstract:

Obex is a nutritional supplement intended to support natural weight loss. In addition, it has a satiating effect that helps control the craving to eat between meals. The purpose of this study was to evaluate the effect of Obex on the metabolic syndrome (MS). This was an open-label pilot study conducted in 30 patients with MS aged between 29 and 60 years. Participants received Obex at a dose of one sachet 30 to 45 minutes before each of the two main meals (lunch and dinner) daily (i.e., two sachets per day) for 3 months. The content of the sachets was dissolved in a glass of water or fruit juice. Obex ingredients: Açai (Euterpe oleracea Mart.) berry, inulin, Phaseolus vulgaris, Caralluma fimbriata, inositol, choline, arginine, ornithine, zinc sulfate, carnitine fumarate, methionine, calcium pantothenate, pyridoxine, and folic acid. In addition to anthropometric measures and blood pressure, fasting plasma glucose, total cholesterol, triglycerides, HDL-cholesterol, and insulin were determined. Insulin resistance was assessed by the HOMA-IR index. Three indirect indexes were used to calculate insulin sensitivity: the QUICKI (Quantitative Insulin Sensitivity Check) index, the Bennett index, and the Raynaud index. Metabolic syndrome was defined according to the Joint Interim Statement (JIS) criteria, which require at least three of the following components: (1) abdominal obesity (waist circumference ≥ 94 cm for men or ≥ 80 cm for women), (2) triglycerides ≥ 1.7 mmol/L, (3) HDL cholesterol < 1.03 mmol/L for men or < 1.30 mmol/L for women, (4) systolic/diastolic blood pressure ≥ 130/85 mmHg or use of antihypertensive drugs, and (5) fasting plasma glucose ≥ 5.6 mmol/L or known treatment for diabetes. This study was approved by the Ethical and Research Committee of the National Institute of Endocrinology, Cuba, and conducted according to the Declaration of Helsinki. 
Obex is registered as a food supplement with the National Institute of Nutrition and Food, Havana, Cuba. Written consent was obtained from all patients before the study, and the clinical trial was registered at ClinicalTrials.gov. After three months of treatment, 43.3% (13/30) of participants no longer met the criteria for MS. Compared to baseline, Obex significantly reduced body weight, BMI, waist circumference, and waist/hip ratio, and improved HDL-c (p<0.0001), in addition to lowering blood pressure (p<0.05). After Obex intake, subjects also showed a reduction in fasting plasma glucose (p<0.0001), and insulin sensitivity was enhanced (p=0.001). No adverse effects were seen in any of the participants during the study. In this pilot study, consumption of Obex decreased the prevalence of MS by improving selected components of the metabolic syndrome, indicating that further studies are warranted. Obex emerges as an effective and well-tolerated treatment for preventing or delaying MS, with a potential reduction of cardiovascular risk.
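The JIS definition above is mechanical enough to express in code. The sketch below is illustrative and not part of the published study: it counts the five JIS components and computes the HOMA-IR index mentioned in the methods. The function names are our own; the thresholds are taken from the abstract.

```python
# Illustrative sketch of the JIS metabolic-syndrome criteria and the
# HOMA-IR index described in the abstract; not the study's own code.

def jis_components(sex, waist_cm, tg_mmol, hdl_mmol,
                   sbp, dbp, on_bp_drugs, glucose_mmol, on_dm_treatment):
    """Return the number of JIS components present (MS requires >= 3)."""
    waist_cut = 94 if sex == "M" else 80     # sex-specific waist cutoff
    hdl_cut = 1.03 if sex == "M" else 1.30   # sex-specific HDL cutoff
    return sum([
        waist_cm >= waist_cut,                   # (1) abdominal obesity
        tg_mmol >= 1.7,                          # (2) triglycerides
        hdl_mmol < hdl_cut,                      # (3) low HDL cholesterol
        sbp >= 130 or dbp >= 85 or on_bp_drugs,  # (4) blood pressure
        glucose_mmol >= 5.6 or on_dm_treatment,  # (5) fasting glucose
    ])

def has_metabolic_syndrome(**kw):
    return jis_components(**kw) >= 3

def homa_ir(glucose_mmol, insulin_uU_ml):
    """HOMA-IR insulin-resistance index: glucose x insulin / 22.5."""
    return glucose_mmol * insulin_uU_ml / 22.5
```

A patient is flagged as soon as any three of the five components hold, which is why the counts, rather than a fixed combination, are summed.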

Keywords: nutritional supplement, metabolic syndrome, weight loss, insulin resistance

Procedia PDF Downloads 271
28 Thematic Analysis of Ramayana Narrative Scroll Paintings: A Need for Knowledge Preservation

Authors: Shatarupa Thakurta Roy

Abstract:

Alongside the limelight of mainstream academic practice in Indian art exists a significant body of habitual art practices that are mutually susceptible to influence in their contemporary forms. Narrative folk paintings of regional India have successfully conveyed social messages to their audiences through pulsating pictures and oration. The paper presents images from narrative scroll paintings on the ‘Ramayana’ theme from various neighboring states and districts of India, describing their subtle differences in style of execution, method, and use of material. Despite sharing commonness in the choice of subject matter, habitual and ceremonial Indian folk art in its formative phase thrived within isolated locations, yielding a remarkable variety of art styles. The differences in style arose district-wise, caste-wise, and even gender-wise. An open flow is only evident in the contemporary expressions, the result of substantial changes in social structures, modes of communication, cross-cultural exposure, and multimedia interactivity. To decipher the complex nature of popular cultural taste in contemporary India, it is important to categorically identify its roots in vernacular symbolism. The realization of modernity through European primitivism was elevated as a perplexed identity at the Indian cultural margin in the light of nationalist and postcolonial ideology. To trace the guiding factor that has still managed to retain ‘Indianness’ in today’s Indian art, researchers need evidence from the past that, in most instances, is yet to be catalogued. These works are commonly created on ephemeral foundations, are often found in an endangered state, and hence do not tolerate frequent handling. The museums lack proper technological guidelines to preserve them, and even though restoration activities are emerging in the country, the existing withered and damaged artworks are under threat of perishing. 
An immediacy of digital archiving is therefore envisioned as an alternative means of saving this cultural legacy. The method of this study is twofold. It primarily justifies the richness of the evidence by conducting categorical aesthetic analysis, supported by comments on the stylistic variants, thematic aspects, and iconographic identities alongside their anthropological and anthropomorphic significance. Further, it explores possible means of cultural preservation to ensure cultural sustainability, including technological intervention in the form of digital transformation as an altered paradigm for better accessibility to the available resources. The study emphasizes visual description in order to culturally interpret and judge these rare visual records, following Feldman’s four-step method of formal analysis combined with thematic explanation. A habitual design that emerges and thrives within complex social circumstances may experience change that places its principal philosophy at risk through shuffling and alteration over time. A tradition that respires in a modern setup struggles to maintain the timeless values that drive its creative flow. The paper therefore hypothesizes the survival and further growth of this practice within the dynamics of time, and concludes by recognizing the urgency of transforming the implicitness of this knowledge into explicit records.

Keywords: aesthetic, identity, implicitness, paradigm

Procedia PDF Downloads 340
27 Sensorless Machine Parameter-Free Control of Doubly Fed Reluctance Wind Turbine Generator

Authors: Mohammad R. Aghakashkooli, Milutin G. Jovanovic

Abstract:

The brushless doubly-fed reluctance generator (BDFRG) is an emerging, medium-speed alternative to a conventional wound rotor slip-ring doubly-fed induction generator (DFIG) in wind energy conversion systems (WECS). It can provide competitive overall performance and similar low failure rates of a typically 30% rated back-to-back power electronics converter in 2:1 speed ranges but with the following important reliability and cost advantages over DFIG: the maintenance-free operation afforded by its brushless structure, 50% synchronous speed with the same number of rotor poles (allowing the use of a more compact, and more efficient two-stage gearbox instead of a vulnerable three-stage one), and superior grid integration properties including simpler protection for the low voltage ride through compliance of the fractional converter due to the comparatively higher leakage inductances and lower fault currents. Vector controlled pulse-width-modulated converters generally feature a much lower total harmonic distortion relative to hysteresis counterparts with variable switching rates and as such have been a predominant choice for BDFRG (and DFIG) wind turbines. Eliminating a shaft position sensor, which is often required for control implementation in this case, would be desirable to address the associated reliability issues. This fact has largely motivated the recent growing research of sensorless methods and developments of various rotor position and/or speed estimation techniques for this purpose. The main limitation of all the observer-based control approaches for grid-connected wind power applications of the BDFRG reported in the open literature is the requirement for pre-commissioning procedures and prior knowledge of the machine inductances, which are usually difficult to accurately identify by off-line testing. A model reference adaptive system (MRAS) based sensor-less vector control scheme to be presented will overcome this shortcoming. 
The true machine-parameter independence of the proposed field-oriented algorithm, offering robust, inherently decoupled real and reactive power control of the grid-connected winding, is achieved by on-line estimation of the inductance ratio, upon which the underlying MRAS observer for rotor angular velocity and position relies. Such an observer configuration is more practical to implement and clearly preferable to the existing machine-parameter-dependent solutions, especially since, with very few modifications, it can be adapted for commercial DFIGs, with immediately obvious further industrial benefits and prospects for this work. The excellent encoder-less controller performance with maximum power point tracking in the base speed region will be demonstrated by realistic simulation studies using large-scale BDFRG design data and verified by experimental results on a small laboratory prototype of the WECS emulation facility.
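The MRAS principle the abstract builds on can be illustrated with a generic toy example: a reference model supplies a measured quantity (here, an electrical angle), an adjustable model reproduces it from the speed estimate, and the error between the two drives a PI adaptation law. This sketch is not the authors' parameter-free BDFRG observer; the plant, gains, and structure below are hypothetical, chosen only to expose the adaptation mechanism.

```python
import math

# Toy MRAS speed estimator: the "reference model" integrates the true
# (unknown) speed, the "adjustable model" integrates the estimate, and
# the sine of the angle difference (insensitive to 2*pi wraps) feeds a
# PI adaptation law that pulls the estimate toward the true speed.

def mras_speed_estimate(omega_true, dt=1e-4, steps=50_000,
                        kp=40.0, ki=400.0):
    """Track an unknown constant speed omega_true (rad/s)."""
    theta_ref = 0.0   # angle from the reference model ("measurement")
    theta_hat = 0.0   # angle integrated by the adjustable model
    omega_hat = 0.0   # adapted speed estimate
    integ = 0.0       # integral state of the adaptation law
    for _ in range(steps):
        theta_ref += omega_true * dt           # reference model
        theta_hat += omega_hat * dt            # adjustable model
        err = math.sin(theta_ref - theta_hat)  # wrap-insensitive error
        integ += err * dt
        omega_hat = kp * err + ki * integ      # PI adaptation mechanism
    return omega_hat
```

Because the adaptation law is type-II (it contains an integrator), the steady-state speed error goes to zero for any constant speed, which is the property that makes MRAS observers attractive for encoder-less control.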

Keywords: brushless doubly fed reluctance generator, model reference adaptive system, sensorless vector control, wind energy conversion

Procedia PDF Downloads 36
26 Fuzzy Data, Random Drift, and a Theoretical Model for the Sequential Emergence of Religious Capacity in Genus Homo

Authors: Margaret Boone Rappaport, Christopher J. Corbally

Abstract:

The ancient ape ancestral population from which living great ape and human species evolved had demographic features affecting their evolution. The population was large, had great genetic variability, and natural selection was effective at honing adaptations. The emerging populations of chimpanzees and humans were affected more by founder effects and genetic drift because they were smaller. Natural selection did not disappear, but it was not as strong. Consequences of the 'population crash' and the human effective population size are introduced briefly. The history of the ancient apes is written in the genomes of living humans and great apes. The expansion of the brain began before the human line emerged. Coalescence times for some genes are very old, up to several million years, long before Homo sapiens. The mismatch between gene trees and species trees highlights the anthropoid speciation processes and gives the human genome history a fuzzy, probabilistic quality. However, it suggests traits that might form a foundation for capacities emerging later. A theoretical model is presented in which the genomes of early ape populations provide the substructure for the emergence of religious capacity later on the human line. The model does not search for religion, but for its foundations. It suggests a course by which an evolutionary line that began with prosimians eventually produced a human species with biologically based religious capacity. The model of the sequential emergence of religious capacity relies on cognitive science, neuroscience, paleoneurology, primate field studies, cognitive archaeology, genomics, and population genetics, and it emphasizes five trait types: (1) Documented, positive selection of sensory capabilities on the human line may have favored survival, but also eventually enriched human religious experience. 
(2) The bonobo model suggests a possible down-regulation of aggression and increase in tolerance while feeding, as well as paedomorphism – but, in a human species that remains cognitively sharp (unlike the bonobo). The two species emerged from the same ancient ape population, so it is logical to search for shared traits. (3) An up-regulation of emotional sensitivity and compassion seems to have occurred on the human line. This finds support in modern genetic studies. (4) The authors’ published model of morality's emergence in Homo erectus encompasses a cognitively based, decision-making capacity that was hypothetically overtaken, in part, by religious capacity. Together, they produced a strong, variable, biocultural capability to support human sociability. (5) The full flowering of human religious capacity came with the parietal expansion and smaller face (klinorhynchy) found only in Homo sapiens. Details from paleoneurology suggest the stage was set for human theologies. Larger parietal lobes allowed humans to imagine inner spaces, processes, and beings, and, with the frontal lobe, led to the first theologies composed of structured and integrated theories of the relationships between humans and the supernatural. The model leads to the evolution of a small population of African hominins that was ready to emerge with religious capacity when the species Homo sapiens evolved two hundred thousand years ago. By 50-60,000 years ago, when human ancestors left Africa, they were fully enabled.

Keywords: genetic drift, genomics, parietal expansion, religious capacity

Procedia PDF Downloads 315
25 Large-Scale Simulations of Turbulence Using Discontinuous Spectral Element Method

Authors: A. Peyvan, D. Li, J. Komperda, F. Mashayek

Abstract:

Turbulence can be observed in a variety of fluid motions in nature and industrial applications. Recent investment in high-speed aircraft and propulsion systems has revitalized fundamental research on turbulent flows. In these systems, capturing chaotic fluid structures with different length and time scales is accomplished through the Direct Numerical Simulation (DNS) approach, since it accurately simulates flows down to the smallest dissipative scales, i.e., Kolmogorov's scales. The discontinuous spectral element method (DSEM) is a high-order technique that uses spectral functions to approximate the solution. The DSEM code has been developed by our research group over the course of more than two decades. Recently, the code has been improved to run large cases on the order of billions of solution points. Running big simulations requires a considerable amount of RAM. Therefore, the DSEM code must be highly parallelized and able to start on multiple computational nodes of an HPC cluster with distributed memory. However, some pre-processing procedures, such as determining global element information, creating a global face list, and assigning global partitioning and element connection information of the domain for communication, must be done sequentially on a single processing core. A separate code has been written to perform the pre-processing procedures on a local machine. It extracts from the mesh file the minimum amount of information the DSEM code needs to start in parallel and stores it in files (pre-files). It packs integer-type information in a stream binary format, so the pre-files are portable between machines. The files are generated to ensure fast read performance on different file systems, such as Lustre and the General Parallel File System (GPFS). A new subroutine has been added to the DSEM code to read the startup files using parallel MPI I/O, for Lustre, such that each MPI rank acquires its information from the file in parallel. 
In the case of GPFS, on each computational node a single MPI rank reads data from a file generated specifically for that node and sends them to the other ranks on the node using point-to-point non-blocking MPI communication. This way, communication takes place locally on each node, and signals do not cross the cluster switches. The read subroutine has been tested on Argonne National Laboratory's Mira (GPFS), the National Center for Supercomputing Applications' Blue Waters (Lustre), the San Diego Supercomputer Center's Comet (Lustre), and UIC's Extreme (Lustre). The tests showed that one file per node is suited to GPFS, while parallel MPI I/O is the best choice for the Lustre file system. The DSEM code relies on heavily optimized linear algebra operations, such as matrix-matrix and matrix-vector products, to calculate the solution in every time step. For this, the code can make use of either its own matrix math library or a BLAS implementation such as Intel MKL or ATLAS. This fact, and the discontinuous nature of the method, make the DSEM code run efficiently in parallel. The results of weak scaling tests performed on Blue Waters showed a scalable and efficient performance of the code in parallel computing.
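The pre-file idea, fixed-width integer records packed in a portable stream-binary layout so that each rank can locate its own slice, can be sketched as follows. The record layout here is hypothetical; the actual DSEM pre-file format is not described in the abstract.

```python
import io
import struct

# Sketch of a "pre-file": the sequential pre-processor packs integer
# startup data (here, a global-element-to-rank partition map) into
# portable, fixed-width, little-endian binary records. Fixed-width
# records let each MPI rank seek() straight to its own slice of the
# file, which is what the parallel MPI I/O read exploits on Lustre.

def write_prefile(stream, partition_map):
    """partition_map: list of (global_element_id, owner_rank) pairs."""
    stream.write(struct.pack("<i", len(partition_map)))  # record count
    for gid, rank in partition_map:
        stream.write(struct.pack("<ii", gid, rank))      # one record

def read_prefile(stream):
    """Read every record back; a rank could instead seek to its slice."""
    (n,) = struct.unpack("<i", stream.read(4))
    return [struct.unpack("<ii", stream.read(8)) for _ in range(n)]

# Round trip through an in-memory buffer standing in for the file:
buf = io.BytesIO()
write_prefile(buf, [(0, 0), (1, 0), (2, 1)])
buf.seek(0)
```

Fixing the endianness and field widths in the format string is what makes the file portable between the pre-processing machine and the cluster nodes.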

Keywords: computational fluid dynamics, direct numerical simulation, spectral element, turbulent flow

Procedia PDF Downloads 106
24 Modern Technology for Strengthening Concrete Structures Makes Them Resistant to Earthquakes

Authors: Mohsen Abdelrazek Khorshid Ali Selim

Abstract:

Disadvantages and errors of current concrete reinforcement methods: Current reinforcement methods, adopted in most parts of the world under various schools and names, rely on the so-called concrete slab system, in which the slabs are semi-independent, isolated from one another and from the surrounding columns and beams: the reinforcing steel crosses from one slab to another, or from a slab into the adjacent columns and beams (and vice versa), by only a few centimeters. The same applies to the columns that support the building, where the reinforcing steel extends from the slabs or beams into the columns, or from the top of a column into the ceiling, by only a few centimeters; and it is repeated in the beams that connect the columns and separate the slabs, where the reinforcing steel crosses from one beam to another, or from a beam into the adjacent slabs and columns, by only a few centimeters. As a result, the basic building elements (columns, slabs, and beams) all work in isolation from each other and from their surroundings. This traditional method of reinforcement may be adequate and durable in geographical areas not exposed to earthquakes, where all the loads and tensile forces in the building are directed vertically downward by gravity and are carried directly by the vertical reinforcement.
During an earthquake, however, the loads and tensile forces in the building shift from the vertical to the horizontal direction at an angle of inclination that depends on the strength of the quake, and most of them are carried by the horizontal reinforcement extending between the basic elements of the building (columns, slabs, and beams). Since the reinforcement crossing between these elements extends only a few centimeters, the tensile strength, cohesion, and bonding between the different parts of the building are very weak. This causes buildings to disintegrate and collapse in the horrific manner seen in the Turkey-Syria earthquake of February 2023, which brought down tens of thousands of buildings in a few seconds, leaving more than 50,000 dead, hundreds of thousands injured, and millions displaced.
Description of the new earthquake-resistant model: The idea of the new model for reinforcing concrete buildings and structures rests on the theory we have formulated as follows: the tensile strength, cohesion, and bonding between the basic parts of a concrete building (columns, beams, and slabs) increase as the reinforcing steel bars grow longer, extend and branch further, and are shared between the different parts of the building. In other words, the strength, solidity, and cohesion of a concrete building increase, and it becomes earthquake resistant, as the reinforcing bars lengthen, extend, branch, and are shared among the columns, beams, and slabs. The column bars must therefore run their full length, without cutting, from one floor to the next until their end. Likewise, the beam bars must run full length, without cutting, from one beam to the next, with their ends anchored at the bottom of the columns adjacent to the beams; and the slab bars must run full length, without cutting, from one slab to the next, with their ends resting either under the adjacent columns or inside the beams adjacent to the slabs, as follows.
First, reinforcing the columns: The columns receive the lion's share of the reinforcing steel in this model, in both type and quantity, since they contain two kinds of bars. The first kind consists of large-diameter bars rising from the base of the building, which form the ribs of the column; these bars must run their full standard length of 12 meters or more, spanning up to three floors, and if further floors are desired, bars of the same diameter and length are added above the second floor. The second kind consists of smaller-diameter bars, the same ones used to reinforce the beams and slabs: the bars reinforcing the beams and slabs facing each column are bent down inside that column along its entire length. This requires something most engineers do not prefer, namely pouring the entire columns and the roof in a single operation, but we favor this method because it allows the reinforcing bars of both the beams and the slabs to be carried down to the bottom of the columns, so that the whole building becomes one cohesive, earthquake-resistant concrete block.
Second, reinforcing the beams: The beam bars must likewise extend a full 12 meters or more without cutting. The end of each bar is bent down inside the column at the start of the beam, reaching its bottom; the bar then runs through the beam so that its other end drops beneath the facing column at the end of the beam. A bar may also cross over the head of a column, pass through an adjacent beam, and rest at the bottom of a third column, depending on the lengths of the bars and beams.
Third, reinforcing the slabs: The slab bars must also extend their full length of 12 meters or more without cutting, and two cases are distinguished. In the first case, the bars facing the columns: their ends are dropped inside one of the columns, the bars cross through the adjacent slab, and their other ends fall below the opposite column; a bar may also cross over the head of the adjacent column, pass through another adjacent slab, and rest at the bottom of a third column, depending on the dimensions of the slabs and the lengths of the bars. In the second case, the bars facing the beams: their ends are bent into a square or rectangle matching the width and height of the beam, and this loop is dropped inside the beam at the start of the slab, where it serves as a stirrup for the beam. The bar is then extended along the length of the slab and, at the end of the slab, bent down into the adjacent beam in the shape of a U, after which it is carried on into the adjacent slab; this is repeated in the same way through the other adjacent beams until the end of the bar, which is finally bent down into a square or rectangle inside the last beam, as at its start.

Keywords: earthquake resistant buildings, earthquake resistant concrete constructions, new technology for reinforcement of concrete buildings, new technology in concrete reinforcement

Procedia PDF Downloads 33
23 Neologisms and Word-Formation Processes in Board Game Rulebook Corpus: Preliminary Results

Authors: Athanasios Karasimos, Vasiliki Makri

Abstract:

This research focuses on the design and development of the first text corpus based on Board Game Rulebooks (BGRC), with direct application to the morphological analysis of neologisms and tendencies in word-formation processes. Corpus linguistics is a dynamic field that examines language through the lens of vast collections of texts. These corpora consist of diverse written and spoken materials, ranging from literature and newspapers to transcripts of everyday conversations. By morphologically analyzing these extensive datasets, morphologists can gain valuable insights into how language functions and evolves, as these datasets reflect the byproducts of inflection, derivation, blending, clipping, compounding, and neology. This entails scrutinizing how words are created, modified, and combined to convey meaning in a corpus of challenging, creative, and straightforward texts that include rules, examples, tutorials, and tips. Board games teach players how to strategize, consider alternatives, and think flexibly, which are critical elements in language learning. Their rulebooks reflect not only their weight (complexity) but also the language properties of each genre and subgenre of these games. Board games are a captivating realm where strategy, competition, and creativity converge. Beyond the excitement of gameplay, board games also spark the art of word creation. Word games like Scrabble, Codenames, Bananagrams, Wordcraft, Alice in the Wordland, and Once Upon a Time challenge players to construct words from a pool of letters, thus encouraging linguistic ingenuity and vocabulary expansion. These games foster a love for language, motivating players to unearth obscure words and devise clever combinations. 
On the other hand, the designers and creators produce rulebooks, where they include their joy of discovering the hidden potential of language, igniting the imagination, and playing with the beauty of words, making these games a delightful fusion of linguistic exploration and leisurely amusement. In this research, more than 150 rulebooks in English from all types of modern board games, either language-independent or language-dependent, are used to create the BGRC. A representative sample of each genre (family, party, worker placement, deckbuilding, dice, and chance games, strategy, eurogames, thematic, role-playing, among others) was selected based on the score from BoardGameGeek, the size of the texts and the level of complexity (weight) of the game. A morphological model with morphological networks, multi-word expressions, and word-creation mechanics based on the complexity of the textual structure, difficulty, and board game category will be presented. In enabling the identification of patterns, trends, and variations in word formation and other morphological processes, this research aspires to make avail of this creative yet strict text genre so as to (a) give invaluable insight into morphological creativity and innovation that (re)shape the lexicon of the English language and (b) test morphological theories. Overall, it is shown that corpus linguistics empowers us to explore the intricate tapestry of language, and morphology in particular, revealing its richness, flexibility, and adaptability in the ever-evolving landscape of human expression.
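A first computational pass over such a corpus is out-of-lexicon filtering: flagging tokens absent from a reference word list as candidate neologisms for manual morphological analysis. The sketch below illustrates the idea on a made-up rulebook snippet; the token `mana-bursts`, the tiny lexicon, and the regex tokenizer are illustrative assumptions, not BGRC data or the authors' actual pipeline.

```python
# Sketch: flagging candidate neologisms in rulebook text by checking
# tokens against a reference lexicon. The lexicon and the rulebook
# snippet are illustrative placeholders, not data from the BGRC.
import re
from collections import Counter

def candidate_neologisms(text, lexicon):
    """Return out-of-lexicon tokens with frequencies: a first-pass
    filter before manual morphological analysis."""
    # Keep hyphenated forms whole, since blends/compounds are of interest
    tokens = re.findall(r"[a-z]+(?:-[a-z]+)*", text.lower())
    return Counter(t for t in tokens if t not in lexicon)

lexicon = {"each", "player", "draws", "a", "card", "then", "places",
           "one", "worker", "and", "again"}
rules = ("Each player draws a card, then mana-bursts: "
         "places one worker and mana-bursts again.")
print(candidate_neologisms(rules, lexicon))
```

A real pipeline would tokenize against a full dictionary, filter out proper nouns and game-specific jargon, and pass the surviving candidates to morphological analysis (blending, compounding, clipping, and so on).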

Keywords: board game rulebooks, corpus design, morphological innovations, neologisms, word-formation processes

Procedia PDF Downloads 56
22 Problem, Policy and Polity in Agenda Setting: Analyzing Safe Motherhood Program in India

Authors: Vanita Singh

Abstract:

In developing countries, there are conflicting political agendas; policy makers have to prioritize among many issues competing for limited resources. It is therefore imperative to understand how some issues gain attention in policy circles while others lose it. Kingdon's (1984) Multiple-Streams Theory is among the influential theories that help explain the public policy process and is useful for health policy makers seeking to understand how certain health issues emerge on policy agendas. The issue of maternal mortality was long-standing in India and was linked with the high birth rate; thus, since India's independence, the focus of maternal health policy was on family planning. However, a paradigm shift was noted in maternal health policy in 1992 with the launch of the Safe Motherhood Programme, and again in 2005, when the agenda of maternal health policy became universalizing institutional deliveries and phasing out Traditional Birth Attendants (TBAs) from the health system. Policy communities proposed many solutions other than universalizing institutional deliveries, including training TBAs and improving the socio-economic conditions of pregnant women. However, the Government of India favored the medical community, which was advocating the policy of universalizing institutional delivery, and neglected the solutions proposed by other policy communities. It took almost 15 years for the advocates of institutional delivery to transform their proposed solution into a program: the Janani Suraksha Yojana (JSY), a safe-motherhood program promoting institutional delivery through cash incentives to pregnant women. The case of safe motherhood policy in India is therefore worth studying to understand how certain issues gain political attention and how advocacy works in policy circles. 
This paper attempts to understand the factors that favored the agenda of safe motherhood in policy circles in India, using John Kingdon's Multiple-Streams model of agenda-setting. Through document analysis and literature review, the paper traces the evolution of the safe motherhood program and maternal health policy. The study used open-source documents available on the website of the Ministry of Health and Family Welfare, media reports (Times of India archive), and related research papers. The documents analyzed include the National Health Policy 1983, the National Health Policy 2002, written reports of the Ministry of Health and Family Welfare, the National Rural Health Mission (NRHM) document, documents related to the Janani Suraksha Yojana, and research articles on maternal health programmes in India. The study finds that focusing events and credible indicators, coupled with media attention, have the potential to bring a problem to recognition. Political elites favor clearly defined and well-accepted solutions. Trans-national organizations affect the agenda-setting process in a country through conditional resource provision. Closely-knit policy communities and political entrepreneurship are required to push solutions high on agendas. The study has implications for health policy makers in identifying factors that can affect the agenda-setting process for a desired policy agenda and in identifying the challenges in generating political priority.

Keywords: agenda-setting, focusing events, Kingdon’s model, safe motherhood program India

Procedia PDF Downloads 112
21 A Modular Solution for Large-Scale Critical Industrial Scheduling Problems with Coupling of Other Optimization Problems

Authors: Ajit Rai, Hamza Deroui, Blandine Vacher, Khwansiri Ninpan, Arthur Aumont, Francesco Vitillo, Robert Plana

Abstract:

Large-scale critical industrial scheduling problems are based on Resource-Constrained Project Scheduling Problems (RCPSP) that necessitate integration with other optimization problems (e.g., vehicle routing, supply chain, or unique industrial ones), thus requiring practical solutions (i.e., modular and computationally efficient, with feasible solutions). To the best of our knowledge, the current industrial state of the art does not address this holistic problem. We propose an original modular solution that answers the issues exhibited by the delivery of complex projects. With three interlinked entities (project, tasks, resources), each having its own constraints, it uses a greedy heuristic with a dynamic cost function for each task and a situational assessment at each time step. It handles large-scale data and can be easily integrated with other optimization problems, already existing industrial tools, and unique constraints as required by the use case. The solution has been tested and validated by domain experts on three use cases: outage management in Nuclear Power Plants (NPPs), planning of a future NPP maintenance operation, and application in the defense industry on supply chain and factory relocation. In the first use case, the solution, in addition to resource availability and the tasks' logical relationships, also integrates several project-specific constraints for outage management, such as handling resource incompatibility, updating task priorities, pausing tasks in specific circumstances, and adjusting dynamic units of resources. With more than 20,000 tasks and multiple constraints, the solution provides a feasible schedule within 10-15 minutes on a standard computer. This time-effectiveness matches the nature of the problem and the requirement of running several scenarios (30-40 simulations) before finalizing the schedules. 
The second use case is a factory relocation project where production lines must be moved to a new site while ensuring the continuity of their production. This raises the challenge of merging job shop scheduling and the RCPSP with location constraints. Our solution allows the automation of the production tasks while considering the expected production rate. The simulation algorithm manages the use and movement of resources and products to respect a given relocation scenario. The last use case concerns a future maintenance operation in an NPP. The project contains complex and hard constraints, such as zero-lag Finish-Start precedence relationships (i.e., successor tasks have to start immediately after their predecessors while respecting all constraints), shareable coactivity for managing workspaces, and requirements on the specific state of "cyclic" resources (which have multiple possible states, with only one active at a time) to perform tasks (a task can require a unique combination of several cyclic resources). Our solution satisfies the requirement of minimizing the state changes of cyclic resources coupled with makespan minimization. It solves an instance with 80 cyclic resources and 50 incompatibilities between levels in less than a minute. In conclusion, we propose a fast and feasible modular approach to various industrial scheduling problems, validated by domain experts and compatible with existing industrial tools. This approach can be further enhanced by using machine learning techniques on historically repeated tasks to gain further insights for delay-risk mitigation measures.
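To make the greedy mechanism concrete, here is a minimal sketch of a time-stepped heuristic for a toy RCPSP with a single renewable resource. The dynamic cost function used here (prioritizing tasks that unlock more successors, re-evaluated at every time step) and the task data are illustrative assumptions, not the authors' industrial model.

```python
# Minimal sketch of a greedy RCPSP heuristic with a situational
# assessment at each time step. One renewable resource; tasks are
# {name: (duration, resource_demand, predecessors)}. Illustrative only.

def schedule(tasks, capacity):
    """Greedy serial scheduler: at each time step, start eligible tasks
    (predecessors finished, resource available) in order of a dynamic
    priority; here, tasks with more successors are started first."""
    succ_count = {t: 0 for t in tasks}
    for _, _, preds in tasks.values():
        for p in preds:
            succ_count[p] += 1

    start, finish, running = {}, {}, []
    t = 0
    while len(start) < len(tasks):
        running = [(n, f) for n, f in running if f > t]  # drop finished
        used = sum(tasks[n][1] for n, _ in running)
        eligible = [n for n in tasks
                    if n not in start
                    and all(finish.get(p, float("inf")) <= t
                            for p in tasks[n][2])]
        # situational assessment: re-rank eligible tasks at every step
        for n in sorted(eligible, key=lambda n: -succ_count[n]):
            if used + tasks[n][1] <= capacity:
                start[n] = t
                finish[n] = t + tasks[n][0]
                running.append((n, finish[n]))
                used += tasks[n][1]
        t += 1
    return start, max(finish.values())

demo = {"A": (2, 1, []), "B": (3, 1, ["A"]),
        "C": (2, 1, ["A"]), "D": (1, 1, ["B", "C"])}
print(schedule(demo, capacity=2))  # A first, B and C in parallel, then D
```

An industrial version would replace the priority key with a richer dynamic cost (urgency, incompatibilities, paused tasks, cyclic-resource states) while keeping the same time-stepped greedy skeleton.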

Keywords: deterministic scheduling, optimization coupling, modular scheduling, RCPSP

Procedia PDF Downloads 155
20 Xen45 Gel Implant in Open Angle Glaucoma: Efficacy, Safety and Predictors of Outcome

Authors: Fossarello Maurizio, Mattana Giorgio, Tatti Filippo

Abstract:

The most widely performed surgical procedure in Open-Angle Glaucoma (OAG) is trabeculectomy. Although this filtering procedure is extremely effective, surgical failures and postoperative complications are reported. Due to its invasive nature and possible complications, trabeculectomy is usually reserved, in practice, for patients who are refractory to medical and laser therapy. Recently, a number of micro-invasive surgical techniques (MIGS: Micro-Invasive Glaucoma Surgery) have been introduced into clinical practice. They meet the criteria of a micro-incisional approach, minimal tissue damage, short surgical time, reliable IOP reduction, an extremely high safety profile, and rapid post-operative recovery. The Xen45 Gel Implant (Allergan, Dublin, Ireland) is one of the MIGS alternatives and consists of a porcine gelatin tube designed to create an aqueous flow from the anterior chamber to the subconjunctival space, bypassing the resistance of the trabecular meshwork. In this study, we report the results of this technique as a favorable option in the treatment of OAG for its benefits in terms of efficacy and safety, either alone or in combination with cataract surgery. This is a retrospective, single-center study conducted in consecutive OAG patients who underwent Xen45 Gel Stent implantation, alone or in combination with phacoemulsification, from October 2018 to June 2019. The primary endpoint of the study was to evaluate the reduction of both IOP and the number of antiglaucoma medications at 12 months. The secondary endpoint was to correlate filtering bleb morphology, evaluated by means of anterior segment OCT, with IOP-lowering efficacy and the eventual need for further procedures. Data were recorded in Microsoft Excel, and the study analysis was performed using Microsoft Excel and SPSS (IBM). Mean values with standard deviations were calculated for IOP and the number of antiglaucoma medications at all time points. 
The Kolmogorov-Smirnov test showed that IOP followed a normal distribution at all time points; therefore, the paired Student's t-test was used to compare baseline and postoperative mean IOP. The correlation between postoperative Day 1 IOP and Month 12 IOP was evaluated using the Pearson coefficient. Thirty-six eyes of 36 patients were evaluated. As compared to baseline, mean IOP and the mean number of antiglaucoma medications significantly decreased from 27.33 ± 7.67 mmHg to 16.3 ± 2.89 mmHg (38.8% reduction) and from 2.64 ± 1.39 to 0.42 ± 0.8 (84% reduction), respectively, at 12 months after surgery (both p < 0.001). According to bleb morphology, eyes were divided into a uniform group (n=8, 22.2%), a subconjunctival separation group (n=5, 13.9%), a microcystic multiform group (n=9, 25%), and a multiple internal layer group (n=14, 38.9%). Compared to baseline, there was no significant difference in IOP between the four groups at the month 12 follow-up visit. Adverse events included bleb function decrease (n=14, 38.9%), hypotony (n=8, 22.2%), and choroidal detachment (n=2, 5.6%). All eyes presenting bleb flattening underwent needling and MMC injection. The highest percentage of patients requiring secondary needling was in the uniform group (75%), with a significant difference between the groups (p=0.03). The Xen45 gel stent, either alone or in combination with phacoemulsification, provided a significant reduction in both IOP and medical antiglaucoma treatment, with an elevated safety profile.
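For illustration, the two tests named above (the paired Student's t-test on baseline vs. Month 12 IOP, and the Pearson coefficient between Day 1 and Month 12 IOP) can be computed directly from their textbook formulas. The IOP values below are made up for the sketch and are not the study's data.

```python
# Hand-computed paired t statistic and Pearson r on made-up IOP values
# (mmHg), illustrating the analyses described above; not the study data.
import math

def paired_t(before, after):
    """Paired Student's t statistic on the per-patient differences."""
    diffs = [b - a for b, a in zip(before, after)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)  # df = n - 1

def pearson_r(x, y):
    """Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

baseline = [28.0, 31.5, 24.0, 27.0, 30.0, 26.5]
month12 = [16.0, 18.5, 14.0, 17.0, 19.0, 15.5]
day1 = [10.0, 12.0, 8.0, 11.0, 13.0, 9.0]

print(f"paired t = {paired_t(baseline, month12):.1f}")  # large t: significant drop
print(f"Pearson r = {pearson_r(day1, month12):.2f}")
```

In practice one would read the p-value from the t distribution with n-1 degrees of freedom (as SPSS does) rather than judging the raw statistic.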

Keywords: anterior segment OCT, bleb morphology, micro-invasive glaucoma surgery, open angle glaucoma, Xen45 gel implant

Procedia PDF Downloads 101
19 Governance Challenges for the Management of Water Resources in Agriculture: The Italian Way

Authors: Silvia Baralla, Raffaella Zucaro, Romina Lorenzetti

Abstract:

Water management needs to cope with economic, societal, and environmental changes. This could be guaranteed through 'shifting from government to governance'. In recent decades, this shift was applied in Europe through important legislative pillars (the Water Framework Directive and the Common Agricultural Policy) and their measures focused on resilience and adaptation to climate change, with particular attention to creating synergies among policies and all the actors involved at different levels. Within the climate change context, the agricultural sector can play, through sustainable water management, a leading role in climate-resilient growth and environmental integrity. A recent analysis of water management governance in different countries identified common gaps concerning administration, policy, information, capacity building, funding, objectives, and accountability. A country's ability to fill these gaps is an essential requirement for making some of the changes requested by Europe, in particular improving the agro-ecosystem's resilience to the effects of climate change, supporting green and digital transitions, and sustainable water use. This research aims to contribute by sharing examples of water governance and the related advantages, useful for filling the highlighted gaps. Italy has developed a strong and comprehensive model of water governance enabling strategic and synergic actions, since it is one of the European countries most threatened by climate change and its extreme events (droughts, floods). In particular, the Italian water governance model was able to overcome several gaps, specifically concerning water use in agriculture, by adopting strategies such as a systemic/integrated approach, stakeholder engagement, capacity building, improved planning and monitoring ability, and an adaptive/resilient strategy for funding activities. 
These strategies were implemented through regulatory, structural, and management actions. Regulatory actions include both the institution of technical committees grouping together water decision-makers and the elaboration of operative manuals and guidelines by means of a participative, cross-cutting approach. Structural actions deal with the funding of interventions within European and national funds according to the principles of coherence and complementarity. Finally, management actions concern the introduction of operational tools to support decision-makers and improve planning and monitoring ability. In particular, two cross-functional and interoperable web databases were introduced: SIGRIAN (National Information System for Water Resources Management in Agriculture) and DANIA (National Database of Investments for Irrigation and the Environment). Their interconnection makes it possible to support sustainable investments, taking into account compliance with the irrigation volumes quantified in SIGRIAN, ensuring a high level of attention to water saving, and monitoring the efficiency of funding. The main positive results of the Italian water governance model are synergic and coordinated work among institutions at the national, regional, and local levels, transparency on water use in agriculture, a deeper understanding on the stakeholders' side of the importance of their roles and of their own potential benefits, and the capacity to give continuity to this model through a sensitization process and the combined use of management operational tools.

Keywords: agricultural sustainability, governance model, water management, water policies

Procedia PDF Downloads 92
18 Evaluating Viability of Using South African Forestry Process Biomass Waste Mixtures as an Alternative Pyrolysis Feedstock in the Production of Bio Oil

Authors: Thembelihle Portia Lubisi, Malusi Ntandoyenkosi Mkhize, Jonas Kalebe Johakimu

Abstract:

Fertilizers play an important role in maintaining the productivity and quality of plants. Inorganic fertilizers (containing nitrogen, phosphorus, and potassium) are largely used in South Africa, as they are considered inexpensive and highly productive. When applied, a portion of the excess fertilizer is retained in the soil, and a portion enters water streams through surface runoff or the irrigation system adopted. Excess nutrients from fertilizers entering water streams eventually result in harmful algal blooms (HABs) in freshwater systems, which not only disrupt wildlife but can also produce toxins harmful to humans. The use of agro-chemicals such as pesticides and herbicides has been associated with increased antimicrobial resistance (AMR) in humans, as the treated plants are consumed by humans. This bacterial resistance poses a threat, as it undermines the health sector's ability to treat infectious diseases. Archaeological studies have found that pyrolysis liquids were already used in the time of the Neanderthals as a biocide and plant protection product. Pyrolysis is the thermal degradation of plant biomass or organic material under anaerobic conditions, leading to the production of char, bio-oils, and syngases. Bio-oil constituents can be categorized as water-soluble (wood vinegar) and water-insoluble fractions (tar and light oils). Wood vinegar (pyroligneous acid) is said to contain highly oxygenated compounds including acids, alcohols, aldehydes, ketones, phenols, esters, furans, and other multifunctional compounds with various molecular weights and compositions, depending on the source biomass and the pyrolysis operating conditions. Various researchers have found wood vinegar to be efficient in the eradication of termites, effective in plant protection and plant growth, and antibacterial, and it was found effective in inhibiting micro-organisms such as Candida yeast and E. coli. 
This study investigated the characterisation of South African forestry product processing waste with the intention of evaluating the potential of using the respective biomass wastes as feedstock for bio-oil production via the pyrolysis process. Using biomass waste materials in the production of wood vinegar has the advantages that it not only reduces environmental pollution and landfill requirements but also does not negatively affect food security. The biomass wastes investigated were from the popular tree types in KZN, namely pine sawdust (PSD), pine bark (PB), eucalyptus sawdust (ESD), and eucalyptus bark (EB). Furthermore, the research investigates the possibility of mixing the different wastes, with the aim of lessening the cost of raw material separation prior to feeding into the pyrolysis process; mixing also increases the amount of biomass material available for beneficiation. Two mixtures were studied: a 50/50 mixture of PSD and ESD (EPSD), and a mixture containing pine sawdust, eucalyptus sawdust, pine bark, and eucalyptus bark (EPSDB). Characterisation of the biomass waste covers proximate analysis (volatiles, ash, fixed carbon), ultimate analysis (carbon, hydrogen, nitrogen, oxygen, sulphur), higher heating value, structural analysis (cellulose, hemicellulose, and lignin), and thermogravimetric analysis.
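As a sketch of how the ultimate analysis feeds into a heating value estimate, a Dulong-type correlation is one common first approximation (more biomass-specific correlations also exist and are often preferred for wood). The composition below is a generic, assumed woody-biomass composition, not a measured value for the PSD, PB, ESD, or EB samples.

```python
# Sketch: estimating higher heating value (HHV) from ultimate analysis
# with a Dulong-type correlation. The wt% values are an assumed,
# typical-looking woody biomass composition, not measured study data.

def hhv_dulong(c, h, o, s):
    """HHV in MJ/kg from dry-basis wt% carbon, hydrogen, oxygen, and
    sulphur (Dulong correlation; treats 1/8 of the oxygen as already
    bound to hydrogen and thus non-combustible)."""
    return 0.3383 * c + 1.443 * (h - o / 8.0) + 0.0942 * s

print(f"HHV = {hhv_dulong(c=50.0, h=6.0, o=43.0, s=0.1):.1f} MJ/kg")
```

Dulong tends to underestimate biomass HHV somewhat, which is one reason a bomb-calorimeter measurement (or a biomass-specific correlation) is still part of the characterisation described above.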

Keywords: characterisation, biomass waste, saw dust, wood waste

Procedia PDF Downloads 36
17 Modern Cardiac Surgical Outcomes in Nonagenarians: A Multicentre Retrospective Observational Study

Authors: Laurence Weinberg, Dominic Walpole, Dong-Kyu Lee, Michael D’Silva, Jian W. Chan, Lachlan F. Miles, Bradley Carp, Adam Wells, Tuck S. Ngun, Siven Seevanayagam, George Matalanis, Ziauddin Ansari, Rinaldo Bellomo, Michael Yii

Abstract:

Background: There have been multiple recent advancements in the selection, optimization and management of cardiac surgical patients. However, there is limited data regarding the outcomes of nonagenarians undergoing cardiac surgery, despite this vulnerable cohort increasingly receiving these interventions. This study describes the patient characteristics, management and outcomes of a group of nonagenarians undergoing cardiac surgery in the context of contemporary peri-operative care. Methods: A retrospective observational study was conducted of patients 90 to 99 years of age (i.e., nonagenarians) who had undergone cardiac surgery requiring a classic median sternotomy (i.e., open-heart surgery). All operative indications were included. Patients who underwent minimally invasive surgery, transcatheter aortic valve implantation and thoracic aorta surgery were excluded. Data were collected from four hospitals in Victoria, Australia, over an 8-year period (January 2012 – December 2019). The primary objective was to assess six-month mortality in nonagenarians undergoing open-heart surgery and to evaluate the incidence and severity of postoperative complications using the Clavien-Dindo classification system. The secondary objective was to provide a detailed description of the characteristics and peri-operative management of this group. Results: A total of 12,358 adult patients underwent cardiac surgery at the study centers during the observation period, of whom 18 nonagenarians (0.15%) fulfilled the inclusion criteria. The median (IQR) [min-max] age was 91 years (90.0:91.8) [90-94] and 14 patients (78%) were men. Cardiovascular comorbidities, polypharmacy and frailty, were common. The median (IQR) predicted in-hospital mortality by EuroSCORE II was 6.1% (4.1-14.5). All patients were optimized preoperatively by a multidisciplinary team of surgeons, cardiologists, geriatricians and anesthetists. All index surgeries were performed on cardiopulmonary bypass. 
Isolated coronary artery bypass grafting (CABG) and CABG with aortic valve replacement were the most common surgeries, performed in four and five patients, respectively. Half the study group underwent surgery involving two or more major procedures (e.g., CABG and valve replacement). Surgery was undertaken emergently in 44% of patients. All patients except one experienced at least one postoperative complication. The most common complications were acute kidney injury (72%), new atrial fibrillation (44%), and delirium (39%). The highest Clavien-Dindo complication grade was IIIb, occurring once each in three patients. Clavien-Dindo grade IIIa complications occurred in only one patient. The median (IQR) postoperative length of stay was 11.6 days (9.8:17.6). One patient was discharged home and all others to an inpatient rehabilitation facility. Three patients had an unplanned readmission within 30 days of discharge. All patients had follow-up to at least six months after surgery, and mortality over this period was zero. The median (IQR) duration of follow-up was 11.3 months (6.0:26.4), and no cases of mortality were observed within the available follow-up records. Conclusion: In this group of nonagenarians undergoing cardiac surgery, postoperative six-month mortality was zero. Complications were common but generally of low severity. These findings support carefully selected nonagenarian patients being offered cardiac surgery in the context of contemporary, multidisciplinary perioperative care. Further studies are needed to assess longer-term mortality as well as functional and quality-of-life outcomes in this vulnerable surgical cohort.

Keywords: cardiac surgery, mortality, nonagenarians, postoperative complications

Procedia PDF Downloads 91
16 [Keynote Speech]: Evidence-Based Outcome Effectiveness Longitudinal Study on Three Approaches to Reduce Proactive and Reactive Aggression in Schoolchildren: Group CBT, Moral Education, Bioneurological Intervention

Authors: Annis Lai Chu Fung

Abstract:

Since aggression shows high stability throughout developmental stages and across generations, developing prevention and intervention programmes for aggressive children and children at risk of developing aggressive behaviours should be a top priority for researchers and frontline helping professionals. Although there is a substantial number of anti-bullying programmes, they have yielded disappointingly small effect sizes. This limited practical significance could be attributed to the overly simplistic categorisation of the individuals involved as bullies or victims. In the past three decades, the distinction between reactive and proactive aggression has been well established. As children displaying reactively aggressive behaviours have a social-information processing pattern distinct from those showing proactively aggressive behaviours, it is critical to identify the unique needs of the two subtypes when designing an intervention. The onset of reactive aggression and proactive aggression has been observed as early as 4.4 and 6.8 years of age, respectively. Such findings call for differential early intervention targeting these high-risk children. However, to the best of the author's knowledge, the author was the first to establish an evidence-based intervention programme against reactive and proactive aggression. With the largest samples in the world, the author has, in the past 10 years, explored three different approaches and their effectiveness against aggression, quantitatively and qualitatively, with a longitudinal design. The three approaches presented are (a) a cognitive-behavioural approach, (b) moral education, with Chinese martial arts and ethics as the medium, and (c) bioneurological measures (omega-3 supplementation). The studies adopted a multi-informant approach with repeated measures before and after the intervention, plus follow-up assessment. Participants were recruited from primary and secondary schools in Hong Kong. 
In the cognitive-behavioural approach, 66 reactive aggressors and 63 proactive aggressors, aged 11 to 17, were identified from 10,096 secondary-school children through a questionnaire and subsequent structured interviews. Participants underwent 10 group sessions specifically designed for each subtype of aggressor. Results revealed significant declines in aggression levels from baseline to the follow-up assessment after 1 year. In moral education through the Chinese martial arts, 315 high-risk aggressive children, aged 6 to 12 years, were selected from 3,511 primary-school children and randomly assigned to four types of 10-session intervention group, namely martial-skills-only, martial-ethics-only, both martial-skills-and-ethics, and physical fitness (placebo). Results showed that only the martial-skills-and-ethics group had a significant reduction in aggression after treatment and 6 months after treatment compared with the placebo group. In the bioneurological approach, 218 children, aged 8 to 17, were randomly assigned to an omega-3 supplement group or a placebo group. Results revealed that, compared with the placebo group, the omega-3 supplement group had significant declines in aggression levels at the 6-month follow-up assessment. All three approaches were effective in reducing proactive and reactive aggression. Traditionally, intervention programmes against aggressive behaviour have often adopted a cognitive and/or behavioural approach. However, the cognitive-behavioural approach for children has recently been challenged for its demanding cognitive requirements, and traditional cognitive interventions may not be as beneficial for older populations as for young children. The present study offers an insightful perspective on aggression reduction measures.

Keywords: intervention, outcome effectiveness, proactive aggression, reactive aggression

Procedia PDF Downloads 202
15 The Use of the TRIGRS Model and Geophysics Methodologies to Identify Landslides Susceptible Areas: Case Study of Campos do Jordao-SP, Brazil

Authors: Tehrrie Konig, Cassiano Bortolozo, Daniel Metodiev, Rodolfo Mendes, Marcio Andrade, Marcio Moraes

Abstract:

Gravitational mass movements are recurrent events in Brazil, usually triggered by intense rainfall. When these events occur in urban areas, they become disasters due to the economic damage, social impact, and loss of human life. To identify landslide-susceptible areas, it is important to know the geotechnical parameters of the soil, such as cohesion, internal friction angle, unit weight, hydraulic conductivity, and hydraulic diffusivity. These parameters are measured by collecting soil samples for laboratory analysis and by using geophysical methodologies, such as the Vertical Electrical Survey (VES). Geophysical surveys analyze soil properties with minimal impact on the soil's initial structure. Statistical analyses and physically based mathematical models are used to model and calculate the Factor of Safety for steep slope areas. In general, such mathematical models combine slope stability models with hydrological models. One example is the mathematical model TRIGRS (Transient Rainfall Infiltration and Grid-based Regional Slope-Stability Model), which calculates the variation of the Factor of Safety over a study area based on changes in pore pressure and soil moisture during a rainfall event. TRIGRS is written in the Fortran programming language and couples a hydrological model based on the Richards equation with a stability model based on the limit equilibrium principle. Therefore, the aim of this work is to model the slope stability of Campos do Jordao with TRIGRS, using geotechnical and geophysical methodologies to acquire the soil properties. The study area is located in the southeast of Sao Paulo State, in the Mantiqueira Mountains, and has a historical landslide record. During fieldwork, soil samples were collected and the VES method was applied. These procedures provided the soil properties, which were used as input data for the TRIGRS model. 
The hydrological data (infiltration rate and initial water table height) and the rainfall duration and intensity were acquired from the eight rain gauges installed by Cemaden in the study area. A very high spatial resolution digital terrain model was used to identify the slope declivities. The analyzed period is from March 6th to March 8th of 2017. As a result, the TRIGRS model calculated the variation of the Factor of Safety within a 72-hour period in which two heavy rainfall events struck the area and six landslides were registered. After each rainfall, the Factor of Safety declined, as expected. The landslides happened in areas identified by the model with low values of the Factor of Safety, demonstrating its effectiveness in identifying landslide-susceptible areas. This study presents a critical threshold for landslides, in which an accumulated rainfall higher than 80 mm/m² in 72 hours might trigger landslides on urban and natural slopes. The geotechnical and geophysical methods proved very useful for identifying the soil properties and providing the geological characteristics of the area. Therefore, the combined geotechnical and geophysical methods for soil characterization and the modeling of landslide-susceptible areas with TRIGRS are useful for urban planning. Furthermore, early warning systems can be developed by combining the TRIGRS model with weather forecasts to prevent disasters on urban slopes.
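The slope-stability side of TRIGRS rests on the one-dimensional infinite-slope limit-equilibrium formula, in which a rising pressure head during infiltration erodes the cohesive term. The sketch below illustrates that formula only, not the transient Richards-equation solver; all parameter values are hypothetical, chosen for illustration:

```python
import math

def factor_of_safety(c, phi_deg, slope_deg, z, gamma_s, psi, gamma_w=9.81):
    """Infinite-slope Factor of Safety at depth z.

    c         -- effective cohesion [kPa]
    phi_deg   -- internal friction angle [degrees]
    slope_deg -- slope angle [degrees]
    z         -- depth below the ground surface [m]
    gamma_s   -- soil unit weight [kN/m^3]
    psi       -- pressure head at depth z [m]; rises during rainfall infiltration
    gamma_w   -- unit weight of water [kN/m^3]
    """
    phi = math.radians(phi_deg)
    delta = math.radians(slope_deg)
    frictional = math.tan(phi) / math.tan(delta)
    cohesive = (c - psi * gamma_w * math.tan(phi)) / (
        gamma_s * z * math.sin(delta) * math.cos(delta))
    return frictional + cohesive

# Hypothetical soil on a 35-degree slope: a 1 m rise in pressure head
# after heavy rainfall pushes the Factor of Safety toward failure (FS = 1).
fs_dry = factor_of_safety(c=10.0, phi_deg=30.0, slope_deg=35.0,
                          z=2.0, gamma_s=19.0, psi=0.0)
fs_wet = factor_of_safety(c=10.0, phi_deg=30.0, slope_deg=35.0,
                          z=2.0, gamma_s=19.0, psi=1.0)
```

Evaluating this formula per grid cell at each time step, as TRIGRS does, flags cells whose Factor of Safety drops toward 1 as landslide-susceptible.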

Keywords: landslides, susceptibility, TRIGRS, vertical electrical survey

Procedia PDF Downloads 142
14 The Impact of Supporting Productive Struggle in Learning Mathematics: A Quasi-Experimental Study in High School Algebra Classes

Authors: Sumeyra Karatas, Veysel Karatas, Reyhan Safak, Gamze Bulut-Ozturk, Ozgul Kartal

Abstract:

Productive struggle entails a student's cognitive exertion to comprehend mathematical concepts and uncover solutions not immediately apparent. The significance of productive struggle in learning mathematics is accentuated by influential educational theorists, who emphasize its necessity for learning mathematics with understanding. Consequently, supporting productive struggle in learning mathematics is recognized as a high-leverage and effective mathematics teaching practice. In this study, the investigation into the role of productive struggle in learning mathematics led to the development of a comprehensive rubric for productive struggle pedagogy through an exhaustive literature review. The rubric consists of eight primary criteria and 37 sub-criteria, providing a detailed description of teacher actions and pedagogical choices that foster students' productive struggle. These criteria encompass various pedagogical aspects, including task design, tool implementation, allowing time for struggle, posing questions, scaffolding, handling mistakes, acknowledging efforts, and facilitating discussion/feedback. Utilizing this rubric, a team of researchers and teachers designed eight 90-minute lesson plans employing productive struggle pedagogy for a two-week unit on solving systems of linear equations. Simultaneously, another set of eight lesson plans on the same topic, featuring identical content and problems but employing a traditional lecture-and-practice model, was designed by the same team. The objective was to assess the impact of supporting productive struggle on students' mathematics learning, defined by the strands of mathematical proficiency. This quasi-experimental study compares the control group, which received traditional lecture-and-practice instruction, with the treatment group, which experienced the productive struggle pedagogy.
Sixty-six 10th and 11th-grade students from two algebra classes, taught by the same teacher at a high school, underwent either the productive struggle pedagogy or the lecture-and-practice approach over eight 90-minute class sessions across two weeks. To measure students' learning, an assessment was created and validated by a team of researchers and teachers. It comprised seven open-response problems assessing the strands of mathematical proficiency on the topic: procedural and conceptual understanding, strategic competence, and adaptive reasoning. The test was administered at the beginning and end of the two weeks as pre- and post-tests. Students' solutions were scored using an established rubric, subjected to expert validation and an inter-rater reliability process involving multiple criteria for each problem based on their steps and procedures. An analysis of covariance (ANCOVA) was conducted to examine the differences between the control group, which received traditional pedagogy, and the treatment group, exposed to the productive struggle pedagogy, on the post-test scores while controlling for the pre-test. The results indicated a significant effect of treatment on post-test scores for procedural understanding (F(2, 63) = 10.47, p < .001), strategic competence (F(2, 63) = 9.92, p < .001), adaptive reasoning (F(2, 63) = 10.69, p < .001), and conceptual understanding (F(2, 63) = 10.06, p < .001), controlling for pre-test scores. This demonstrates the positive impact of supporting productive struggle on learning mathematics. In conclusion, the results revealed the significance of the role of productive struggle in learning mathematics. The study further explored the practical application of productive struggle through the development of a comprehensive rubric describing the pedagogy of supporting productive struggle.
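An ANCOVA of this shape can be carried out as a comparison of nested regression models: a covariate-only model (post-test on pre-test) against a full model that adds the treatment indicator. The sketch below uses simulated scores, not the study's data; group sizes, effect size, and noise level are illustrative assumptions:

```python
import numpy as np

def ancova_f(pre, post, group):
    """F-test for a treatment effect on post-test scores, controlling for
    pre-test scores, via reduced-vs-full least-squares model comparison."""
    pre, post, group = (np.asarray(v, dtype=float) for v in (pre, post, group))
    n = len(post)
    X_red = np.column_stack([np.ones(n), pre])          # covariate only
    X_full = np.column_stack([np.ones(n), pre, group])  # covariate + treatment

    def rss(X):
        beta, *_ = np.linalg.lstsq(X, post, rcond=None)
        return float(np.sum((post - X @ beta) ** 2))

    df1 = X_full.shape[1] - X_red.shape[1]   # extra parameters in the full model
    df2 = n - X_full.shape[1]                # residual degrees of freedom
    F = ((rss(X_red) - rss(X_full)) / df1) / (rss(X_full) / df2)
    return F, df1, df2

# Simulated two-class design: 33 control + 33 treatment students,
# with a hypothetical 8-point treatment effect on the post-test.
rng = np.random.default_rng(0)
group = np.repeat([0, 1], 33)
pre = rng.normal(50, 10, 66)
post = 0.8 * pre + 8.0 * group + rng.normal(0, 5, 66)
F, df1, df2 = ancova_f(pre, post, group)
```

With 66 students and three fitted parameters, the residual degrees of freedom match the 63 reported above; note that a single treatment dummy yields a numerator df of 1 in this simplified setup.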

Keywords: effective mathematics teaching practice, high school algebra, learning mathematics, productive struggle

Procedia PDF Downloads 22
13 Urban Ecosystem Health and Urban Agriculture

Authors: Mahbuba Kaneez Hasna

Abstract:

Introductory Statement outlining the background: Little has been written about the political ecology of urban gardening, such as the networks of knowledge generation, technologies of food production and distribution, food consumption practices, and the regulation of agricultural activities. For urban food gardens to be sustained as a long-term food security enterprise, we will need to better understand the anthropological, ecological, political, and institutional factors influencing their development, management, and ongoing viability. Significance of the study: Dhaka is one of the fastest-growing cities. There are currently no studies in Bangladesh on how urban slum dwellers cope with the changing urban environment in the city, how they overcome challenges, and how they cope with the urban ecological cycle of food and vegetable production. It is also essential to understand the importance of their access to confined spaces in the slums where they apply their indigenous knowledge. These relationships in nature are important factors in community and conservation ecology. Until now, there has been no significant published academic work on the relationships between urban and environmental anthropology, urban planning, geography, ecology, and social anthropology with a focus on urban agriculture and how this contributes to moral economies, indigenous knowledge, and government policies in order to improve the lives and livelihoods of slum dwellers surrounding parks and open spaces in Dhaka, Bangladesh. Methodology: The study applied participant observation, semi-structured questionnaire-based interviews, and focus group discussions to collect social data. Interviews were conducted with the urban agriculture practitioners, slum dwellers who carry out urban agriculture activities. Some interviews were also conducted with non-government organisations (NGOs) and local and state government officials, using semi-structured interviews.
Using these methods, the study developed a clearer understanding of how green space cultivation, local economic self-reliance, and urban gardening are producing distinctive urban ecologies in Dhaka, and of their policy implications for urban sustainability. Major findings of the study: The research provided in-depth knowledge of the challenges that slum dwellers encounter in establishing and maintaining urban gardens, such as the economic development of the city, conflicting political agendas, and environmental constraints in areas within which gardening activities take place. The research investigated (i) How do slum dwellers transfer gardening practices from rural areas to open spaces in the city? (ii) How does men's and women's ethno-botanical knowledge contribute to urban biodiversity? (iii) How do slum dwellers navigate complex constellations of land use policy, competing political agendas, and conflicting land and water tenures to meet the livelihood functions provided by their gardens? Concluding statement: Lack of infrastructure facilities such as water supply and sanitation, micro-drains and waste disposal areas, and poor access to basic health care services increase the misery of people in the slum areas. Lack of environmental health awareness information for farmers, such as the risks from the use of chemical pesticides in gardens, from grazing animals in contaminated fields, or from cropping and planting trees or vegetables in contaminated dumping grounds, can all pose high health risks to humans and their environment.

Keywords: gender, urban agriculture, ecosystem health, urban slum systems

Procedia PDF Downloads 58
12 Young People and Their Parents Accessing Their Digital Health Data via a Patient Portal: The Ethical and Legal Implications

Authors: Pippa Sipanoun, Jo Wray, Kate Oulton, Faith Gibson

Abstract:

Background: With rapidly evolving digital health innovation, there is a need for digital health transformation that is accessible and sustainable, and that demonstrates utility for all stakeholders while maintaining data safety. Great Ormond Street Hospital for Children aimed to future-proof the hospital by transitioning to an electronic patient record (EPR) system with a tethered patient portal (MyGOSH) in April 2019. The MyGOSH patient portal enables patients 12 years or older (with their parent's consent) to access their digital health data. This includes access to results, documentation, and appointments, facilitating communication with their care team. As part of the Going Digital Study conducted between 2018-2021, data were collected from a sample of all relevant stakeholders before and after EPR and MyGOSH implementation. Data collection reach was wide and included the hospital legal and ethics teams. Aims: This study aims to understand the ethical and legal implications of young people and their parents accessing their digital health data. Methods: A focus group was conducted. Recruited participants were members of the Great Ormond Street Hospital Paediatric Bioethics Centre. Participants included expert and lay members of the Committee from a variety of professional and academic disciplines. Written informed consent was provided by all participants (n=7). The focus group was recorded, transcribed verbatim, and analyzed using thematic analysis. Results: Six themes were identified: access, competence and capacity - granting access to the system; inequalities in access resulting in inequities; burden, uncertainty and responding to change - managing expectations; documenting, risks and data safety; engagement, empowerment and understanding - how to use and manage personal information; legal considerations and obligations.
Discussion: If healthcare professionals are to empower young people to be more engaged in their care, the importance of including them in decisions about their health is paramount, especially when they are approaching the age of becoming the consenter for treatment. Complexities exist in assessing competence or capacity when granting system access, when disclosing sensitive information, and maintaining confidentiality. Difficulties are also present in managing clinician burden, managing user expectations whilst providing an equitable service, and data management that meets professional and legal requirements. Conclusion: EPR and tethered-portal implementation at Great Ormond Street Hospital for Children was not only timely, due to the need for a rapid transition to remote consultations during the COVID-19 pandemic, which would not have been possible had EPR/MyGOSH not been implemented, but also integral to the digital health revolution required in healthcare today. This study is highly relevant in understanding the complexities around young people and their parents accessing their digital health data and, although the focus of this research related to portal use and access, the findings translate to young people in the wider digital health context. Ongoing support is required for all relevant stakeholders following MyGOSH patient portal implementation to navigate the ethical and legal complexities. Continued commitment is needed to balance the benefits and burdens, promote inclusion and equity, and ensure portal utility for patient benefit, whilst maintaining an individualized approach to care.

Keywords: patient portal, young people and their parents, ethical, legal

Procedia PDF Downloads 89
11 Family Photos as Catalysts for Writing: A Pedagogical Exercise in Visual Analysis with MA Students

Authors: Susana Barreto

Abstract:

This paper explores a pedagogical exercise that employs family photos as catalysts for teaching visual analysis and inspiring academic writing among MA students. The study aimed to achieve two primary objectives: to impart to students the skills of analyzing images or artifacts and to ignite their writing for research purposes. Conducted at Viana Polytechnic in Portugal, the exercise involved two classes of an Arts Management and Art Education Master's course comprising approximately twenty students from diverse academic backgrounds, including Economics, Design, Fine Arts, and Sociology, among others. The exploratory exercise involved selecting an old family photo, analyzing its content and context, and deconstructing the chosen images in an intuitive and systematic manner. Students were encouraged to engage in photo elicitation, seeking insights from family and friends to gain multigenerational perspectives on the images. The feedback received from this exercise was consistently positive, largely due to the personal connection students felt with the objects of analysis. Family photos, with their emotional significance, fostered deeper engagement and motivation in the learning process. Furthermore, visually analyzing family photos stimulated critical thinking as students interpreted the composition, subject matter, and potential meanings embedded in the images. This practice enhanced their ability to comprehend complex visual representations and construct compelling visual narratives, thereby facilitating the writing process. The exercise also facilitated the identification of patterns, similarities, and differences by comparing different family photos, leading to a more comprehensive analysis of visual elements and themes. Throughout the exercise, students found analyzing their own photographs both enjoyable and insightful. They progressed through preliminary analysis, explored content and context, and artfully interwove these components.
Additionally, students experimented with various techniques such as converting photos to black and white, altering framing angles, and adjusting sizes to unveil hidden meanings. The methodology employed included observation, documental analysis of written reports, and student interviews. By including students from diverse academic backgrounds, the study enhanced its external validity, enabling a broader range of perspectives and insights during the exercise. Furthermore, encouraging students to seek multigenerational perspectives from family and friends added depth to the analysis, enriching the learning experience and broadening the understanding of the cultural and historical context associated with the family photos. Highlighting the emotional significance of these family photos and the personal connection students felt with the objects of analysis fosters a deeper connection to the subject matter. Moreover, the emphasis on stimulating critical thinking through the analysis of composition, subject matter, and potential meanings in family photos suggests a targeted approach to developing analytical skills. This improvement focuses specifically on critical thinking and visual analysis, enhancing the overall quality of the exercise. Additionally, the inclusion of a step where students compare different family photos to identify patterns, similarities, and differences further enhances the depth of the analysis. This comparative approach adds a layer of complexity to the exercise, ultimately leading to a more comprehensive understanding of visual elements and themes. The expected results of this study will culminate in a set of practical recommendations for implementing this exercise in academic settings.

Keywords: visual analysis, academic writing, pedagogical exercise, family photos

Procedia PDF Downloads 33
10 In-situ Mental Health Simulation with Airline Pilot Observation of Human Factors

Authors: Mumtaz Mooncey, Alexander Jolly, Megan Fisher, Kerry Robinson, Robert Lloyd, Dave Fielding

Abstract:

Introduction: The integration of the WingFactors in-situ simulation programme has transformed the education landscape at the Whittington Health NHS Trust. To date, there have been a total of 90 simulations, 19 aimed at Paediatric trainees, including 2 Child and Adolescent Mental Health (CAMHS) scenarios. The opportunity for joint debriefs provided by clinical faculty and airline pilots has created an exciting new avenue to explore human factors within psychiatry. Through the use of real clinical environments and primed actors, the benefits of high-fidelity simulation and of interdisciplinary and interprofessional learning have been highlighted. The use of in-situ simulation within psychiatry is a newly emerging concept, and its success here has been recognised by unanimously positive feedback from participants and by nomination for the Health Service Journal (HSJ) Award (Best Education Programme 2021). Methodology: The first CAMHS simulation featured a collapsed patient in the toilet with a ligature tied around her neck, accompanied by a distressed parent. This required participants to consider emergency physical management of the case, alongside helping to contain the mother and maintaining situational awareness when transferring the patient to an appropriate clinical area. The second simulation was based on a 17-year-old girl attempting to leave the ward after presenting with an overdose, posing a potential risk to herself. The safe learning environment enabled participants to explore techniques to engage the young person, understand her concerns, and consider the involvement of other members of the multidisciplinary team. The scenarios were followed by an immediate ‘hot’ debrief, combining technical feedback with Human Factors feedback from uniformed airline pilots and clinicians. The importance of psychological safety was paramount, encouraging open and honest contributions from all participants.
Key learning points were summarized into written documents and circulated. Findings: The in-situ simulations demonstrated the need for practical changes both in the Emergency Department and on the Paediatric ward. The presence of airline pilots provided a novel way to debrief on Human Factors. The following key themes were identified: Team-briefing (‘Golden 5 minutes’) - taking a few moments to establish experience, initial roles, and strategies amongst the team can reduce the need for conversations in front of a distressed patient or anxious relative. Use of checklists/guidelines - principles associated with checklist usage (control of pace, rigor, team situational awareness) instead of reliance on accurate memory recall when under pressure. Read-back - immediate repetition of safety-critical instructions (e.g. drug/dosage) to mitigate the risks associated with miscommunication. Distraction management - balancing the risk of losing a team member to manage a distressed relative against the impact on the care of the young person. Task allocation - the value of implementing ‘The 5A’s’ (Availability, Address, Allocate, Ask, Advise) for effective task allocation. Conclusion: 100% of participants have requested more simulation training. The involvement of airline pilots has led to a shift in hospital culture, bringing to the forefront the value of Human Factors-focused training and multidisciplinary simulation. This has been of significant value not only in physical health but also in mental health simulation.

Keywords: human factors, in-situ simulation, inter-professional, multidisciplinary

Procedia PDF Downloads 80
9 Biodegradation of Chlorophenol Derivatives Using Macroporous Material

Authors: Dmitriy Berillo, Areej K. A. Al-Jwaid, Jonathan L. Caplin, Andrew Cundy, Irina Savina

Abstract:

Chlorophenols (CPs) are used as precursors in the production of higher CPs and dyestuffs, and as preservatives. Groundwater contamination by CPs lies in the range 0.15-100 mg/L. The EU has set maximum concentration limits for pesticides and their degradation products of 0.1 μg/L and 0.5 μg/L, respectively. People working in industries which produce textiles, leather products, domestic preservatives, and petrochemicals are the most heavily exposed to CPs. The International Agency for Research on Cancer has categorized CPs as potential human carcinogens. Existing multistep water purification processes for CPs, such as hydrogenation, ion exchange, liquid-liquid extraction, adsorption by activated carbon, forward and inverse osmosis, electrolysis, sonochemistry, UV irradiation, and chemical oxidation, are not always cost-effective and can cause the formation of even more toxic or mutagenic derivatives. Bioremediation of CP derivatives utilizing microorganisms results in 60 to 100% decontamination efficiency, and the process is more environmentally friendly compared with existing physico-chemical methods. Microorganisms immobilized onto a substrate show many advantages over free bacteria systems, such as higher biomass density, higher metabolic activity, and resistance to toxic chemicals. They also enable continuous operation, avoiding the requirement for biomass-liquid separation. The immobilized bacteria can be reused several times, which opens the opportunity for developing cost-effective processes for wastewater treatment. In this study, we develop a bioremediation system for CPs based on macroporous materials, which can be efficiently used for wastewater treatment. Conditions for the preparation of the macroporous material from specific bacterial strains (Pseudomonas mendocina and Rhodococcus koreensis) were optimized. The concentration of bacterial cells was kept constant; the only difference was the type of cross-linking agent used, e.g.
glutaraldehyde and novel polymers, utilized at concentrations of 0.5 to 1.5%. SEM images and rheology analysis of the material indicated a monolithic macroporous structure. Phenol was chosen as a model system to optimize the function of the cryogel material and to estimate its enzymatic activity, since it is relatively less toxic and harmful compared to CPs. Several types of macroporous systems comprising live bacteria were prepared. The viability of the cross-linked bacteria was checked using a Live/Dead BacLight kit and laser scanning confocal microscopy, which revealed the presence of viable bacteria with the novel cross-linkers, whereas the control material, cross-linked with glutaraldehyde (GA), contained mostly dead cells. The bioreactors based on bacteria were used for phenol degradation in batch mode at an initial concentration of 50 mg/L, pH 7.5, and a temperature of 30°C. Bacterial strains cross-linked with GA showed insignificant ability to degrade phenol, and only for one week, whereas a combination of cross-linking agents showed higher stability and viability and could be reused for at least five weeks. Furthermore, conditions for CP degradation will be optimized, and the chlorophenol degradation rates will be compared to those for phenol. This is a cutting-edge bioremediation approach, which allows the purification of wastewater from such compounds without a separation step to remove free planktonic bacteria. Acknowledgments: Dr. Berillo D. A. is very grateful to the Marie Curie Individual Fellowship Programme for funding of the research.

Keywords: bioremediation, cross-linking agents, cross-linked microbial cell, chlorophenol degradation

Procedia PDF Downloads 190
8 Regional Metamorphism of the Loki Crystalline Massif Allochthonous Complex of the Caucasus

Authors: David Shengelia, Giorgi Chichinadze, Tamara Tsutsunava, Giorgi Beridze, Irakli Javakhishvili

Abstract:

The Loki pre-Alpine crystalline massif crops out within the Caucasus region. The massif basement is represented by the Upper Devonian gneissose quartz-diorites, the Lower-Middle Paleozoic metamorphic allochthonous complex, and different magmatites. Earlier, the metamorphic complex was considered an indivisible set represented by a series of metamorphites of different temperatures. The varying degree of metamorphism of separate parts of the complex is due to different formation conditions, a fact the authors of this abstract explain by the allochthonous-flaky structure of the complex. It was stated that the complex thrust over the gneissose quartz-diorites before the intrusion of the Sudetic granites. During detailed mapping, the authors found that the metamorphism issues needed to be reviewed and additional research carried out. Investigations were accomplished using the following methodologies: identification of key sections, sampling of rocks, microscopic description of the material, analytical determination of elements in the rocks, microprobe analysis of minerals, and new interpretation of the obtained data. According to the authors' recent data, four tectonic plates have been mapped within the massif: the Lower Gorastskali, Sapharlo-Lok-Jandari, and Moshevani overthrust sheets and a “mélange” overthrust sheet. They differ from each other in composition, degree of metamorphism, and internal structure. It is confirmed that the initial rocks of the tectonic plates formed under different geodynamic conditions and, during overthrusting due to tectonic compression, formed a thick tectonic sheet. Based on detailed laboratory investigations, additional mineral assemblages were established, temperature limits were specified, and a renewed trend of metamorphism facies and subfacies was elaborated. The results are the following: 1. The Lower Gorastskali overthrust sheet is a fragment of an ophiolitic association corresponding to the Paleotethys oceanic crust.
The main rock-forming minerals are carbonate, chlorite, spinel, epidote, clinoptilolite, plagioclase, hornblende, actinolite, albite, serpentine, tremolite, talc, garnet, and prehnite. Regional metamorphism of the rocks corresponds to the lowest stage of the greenschist facies. 2. The Sapharlo-Lok-Jandari overthrust sheet metapelites are represented by chloritoid, chlorite, phengite, muscovite, biotite, garnet, ankerite, carbonate, and quartz. Metabasites containing actinolite, chlorite, plagioclase, calcite, epidote, albite, actinolitic hornblende, and hornblende are also present. The degree of metamorphism corresponds to the greenschist high-temperature chlorite, biotite, and low-temperature garnet subfacies. Later, the rocks underwent the contact influence of Late Variscan granites. 3. The Moshevani overthrust sheet is represented mainly by metapelites and rarely by metabasites. The main rock-forming minerals of the metapelites are muscovite, biotite, chlorite, quartz, andalusite, plagioclase, garnet, and cordierite, and of the metabasites, plagioclase, green and blue-green hornblende, chlorite, epidote, actinolite, albite, and carbonate. The metamorphism level corresponds to the staurolite-andalusite subfacies of the staurolite facies and partially to the facies of biotite-muscovite gneisses and the hornfels facies. 4. The “mélange” overthrust sheet is built of rock fragments and blocks of different sizes from the Moshevani and Lower Gorastskali overthrust sheets. The degree of regional metamorphism of the first and second overthrust sheets of the Loki massif corresponds to the chlorite, biotite, and low-temperature garnet subfacies, and that of the third overthrust sheet to the staurolite-andalusite subfacies of the staurolite facies and partially to the facies of biotite-muscovite gneisses and the hornfels facies.

Keywords: regional metamorphism, crystalline massif, mineral assemblages, the Caucasus

Procedia PDF Downloads 137
7 Ensemble Sampler For Infinite-Dimensional Inverse Problems

Authors: Jeremie Coullon, Robert J. Webber

Abstract:

We introduce a Markov chain Monte Carlo (MCMC) sampler for infinite-dimensional inverse problems. Our sampler is based on the affine invariant ensemble sampler, which uses interacting walkers to adapt to the covariance structure of the target distribution. We extend this ensemble sampler for the first time to infinite-dimensional function spaces, yielding a highly efficient gradient-free MCMC algorithm. Because our ensemble sampler does not require gradients or posterior covariance estimates, it is simple to implement and broadly applicable. In many Bayesian inverse problems, MCMC methods are needed to approximate distributions on infinite-dimensional function spaces, for example, in groundwater flow, medical imaging, and traffic flow. Yet designing efficient MCMC methods for function spaces has proved challenging. Recent gradient-based MCMC methods, preconditioned MCMC methods, and SMC methods have improved the computational efficiency of the functional random walk. However, these samplers require gradients or posterior covariance estimates that may be challenging to obtain. Calculating gradients is difficult or impossible in many high-dimensional inverse problems involving a numerical integrator with a black-box code base. Additionally, accurately estimating posterior covariances can require a lengthy pilot run or adaptation period. These concerns raise the question: is there a functional sampler that outperforms the functional random walk without requiring gradients or posterior covariance estimates? To address this question, we consider a gradient-free sampler that avoids explicit covariance estimation yet adapts naturally to the covariance structure of the sampled distribution. This sampler works by considering an ensemble of walkers and interpolating and extrapolating between walkers to make a proposal.
This is called the affine invariant ensemble sampler (AIES), which is easy to tune, easy to parallelize, and efficient at sampling spaces of moderate dimensionality (less than 20). The main contribution of this work is to propose a functional ensemble sampler (FES) that combines the functional random walk and AIES. To apply this sampler, we first calculate the Karhunen–Loeve (KL) expansion for the Bayesian prior distribution, assumed to be Gaussian and trace-class. Then, we use AIES to sample the posterior distribution on the low-wavenumber KL components and use the functional random walk to sample the posterior distribution on the high-wavenumber KL components. Alternating between AIES and functional random walk updates, we obtain our functional ensemble sampler, which is efficient and easy to use without requiring detailed knowledge of the target distribution. In past work, several authors have proposed splitting the Bayesian posterior into low-wavenumber and high-wavenumber components and then applying enhanced sampling to the low-wavenumber components. Yet compared to these other samplers, FES is unique in its simplicity and broad applicability. FES does not require any derivatives, and the need for derivative-free samplers has previously been emphasized. FES also eliminates the requirement for posterior covariance estimates. Lastly, FES is more efficient than other gradient-free samplers in our tests. In two numerical examples, we apply FES to challenging inverse problems that involve estimating a functional parameter and one or more scalar parameters. We compare the performance of the functional random walk, FES, and an alternative derivative-free sampler that explicitly estimates the posterior covariance matrix. We conclude that FES is the fastest available gradient-free sampler for these challenging and multimodal test problems.
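The stretch move at the heart of AIES can be sketched in a few lines. The version below targets a toy finite-dimensional Gaussian rather than the function-space posterior; the walker count, stretch parameter a, and target are illustrative assumptions (in FES this update would be applied only to the low-wavenumber KL coefficients):

```python
import numpy as np

def log_post(x):
    """Toy log-density: standard Gaussian, standing in for the posterior
    on the low-wavenumber KL coefficients."""
    return -0.5 * float(np.sum(x ** 2))

def stretch_move_sweep(walkers, log_p, a=2.0, rng=None):
    """One sweep of the affine-invariant 'stretch move' over all walkers.

    Each walker k proposes a point on the line through itself and a randomly
    chosen complementary walker j, scaled by z drawn from g(z) on [1/a, a].
    """
    rng = np.random.default_rng() if rng is None else rng
    n, d = walkers.shape
    lp = np.array([log_p(w) for w in walkers])
    for k in range(n):
        j = rng.integers(n - 1)
        if j >= k:
            j += 1                                     # ensure j != k
        z = ((a - 1.0) * rng.random() + 1.0) ** 2 / a  # g(z) proportional to 1/sqrt(z)
        proposal = walkers[j] + z * (walkers[k] - walkers[j])
        lp_prop = log_p(proposal)
        # the acceptance ratio carries a z^(d-1) factor from the affine move
        if np.log(rng.random()) < (d - 1) * np.log(z) + lp_prop - lp[k]:
            walkers[k], lp[k] = proposal, lp_prop
    return walkers

# Run a small ensemble and pool the samples.
rng = np.random.default_rng(1)
walkers = rng.normal(size=(10, 2))
chain = []
for _ in range(3000):
    walkers = stretch_move_sweep(walkers, log_post, rng=rng)
    chain.append(walkers.copy())
samples = np.concatenate(chain)
```

Because proposals are built from the ensemble's own geometry, the sampler adapts to the target covariance with no gradients and no explicit covariance estimate, which is the property FES exploits.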

Keywords: Bayesian inverse problems, Markov chain Monte Carlo, infinite-dimensional inverse problems, dimensionality reduction

Procedia PDF Downloads 128
6 The Development, Composition, and Implementation of Vocalises as a Method of Technical Training for the Adult Musical Theatre Singer

Authors: Casey Keenan Joiner, Shayna Tayloe

Abstract:

Classical voice training for the novice singer has long relied on the guidance and instruction of vocalise collections, such as those written and compiled by Marchesi, Lütgen, Vaccai, and Lamperti. These vocalise collections purport to encourage healthy vocal habits and instill technical longevity in both aspiring and established singers, though their scope has long been somewhat confined to the classical idiom. For pedagogues and students specializing in other vocal genres, such as musical theatre and CCM (contemporary commercial music), low-impact and pertinent vocal training aids are in short supply, and much of the suggested literature derives from classical methodology. While the tenets of healthy vocal production remain ubiquitous, specific stylistic needs and technical emphases differ from genre to genre and may require a specified extension of vocal acuity. As musical theatre continues to grow in popularity at both the professional and collegiate levels, the need for specialized training grows as well. Pedagogical literature geared specifically towards musical theatre (MT) singing and vocal production, while relatively uncommon, is readily accessible to the contemporary educator. Practitioners such as Norman Spivey, Mary Saunders Barton, Claudia Friedlander, Wendy Leborgne, and Marci Rosenberg continue to publish relevant research in the field of musical theatre voice pedagogy and have successfully identified many common MT vocal faults, their subsequent diagnoses, and their eventual corrections. Where classical methodology would suggest specific vocalises or training exercises to maintain corrected vocal posture following successful fault diagnosis, musical theatre finds itself without a relevant body of work towards which to transition. 
By analyzing the existing vocalise literature by means of a specialized set of parameters, including but not limited to melodic variation, rhythmic complexity, vowel utilization, and technical targeting, we have composed a set of vocalises meant specifically to address the training and conditioning of adult musical theatre voices. These vocalises target many pedagogical tenets of the musical theatre genre, including but not limited to thyroarytenoid-dominant production, twang resonance, lateral vowel formation, and “belt-mix.” By implementing these vocalises in the musical theatre voice studio, pedagogues can efficiently communicate proper musical theatre vocal posture and kinesthetic connection to their students, regardless of age or level of experience. The composition of these vocalises serves MT pedagogues on both a technical level and a sociological one. MT is a relative newcomer on the collegiate stage, and the academization of musical theatre methodologies has been a slow and arduous process. The conflation of classical and MT techniques and training methods has long plagued the world of voice pedagogy, and teachers often find themselves in positions of “cross-training,” that is, teaching students of both genres in one combined voice studio. As MT continues to establish itself on academic platforms worldwide, genre-specific literature and focused studies are both rare and invaluable. To ensure that modern students receive exacting and definitive training in their chosen fields, it becomes increasingly necessary for genres such as musical theatre to boast specified literature, and a collection of musical theatre-specific vocalises only aids in this effort. This collection of musical theatre vocalises is the first of its kind and provides genre-specific studios with a basis upon which to grow healthy, balanced voices built for the harsh conditions of the modern theatre stage.

Keywords: voice pedagogy, targeted methodology, musical theatre, singing

Procedia PDF Downloads 127
5 Machine Learning Based Digitalization of Validated Traditional Cognitive Tests and Their Integration to Multi-User Digital Support System for Alzheimer’s Patients

Authors: Ramazan Bakir, Gizem Kayar

Abstract:

It is known that Alzheimer's disease and dementia are the two most common types of neurodegenerative disease, and their prevalence has been accelerating over the last couple of years. As populations age all over the world, researchers expect this acceleration to become much more pronounced. Unfortunately, there is no known pharmacological cure for either, although some treatments help reduce the speed of cognitive decline. This is why non-pharmacological treatment and tracking methods have been encountered more frequently over the last five years. Many researchers, including well-known associations and hospitals, lean towards using non-pharmacological methods to support cognitive function and improve the patient’s quality of life. As dementia symptoms related to mind, learning, memory, speech, problem-solving, social abilities, and daily activities gradually worsen over the years, many researchers agree that cognitive support should start at the very onset of symptoms in order to slow down the decline. At this point, the lives of patients and caregivers can be improved with some daily activities and applications. These activities include, but are not limited to, basic word puzzles, daily cleaning activities, and taking notes. These activities and their results should then be observed carefully, which is currently only possible during in-person patient/caregiver and M.D. meetings in hospitals. These meetings can be quite time-consuming, exhausting, and financially ineffective for hospitals, medical doctors, caregivers, and especially patients. On the other hand, digital support systems are showing positive results for all stakeholders of healthcare systems, as can be observed in countries that have adopted telemedicine systems. The biggest potential of our system lies in setting up inter-user communication in the best possible way. In our project, we propose machine learning based digitalization of validated traditional cognitive tests (e.g., MOCA, Afazi, left-right hemisphere), their analyses for high-quality follow-up, and communication systems for all stakeholders. This platform has a high potential not only for patient tracking but also for making all stakeholders feel safe through all stages. As registered hospitals assign corresponding medical doctors to the system, these MDs are able to register their own patients and assign special tasks to each patient. With our integrated machine learning support, MDs are able to track the failure and success rates of each patient and also see general averages among similarly progressed patients. In addition, our platform supports multi-player technology, which helps patients play with their caregivers so that they feel safer at any point they are uncomfortable. By also gamifying daily household activities, patients will be able to repeat their social tasks, and we will provide non-pharmacological reminiscence therapy (RT, life review therapy). All collected data will be mined by our data scientists and analyzed meaningfully. In addition, we will also add gamification modules for caregivers based on Naomi Feil’s Validation Therapy: both behaving positively toward the patient and keeping oneself mentally healthy are important for caregivers, and we aim to provide a gamification-based therapy system for them, too. When this project accomplishes all of the above tasks, patients will have the chance to do many tasks at home remotely, and MDs will be able to follow up with them very effectively. We propose a complete platform, and the whole project is both time- and cost-effective for supporting all stakeholders.
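The per-patient and cohort-level tracking described above (failure/success rates per patient, plus averages among similarly progressed patients) could be sketched as follows; the record layout `(patient_id, stage, passed)` is a hypothetical data model for illustration, not the platform's actual schema.

```python
from collections import defaultdict
from statistics import mean

def success_rates(results):
    """Per-patient success rate from (patient_id, stage, passed) task records."""
    totals, passes = defaultdict(int), defaultdict(int)
    for patient, stage, passed in results:
        totals[patient] += 1
        passes[patient] += int(passed)
    return {p: passes[p] / totals[p] for p in totals}

def stage_average(results, stage):
    """Average success rate among patients at the same progression stage."""
    by_patient = defaultdict(list)
    for patient, s, passed in results:
        if s == stage:
            by_patient[patient].append(int(passed))
    # Average each patient first, so prolific patients do not dominate.
    return mean(mean(v) for v in by_patient.values())
```

Averaging per patient before averaging across the cohort keeps one very active patient from skewing the "general averages" an MD would compare against.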

Keywords: alzheimer’s, dementia, cognitive functionality, cognitive tests, serious games, machine learning, artificial intelligence, digitalization, non-pharmacological, data analysis, telemedicine, e-health, health-tech, gamification

Procedia PDF Downloads 108
4 Feasibility and Acceptability of an Emergency Department Digital Pain Self-Management Intervention: A Randomized Controlled Trial Pilot Study

Authors: Alexandria Carey, Angela Starkweather, Ann Horgas, Hwayoung Cho, Jason Beneciuk

Abstract:

Background/Significance: Over 3.4 million acute axial low back pain (aLBP) cases are treated annually in United States (US) emergency departments (ED). ED patients with aLBP receive varying verbal and written discharge routine care (RC), leading to ineffective patient self-management. Ineffective self-management increases the risk of transition to chronic low back pain (cLBP), a chief cause of worldwide disability, with associated costs >$60 million annually. This research addresses this significant problem by evaluating an ED digital pain self-management intervention (EDPSI) focused on improving self-management through improved knowledge retention, skills, and self-efficacy (confidence) (KSC), thus reducing the aLBP-to-cLBP transition in ED patients discharged with aLBP. The research has significant potential to increase self-efficacy, one of the most potent mechanisms of behavior change, and improve health outcomes. Focusing on accessibility and usability, the intervention may reduce discharge disparities in aLBP self-management, especially in patients with low health literacy. Study Questions: This research will answer the following questions: 1) Will an EDPSI focused on improving KSC advance patient self-management behaviors and health status? 2) Is the EDPSI sustainable in improving pain severity, interference, and pain recurrence? 3) Will an EDPSI reduce the aLBP-to-cLBP transition in patients discharged with aLBP? Aims: The pilot randomized controlled trial (RCT) assesses the effects of a 12-week digital self-management discharge tool in patients with aLBP. We aim to 1) primarily assess the feasibility (recruitment, enrollment, and retention) and the acceptability and sustainability of the EDPSI for participants' pain self-management; 2) determine the effectiveness and sustainability of the EDPSI on pain severity/interference among participants; and 3) explore patient preferences, health literacy, and changes among participants experiencing the transition to cLBP. 
We anticipate that the EDPSI will increase the likelihood of achieving self-management milestones and significantly improve pain-related symptoms in aLBP. Methods: The study uses a two-group pilot RCT to enroll 30 individuals who have been seen in the ED with aLBP. Participants are randomized into RC (n=15) or RC + EDPSI (n=15) and receive follow-up surveys for 12 weeks post-intervention. The EDPSI's innovative content focuses on 1) highlighting discharge education; 2) providing self-management treatment options; 3) actor demonstrations of ergonomics, range-of-motion movements, safety, and sleep; 4) complementary alternative medicine (CAM) options, including acupuncture, yoga, and Pilates; and 5) combination therapies, including thermal application, spinal manipulation, and PT treatments. The intervention group receives booster sessions via Zoom in weeks two and eight to assess and reinforce their knowledge retention of techniques and to provide return demonstrations reinforcing ergonomics. Outcome Measures: All participants are followed for 12 weeks, assessing pain severity/interference using the Brief Pain Inventory short form (BPI-sf), self-management (measuring KSC) using the short 13-item Patient Activation Measure (PAM), and self-efficacy using the Pain Self-Efficacy Questionnaire (PSEQ) at weeks 1, 6, and 12. Feasibility is measured by recruitment, enrollment, and retention percentages. Acceptability and education satisfaction are measured using the Education-Preference and Satisfaction Questionnaire (EPSQ) post-intervention. Self-management sustainment is measured using the PSEQ, the PAM, and a patient satisfaction and healthcare utilization (PSHU) measure requesting overall satisfaction, additional healthcare utilization, and pain management related to continued back pain or complications post-injury.
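The 1:1 allocation into RC (n=15) and RC + EDPSI (n=15) arms could be sketched as below; the abstract does not state the allocation mechanism, so this simple shuffle-and-split is an assumed illustration rather than the trial's actual randomization procedure.

```python
import random

def randomize(participant_ids, seed=None):
    """Randomly allocate participants 1:1 into RC vs. RC + EDPSI arms.

    A fixed seed makes the allocation sequence reproducible for audit.
    """
    ids = list(participant_ids)
    rng = random.Random(seed)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {"RC": ids[:half], "RC+EDPSI": ids[half:]}
```

In practice a trial would typically pre-generate the sequence and conceal it from enrolling staff; the sketch only shows the arithmetic of a balanced two-arm split.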

Keywords: digital, pain self-management, education, tool

Procedia PDF Downloads 6
3 Fabrication of Highly Stable Low-Density Self-Assembled Monolayers by Thiolyne Click Reaction

Authors: Leila Safazadeh, Brad Berron

Abstract:

Self-assembled monolayers have a tremendous impact on interfacial science, due to the unique opportunity they offer to tailor surface properties. Low-density self-assembled monolayers are an emerging class of monolayers in which the environment-interfacing portion of the adsorbate has a greater level of conformational freedom compared to traditional monolayer chemistries. This greater range of motion and increased spacing between surface-bound molecules offers new opportunities for tailoring adsorption phenomena in sensing systems. In particular, we expect low-density surfaces to offer a unique opportunity to intercalate surface-bound ligands into the secondary structure of proteins and other macromolecules. Additionally, as many conventional sensing surfaces are built upon gold substrates (SPR or QCM), these surfaces must be compatible with gold. Here, we present the first stable method of generating low-density self-assembled monolayer surfaces on gold for the analysis of their interactions with protein targets. Our approach is based on the 2:1 addition of thiol-yne chemistry to develop new classes of Y-shaped adsorbates on gold, where the environment-interfacing group is spaced laterally from neighboring chemical groups. This technique involves an initial deposition of a crystalline monolayer of 1,10-decanedithiol on the gold substrate, followed by grafting of a loosely packed monolayer through a photoinitiated thiol-yne reaction in the presence of light. The orthogonality of the thiol-yne chemistry (commonly referred to as a click chemistry) allows for the preparation of low-density monolayers with a variety of functional groups. To date, carboxyl-, amine-, alcohol-, and alkyl-terminated monolayers have been prepared using this core technology. Results from surface characterization techniques such as FTIR, contact angle goniometry, and electrochemical impedance spectroscopy confirm the proposed low chain-chain interactions of the environment-interfacing groups. 
Reductive desorption measurements suggest a higher stability for the click-LDMs compared to traditional SAMs, along with an equivalent packing density at the substrate interface, which confirms the proposed stability of the monolayer-gold interface. In addition, contact angle measurements change in the presence of an applied potential, supporting our description of a surface structure that allows the alkyl chains to freely orient themselves in response to different environments. We are studying the differences in protein adsorption phenomena between well-packed and our loosely packed surfaces, and we expect this data will be ready to present at the GRC meeting. This work aims to contribute to biotechnology science in the following manner: molecularly imprinted polymers are a promising recognition mode with several advantages over natural antibodies in the recognition of small molecules. However, because of their bulk polymer structure, they are poorly suited for the rapid diffusion desired for the recognition of proteins and other macromolecules. Molecularly imprinted monolayers are an emerging class of materials where the surface is imprinted and there is no bulk material to impede mass transfer. Further, the short distance between the binding site and the signal transduction material improves many modes of detection. My dissertation project is to develop a new chemistry for protein-imprinted self-assembled monolayers on gold, for incorporation into SPR sensors. Our unique contribution is the spatial imprinting not only of physical cues (seen in current imprinted monolayer techniques) but also of complementary chemical cues. This is accomplished through photo-click grafting of preassembled ligands around a protein template. This conference is important for my development as a graduate student, broadening my appreciation of sensor development beyond surface chemistry.

Keywords: low-density self-assembled monolayers, thiol-yne click reaction, molecular imprinting

Procedia PDF Downloads 196
2 Development of an Omaha System-Based Remote Intervention Program for Work-Related Musculoskeletal Disorders (WMSDs) Among Front-Line Nurses

Authors: Tianqiao Zhang, Ye Tian, Yanliang Yin, Yichao Tian, Suzhai Tian, Weige Sun, Shuhui Gong, Limei Tang, Ruoliang Tang

Abstract:

Introduction: Healthcare workers, especially nurses all over the world, are highly vulnerable to work-related musculoskeletal disorders (WMSDs), experiencing high rates of neck, shoulder, and low back injuries due to unfavorable working conditions. To reduce WMSDs among nursing personnel, many workplace interventions have been developed and implemented. Unfortunately, the ongoing Covid-19 (SARS-CoV-2) pandemic has posed great challenges to ergonomic practices and interventions in healthcare facilities, particularly hospitals, since current Covid-19 mitigation measures, such as social distancing and working remotely, have substantially minimized in-person gatherings and trainings. On the other hand, hospitals throughout the world have been short-staffed, resulting in disturbed shift scheduling and, more importantly, increased job demand among the available caregivers, particularly doctors and nurses. With the latest developments in communication technology, remote intervention measures have been developed as an alternative that does not require in-person meetings. The Omaha System (OS) is a standardized classification system for nursing practice, comprising a problem classification system, an intervention system, and an outcome evaluation system. This paper describes the development of an OS-based ergonomic intervention program. Methods: First, a comprehensive literature search was performed among worldwide electronic databases, including PubMed, Web of Science, the Cochrane Library, and the China National Knowledge Infrastructure (CNKI), from journal inception to May 2020, yielding a total of 1,418 scientific articles. After two independent screening processes, the final knowledge pool included eleven randomized controlled trial studies, which were used to develop the draft intervention program with the Omaha intervention subsystem as the framework. 
After determining the sample size needed for statistical power and accounting for potential loss to follow-up, a total of 94 nurses from eight clinical departments provided written, informed consent to participate in the study and were subsequently randomly assigned into two groups (i.e., intervention vs. control). A subgroup of twelve nurses was randomly selected to participate in semi-structured interviews, during which their general understanding and awareness of musculoskeletal disorders and potential interventions were assessed. The first draft was then modified to reflect the findings from these interviews. Meanwhile, the tentative program schedule was also assessed. Next, two rounds of consultation were conducted among experts in nursing management, occupational health, psychology, and rehabilitation to further adjust and finalize the intervention program. The control group had access to all the information and exercise modules at baseline, while an interdisciplinary research team was formed to supervise the implementation of the online intervention program through multiple social media groups. Outcome measures of this comparative study included biomechanical load assessed by the Quick Exposure Check and stresses due to awkward body postures. Results and Discussion: Modifications to the draft included (1) supplementing traditional Chinese medicine practices, (2) adding the use of assistive patient handling equipment, and (3) revising the online training method. The information module should be delivered once a week, lasting about 20 to 30 minutes, for a total of 6 weeks, while the exercise module should be delivered 5 times a week, each session lasting about 15 to 20 minutes, for a total of 6 weeks.
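The sample-size determination mentioned above is not detailed in the abstract; a standard normal-approximation sketch for a two-group comparison of means, with an assumed standardized effect size and an attrition inflation, might look like the following (the effect size, alpha, and power values are illustrative, not the study's actual inputs).

```python
import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80, dropout=0.0):
    """Per-group sample size for a two-group comparison of means.

    Uses the normal approximation
        n = 2 * (z_{1-alpha/2} + z_{power})^2 / d^2,
    then inflates for the anticipated loss-to-follow-up fraction.
    effect_size: standardized mean difference (Cohen's d).
    """
    z = NormalDist().inv_cdf
    n = 2.0 * (z(1 - alpha / 2) + z(power)) ** 2 / effect_size ** 2
    return math.ceil(n / (1.0 - dropout))
```

For example, detecting a moderate effect (d = 0.6) at 80% power with 5% two-sided alpha needs 44 nurses per group before attrition, which is broadly consistent in scale with the 94 participants enrolled across two groups.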

Keywords: ergonomic interventions, musculoskeletal disorders (MSDs), omaha system, nurses, Covid-19

Procedia PDF Downloads 133