Search results for: reception of classical music
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1668

138 Li2S Nanoparticles Impact on the First Charge of Li-ion/Sulfur Batteries: An Operando XAS/XES Coupled With XRD Analysis

Authors: Alice Robba, Renaud Bouchet, Celine Barchasz, Jean-Francois Colin, Erik Elkaim, Kristina Kvashnina, Gavin Vaughan, Matjaz Kavcic, Fannie Alloin

Abstract:

With their high theoretical energy density (~2600 Wh.kg-1), lithium/sulfur (Li/S) batteries are highly promising, but these systems are still poorly understood due to the complex mechanisms and equilibria involved. Replacing S8 by Li2S as the active material allows the use of safer negative electrodes, such as silicon, instead of lithium metal. S8 and Li2S have different conductivity and solubility properties, resulting in a profoundly changed activation process during the first cycle. In particular, a high polarization and a lack of reproducibility between tests are observed during the first charge. Differences observed between raw Li2S material (micron-sized) and Li2S electrochemically produced in a battery (nano-sized) suggest that the electrochemical process depends on particle size. The major focus of the presented work is therefore to deepen the understanding of the Li2S charge mechanism, and more precisely to characterize the effect of the initial Li2S particle size on both the mechanism and the electrode preparation process. To do so, Li2S nanoparticles were synthesized by two routes: a liquid-path synthesis and a dissolution in ethanol, allowing Li2S nanoparticle/carbon composites to be made. Preliminary chemical and electrochemical tests show that starting with Li2S nanoparticles can effectively suppress the high initial polarization but also influences the electrode slurry preparation. Indeed, it has been shown that the classical formulation process - a slurry composed of polyvinylidene fluoride (PVDF) polymer dissolved in N-methyl-2-pyrrolidone (NMP) - cannot be used with Li2S nanoparticles. This reveals a completely different behavior of the Li2S material toward polymers and organic solvents at the nanometric scale. Two operando characterizations, X-Ray Diffraction (XRD) and X-Ray Absorption and Emission Spectroscopy (XAS/XES), were therefore coupled in order to interpret the poorly understood first charge.
This study discloses that the initial particle size of the active material has a great impact on the working mechanism, and particularly on the different equilibria involved during the first charge of Li2S-based Li-ion batteries. These results explain the electrochemical differences, and particularly the polarization differences, observed during the first charge between micrometric and nanometric Li2S-based electrodes. Finally, this work could lead to better active material design and thus to more efficient Li2S-based batteries.

Keywords: Li-ion/Sulfur batteries, Li2S nanoparticles effect, Operando characterizations, working mechanism

Procedia PDF Downloads 266
137 The Role of Rapid Maxillary Expansion in Managing Obstructive Sleep Apnea in Children: A Literature Review

Authors: Suleman Maliha, Suleman Sidra

Abstract:

Obstructive sleep apnea (OSA) is a sleep disorder that can result in behavioral and psychomotor impairments in children. The classical treatment modalities for OSA have been continuous positive airway pressure and adenotonsillectomy. However, orthodontic intervention through rapid maxillary expansion (RME) has also been commonly used to manage skeletal transverse maxillary discrepancies. Aim and objectives: The aim of this study is to determine the efficacy of rapid maxillary expansion in paediatric patients with obstructive sleep apnea by assessing pre and post-treatment mean apnea-hypopnea index (AHI) and oxygen saturations. Methodology: Literature was identified through a rigorous search of the Embase, Pubmed, and CINAHL databases. Articles published from 2012 onwards were selected. The inclusion criteria consisted of patients aged 18 years and under with no systemic disease, adenotonsillar surgery, or hypertrophy who are undergoing RME with AHI measurements before and after treatment. In total, six suitable papers were identified. Results: Three studies assessed patients pre and post-RME at 12 months. The first study consisted of 15 patients with an average age of 7.5 years. Following treatment, they found that RME resulted in both higher oxygen saturations (+ 5.3%) and improved AHI (- 4.2 events). The second study assessed 11 patients aged 5–8 years and also noted improvements, with mean AHI reduction from 6.1 to 2.4 and oxygen saturations increasing from 93.1% to 96.8%. The third study reviewed 14 patients aged 6–9 years and similarly found an AHI reduction from 5.7 to 4.4 and an oxygen saturation increase from 89.8% to 95.5%. All modifications noted in these studies were statistically significant. A long-term study reviewed 23 patients aged 6–12 years post-RME treatment on an annual basis for 12 years. They found that the mean AHI reduced from 12.2 to 0.4, with improved oxygen saturations from 78.9% to 95.1%. 
Another study assessed 19 patients aged 9-12 years at two months into RME and four months post-treatment. Improvements were noted at both stages, with an overall reduction of the mean AHI from 16.3 to 0.8 and an overall increase in oxygen saturations from 77.9% to 95.4%. The final study assessed 26 children aged 7-11 years on completion of individual treatment and found an AHI reduction from 6.9 to 5.3; however, oxygen saturation remained unchanged at 96.0%, which was not considered clinically significant. Conclusion: Overall, the current evidence suggests that RME is a promising treatment option for paediatric patients with OSA. It can provide efficient and conservative treatment; however, early diagnosis is crucial. As various factors can contribute to OSA, it is important that each case is treated on its individual merits. Going forward, there is a need for more randomized controlled trials with larger cohorts. Research into the long-term effects of RME and potential relapse amongst cases would also be useful.
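As a back-of-the-envelope summary (not a formal meta-analysis), the five studies above that report both pre- and post-treatment mean AHI can be combined into a sample-size-weighted mean reduction, using only the figures quoted in this abstract:

```python
# (n, mean AHI before, mean AHI after) for the five studies above
# that report both values; the first study reported only the change.
studies = [
    (11, 6.1, 2.4),
    (14, 5.7, 4.4),
    (23, 12.2, 0.4),   # long-term study, 12-year follow-up
    (19, 16.3, 0.8),
    (26, 6.9, 5.3),
]

reductions = [(n, pre - post) for n, pre, post in studies]
weighted_mean = sum(n * r for n, r in reductions) / sum(n for n, _ in reductions)
```

With these figures the pooled, sample-size-weighted AHI reduction works out to roughly 7.2 events per hour, though the heterogeneous follow-up periods mean this number is illustrative only.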

Keywords: orthodontics, sleep apnea, maxillary expansion, review

Procedia PDF Downloads 82
136 Dynamic Characterization of Shallow Aquifer Groundwater: A Lab-Scale Approach

Authors: Anthony Credoz, Nathalie Nief, Remy Hedacq, Salvador Jordana, Laurent Cazes

Abstract:

Groundwater monitoring is classically performed through a network of piezometers on industrial sites. Groundwater flow parameters, such as direction and velocity, are deduced from indirect measurements between two or more piezometers. Groundwater sampling is generally done over the whole water column inside each borehole to provide concentration values at each piezometer location. These flow and concentration values give a global 'static' image of the potential evolution of a contaminant plume in the shallow aquifer, with huge uncertainties in time and space scales and in mass discharge dynamics. The TOTAL R&D Subsurface Environmental team is challenging this classical approach with an innovative, dynamic way of characterizing shallow aquifer groundwater. The current study aims at optimizing the tools and methodologies for (i) direct, multilevel measurement of groundwater velocities in each piezometer and (ii) calculation of the potential flux of dissolved contaminants in the shallow aquifer. Lab-scale experiments were designed to test commercial and R&D tools in a controlled sandbox. Multiphysics modeling was performed, taking into account the Darcy equation in the porous medium and the Navier-Stokes equation in the borehole. The first step of the study focused on groundwater flow at the porous medium/piezometer interface. Huge uncertainties in direct flow rate measurements in the borehole, compared with the Darcy flow rate in the porous medium, were characterized during experiments and modeling. The structure and location of the tools in the borehole also affected the results and uncertainties of the velocity measurements. In parallel, a direct-push tool was tested and gave more accurate results. The second step of the study focused on the mass flux of dissolved contaminants in groundwater. Several active and passive commercial and R&D tools were tested in the sandbox, and reactive transport modeling was performed to validate the experiments at the lab scale.
Some tools will be selected and deployed in field trials to better assess the mass discharge of dissolved contaminants at an industrial site. The long-term subsurface environmental strategy targets in-situ, real-time, remote, and cost-effective monitoring of groundwater.
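The Darcy-based mass flux calculation underlying the second step can be sketched as follows; this is a minimal, generic illustration with assumed (hypothetical) aquifer parameters, not values or code from the study:

```python
def darcy_mass_flux(K, grad_h, n_e, C):
    """Darcy flux, seepage velocity, and dissolved-contaminant mass flux.

    K      : hydraulic conductivity (m/s)
    grad_h : hydraulic gradient (head loss per unit distance, dimensionless)
    n_e    : effective porosity (-)
    C      : dissolved contaminant concentration (kg/m^3)
    """
    q = K * grad_h   # Darcy (specific) flux, m/s
    v = q / n_e      # average linear (seepage) velocity, m/s
    J = q * C        # mass flux per unit area of control plane, kg/(m^2.s)
    return q, v, J

# Hypothetical sandy aquifer: K = 1e-4 m/s, gradient 0.005,
# effective porosity 0.3, concentration 10 mg/L = 0.010 kg/m^3
q, v, J = darcy_mass_flux(1e-4, 0.005, 0.3, 0.010)
```

Integrating J over a control plane perpendicular to the flow gives the mass discharge (kg/s) that the field deployment aims to assess.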

Keywords: dynamic characterization, groundwater flow, lab-scale, mass flux

Procedia PDF Downloads 167
135 Photovoltaic-Driven Thermochemical Storage for Cooling Applications to Be Integrated in Polynesian Microgrids: Concept and Efficiency Study

Authors: Franco Ferrucci, Driss Stitou, Pascal Ortega, Franck Lucas

Abstract:

The energy situation in tropical insular regions, such as the French Polynesian islands, presents a number of challenges: high dependence on imported fuel, high transport costs from the mainland, and weak electricity grids. On the other hand, these regions have a variety of renewable energy resources, which favor the deployment of smart microgrids and energy storage technologies. With regard to electrical energy demand, the high temperatures in these regions throughout the year imply that a large proportion of consumption is used for cooling buildings, even during the evening hours. In this context, this paper presents an air conditioning system driven by photovoltaic (PV) electricity that combines a refrigeration system with a thermochemical storage process. Thermochemical processes can store energy in the form of chemical potential with virtually no losses, and this energy can be used to produce cooling during the evening hours without running a compressor (thus requiring no electricity). Such storage processes implement thermochemical reactors in which a reversible chemical reaction between a solid compound and a gas takes place. The solid/gas pair used in this study is BaCl2 reacting with ammonia (NH3), which is also the coolant fluid in the refrigeration circuit. In the proposed system, the PV-driven electric compressor is used during the daytime either to run the refrigeration circuit when there is a cooling demand, or to decompose the ammonia-charged salt and remove the gas from the thermochemical reactor when no cooling is needed. During the evening, when no solar electricity is available, the system changes configuration: the reactor reabsorbs ammonia gas from the evaporator and produces the cooling effect. Compared to classical PV-driven air conditioning units equipped with electrochemical batteries (e.g. Pb, Li-ion), the proposed system has the advantage of a novel storage technology with a much longer charge/discharge life cycle and no self-discharge. It also allows continuous operation of the electric compressor during the daytime, avoiding the problems associated with on-off cycling. This work focuses on the system concept and on the efficiency study of its main components. It also compares thermochemical storage with electrochemical storage, as well as with other forms of thermal storage such as latent heat (ice) and sensible heat (chilled water). Preliminary results show that the system is a promising alternative for simultaneously fulfilling cooling and energy storage needs in tropical insular regions.

Keywords: microgrid, solar air-conditioning, solid/gas sorption, thermochemical storage, tropical and insular regions

Procedia PDF Downloads 241
134 Non-Perturbative Vacuum Polarization Effects in One- and Two-Dimensional Supercritical Dirac-Coulomb System

Authors: Andrey Davydov, Konstantin Sveshnikov, Yulia Voronina

Abstract:

There is now much interest in the non-perturbative QED effects caused by the diving of discrete levels into the negative continuum in supercritical static, or adiabatically slowly varying, Coulomb fields created by localized extended sources with Z > Z_cr. Such effects have attracted considerable theoretical and experimental activity, since in 3+1 QED for Z > Z_cr,1 ≈ 170 a non-perturbative reconstruction of the vacuum state is predicted, which should be accompanied by a number of nontrivial effects, including vacuum positron emission. Essentially similar effects should also be expected in both 2+1 D (planar graphene-based heterostructures) and 1+1 D (the one-dimensional 'hydrogen ion'). This report is devoted to the study of such essentially non-perturbative vacuum effects for supercritical Dirac-Coulomb systems in 1+1 D and 2+1 D, with the main attention drawn to the vacuum polarization energy. Although most works consider the vacuum charge density as the main polarization observable, the vacuum energy turns out to be no less informative and in many respects complementary to the vacuum density. Moreover, the main non-perturbative effects, which appear in the vacuum polarization for supercritical fields due to levels diving into the lower continuum, show up even more clearly in the behavior of the vacuum energy, demonstrating explicitly their possible role in the supercritical region. Both in 1+1 D and 2+1 D, we first explore the renormalized vacuum density in the supercritical region using the Wichmann-Kroll method. Thereafter, taking into account the results for the vacuum density, we formulate the renormalization procedure for the vacuum energy. To evaluate the latter explicitly, an original technique based on a special combination of analytical methods, computer algebra tools, and numerical calculations is applied.
It is shown that, for a wide range of the external source parameters (the charge Z and size R), in the supercritical region the renormalized vacuum energy can deviate significantly from the perturbative quadratic growth, up to a pronouncedly decreasing behavior with jumps of (-2 x mc^2), which occur each time the next discrete level dives into the negative continuum. In the considered range of Z and R, the vacuum energy behaves like ~ -Z^2/R in 1+1 D and ~ -Z^3/R in 2+1 D, reaching deeply negative values. Such behavior confirms the assumption of the transmutation of the neutral vacuum into a charged one, and thereby of the spontaneous positron emission accompanying the emergence of the next vacuum shell, due to total charge conservation. Finally, we note that the methods developed for the vacuum energy evaluation in 2+1 D could, with minimal complements, be carried over to the three-dimensional case, where the vacuum energy is expected to be ~ -Z^4/R and so could compete with the classical electrostatic energy of the Coulomb source.

Keywords: non-perturbative QED-effects, one- and two-dimensional Dirac-Coulomb systems, supercritical fields, vacuum polarization

Procedia PDF Downloads 202
133 Biosensor for Determination of Immunoglobulin A, E, G and M

Authors: Umut Kokbas, Mustafa Nisari

Abstract:

Immunoglobulins, also known as antibodies, are glycoprotein molecules produced by plasma cells, which differentiate from activated B cells. Antibodies are critical molecules of the immune response, helping the immune system specifically recognize and destroy antigens such as bacteria, viruses, and toxins. Immunoglobulin classes differ in their biological properties, structures, targets, functions, and distributions. Five major classes of antibodies have been identified in mammals: IgA, IgD, IgE, IgG, and IgM. Evaluation of the immunoglobulin isotype can provide useful insight into the complex humoral immune response. Knowledge of immunoglobulin structure and classes is also important for the selection and preparation of antibodies for immunoassays and other detection applications. The immunoglobulin test measures the level of certain immunoglobulins in the blood. IgA, IgG, and IgM are usually measured together, and in this way they can provide doctors with important information, especially regarding immune deficiency diseases. Hypogammaglobulinemia (HGG) is one of the main groups of primary immunodeficiency disorders. HGG is caused by various defects in B cell lineage or function that result in low levels of immunoglobulins in the bloodstream. This affects the body's immune response, causing a wide range of clinical features, from asymptomatic disease to severe and recurrent infections, chronic inflammation, and autoimmunity. Transient hypogammaglobulinemia of infancy (THGI), IgM deficiency (IgMD), Bruton agammaglobulinemia, and IgA deficiency (SIgAD) are a few examples of HGG. Most patients can continue their normal lives by taking prophylactic antibiotics, but patients with severe infections require intravenous immune serum globulin (IVIG) therapy. The IgE level may rise to fight off parasitic infections, and may also be a sign that the body is overreacting to allergens.
Also, since the immune response can vary with different antigens, measuring specific antibody levels aids in the interpretation of the immune response after immunization or vaccination. Immune deficiencies usually appear in childhood. In immunology and allergy clinics, a method that is fast, reliable, and allows convenient, uncomplicated sampling from children would be particularly useful, beyond the classical methods, for the diagnosis and follow-up of diseases such as childhood hypogammaglobulinemia. In this work, the antibodies were attached to the electrode surface via a poly(hydroxyethyl methacrylamide)-cysteine nanopolymer, and the anodic peak currents obtained in the electrochemical study were evaluated. According to the data obtained, immunoglobulin determination can be made with a biosensor. In further studies, it will be useful to develop a medical diagnostic kit through biomedical engineering and to increase its sensitivity.

Keywords: biosensor, immunosensor, immunoglobulin, infection

Procedia PDF Downloads 109
132 Comprehensive Analysis of Electrohysterography Signal Features in Term and Preterm Labor

Authors: Zhihui Liu, Dongmei Hao, Qian Qiu, Yang An, Lin Yang, Song Zhang, Yimin Yang, Xuwen Li, Dingchang Zheng

Abstract:

Premature birth, defined as birth before 37 completed weeks of gestation, is a leading cause of neonatal morbidity and mortality and has long-term adverse consequences for health. It has recently been reported that the worldwide preterm birth rate is around 10%. The existing measurement techniques for diagnosing preterm delivery include the tocodynamometer, ultrasound, and fetal fibronectin. However, these are subjective, or suffer from high measurement variability and inaccurate diagnosis and prediction of preterm labor. Electrohysterography (EHG), based on recording uterine electrical activity with electrodes attached to the maternal abdomen, is a promising method to assess uterine activity and diagnose preterm labor. The purpose of this study is to analyze the differences in EHG signal features between term and preterm labor. A free-access database was used, with 300 signals acquired in two groups of pregnant women who delivered at term (262 cases) and preterm (38 cases). Among them, EHG signals from 38 term and 38 preterm labors were preprocessed with band-pass Butterworth filters of 0.08-4 Hz. EHG signal features were then extracted, comprising classical time-domain descriptors including root mean square and zero-crossing number; spectral parameters including peak frequency, mean frequency, and median frequency; wavelet packet coefficients; autoregressive (AR) model coefficients; and nonlinear measures including the maximal Lyapunov exponent, sample entropy, and correlation dimension. The statistical significance of each feature for distinguishing the two groups of recordings was assessed. The results showed that the mean frequency of preterm labor was significantly smaller than that of term labor (p < 0.05). Five AR model coefficients showed significant differences between term and preterm labor. The maximal Lyapunov exponent of early preterm recordings (time of recording < the 26th week of gestation) was significantly smaller than that of early term recordings.
The sample entropy of late preterm recordings (time of recording > the 26th week of gestation) was significantly smaller than that of late term recordings. There was no significant difference in the other features between the term and preterm labor groups. Any future work on classification should therefore focus on combining multiple techniques, with the mean frequency, AR coefficients, maximal Lyapunov exponent, and sample entropy being among the prime candidates. Even if these methods are not yet ready for clinical practice, they provide the most promising indicators for preterm labor.
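Several of the classical features listed above (RMS, zero-crossing number, and the spectral parameters) are straightforward to compute from a filtered signal; the sketch below is a minimal, generic NumPy illustration, not the authors' code, and the 0.5 Hz test tone is purely synthetic:

```python
import numpy as np

def ehg_features(x, fs):
    """A few classical EHG features for a 1-D signal x sampled at fs Hz.

    Assumes x has already been band-pass filtered (e.g. 0.08-4 Hz).
    """
    x = np.asarray(x, dtype=float)
    rms = np.sqrt(np.mean(x ** 2))                        # root mean square
    zero_crossings = int(np.sum(np.abs(np.diff(np.sign(x))) > 0))
    psd = np.abs(np.fft.rfft(x)) ** 2                     # one-sided power spectrum
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    psd, freqs = psd[1:], freqs[1:]                       # drop the DC bin
    peak_freq = freqs[np.argmax(psd)]
    mean_freq = np.sum(freqs * psd) / np.sum(psd)         # spectral centroid
    cdf = np.cumsum(psd) / np.sum(psd)
    median_freq = freqs[np.searchsorted(cdf, 0.5)]        # 50% power frequency
    return {"rms": rms, "zero_crossings": zero_crossings,
            "peak_freq": peak_freq, "mean_freq": mean_freq,
            "median_freq": median_freq}

# Synthetic check: a 0.5 Hz sine sampled at 20 Hz for 60 s
fs = 20.0
t = np.arange(0, 60, 1.0 / fs)
feats = ehg_features(np.sin(2 * np.pi * 0.5 * t), fs)
```

For the pure sine, peak, mean, and median frequency all land on 0.5 Hz and the RMS is 1/sqrt(2), which makes the helper easy to sanity-check before applying it to real recordings.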

Keywords: electrohysterogram, feature, preterm labor, term labor

Procedia PDF Downloads 571
131 Ultra-Tightly Coupled GNSS/INS Based on High Degree Cubature Kalman Filtering

Authors: Hamza Benzerrouk, Alexander Nebylov

Abstract:

In classical GNSS/INS integration designs, the loosely coupled approach uses the GNSS-derived position and velocity as the measurement vector. This design is suboptimal from the standpoint of handling GNSS outliers and outages. The tightly coupled GNSS/INS navigation filter mixes the GNSS pseudorange and inertial measurements and obtains the vehicle navigation state as the final navigation solution. The ultra-tightly coupled GNSS/INS design combines the I (in-phase) and Q (quadrature) accumulator outputs of the GNSS receiver signal tracking loops and the INS navigation filter function into a single Kalman filter variant (EKF, UKF, SPKF, CKF, or HCKF). The EKF and UKF are the most used nonlinear filters in the literature and are well adapted to inertial navigation state estimation when integrated with GNSS signal outputs. In this paper, it is proposed to move a step forward with more accurate filters and modern approaches, namely Cubature and High-Degree Cubature Kalman Filtering. Building on previous results for INS/GNSS state estimation, the Cubature Kalman Filter (CKF) and the High-Degree Cubature Kalman Filter (HCKF) serve as references for the recently developed Generalized Cubature rule based Kalman Filter (GCKF). High-degree cubature rules are the kernel of the new solution, offering more accurate estimation with less computational complexity than the Gauss-Hermite Quadrature Kalman Filter (GHQKF), which is not selected in this work because of its limited real-time applicability in high-dimensional state spaces. In the ultra-tightly (deeply) coupled GNSS/INS system, an EKF with transition matrix factorization is used together with GNSS block processing, which is described in the paper and assumes the intermediate frequency (IF) is available from correlator samples at a rate of 500 Hz.
GNSS (GPS+GLONASS) measurements are assumed available, and the modern SPKF and Cubature Kalman Filter (CKF) are compared with new high-order CKF versions based on spherical-radial cubature rules developed here at the fifth order. The estimation accuracy of the high-degree CKF is expected to be comparable to that of the GHKF; state estimation results are observed and discussed for different initialization parameters. Results show more accurate navigation state estimation and a more robust GNSS receiver when the ultra-tightly coupled approach is applied based on the High-Degree Cubature Kalman Filter.
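For readers unfamiliar with the cubature rules discussed here, the degree-3 spherical-radial rule at the heart of the CKF approximates Gaussian-weighted integrals with 2n equally weighted points. The sketch below is a generic illustration of that rule, not the authors' GCKF/HCKF implementation:

```python
import numpy as np

def cubature_points(mean, cov):
    """Degree-3 spherical-radial cubature points for N(mean, cov):
    2n points at mean +/- sqrt(n) * S e_i, where S is a Cholesky factor
    of cov; each point carries equal weight 1/(2n)."""
    n = len(mean)
    S = np.linalg.cholesky(cov)
    offsets = np.hstack([np.sqrt(n) * S, -np.sqrt(n) * S])  # columns = offsets
    return mean[:, None] + offsets                          # shape (n, 2n)

def cubature_expectation(f, mean, cov):
    """Approximate E[f(x)] for x ~ N(mean, cov).
    The degree-3 rule is exact for polynomials up to degree 3."""
    X = cubature_points(mean, cov)
    return np.mean([f(X[:, i]) for i in range(X.shape[1])], axis=0)

# Sanity check in 2-D: E[x0^2] for x ~ N(m, P) is m0^2 + P[0,0] = 1 + 2 = 3
m = np.array([1.0, 2.0])
P = np.array([[2.0, 0.3], [0.3, 1.0]])
est = cubature_expectation(lambda x: x[0] ** 2, m, P)
```

Higher-degree (e.g. fifth-order) spherical-radial rules add further point sets with non-uniform weights, trading a few more function evaluations for exactness on higher-degree polynomials, which is the accuracy gain the HCKF/GCKF exploits.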

Keywords: GNSS, INS, Kalman filtering, ultra tight integration

Procedia PDF Downloads 283
130 Contribution at Dimensioning of the Energy Dissipation Basin

Authors: M. Aouimeur

Abstract:

The environmental risks of a dam, and particularly the safety of the valley downstream of it, constitute a very complex problem. Integrated management and risk-sharing are becoming more and more indispensable. Defining the concept of 'vulnerability' can help in assessing the efficiency of protective measures and in characterizing each valley with respect to flood risk. Security can be enhanced through integrated land management, and the social sciences may be associated with the operational systems of civil protection, in particular warning networks. The passage of extreme floods at the dam site can cause the rupture of the structure and important damage downstream. The river bed can be damaged by erosion if it is not well protected, and scouring and flooding problems may be encountered in the area downstream of the dam. Therefore, the protection of the dam is crucial: it must have an energy dissipator in a specific place. The dissipation basin plays a very important role in the security of the dam and the protection of the environment against floods downstream. It dissipates the potential energy created by the dam as the extreme flood passes over the weir, naturally and safely regulates the discharge or the elevation of the water surface at the crest of the weir, and reduces the flow velocity downstream of the dam to match that of the river bed. The problem in dimensioning a classical dissipation basin is determining the parameters necessary for sizing this structure. This communication presents a simple graphical method, fast and complete, and a methodology that determines the main features of the hydraulic jump, the parameters necessary for sizing the classical dissipation basin.
This graphical method takes into account the constraints imposed by the reality of the terrain and by practice, such as those related to the topography of the site, the preservation of the environmental equilibrium, and technical and economic considerations. The methodology is to impose the head loss DH dissipated by the hydraulic jump as a hypothesis (free design) in order to determine all the other parameters of the classical dissipation basin. The imposed head loss DH can be set equal to a selected value or to a certain percentage of the total upstream head created by the dam. With the dimensionless parameter DH+ = DH/k (k: critical depth), the elaborated graphical representation allows the other dimensionless parameters to be found; multiplying these parameters by k gives the main characteristics of the hydraulic jump, the parameters necessary for dimensioning the classical dissipation basin. This solution is often preferred for sizing the dissipation basins of small concrete dams. Verification of the results and their comparison with practical data confirm the validity and reliability of the elaborated graphical method.
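The main characteristics of the hydraulic jump referred to above follow from standard rectangular-channel relations; the sketch below uses the generic textbook formulas with an assumed numerical example, not the paper's graphical method:

```python
import math

def hydraulic_jump(y1, v1, g=9.81):
    """Classical hydraulic jump in a rectangular channel.

    y1 : supercritical upstream depth (m)
    v1 : upstream velocity (m/s)
    Returns the upstream Froude number, the sequent depth y2
    (Belanger equation), and the head loss DH dissipated by the jump,
    i.e. the quantity the methodology imposes as a design hypothesis.
    """
    fr1 = v1 / math.sqrt(g * y1)                              # upstream Froude number
    y2 = 0.5 * y1 * (math.sqrt(1.0 + 8.0 * fr1 ** 2) - 1.0)   # sequent depth
    dh = (y2 - y1) ** 3 / (4.0 * y1 * y2)                     # head loss DH (m)
    return fr1, y2, dh

# Assumed example: 0.5 m deep, 8 m/s supercritical inflow to the basin
fr1, y2, dh = hydraulic_jump(y1=0.5, v1=8.0)
```

In the paper's method the logic runs in reverse: DH (or DH+ = DH/k) is imposed first, and the graphical chart then yields the remaining dimensionless jump parameters.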

Keywords: dimensioning, energy dissipation basin, hydraulic jump, protection of the environment

Procedia PDF Downloads 584
129 Ecological and Historical Components of the Cultural Code of the City of Florence as Part of the Edutainment Project Velonotte International

Authors: Natalia Zhabo, Sergey Nikitin, Marina Avdonina, Mariya Nikitina

Abstract:

This paper analyzes one event of the international educational and entertainment project Velonotte: an evening bicycle tour with children around Florence. The aim of the project is to develop methods and techniques for increasing the sensitivity of the cycling participants, and of listeners of the radio broadcasts, to the treasures of the national heritage; in this case, to the historical layers of the city and the ecology of the Renaissance epoch. The block of educational tasks is considered, and the issues of preserving the identity of the city are discussed. Methods: The Florentine event was prepared over more than a year. First, the creative team selected events from the city's history that seemed important for revealing the specifics of the city and its spirit, from antiquity to our days, drawing also on Internet forums reflecting broad public opinion. Then a seven-kilometer route was developed and proposed to the authorities and organizations of the city. Speakers were selected according to several criteria: they should be authors of books, famous scientists, or connoisseurs in a certain sphere (toponymy, history of urban gardens, art history), capable and willing to talk with participants directly at the stopping points, so that a dialogue could take place and performances could be organized with their participation. Music was chosen for each part of the itinerary to prepare the audience emotionally. Coloring cards with images of the main content of each stop were created for children. A website was created to inform the participants and to archive photos, videos, and audio files of the speakers' talks afterward. Results: Held in April 2017, the event was dedicated to the 640th anniversary of Filippo Brunelleschi, the Florentine architect, and to the 190th anniversary of the publication of Stendhal's guide to Florence. It was supported by the City of Florence and the Florence Bike Festival.
Florence was explored to transmit traditional elements of culture, some unfairly forgotten, from ancient times through Brunelleschi and Michelangelo to Tchaikovsky and David Bowie, with lectures by university professors. Memorable art boards were installed in public spaces. Elements of the cultural code are deeply internalized in the minds of the townspeople; the perception of the city in everyday life and human communication is comparable to such fundamental concepts of the townspeople's self-awareness as mental comfort and the level of happiness. The format of a fun and playful walk with ICT support gives new opportunities for enriching each citizen's cultural code of the city with new components, associations, and connotations.

Keywords: edutainment, cultural code, cycling, sensitization, Florence

Procedia PDF Downloads 221
128 Expanding the Atelier: Design Lead Academic Project Using Immersive User-Generated Mobile Images and Augmented Reality

Authors: David Sinfield, Thomas Cochrane, Marcos Steagall

Abstract:

While there is much hype around the potential and development of mobile virtual reality (VR), the two key critical success factors are the ease of user experience and the development of a simple user-generated content ecosystem. Educational technology history is littered with the debris of over-hyped revolutionary new technologies that failed to gain mainstream adoption or were quickly superseded. Examples include 3D television, interactive CDROMs, Second Life, and Google Glasses. However, we argue that this is the result of curriculum design that substitutes new technologies into pre-existing pedagogical strategies that are focused upon teacher-delivered content rather than exploring new pedagogical strategies that enable student-determined learning or heutagogy. Visual Communication design based learning such as Graphic Design, Illustration, Photography and Design process is heavily based on the traditional forms of the classroom environment whereby student interaction takes place both at peer level and indeed teacher based feedback. In doing so, this makes for a healthy creative learning environment, but does raise other issue in terms of student to teacher learning ratios and reduced contact time. Such issues arise when students are away from the classroom and cannot interact with their peers and teachers and thus we see a decline in creative work from the student. Using AR and VR as a means of stimulating the students and to think beyond the limitation of the studio based classroom this paper will discuss the outcomes of a student project considering the virtual classroom and the techniques involved. The Atelier learning environment is especially suited to the Visual Communication model as it deals with the creative processing of ideas that needs to be shared in a collaborative manner. 
This has proven to be a successful model over the years in the traditional form of design education, but thinking has more recently shifted as we move into a more digital model of learning and away from the classical classroom structure. This study focuses on the outcomes of a student design project that employed Augmented Reality and Virtual Reality technologies in order to expand the dimensions of the classroom beyond its physical limits. Augmented Reality, when integrated into the learning experience, can improve the learning motivation and engagement of students. This paper will outline some of the processes used and the findings from the semester-long project that took place.

Keywords: augmented reality, blogging, design in community, enhanced learning and teaching, graphic design, new technologies, virtual reality, visual communications

Procedia PDF Downloads 239
127 The 4th Critical R: Conceptualising the Development of Resilience as an Addition to the 3 Rs of the Essential Education Curricula

Authors: Akhentoolove Corbin, Leta De Jonge, Charmaine De Jonge

Abstract:

Introduction: Various writers have promoted the adoption of a 4th R in the education curricula (relationships, respect, reasoning, religion, computing, science, art, conflict management, music) and a 5th R (responsibility). They argue that the traditional 3 Rs are not adequate for the modern environment and for the requirement that students become functional citizens in society. In particular, the developing countries of the anglophone Caribbean (most of which are tiny islands) are susceptible to the dangers and complexities of climate change and global economic volatility. These proposed additions to the 3 Rs do have some justification, but this research considers Resilience as even more important and relevant in a world that is faced with the negative prospects of climate change, poverty, discrimination, and economic volatility. It is argued that the foundation for resilient citizens, workers, and workplaces must be built in the elementary and secondary/middle schools and then through the tertiary level, to achieve an outcome of more resilient students. Government, business, and society require widespread resilience to be capable of ‘bouncing back’ and to be more adaptable, transformational, and sustainable. Methodology: The paper utilises a mixed-methods approach incorporating a questionnaire and interviews to determine participants’ opinions on the importance and relevance of resilience in the schools’ curricula and to government, business, and society. The target groups are as follows: educators at all levels, education administrators, and members of the business sector, public sector, and 3rd sector. The research specifically targets the anglophone Caribbean developing countries (Barbados, Guyana, Jamaica, Trinidad, St. Lucia, and St. Vincent and the Grenadines). The research utilises SPSS for data analysis. 
Major Findings: The preliminary findings suggest that the majority of participants support the adoption of resilience as a 4th R in the curricula of the elementary, secondary/middle schools, and tertiary level in the anglophone Caribbean. The final results will allow the researchers to reveal more specific details on any variations among the islands in the sample and to engage in an in-depth discussion of the relevance and importance of resilience as the 4th R. Conclusion: Results seem to suggest that the education system should adopt the 4th R of resilience so that educators, working in collaboration with the family and community/village, can develop young citizens who are more resilient and capable of manifesting the behaviours and attitudes associated with ‘bouncing back,’ adaptability, transformation, and sustainability. These findings may be useful for education decision-makers and governments in these Caribbean islands, who have the authority and responsibility for the development of education policy, laws, and regulations.

Keywords: education, resilient students, adaptable, transformational, resilient citizens, workplaces, government

Procedia PDF Downloads 70
126 Sensory Interventions for Dementia: A Review

Authors: Leigh G. Hayden, Susan E. Shepley, Cristina Passarelli, William Tingo

Abstract:

Introduction: Sensory interventions are popular therapeutic and recreational approaches for people living with all stages of dementia. However, it is unknown which sensory interventions are used to achieve which outcomes across all subtypes of dementia. Methods: To address this gap, we conducted a scoping review of sensory interventions for people living with dementia. We searched the literature for any article published in English from 1 January 1990 to 1 June 2019 on any sensory or multisensory intervention targeted at people living with any kind of dementia that reported on patient health outcomes. We did not include complex interventions where only a small aspect was related to sensory stimulation. We searched the databases Medline, CINAHL, and Psych Articles using our institutional discovery layer. We conducted all screening in duplicate to reduce Type 1 and Type 2 errors. The data from all included papers were extracted by one team member and audited by another, to ensure consistency of extraction and completeness of data. Results: Our initial search captured 7654 articles; the removal of duplicates (n=5329), those that did not pass title and abstract screening (n=1840), and those that did not pass full-text screening (n=281) resulted in 174 included articles. The countries with the highest publication output in this area were the United States (n=59), the United Kingdom (n=26) and Australia (n=15). The most common types of interventions were music therapy (n=36), multisensory rooms (n=27) and multisensory therapies (n=25). Seven articles were published in the 1990s, 55 in the 2000s, and the remainder since 2010 (n=112). Discussion: Multisensory rooms have been present in the literature since the early 1990s. More recently, however, nature/garden therapy, art therapy, and light therapy have emerged in the literature since 2008, an indication of the increasingly diverse scholarship in the area. 
The least popular type of intervention is a traditional food intervention. Taste as a sensory intervention is generally avoided for safety reasons; however, it shows potential for increasing quality of life. Agitation, behavior, and mood are common outcomes for all sensory interventions, whereas light therapy commonly targets sleep. The majority (n=110) of studies have very small sample sizes (n=20 or less), an indicator of the lack of robust data in the field. Additional small-scale studies of the known sensory interventions will likely do little to advance the field. However, there is a need for multi-armed studies which directly compare sensory interventions, and for more studies which investigate the layering of sensory interventions (for example, adding an aromatherapy component to a lighting intervention). In addition, large-scale studies which enroll people at early stages of dementia will help us better understand the potential of sensory and multisensory interventions to slow the progression of the disease.

Keywords: sensory interventions, dementia, scoping review

Procedia PDF Downloads 135
125 The Cultural Shift in Pre-owned Fashion as Sustainable Consumerism in Vietnam

Authors: Lam Hong Lan

Abstract:

The textile industry is said to be the second-largest polluter, responsible for 92 million tonnes of waste annually. There is an urgent need to practice the circular economy to increase use and reuse around the world. By its nature, the pre-owned fashion business is considered part of the circular economy, as it helps to eliminate waste and keep products in circulation. Second-hand clothes and accessories used to be associated with a ‘cheap image’ that carried ‘old energy’ in Vietnam. This perception has shifted, especially amongst the younger generation. Vietnamese consumers are spending more on products and services that increase self-esteem. The same consumers are moving away from a collectivist social identity towards a ‘me, not we’ outlook as they look for ways to express their individual identity. Pre-owned fashion is one of their solutions, as it offers value for money, can create a unique personal style for the wearer, and links with sustainability. The design of this study is based on second-hand shopping motivation theory. A semi-structured online survey was conducted with 100 consumers from one pre-owned clothing community and one pre-owned e-commerce site in Vietnam. The findings show that, in contrast with older Vietnamese consumers (55+ yo), who in a previous study generally associated pre-owned fashion with a ‘low-cost’, ‘cheap image’ that carried ‘old energy’, young customers (20-30 yo) actively promoted their pre-owned fashion items to the public via outlets’ social platforms and their own social media. This cultural shift stems from the impact of global and local discourse around sustainable fashion and from the growth of digital platforms in the pre-owned fashion business in the last five years, which has generally supported wider interest in pre-owned fashion in Vietnam. It can be summarised in three areas: (1) global and local celebrity influencers. A number of celebrities have been photographed wearing vintage items in music videos, photoshoots or at red carpet events. 
(2) E-commerce and intermediaries. International e-commerce sites – e.g., Vinted, TheRealReal – and/or local apps – e.g., Re.Loved – can influence attitudes and behaviors towards pre-owned consumption. (3) Eco-awareness. The increased online coverage of climate change and environmental pollution has encouraged customers to adopt a more eco-friendly approach to their wardrobes. While sustainable biomaterials and designs are still navigating their way into sustainability, sustainable consumerism via pre-owned fashion seems to be an immediate solution to lengthen the clothes lifecycle. This study has found that young consumers are primarily seeking value for money and/or a unique personal style from pre-owned/vintage fashion while using these purchases to promote their own “eco-awareness” via their social media networks. This is a good indication for fashion designers to keep in mind in their design process and for fashion enterprises in their business model’s choice to not overproduce fashion items.

Keywords: cultural shift, pre-owned fashion, sustainable consumption, sustainable fashion

Procedia PDF Downloads 84
124 Shiite and Secular Approaches to Gender Minorities: A Comparative Study of Iran, Turkey, and Germany

Authors: Morteza Azimi

Abstract:

The demand for recognition among LGBTQIA+ groups has grown significantly in modern times, particularly since the second half of the twentieth century, when human rights discourse became increasingly prominent, especially in the West. In contrast, the classic readings of the Quran and Hadith, whose roots lie in pre-modern times, and Shiite Fiqh (Islamic jurisprudence) seem not to have been updated to respond to the need for recognition of gender minority identities. Moreover, the recognition of such minority identities within Shiite Islam and its intersection with secular frameworks remains an underexplored topic. This paper explores what Islamic texts, such as the Quran, Hadith, and Shiite Fiqh, say regarding the recognition and rights of gender minorities. It further examines the Islamic Republic of Iran as an example of a dominant Shiite political system, comparing it with Turkey and Germany as secular models. While Turkey, a secular state, is deeply influenced by its predominantly Muslim population and culture, Germany represents a Western model characterized by the widespread recognition of LGBTQIA+ rights. The rationale for this comparative approach lies in understanding how different political systems influence the recognition of gender minorities. Moreover, the study investigates whether Shiite Islamic frameworks can provide solutions to these demands or whether secular systems, as exemplified by Turkey and Germany, are more effective in addressing the issues of gender minorities. Hence, this study offers a novel perspective by juxtaposing Shiite Islamic textual interpretations with secular legal frameworks to explore the evolving recognition of gender minorities, demonstrating how varying political and cultural contexts shape the lived experiences of LGBTQIA+ individuals in Iran, Turkey, and Germany. This research relies on secondary literature as the primary data source, especially regarding the issue of gender in Shiite Islamic texts. 
The author employs a comparative textual analysis of Shiite Islamic texts (e.g., Quran, Hadith, and Fiqh) and secular legal frameworks in Turkey and Germany to explore how different systems address the recognition of gender minorities. Findings reveal that classical interpretations of Islamic texts and Shiite Fiqh employed by the Islamic Republic of Iran fail to provide laws and frameworks that recognize LGBTQIA+ identities. This gap contributes to the marginalization of gender minority identities, fostering environments of suppression, violence, and exclusion. The findings of this study could inform policymaking and advocacy efforts by shedding light on the necessity of a change toward inclusive legal and cultural frameworks for gender minorities in Muslim countries like Iran.

Keywords: gender minorities, LGBTQIA+ recognition, shiite islam, comparative analysis

Procedia PDF Downloads 7
123 Model-Driven and Data-Driven Approaches for Crop Yield Prediction: Analysis and Comparison

Authors: Xiangtuo Chen, Paul-Henry Cournéde

Abstract:

Crop yield prediction is a paramount issue in agriculture. The main idea of this paper is to find an efficient way to predict corn yield based on meteorological records. The prediction models used in this paper can be classified into model-driven approaches and data-driven approaches, according to their different modeling methodologies. The model-driven approaches are based on mechanistic crop modeling. They describe crop growth in interaction with the environment as dynamical systems. However, the calibration of such dynamical systems is difficult, because it turns out to be a multidimensional non-convex optimization problem. An original contribution of this paper is to propose a statistical methodology, Multi-Scenarios Parameters Estimation (MSPE), for the parametrization of potentially complex mechanistic models from a new type of dataset (climatic data and final yields in many situations). It is tested with CORNFLO, a crop model for maize growth. On the other hand, the data-driven approach to yield prediction is free of the complex biophysical processes, but it has strict requirements on the dataset. A second contribution of the paper is the comparison of these model-driven methods with classical data-driven methods. For this purpose, we consider two classes of regression methods: methods derived from linear regression (Ridge and Lasso Regression, Principal Components Regression, and Partial Least Squares Regression) and machine learning methods (Random Forest, k-Nearest Neighbors, Artificial Neural Networks, and SVM regression). The dataset consists of 720 records of corn yield at county scale provided by the United States Department of Agriculture (USDA) and the associated climatic data. A 5-fold cross-validation process and two accuracy metrics, the root mean square error of prediction (RMSEP) and the mean absolute error of prediction (MAEP), were used to evaluate crop prediction capacity. 
The results show that among the data-driven approaches, Random Forest is the most robust and generally achieves the best prediction error (MAEP 4.27%). It also outperforms our model-driven approach (MAEP 6.11%). However, the ability to calibrate the mechanistic model from easily accessible datasets offers several side perspectives. The mechanistic model can potentially help to highlight the stresses suffered by the crop or to identify the biological parameters of interest for breeding purposes. For this reason, an interesting perspective is to combine these two types of approaches.
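The cross-validation protocol described above can be sketched as follows. This is an illustrative setup only: the USDA records, climate features, and model hyperparameters are not available here, so synthetic data stands in for the real dataset and the scores will not match the paper's.

```python
# Sketch of the data-driven comparison: 5-fold cross-validation of several
# regressors on synthetic "climate features -> yield" records.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge, Lasso
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import KFold, cross_val_predict
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(0)
n = 720  # same record count as the county-scale dataset described above
X = rng.normal(size=(n, 6))  # stand-in climate predictors (illustrative)
y = 80 + 5 * X[:, 0] - 3 * X[:, 1] ** 2 + rng.normal(scale=2, size=n)

cv = KFold(n_splits=5, shuffle=True, random_state=0)
models = {
    "Ridge": Ridge(alpha=1.0),
    "Lasso": Lasso(alpha=0.1),
    "kNN": KNeighborsRegressor(n_neighbors=10),
    "RandomForest": RandomForestRegressor(n_estimators=200, random_state=0),
}
for name, model in models.items():
    pred = cross_val_predict(model, X, y, cv=cv)
    rmsep = np.sqrt(mean_squared_error(y, pred))       # RMSEP
    maep = 100 * mean_absolute_error(y, pred) / y.mean()  # MAEP as a percentage
    print(f"{name:12s} RMSEP={rmsep:.2f}  MAEP={maep:.2f}%")
```

On tabular data with non-linear effects like the quadratic term above, Random Forest tends to come out ahead of the linear baselines, consistent with the robustness reported in the abstract.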

Keywords: crop yield prediction, crop model, sensitivity analysis, parameter estimation, particle swarm optimization, random forest

Procedia PDF Downloads 232
122 Multimodal Rhetoric in the Wildlife Documentary, “My Octopus Teacher”

Authors: Visvaganthie Moodley

Abstract:

While rhetoric goes back as far as Aristotle, who focalised its meaning as the “art of persuasion”, most scholars have focused on the elocutio and dispositio canons, neglecting the rhetorical impact of multimodal texts such as documentaries. Film documentaries are increasingly rhetorical and are often used by wildlife conservationists to influence people to become more mindful of humanity’s connection with nature. This paper examines the award-winning film documentary “My Octopus Teacher”, which depicts naturalist Craig Foster’s unique discovery of, and relationship with, a female octopus at the southern tip of Africa, the Cape of Storms in South Africa. It is anchored in Leech and Short’s (2007) framework of linguistic and stylistic categories – comprising lexical items, grammatical features, figures of speech and other rhetorical features, and cohesiveness – with particular foci on diction, anthropomorphic language, metaphors and symbolism. It also draws on Kress and van Leeuwen’s (2006) multimodal analysis to show how verbal cues (the narrator’s commentary), visual images in motion, visual images as metaphors and symbolism, and aural sensory images such as music and sound synergise for rhetorical effect. In addition, the analysis of “My Octopus Teacher” is guided by Nichols’ (2010) narrative theory; by the features of a documentary, which foreground the credibility of the narrative as a text that represents real events with real people; and by its modes of construction, viz., the poetic mode, the expository mode, the observational mode and the participatory mode, and their integration – forging documentaries as multimodal texts. This paper presents a multimodal rhetorical discussion of the sequence of salient episodes captured in the slow-moving one-and-a-half-hour documentary. 
These are: (i) The prologue: on the brink of something extraordinary; (ii) The day it all started; (iii) The narrator’s turmoil: getting back into the ocean; (iv) The incredible encounter with the octopus; (v) Establishing a relationship; (vi) Outwitting the predatory pyjama shark; (vii) The cycle of life; and (viii) The conclusion: lessons from an octopus. The paper argues that wildlife documentaries, characterized by plausibility and offering researchers a lens through which to examine ideologies about animals and humans, provide an assimilation of the various senses – vocal, visual and aural – that engages viewers in a stylized, compelling way; they have the ability to persuade people to think and act in particular ways. As multimodal texts, with their use of lexical items; diction; anthropomorphic language; linguistic, visual and aural metaphors and symbolism; and depictions of anthropocentrism, wildlife documentaries are powerful resources for promoting wildlife conservation and conscientizing people about the need to establish a harmonious relationship with nature and humans alike.

Keywords: documentaries, multimodality, rhetoric, style, wildlife, conservation

Procedia PDF Downloads 95
121 The Roman Fora in North Africa: Towards a Supportive Protocol for the Decision on Morphological Restitution

Authors: Dhouha Laribi Galalou, Najla Allani Bouhoula, Atef Hammouda

Abstract:

This research delves into the fundamental question of the morphological restitution of built archaeology, in order to place it in its paradigmatic context and to seek answers to it. Indeed, the understanding of the object of study, its analysis, and the methodology for solving the morphological problem posed are manageable only by means of a thoughtful strategy that draws on well-defined epistemological scaffolding. In this stream, the crisis of natural reasoning in archaeology has generated multiple changes in the field, ranging from the use of new tools to the integration of archaeological information systems, where urbanization involves the interplay of several disciplines. The built archaeological topic is also an architectural and morphological object: a set of articulated elementary data, the understanding of which can be approached from a logicist point of view. Morphological restitution is no exception to the rule, and the interchange between the different disciplines uses the capacity of each to frame the reflection on the incomplete elements of a given architecture, or on its different phases and multiple states of existence. The logicist sequence is furnished by the set of scattered or destroyed elements found, but also by what can be called a rule base, which contains the set of rules for the architectural construction of the object. The knowledge base, built from the archaeological literature, also provides a reference that enters into the game of searching for forms and articulations. The choice of the Roman Forum in North Africa is justified by the great urban and architectural characteristics of this entity. The research on the forum involves a fairly large knowledge base but also provides the researcher with material to study - from a morphological and architectural point of view - starting from the scale of the city down to the architectural detail. 
The experimentation with the knowledge deduced at the paradigmatic level, as well as the deduction of an analysis model, is then carried out on the basis of a well-defined context, which grounds the experimentation in the elaboration of a morphological information container attached to the rule base and the knowledge base. The use of logicist analysis and artificial intelligence has allowed us first to question the aspects already known, in order to measure the credibility of our system, which remains above all a decision-support tool for the morphological restitution of the Roman Fora in North Africa. This paper presents a first experimentation with the model elaborated during this research, a model framed by a paradigmatic discussion that attempts to position the research in relation to the existing paradigmatic and experimental knowledge on the issue.

Keywords: classical reasoning, logicist reasoning, archaeology, architecture, roman forum, morphology, calculation

Procedia PDF Downloads 149
120 3D-Printing of Waveguide Terminations: Effect of Material Shape and Structuring on Their Characteristics

Authors: Lana Damaj, Vincent Laur, Azar Maalouf, Alexis Chevalier

Abstract:

A matched termination is an important passive waveguide component. It is typically used at the end of a waveguide transmission line to prevent reflections and improve signal quality. Waveguide terminations (loads) are commonly used in microwave and RF applications. In traditional microwave architectures, a waveguide termination usually consists of a standard rectangular waveguide made of a lossy resistive material and ended by a shorting metallic plate. These terminations are used to dissipate the energy as heat. However, they may increase the size and weight of the overall system. A new alternative solution consists of developing terminations based on 3D-printing of materials. Designing such terminations is very challenging, since they should meet the requirements imposed by the system. These requirements include many parameters, such as the absorption and the power handling capability, in addition to the cost, size and weight that have to be minimized. 3D-printing is a shaping process that enables the production of complex geometries, allowing the best compromise between requirements to be found. In this paper, a comparison study has been made between different existing and new shapes of waveguide terminations. Indeed, 3D-printing of absorbers makes it possible to study not only standard shapes (wedge, pyramid, tongue) but also more complex topologies such as exponential ones. These shapes have been designed and simulated using CST MWS®. The loads have been printed using carbon-filled PolyLactic Acid (conductive PLA from ProtoPasta). Since the terminations have been characterized in the X-band (from 8 GHz to 12 GHz), the rectangular waveguide standard WR-90 has been selected. The classical wedge shape has been used as a reference. First, all loads have been simulated with the same length, and two parameters have been compared: the absorption level (level of |S11|) and the dissipated power density. 
This study shows that the concave exponential pyramidal shape has the best absorption level, while the convex exponential pyramidal shape has the best dissipated power density level. These two loads have been printed in order to measure their properties, and good agreement between the simulated and measured reflection coefficients has been obtained. Furthermore, material structuring based on the honeycomb hexagonal structure has been investigated in order to vary the effective properties. In the final paper, the detailed methodology and the simulated and measured results will be presented in order to show how 3D-printing allows mass, weight, absorption level and power behaviour to be controlled.

Keywords: additive manufacturing, electromagnetic composite materials, microwave measurements, passive components, power handling capacity (PHC), 3D-printing

Procedia PDF Downloads 22
119 Radar on Bike: Coarse Classification based on Multi-Level Clustering for Cyclist Safety Enhancement

Authors: Asma Omri, Noureddine Benothman, Sofiane Sayahi, Fethi Tlili, Hichem Besbes

Abstract:

Cycling, a popular mode of transportation, can also be perilous due to cyclists' vulnerability to collisions with vehicles and obstacles. This paper presents an innovative cyclist safety system based on radar technology, designed to offer real-time collision risk warnings to cyclists. The system incorporates a low-power radar sensor affixed to the bicycle and connected to a microcontroller. It leverages radar point-cloud detections, a clustering algorithm, and a supervised classifier. These algorithms are optimized for efficiency to run on TI's AWR1843BOOST radar, utilizing a coarse classification approach distinguishing between cars, trucks, two-wheeled vehicles, and other objects. To enhance the performance of the clustering stage, we propose a 2-level clustering approach that builds on the state-of-the-art Density-Based Spatial Clustering of Applications with Noise (DBSCAN). The objective is to first cluster objects based on their velocity, then refine the analysis by clustering based on position. The first level identifies groups of objects with similar velocities and movement patterns. The second level refines the analysis by considering the spatial distribution of these objects; the clusters obtained from the first level serve as its input. Our proposed technique surpasses the classical DBSCAN algorithm in terms of clustering metrics, including homogeneity, completeness, and V-measure. Relevant cluster features are extracted and utilized to classify objects using an SVM classifier. Potential obstacles are identified based on their velocity and proximity to the cyclist. To optimize the system, we used the View of Delft dataset for hyperparameter selection and SVM classifier training. The system's performance was assessed using our collected dataset of radar point clouds synchronized with a camera on an Nvidia Jetson Nano board. 
The radar-based cyclist safety system is a practical solution that can be easily installed on any bicycle and connected to smartphones or other devices, offering real-time feedback and navigation assistance to cyclists. We conducted experiments to validate the system's feasibility, achieving an impressive 85% accuracy in the classification task. This system has the potential to significantly reduce the number of accidents involving cyclists and enhance their safety on the road.
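The velocity-then-position idea can be sketched with scikit-learn's DBSCAN. The point-cloud layout, eps values, and min_samples below are assumptions for illustration; the paper's actual parameters and the embedded AWR1843 implementation are not reproduced here.

```python
# Illustrative 2-level clustering: group radar detections by radial velocity
# first, then refine each velocity group by (x, y) position.
import numpy as np
from sklearn.cluster import DBSCAN

def two_level_cluster(points, v_eps=0.5, xy_eps=1.5, min_samples=3):
    """points: array of shape (N, 3) with columns (x, y, radial_velocity)."""
    labels = np.full(len(points), -1)  # -1 marks noise, as in DBSCAN
    next_label = 0
    # Level 1: cluster on the 1-D velocity axis.
    v_labels = DBSCAN(eps=v_eps, min_samples=min_samples).fit_predict(
        points[:, 2:3])
    for v in set(v_labels) - {-1}:
        idx = np.where(v_labels == v)[0]
        # Level 2: refine each velocity group by spatial position.
        xy_labels = DBSCAN(eps=xy_eps, min_samples=min_samples).fit_predict(
            points[idx, :2])
        for s in set(xy_labels) - {-1}:
            labels[idx[xy_labels == s]] = next_label
            next_label += 1
    return labels

# Two objects at the same speed but 20 m apart: one velocity cluster at
# level 1, split into two object clusters at level 2.
cloud = np.vstack([
    np.random.default_rng(1).normal([0, 0, 5], 0.2, (10, 3)),
    np.random.default_rng(2).normal([20, 0, 5], 0.2, (10, 3)),
])
print(two_level_cluster(cloud))
```

Running a separate spatial DBSCAN inside each velocity group keeps two distant objects moving at the same speed apart, which a single spatial DBSCAN with a loose eps could merge.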

Keywords: 2-level clustering, coarse classification, cyclist safety, warning system based on radar technology

Procedia PDF Downloads 82
118 Relationship between Structure of Some Nitroaromatic Pollutants and Their Degradation Kinetic Parameters in UV-VIS/TIO2 System

Authors: I. Nitoi, P. Oancea, M. Raileanu, M. Crisan, L. Constantin, I. Cristea

Abstract:

Hazardous organic compounds like nitroaromatics are frequently found in effluents discharged by the chemical and petroleum industries. Due to their bio-refractory character and high chemical stability, they cannot be efficiently removed by classical biological or physical-chemical treatment processes. In the past decades, semiconductor photocatalysis has been frequently applied for the advanced degradation of toxic pollutants. Among the various semiconductors, titania has been a widely studied photocatalyst, due to its chemical inertness, low cost, photostability and nontoxicity. Many attempts have been made to improve the optical absorption and photocatalytic activity of TiO2; one feasible approach consists of doping the oxide semiconductor with metals. The degradation of dinitrobenzene (DNB) and dinitrotoluene (DNT) from aqueous solution under UVA-VIS irradiation using heavy-metal-doped titania (0.5% Fe, 1% Co, 1% Ni) was investigated. The photodegradation experiments were carried out using a Heraeus laboratory-scale UV-VIS reactor equipped with a medium-pressure mercury lamp which emits in the range 320-500 nm. Solutions with (0.34-3.14) x 10-4 M pollutant content were photo-oxidized under the following working conditions: pH = 5-9; photocatalyst dose = 200 mg/L; irradiation time = 30-240 minutes. Prior to irradiation, the photocatalyst powder was added to the samples, and the solutions were bubbled with air (50 L/hour), in the dark, for 30 min. The influence of dopant type, pH, structure and initial pollutant concentration on the degradation efficiency was evaluated in order to set the optimal working conditions that assure advanced degradation of the substrate. The kinetics of nitroaromatics degradation and organic nitrogen mineralization were assessed, and pseudo-first-order rate constants were calculated. The Fe-doped photocatalyst with the lowest metal content (0.5 wt.%) showed considerably better behaviour with respect to pollutant degradation than the Co- and Ni-doped (1 wt.%) titania catalysts. 
For the same working conditions, degradation efficiency was higher for DNT than for DNB, in accordance with their calculated adsorption constants (Kad), taking into account that the degradation process occurs on the catalyst surface following a Langmuir-Hinshelwood model. The presence of the methyl group in the structure of DNT allows its degradation by both oxidative and reductive pathways, while DNB is converted only by the reductive route, which also explains the higher DNT degradation efficiency. For the highest pollutant concentration tested (3 x 10-4 M), the optimum working conditions (0.5 wt.% Fe-doped TiO2 loading of 200 mg/L, pH = 7 and 240 min irradiation time) assure advanced nitroaromatics degradation (ηDNB = 89%, ηDNT = 94%) and organic nitrogen mineralization (ηDNB = 44%, ηDNT = 47%).
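As a reminder of how the pseudo-first-order rate constants mentioned above are typically extracted, the sketch below fits ln(C0/Ct) = k·t by least squares. The concentration-time pairs are invented for illustration and are not the paper's data.

```python
# Pseudo-first-order kinetics: ln(C0/Ct) = k * t, so the rate constant k is
# the slope of ln(C0/Ct) versus irradiation time.
import numpy as np

t = np.array([0, 30, 60, 120, 180, 240])         # irradiation time, minutes
C = np.array([3.0, 2.4, 1.95, 1.3, 0.85, 0.55])  # illustrative conc., 1e-4 M

y = np.log(C[0] / C)
k, intercept = np.polyfit(t, y, 1)   # slope = rate constant (1/min)
half_life = np.log(2) / k
print(f"k = {k:.4f} min^-1, t1/2 = {half_life:.0f} min")
```

A good linear fit of ln(C0/Ct) against t (intercept near zero) is what justifies the pseudo-first-order description in the first place.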

Keywords: hazardous organic compounds, irradiation, nitroaromatics, photocatalysis

Procedia PDF Downloads 317
117 Tests for Zero Inflation in Count Data with Measurement Error in Covariates

Authors: Man-Yu Wong, Siyu Zhou, Zhiqiang Cao

Abstract:

In quality-of-life research, health service utilization is an important determinant of medical resource expenditures on colorectal cancer (CRC) care. A better understanding of the increased utilization of health services is essential for optimizing the allocation of healthcare resources and thus for enhancing service quality, especially in regions with high expenditure on CRC care, such as Hong Kong. In assessing the association between health-related quality of life (HRQOL) and health service utilization in patients with colorectal neoplasm, count data models can be used, which account for overdispersion or extra zero counts. In our data, the HRQOL evaluation is a self-reported measure obtained from a questionnaire completed by the patients, so misreports and variations in the data are inevitable. Besides, there are more zero counts in the observed number of clinical consultations (observed frequency of zero counts = 206) than expected from a Poisson distribution with mean equal to 1.33 (expected frequency of zero counts = 156). This suggests that excess zero counts may exist. Therefore, we study tests for detecting zero inflation in models with measurement error in covariates. Method: Under the classical measurement error model, the approximate likelihood function for the zero-inflated Poisson (ZIP) regression model can be obtained, and the Approximate Maximum Likelihood Estimate (AMLE) can be derived accordingly, which is consistent and asymptotically normally distributed. By calculating the score function and Fisher information based on the AMLE, a score test is proposed to detect the zero-inflation effect in the ZIP model with measurement error. The proposed test is asymptotically standard normal under H0, and it is consistent with the test proposed for the zero-inflation effect when there is no measurement error. 
Results: Simulation results show that the empirical power of our proposed test is the highest among existing tests for zero inflation in the ZIP model with measurement error. In the real data analysis, with or without considering measurement error in covariates, existing tests and our proposed test all imply that H0 should be rejected with a P-value less than 0.001; i.e., the zero-inflation effect is highly significant, and the ZIP model is superior to the Poisson model for these data. However, if measurement error in covariates is not considered, only one covariate is significant; if it is considered, only another covariate is significant. Moreover, the signs of the coefficient estimates for these two covariates differ between the ZIP regression models with and without measurement error. Conclusion: In our study, the ZIP model should be chosen over the Poisson model when assessing the association between condition-specific HRQOL and health service utilization in patients with colorectal neoplasm, and models taking measurement error into account yield statistically more reliable and precise information.
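The motivating comparison of observed versus Poisson-expected zero counts can be reproduced with a short calculation. A minimal sketch follows; the sample size of 590 is a hypothetical value back-calculated from the abstract's figures (mean 1.33, roughly 156 expected zeros), not a number reported by the authors:

```python
import math

def expected_poisson_zeros(n, mean):
    """Expected number of zero counts among n draws from Poisson(mean):
    n * P(X = 0) = n * exp(-mean)."""
    return n * math.exp(-mean)

# Hypothetical sample size, back-calculated for illustration only:
# 156 / exp(-1.33) is approximately 590.
n = 590
observed_zeros = 206

expected = expected_poisson_zeros(n, 1.33)
print(round(expected))   # close to the abstract's expected frequency of 156
print(observed_zeros)    # 206 observed zeros, well in excess of expectation
```

When the observed zero frequency substantially exceeds the Poisson expectation, as here, a formal test (such as the score test the abstract proposes) is warranted before choosing the ZIP model.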

Keywords: count data, measurement error, score test, zero inflation

Procedia PDF Downloads 289
116 Research on Reminiscence Therapy Game Design

Authors: Web Huei Chou, Li Yi Chun, Wenwe Yu, Han Teng Weng, H. Yuan, T. Yang

Abstract:

The prevalence of dementia is estimated to rise to 78 million by 2030 and 139 million by 2050. Alzheimer's disease is the most common form of dementia, contributing 60–70% of cases. Addressing this growing challenge is crucial, especially considering the impact on older individuals and their caregivers. To reduce the behavioral and psychological symptoms of dementia, people with dementia use a variety of pharmacological and non-pharmacological treatments, and some studies have found that non-pharmacological interventions have potential benefits for depression, cognitive function, and social activity. Butler developed reminiscence therapy as a method of treating dementia. Through ‘life review,’ individuals can recall past events, activities, and experiences, which can reduce depression in the elderly, improve their quality of life, help give meaning to their lives, and help them live independently. The life review process uses a variety of memory triggers, such as household items, past objects, photos, and music, and can be conducted collectively or individually, in structured or unstructured form. However, despite the advantages of reminiscence therapy, past research has repeatedly pointed out that current studies lack rigorous experimental evaluation and cannot report clear, generalizable results. Therefore, this study uses physiological sensing experiments to find a feasible experimental and verification method, in order to provide clearer design specifications for reminiscence therapy and a more widespread application for healthy aging. This study is an ongoing research project, a collaboration between the School of Design at Yunlin University of Science and Technology in Taiwan and the Department of Medical Engineering at Chiba University in Japan.
We use traditional rice dishes from Taiwan and Japan as nostalgic content to construct a narrative structure for life review activities with elderly participants in the two countries, providing an easy-to-carry reminiscence therapy game with an intuitive interactive design. The experiment is expected to be completed in 36 months. The design team constructed the game after conducting literature and historical surveys and interviews with elders to confirm the nostalgic historical material in Taiwan and Japan. The Japanese team planned the Electrodermal Activity (EDA) and Blood Volume Pulse (BVP) experimental environments and the data calculation model; after experiments are conducted with elderly people in both locations, the results will be analyzed and discussed jointly. The project has completed the first 24 months of preliminary study and design work and has entered the project acceptance stage.

Keywords: reminiscence therapy, aging health, design research, life review

Procedia PDF Downloads 34
115 Four Museums for One (Hi) Story

Authors: Sheyla Moroni

Abstract:

A number of scholars around the world have analyzed the great architectural and urban planning revolution proposed by Skopje 2014, but so far there are no readings of the parallels between the museums in the Balkan area (including Greece) that share the same name as the museum at the center of that political and cultural revolution. In the former FYROM (now renamed North Macedonia), a museum called "Macedonian Struggle" was established during the reconstruction of the city of Skopje as the new "national" capital. This museum was built under the "Skopje 2014" plan, which cost about 560 million euros (roughly 1/3 of the country's GDP). It has been a "flagship" of the government of Nikola Gruevski, leader of the nationalist VMRO-DPMNE party. Until 2016 this museum was close to the motivations of the Macedonian nationalist movement (and later party) active, including in terrorist actions, during the 19th and 20th centuries. The museum served to narrate a new "nation-building" after "state-building" had already taken place. But there are three other museums that tell the story of the "Macedonian struggle" while understanding "Macedonia" as a territory other than present-day North Macedonia. The first of these is located in Thessaloniki and primarily commemorates the "Greek battle" against the Ottoman Empire. While the Skopje museum uses a new dark building and many reconstructed rooms and shows the bloody history of the quest for "freedom" for the Macedonian language and people (distinct from Greeks, Albanians, and Bulgarians), the Thessaloniki museum is located in an old building and, in its six ground-floor rooms, graphically illustrates the modern and contemporary history of Greek Macedonia. There are also third and fourth museums: in Kastoria (toward the Albanian border) and in Chromio (near the Greek-North Macedonian border).
These two museums (Kastoria and Chromio) are smaller, but they mark two important borders for the (Greek) regions bordering Albania, dividing them not only from the Ottoman past but also from two communities felt to be "foreign" (Albanians and former Yugoslav Macedonians). All the museums reconstruct a different "national edifice" and emphasize the themes of language and religion. The objective of the research is to understand, through four museums bearing the same name, the main "mental boundaries" (religious, linguistic, cultural) of the different states (constructed between the late 19th century and 1991). Both classical historiographic methodology (very different between Balkan and "Western" areas) and on-site observation and interaction with the different sites are used in this research. An attempt is made to highlight four different political focuses with respect to nation-building and the Public History (and/or propaganda) approaches applied in the construction of these buildings and memorials, where one often "defines" oneself by differences from "others" (even if close).

Keywords: nationalisms, museum, nation building, public history

Procedia PDF Downloads 86
114 Problem Based Learning and Teaching by Example in Dimensioning of Mechanisms: Feedback

Authors: Nicolas Peyret, Sylvain Courtois, Gaël Chevallier

Abstract:

This article outlines the development of Project Based Learning (PBL) in the final year of a Bachelor's degree. This form of pedagogy aims to involve students more fully from the beginning of the module: the theoretical contributions are introduced during the project, while solving a technological problem. The module in question is the mechanical dimensioning module at Supméca, a French engineering school that issues a Master's degree. While the teaching methods used in primary and secondary education are frequently renewed in France at the instigation of teachers and inspectors, higher education remains relatively traditional in its practices. Recently, some colleagues have felt the need to put application back at the heart of their theoretical teaching. This need is induced by the difficulty of covering all the knowledge deductively before its application. It is therefore tempting to make the students 'learn by doing', even if this does not cover some parts of the theoretical knowledge. The other argument that supports this type of learning is the students' lack of motivation for lecture courses. Role-play allows scenarios that favor interaction between students and teachers. However, this pedagogical form, known as 'pedagogy by project', is difficult to apply in the first years of university studies because of the low level of autonomy and individual responsibility that students have. What the student actually learns relative to the initial program, as well as how to evaluate the competences acquired in this type of pedagogy, also remain open problems. We therefore propose to add to the project-based format a gradually withdrawn element of teacher intervention based on teaching by example. This pedagogical scenario is based on cognitive load theory and Bruner's constructivist theory.
It has been built by relying on the six points of the encouragement process defined by Bruner, with a concrete objective: to allow the students to go beyond the basic skills of dimensioning and acquire the more global skills of engineering. The implementation of project-based teaching coupled with teaching by example makes it possible to compensate for the lack of experience and autonomy of first-year students, while strongly involving them from the first few minutes of the module. In this project, students are confronted with real dimensioning problems and are able to understand the links and influences between parameter variations and dimensioning, an objective that we did not reach with classical teaching. This form of pedagogy accelerates the mastery of basic skills and so leaves more time for the engineering skills, namely the convergence of each dimensioning step in order to obtain a validated mechanism. A self-evaluation of the project skills acquired by the students will also be presented.

Keywords: Bruner's constructivist theory, mechanisms dimensioning, pedagogy by example, problem based learning

Procedia PDF Downloads 190
113 Performing Arts and Performance Art: Interspaces and Flexible Transitions

Authors: Helmi Vent

Abstract:

This four-year artistic research project has set itself the goal of exploring the flexible transitions within the realms between the two genres. This paper singles out one research question from the entire project for its focus, namely how and under what circumstances such transitions between a reinterpretation and a new creation can take place during the performative process. The film documentation accompanying the project was produced at the Mozarteum University in Salzburg, Austria, as well as on diverse everyday stages at various locations. The model institution hosting the project is the LIA – Lab Inter Arts, under the direction of Helmi Vent. LIA combines artistic research with performative applications. The project participants are students from various artistic fields of study. The film documentation forms a central platform for the entire project: it functions as an audiovisual record of performative origins and development processes, serves as the basis for analysis and evaluation, including self-evaluation of the recorded material, and provides illustrative and discussion material in relation to the topic of this paper. Regarding the ‘interspaces’ and flexible ‘transitions’: the performing arts in Western cultures generally orient themselves toward existing original compositions – most often in the interconnected fields of music, dance and theater – with the goal of reinterpreting and rehearsing a pre-existing score, choreographed work, libretto or script and presenting that piece to an audience. The essential tool in this reinterpretation process is generally the artistic ‘language’ performers learn over the course of their studies. Thus, speaking is combined with singing, playing an instrument is combined with dancing, or with pictorial or sculpturally formed works, in addition to many other variations.
If the performing arts were to rid themselves of their designations from time to time and initially follow the emerging, diffusely gliding transitions into the unknown, the artistic language the performer has learned becomes a creative resource. The illustrative film excerpts depicting the realms between performing arts and performance art offer insights into the ways the project participants embrace unknown, explorative processes, allowing new performative designs or concepts to emerge between the participants’ acquired cultural and artistic skills and their own creations – according to their own ideas and questions, sometimes with their direct involvement, fragmentary, provisional, left as a rough draft, or fully composed. All in all, it is an evolutionary process whose key parameters cannot be distilled down to an essence. Rather, they stem from a subtle inner perception, from deep-seated emotions, imaginations, and non-discursive decisions, which ultimately result in an artistic statement rising to the visible and audible surface. Within these realms between performing arts and performance art and their extremely flexible transitions, exceptional opportunities can be found to grasp and realise art itself as a research process.

Keywords: art as research method, Lab Inter Arts (LIA), performing arts, performance art

Procedia PDF Downloads 272
112 Investigating the Influences of Long-Term, as Compared to Short-Term, Phonological Memory on the Word Recognition Abilities of Arabic Readers vs. Arabic Native Speakers: A Word-Recognition Study

Authors: Insiya Bhalloo

Abstract:

It is quite common in the Muslim faith for non-Arabic speakers to be able to convert written Arabic, especially Quranic Arabic, into a phonological code without significant semantic or syntactic knowledge. This is due to prior experience learning to read the Quran (a religious text written in Classical Arabic) from a very young age, such as via enrolment in Quranic Arabic classes. Compared to native speakers of Arabic, these Arabic readers do not have comprehensive morpho-syntactic knowledge of the Arabic language, nor can they understand or engage in Arabic conversation. The study investigates whether mere phonological experience (as indicated by the Arabic readers’ experience with Arabic phonology and the sound system) is sufficient to cause phonological interference during word recognition of previously heard words, despite the participants’ non-native status. Both native speakers of Arabic and non-native readers of Arabic, i.e., individuals who learned to read the Quran from a young age, will be recruited. Each experimental session will include two phases: an exposure phase and a test phase. During the exposure phase, participants will be presented with Arabic words (n = 40) on a computer screen. Half of these words will be common words found in the Quran, while the other half will be words common in Modern Standard Arabic (MSA) but either non-existent in, or far less frequent within, the Quran. During the test phase, participants will be presented with both familiar (n = 20; i.e., words presented during the exposure phase) and novel Arabic words (n = 20; i.e., words not presented during the exposure phase). Half of the presented words will be common Quranic Arabic words and the other half common MSA words that are not Quranic words.
Moreover, half of the Quranic Arabic and MSA words will be nouns and half verbs, thereby controlling for word-processing effects of lexical category. Participants will then judge whether they saw each word during the exposure phase. This study investigates whether long-term phonological memory, such as childhood exposure to Quranic Arabic orthography, has a differential effect on the word-recognition capacities of native Arabic speakers and Arabic readers; we compare the effects of long-term phonological memory with those of short-term phonological exposure (as indicated by the presentation of familiar words from the exposure phase). The researcher’s hypothesis is that, despite the lack of lexical knowledge, early experience with converting written Quranic Arabic text into a phonological code will help participants recall the familiar Quranic words that appeared during the exposure phase more accurately than those that did not appear. Moreover, it is anticipated that the non-native Arabic readers will also produce more false alarms to the unfamiliar Quranic words, due to early childhood phonological exposure to Quranic Arabic script, thereby showing false phonological facilitation effects.
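Recognition performance in an old/new design like this is commonly scored from hits and false alarms via the sensitivity index d'. The sketch below is one standard way to do that scoring, not the authors' stated analysis, and the counts are hypothetical:

```python
from statistics import NormalDist

def dprime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' from old/new recognition counts.
    A log-linear correction (add 0.5 to each cell's numerator, 1 to the
    denominator) avoids infinite z-scores at rates of exactly 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts for the 20 familiar and 20 novel test words
print(dprime(hits=16, misses=4, false_alarms=5, correct_rejections=15))
# positive: above-chance recognition of the familiar words
```

With per-condition counts (Quranic vs. MSA, noun vs. verb), the same function would let the hypothesized elevated false-alarm rate for unfamiliar Quranic words show up as a lower d' in that condition.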

Keywords: modern standard arabic, phonological facilitation, phonological memory, Quranic arabic, word recognition

Procedia PDF Downloads 358
111 The One, the Many, and the Doctrine of Divine Simplicity: Variations on Simplicity in Essentialist and Existentialist Metaphysics

Authors: Mark Wiebe

Abstract:

One of the tasks contemporary analytic philosophers have focused on (e.g., Wolterstorff, Alston, Plantinga, Hasker, and Crisp) is the analysis of certain medieval metaphysical frameworks. This growing body of scholarship has helped clarify and prevent distorted readings of medieval and ancient writers. However, as scholars like Dolezal, Duby, and Brower have pointed out, these analyses have been incomplete or inaccurate in some instances, e.g., with regard to analogical speech or the doctrine of divine simplicity (DDS). Additionally, contributors to this work frequently express opposing claims or fail to note substantial differences between ancient and medieval thinkers. This is the case regarding the comparison between Thomas Aquinas and others. Anton Pegis and Étienne Gilson have argued along this line that Thomas’ metaphysical framework represents a fundamental shift. Gilson describes Thomas’ metaphysics as a turn from a form of “essentialism” to “existentialism.” It can be argued that this shift distinguishes Thomas from many analytic philosophers as well as from other classical defenders of the DDS. Moreover, many of the objections analytic philosophers raise against Thomas presume the same metaphysical principles undergirding the above-mentioned form of essentialism, which weakens their force against Thomas’ positions. In order to demonstrate these claims, it is helpful to consider Thomas’ metaphysical outlook alongside that of two other prominent figures: Augustine and Ockham. One area of their thinking that brings their differences to the surface is how each relates to Platonic and Neo-Platonic thought. More specifically, it is illuminating to consider whether and how each distinguishes or conceives essence and existence. It is also useful to see how each approaches the Platonic conflicts between essence and individuality, unity and intelligibility. In both of these areas, Thomas stands out from Augustine and Ockham.
Although Augustine and Ockham diverge in many ways, both ultimately identify being with particularity and pit particularity against both unity and intelligibility. By contrast, Thomas argues that being is distinct from and prior to essence. Being (i.e., Being in itself) rather than essence or form must therefore serve as the ground and ultimate principle for the existence of everything in which being and essence are distinct. Additionally, since change, movement, and addition improve and give definition to finite being, multitude and distinction are principles of being rather than of non-being. Consequently, each creature imitates and participates in God’s perfect Being in its own way; the perfection of each genus exists pre-eminently in God without being at odds with God’s simplicity; God has knowledge, power, and will; and these and the many other terms assigned to God refer truly to the being of God without being either meaningless or synonymous. The existentialist outlook at work in these claims distinguishes Thomas in a noteworthy way from his contemporaries and predecessors as much as it does from many of the analytic philosophers who have objected to his thought. This suggests that at least these kinds of objections do not apply to Thomas’ thought.

Keywords: theology, philosophy of religion, metaphysics, philosophy

Procedia PDF Downloads 75
110 New Hardy Type Inequalities of Two-Dimensional on Time Scales via Steklov Operator

Authors: Wedad Albalawi

Abstract:

Mathematical inequalities have been at the core of mathematical study and are used in almost all branches of mathematics as well as in various areas of science and engineering. The 1934 monograph Inequalities by Hardy, Littlewood, and Pólya was the first significant systematic treatment of the subject; it presents fundamental ideas, results, and techniques, and it has had much influence on research in various branches of analysis. Since 1934, various inequalities have been produced and studied in the literature. Furthermore, some inequalities have been formulated via operators; in 1989, weighted Hardy inequalities were obtained for integration operators. Weighted estimates were then obtained for Steklov operators, which were used in the solution of the Cauchy problem for the wave equation. These were improved upon in 2011 to include the boundedness of integral operators from the weighted Sobolev space to the weighted Lebesgue space. Some inequalities have been demonstrated and improved using the Hardy–Steklov operator. Recently, many integral inequalities have been improved by differential operators. The Hardy inequality has been one of the tools used to study solutions of differential equations. Dynamic inequalities of Hardy and Copson type have since been extended and improved by various integral operators. These inequalities are interesting to apply in different fields of mathematics (function spaces, partial differential equations, mathematical modeling). Versions of the Copson and Hardy inequalities have appeared on time scales, yielding new special cases of them. A time scale is an arbitrary nonempty closed subset of the real numbers. Dynamic inequalities on time scales have received a lot of attention in the literature and have become a major field in pure and applied mathematics.
There are many applications of dynamic equations on time scales in quantum mechanics, electrical engineering, neural networks, heat transfer, combinatorics, and population dynamics. This study focuses on the Hardy and Copson inequalities, using the Steklov operator on time scales in double integrals to obtain special cases of time-scale inequalities of Hardy and Copson in higher dimensions. The advantage of this study is that it uses the one-dimensional classical Hardy inequality to obtain higher-dimensional time-scale versions that can be applied in the solution of the Cauchy problem for the wave equation. In addition, the obtained inequalities have various applications involving discontinuous domains, such as bug populations, phytoremediation of metals, wound healing, and maximization problems. The proofs proceed by introducing restrictions on the operator in several cases. Concepts from time-scale calculus are used, which allow many problems from the theories of differential and difference equations to be unified and extended. In addition, the chain rule, some properties of multiple integrals on time scales, Fubini's theorem, and Hölder's inequality are used.
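For reference, the one-dimensional classical Hardy inequality that the study builds on reads, for p > 1 and nonnegative measurable f on (0, ∞):

```latex
\int_0^{\infty} \left( \frac{1}{x} \int_0^{x} f(t)\,dt \right)^{p} dx
\;\le\; \left( \frac{p}{p-1} \right)^{p} \int_0^{\infty} f(x)^{p}\, dx .
```

The constant (p/(p-1))^p is sharp. Time-scale versions replace these integrals with delta integrals over an arbitrary nonempty closed subset of the reals, so the continuous inequality above and its discrete (series) analogue are recovered as the two extreme special cases.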

Keywords: time scales, inequality of hardy, inequality of copson, steklov operator

Procedia PDF Downloads 97
109 Economic Decision Making under Cognitive Load: The Role of Numeracy and Financial Literacy

Authors: Vânia Costa, Nuno De Sá Teixeira, Ana C. Santos, Eduardo Santos

Abstract:

Financial literacy and numeracy have been regarded as paramount for rational household decision making given the increasing complexity of financial markets. However, financial decisions are often made under sub-optimal circumstances, including cognitive overload. The present study aims to clarify how financial literacy and numeracy, taken as relevant expert knowledge for financial decision-making, modulate possible effects of cognitive load. Participants were required to choose between a sure loss and a gamble pertaining to a financial investment, either with or without a competing memory task. Two experiments were conducted, varying only the content of the competing task. In the first, the financial choice task was performed while maintaining in working memory a list of five random letters. In the second, cognitive load was based upon the retention of six random digits. In both experiments, one of the items in the list had to be recalled given its serial position. Outcomes of the first experiment revealed no significant main effect or interactions involving the cognitive load manipulation and numeracy and financial literacy skills, strongly suggesting that retaining a list of random letters did not interfere with the cognitive abilities required for financial decision making. Conversely, in the second experiment, a significant interaction between the competing memory task and level of financial literacy (but not numeracy) was found for the frequency of choosing the gamble. Overall, in the control condition, both participants with high financial literacy and those with high numeracy were more prone to choose the gamble. However, when under cognitive load, participants with high financial literacy were only as likely as their low-literacy counterparts to choose the gamble.
This outcome is interpreted as evidence that financial literacy overrides intuitive risk-averse reasoning only under highly favourable conditions, as when no other task is competing for cognitive resources. In contrast, participants with higher levels of numeracy were consistently more prone to choose the gamble in both experimental conditions. These results are discussed in light of the opposition between classical dual-process theories and fuzzy-trace theories of intuitive decision making, suggesting that while some kinds of expertise (such as numeracy) readily support easily accessible gist representations, other expert skills (such as financial literacy) depend upon deliberative processes. It is furthermore suggested that this dissociation between types of expert knowledge might depend on the degree to which each is generalizable across disparate settings. Finally, applied implications of the present study are discussed, with a focus on what the findings tell financial regulators about the importance and the limits of promoting financial literacy and general numeracy.

Keywords: decision making, cognitive load, financial literacy, numeracy

Procedia PDF Downloads 184