Search results for: water resources yield and planning model
649 Rain Gauges Network Optimization in Southern Peninsular Malaysia
Authors: Mohd Khairul Bazli Mohd Aziz, Fadhilah Yusof, Zulkifli Yusop, Zalina Mohd Daud, Mohammad Afif Kasno
Abstract:
Recently developed rainfall network design techniques have been discussed and compared by many researchers worldwide due to the demand for higher levels of accuracy in collected data. In many studies, rain-gauge networks are designed to provide good estimation of areal rainfall and to support flood modelling and prediction. In one study, even when lumped models were used for flood forecasting, a proper gauge network significantly improved the results. Therefore, the existing rainfall network in Johor must be optimized and redesigned in order to meet the required level of accuracy preset by rainfall data users. The well-known geostatistical method (variance-reduction method), combined with simulated annealing, was used as the optimization algorithm in this study to obtain the optimal number and locations of the rain gauges. Rain gauge network structure depends not only on station density; station location also plays an important role in determining whether information is acquired accurately. The existing network of 84 rain gauges in Johor was optimized and redesigned using rainfall, humidity, solar radiation, temperature and wind speed data during the monsoon season (November–February) for the period 1975–2008. Three different semivariogram models, Spherical, Gaussian and Exponential, were used, and their performances were also compared in this study. A cross-validation technique was applied to compute the errors, and the results showed that the exponential model is the best semivariogram. The proposed method yielded a network of 64 rain gauges with the minimum estimated variance, with 20 of the existing gauges removed and relocated. An existing network may contain redundant stations that make little or no contribution to network performance in providing quality data. Therefore, two different cases were considered in this study. The first case optimally relocated the removed stations into new locations to investigate their influence on the calculated estimated variance, and the second case explored the possibility of relocating all 84 existing stations into new locations to determine the optimal positions. The relocations in both cases show that the new optimal locations reduce the estimated variance, confirming that location plays an important role in determining the optimal network.
Keywords: geostatistics, simulated annealing, semivariogram, optimization
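For readers who want to experiment with the approach, the following is a minimal sketch pairing an exponential semivariogram with a simulated-annealing search over gauge subsets. It is an illustration under stated assumptions, not the study's implementation: the coordinates are synthetic, the semivariogram parameters are placeholders rather than values fitted to the Johor data, and a nearest-gauge semivariance stands in as a proxy for the kriging estimation variance that the study actually minimizes.

```python
import numpy as np

rng = np.random.default_rng(0)

def exponential_semivariogram(h, nugget=0.1, sill=1.0, range_=30.0):
    # Exponential model gamma(h); parameters are placeholders -- in the
    # study they would be fitted to monsoon-season rainfall data and
    # checked by cross-validation against Spherical and Gaussian fits.
    return nugget + (sill - nugget) * (1.0 - np.exp(-h / range_))

def objective(selected, grid):
    # Proxy for the kriging estimation variance minimized in the paper:
    # mean semivariance between each grid point and its nearest gauge.
    d = np.linalg.norm(grid[:, None, :] - selected[None, :, :], axis=2)
    return exponential_semivariogram(d.min(axis=1)).mean()

def anneal(stations, grid, k=64, iters=5000, t0=1.0, cooling=0.999):
    idx = rng.choice(len(stations), size=k, replace=False)
    cur = objective(stations[idx], grid)
    best_idx, best = idx.copy(), cur
    t = t0
    for _ in range(iters):
        cand = idx.copy()
        pool = np.setdiff1d(np.arange(len(stations)), cand)
        cand[rng.integers(k)] = rng.choice(pool)   # relocate one gauge
        val = objective(stations[cand], grid)
        if val < cur or rng.random() < np.exp((cur - val) / t):
            idx, cur = cand, val
            if cur < best:
                best_idx, best = idx.copy(), cur
        t *= cooling
    return best_idx, best

# Toy usage: 84 candidate gauges reduced to 64 on a synthetic domain.
stations = rng.uniform(0.0, 100.0, size=(84, 2))
grid = rng.uniform(0.0, 100.0, size=(400, 2))
chosen, variance = anneal(stations, grid)
```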
Procedia PDF Downloads 301
648 Clubhouse: A Minor Rebellion against the Algorithmic Tyranny of the Majority
Authors: Vahid Asadzadeh, Amin Ataee
Abstract:
Since the advent of social media, there has been a wave of optimism among researchers and civic activists about the influence of virtual networks on the democratization process, which has gradually waned. One of the lesser-known concerns is how to increase the possibility of hearing the voices of different minorities. According to the theory of media logic, the media, using their technological capabilities, act as a structure through which events and ideas are interpreted. Social media, through machine learning and the use of algorithms, has formed a kind of structure in which the voices of minorities and less popular topics are lost amid the commotion of the trends. In fact, the recommender systems and algorithms used in social media are designed to help promote trends and make popular content more popular, while content that belongs to minorities is constantly marginalized. As social networks gradually play a more active role in politics, the possibility of freely participating in the reproduction and reinterpretation of structures in general and political structures in particular (as Laclau and Mouffe had in mind) can be considered a criterion of democracy in action. The point is that the media logic of virtual networks is shaped by the rule, and even the tyranny, of the majority, and this logic does not make it possible to design a self-founding and self-revolutionary model of democracy. In other words, today's social networks, though seemingly full of variety, are governed by the logic of homogeneity and do not allow for multiplicity as it appears in immanent radical democracies (influenced by Gilles Deleuze). However, with the emergence and increasing popularity of Clubhouse as a new social medium, there seems to be a shift in the social media space: the diminishing role of algorithms and recommender systems as content delivery interfaces. This has meant that, in Clubhouse, the voices of minorities are better heard, and the diversity of political tendencies manifests itself better. The purpose of this article is to show, first, how social networks serve the elimination of minorities in general, and second, to argue that the media logic of social networks must adapt to new interpretations of democracy that give more space to minorities and human rights. Finally, this article will show how Clubhouse serves these new interpretations of democracy, at least in a minimal way. To achieve the mentioned goals, this article uses a descriptive-analytical method: first, the relation between media logic and postmodern democracy is examined; then, the political economy of popularity in social media and its conflict with democracy is discussed; finally, it explores how Clubhouse provides a new horizon for the concepts embodied in radical democracy, a horizon that more effectively serves the rights of minorities and human rights in general.
Keywords: algorithmic tyranny, Clubhouse, minority rights, radical democracy, social media
Procedia PDF Downloads 145
647 Principles for the Realistic Determination of the in-situ Concrete Compressive Strength under Consideration of Rearrangement Effects
Authors: Rabea Sefrin, Christian Glock, Juergen Schnell
Abstract:
The preservation of existing structures is of great economic interest because it contributes to higher sustainability and resource conservation. In the case of existing buildings, in addition to repair and maintenance, modernization or reconstruction works often take place in the course of adjustments or changes in use. Since the structural framework and the associated load level are usually changed in the course of the structural measures, the stability of the structure must be verified in accordance with the currently valid regulations. The compressive strength of the existing structure's concrete and the mechanical parameters derived from it are of central importance for recalculation and verification. However, the compressive strength of the existing concrete is usually set comparatively low and thus underestimated. The reasons for this are the small number and the large scatter of material properties of the drill cores used for the experimental determination of the design value of the compressive strength. Within a structural component, load is usually transferred over the areas with higher stiffness and consequently higher compressive strength. Therefore, existing strength variations within a component play only a subordinate role due to rearrangement effects. This paper deals with the experimental and numerical determination of such rearrangement effects in order to calculate the concrete compressive strength of existing structures more realistically and economically. The influence of individual parameters such as the specimen geometry (prism or cylinder) or the coefficient of variation of the concrete compressive strength is analyzed in experimental small-scale tests. The coefficients of variation commonly used in practice are adjusted by dividing the test specimens into several layers consisting of different concretes, which are monolithically connected to each other. From each combination, a sufficient number of test specimens is produced and tested to enable evaluation on a statistical basis. Based on the experimental tests, FE simulations are carried out to validate the test results. In the frame of a subsequent parameter study, a large number of combinations that had not yet been investigated experimentally is considered. Thus, the influence of individual parameters on the size and character of the rearrangement effect is determined and described in more detail. Based on the parameter study and the experimental results, a calculation model for a more realistic determination of the in-situ concrete compressive strength is developed and presented. By considering rearrangement effects in concrete during recalculation, a higher number of existing structures can be maintained without structural measures. The preservation of existing structures is not only decisive from an economic, sustainable, and resource-saving point of view but also represents an added value for cultural and social aspects.
Keywords: existing structures, in-situ concrete compressive strength, rearrangement effects, recalculation
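To make concrete why a small core count with high scatter drives the assessed strength down, here is a minimal sketch of a tolerance-style characteristic value (mean minus k times the standard deviation, with k growing as the sample shrinks). It illustrates the statistical effect the abstract describes, not the authors' calculation model, and the core strengths below are invented.

```python
import numpy as np
from scipy import stats

def characteristic_strength(cores, alpha=0.05):
    # Lower 5%-fractile style estimate: mean - k * s. The factor k comes
    # from the Student t distribution and grows for small sample sizes,
    # so few, scattered cores yield a conservative (low) strength value.
    cores = np.asarray(cores, float)
    n = cores.size
    m, s = cores.mean(), cores.std(ddof=1)
    k = stats.t.ppf(1 - alpha, df=n - 1) * np.sqrt(1.0 + 1.0 / n)
    return m - k * s

print(characteristic_strength([32.0, 41.5, 28.8]))   # few cores, wide scatter
print(characteristic_strength([34.0, 35.5, 33.2, 34.8, 35.1, 34.4]))
```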
Procedia PDF Downloads 116
646 Assessment of Pedestrian Comfort in a Portuguese City Using Computational Fluid Dynamics Modelling and Wind Tunnel
Authors: Bruno Vicente, Sandra Rafael, Vera Rodrigues, Sandra Sorte, Sara Silva, Ana Isabel Miranda, Carlos Borrego
Abstract:
Wind comfort for pedestrians is an important condition in urban areas. In Portugal, a country with 900 km of coastline, the wind direction is predominantly North-Northwest, with an average speed of 2.3 m·s⁻¹ (at 2 m height). As a result, several city authorities have been requesting studies of pedestrian wind comfort for new urban areas/buildings, as well as studies to mitigate wind discomfort issues related to existing structures. This work covers the efficiency evaluation of a set of measures to reduce the wind speed in an outdoor auditorium (open space) located in a coastal Portuguese urban area. These measures include the construction of barriers, placed upstream and downstream of the auditorium, and the planting of trees upstream of the auditorium. The auditorium is constructed in the form of a porch, aligned with the North direction, driving the wind flow within the auditorium, promoting channelling effects and increasing wind speed, causing discomfort to the users of this structure. To perform the wind comfort assessment, two approaches were used: i) a set of experiments in a wind tunnel (physical approach), with a representative mock-up of the study area; ii) application of the CFD (Computational Fluid Dynamics) model VADIS (numerical approach). Both approaches were used to simulate the baseline scenario and the scenarios considering the set of measures. The physical approach was conducted through a quantitative method, using a hot-wire anemometer, and through a qualitative analysis (visualizations), using laser technology and a fog machine. Both numerical and physical approaches were performed for three different velocities (2, 4 and 6 m·s⁻¹) and two different directions (North-Northwest and South), corresponding to the prevailing wind speeds and directions of the study area. The numerical results show an effective reduction (with a maximum value of 80%) of the wind speed inside the auditorium through the application of the proposed measures. A wind speed reduction in the range of 20% to 40% was obtained around the audience area for a wind direction from North-Northwest. For southern winds, in the audience zone, the wind speed was reduced by 60% to 80%. Despite this, for southern winds, the design of the barriers generated additional hot spots (high wind speed), namely at the entrance to the auditorium; thus, a change in the location of the entrance would minimize these effects. The results obtained in the wind tunnel compared well with the numerical data, also revealing the high efficiency of the proposed measures (for both wind directions).
Keywords: urban microclimate, pedestrian comfort, numerical modelling, wind tunnel experiments
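A small helper like the one below makes the reported percentage reductions reproducible from paired baseline/scenario speeds at receptor points, and flags residual hot spots against a comfort threshold. The speeds and the 6 m·s⁻¹ threshold are invented for illustration; they are not outputs of the VADIS runs or the wind tunnel campaign.

```python
import numpy as np

def speed_reduction(baseline, scenario):
    # Percentage wind speed reduction at each receptor point.
    baseline = np.asarray(baseline, float)
    scenario = np.asarray(scenario, float)
    return 100.0 * (baseline - scenario) / baseline

def hot_spots(scenario, threshold=6.0):
    # Indices where mitigation left speeds above a comfort threshold (m/s).
    return np.flatnonzero(np.asarray(scenario, float) > threshold)

u_base = [4.0, 5.2, 6.1, 3.8]   # baseline speeds at receptor points
u_mit = [2.4, 3.1, 6.5, 1.2]    # speeds with barriers and trees in place
print(speed_reduction(u_base, u_mit))
print(hot_spots(u_mit))          # e.g., a new hot spot at the entrance
```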
Procedia PDF Downloads 229
645 Role of Yeast-Based Bioadditive on Controlling Lignin Inhibition in Anaerobic Digestion Process
Authors: Ogemdi Chinwendu Anika, Anna Strzelecka, Yadira Bajón-Fernández, Raffaella Villa
Abstract:
Anaerobic digestion (AD) has been used since time immemorial to treat organic wastes in the environment, especially in sewage and wastewater treatment. Recently, the rising demand to increase renewable energy production from organic matter has caused the spectrum of AD substrates to expand and include a wider variety of organic materials, such as agricultural residues and farm manure, of which around 140 billion metric tons are generated annually worldwide. The problem, however, is that agricultural wastes are composed of heterogeneous materials that are difficult to degrade, particularly lignin, which makes up about 0–40% of the total lignocellulose content. This study aimed to evaluate the impact of varying concentrations of lignin on biogas yields and their subsequent response to a commercial yeast-based bioadditive in batch anaerobic digesters. The experiments were carried out in batches for a retention time of 56 days with different lignin concentrations (200 mg, 300 mg, 400 mg, 500 mg, and 600 mg) subjected to different conditions, first to determine the bioadditive concentration that was most optimal for overall process improvement and yield increase. The batch experiments were set up using 130 mL bottles with a working volume of 60 mL, maintained at 38°C in an incubator shaker (150 rpm). Digestate obtained from a local plant operating at mesophilic conditions was used as the starting inoculum, and commercial kraft lignin was used as the feedstock. Biogas measurements were carried out using the displacement method and were corrected to standard temperature and pressure using standard gas equations. Furthermore, the modified Gompertz equation model was used to non-linearly regress the resulting data to estimate gas production potential, production rates, and the duration of lag phases as indicators of the degree of lignin inhibition. The results showed that lignin had a strong inhibitory effect on the AD process, and the higher the lignin concentration, the stronger the inhibition. The modelling also showed that the rates of gas production were influenced by the lignin concentrations added to the system: the higher the lignin concentration in mg (0, 200, 300, 400, 500, and 600), the lower the respective rate of gas production in mL/gVS.day (3.3, 2.2, 2.3, 1.6, 1.3, and 1.1), although the 300 mg rate increased by 0.1 mL/gVS.day over that of the 200 mg. The impact of the yeast-based bioadditive on the production rate was most significant at 400 mg and 500 mg, as the rate improved by 0.1 mL/gVS.day and 0.2 mL/gVS.day, respectively. This indicates that agricultural residues with higher lignin content may be more responsive to inhibition alleviation by the yeast-based bioadditive; therefore, further study of its application to the AD of agricultural residues with high lignin content is the next step in this research.
Keywords: anaerobic digestion, renewable energy, lignin valorisation, biogas
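The modified Gompertz fit described here can be reproduced with a standard non-linear least-squares routine. Below is a minimal sketch assuming SciPy: the cumulative-yield data are synthetic stand-ins for the displacement measurements, and the parameter values and initial guesses are arbitrary, not those of the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def modified_gompertz(t, b0, rmax, lag):
    # b0: biogas production potential (mL/gVS), rmax: maximum production
    # rate (mL/gVS.day), lag: lag-phase duration (days).
    return b0 * np.exp(-np.exp(rmax * np.e / b0 * (lag - t) + 1.0))

rng = np.random.default_rng(1)
t = np.arange(0.0, 57.0, 2.0)                     # 56-day retention time
y = modified_gompertz(t, 180.0, 3.3, 4.0) + rng.normal(0.0, 3.0, t.size)

(b0, rmax, lag), _ = curve_fit(modified_gompertz, t, y, p0=(150.0, 2.0, 2.0))
print(f"B0={b0:.1f} mL/gVS, Rmax={rmax:.2f} mL/gVS.day, lag={lag:.1f} d")
```

Fitting one curve per lignin concentration and comparing the estimated rmax and lag values is exactly how degrees of inhibition are compared in this kind of study.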
Procedia PDF Downloads 90
644 Resilience-Vulnerability Interaction in the Context of Disasters and Complexity: Study Case in the Coastal Plain of Gulf of Mexico
Authors: Cesar Vazquez-Gonzalez, Sophie Avila-Foucat, Leonardo Ortiz-Lozano, Patricia Moreno-Casasola, Alejandro Granados-Barba
Abstract:
In the last twenty years, academic and scientific literature has focused on understanding the processes and factors of coastal social-ecological systems' vulnerability and resilience. Some scholars argue that resilience and vulnerability are isolated concepts due to their epistemological origins, while others note the existence of a strong resilience-vulnerability relationship. Here we present an ordinal logistic regression model based on an analytical framework of the dynamic resilience-vulnerability interaction along the adaptive cycle of complex systems and the phases of the disaster process (during, recovery, and learning). In this way, we demonstrate that 1) during the disturbance, absorptive capacity (resilience as a core of attributes) and external response capacity explain the probability that household capitals diminish the damage, and exposure sets the thresholds on the amount of disturbance that households can absorb; 2) at recovery, absorptive capacity and external response capacity explain the probability that household capitals recover faster (resilience as an outcome) from damage; and 3) at learning, adaptive capacity (resilience as a core of attributes) explains the probability of households' adaptation measures based on the enhancement of physical capital. As a result, during the disturbance phase, exposure has the greatest weight in the probability of capital damage, and households with absorptive and external response capacity elements absorbed the impact of floods in comparison with households without these elements. At the recovery phase, households with absorptive and external response capacity showed a faster recovery of their capital; however, the damage sets the thresholds of recovery time. More importantly, diversity in financial capital increases the probability of recovering other capitals, but it becomes a liability, so that the probability of recovering household finances only over a longer time increases. At the learning-reorganizing phase, adaptation (modifications to the house) increases the probability of less damage to physical capital; however, it is not very relevant. In conclusion, resilience is an outcome but also a core of attributes that interacts with vulnerability along the adaptive cycle and the phases of the disaster process. Absorptive capacity can diminish the damage experienced in floods; however, when exposure overcomes thresholds, both absorptive and external response capacity are not enough. In the same way, absorptive and external response capacity diminish the recovery time of capitals, but the damage sets the thresholds beyond which households are not capable of recovering their capital.
Keywords: absorptive capacity, adaptive capacity, capital, floods, recovery-learning, social-ecological systems
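An ordinal logit of the kind reported here can be fitted with statsmodels' OrderedModel, as in the sketch below. The household data are simulated, and the predictor names merely echo the constructs in the abstract; the coefficients are not the study's estimates.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(2)
n = 300
df = pd.DataFrame({
    "exposure": rng.normal(size=n),      # stand-ins for the study's
    "absorptive": rng.normal(size=n),    # household-level predictors
    "external": rng.normal(size=n),
})
latent = 1.2 * df.exposure - 0.8 * df.absorptive - 0.5 * df.external
latent = latent + rng.logistic(size=n)
df["damage"] = pd.cut(latent, [-np.inf, -1.0, 1.0, np.inf],
                      labels=["low", "medium", "high"])  # ordered outcome

# Ordered logit: models P(damage <= category) from the predictors.
model = OrderedModel(df["damage"],
                     df[["exposure", "absorptive", "external"]],
                     distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())
```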
Procedia PDF Downloads 131
643 A New Measurement for Assessing Constructivist Learning Features in Higher Education: Lifelong Learning in Applied Fields (LLAF) Tempus Project
Authors: Dorit Alt, Nirit Raichel
Abstract:
Although university teaching is claimed to have a special task to support students in adopting ways of thinking and producing new knowledge anchored in scientific inquiry practices, it is argued that students' habits of learning are still overwhelmingly skewed toward passive acquisition of knowledge from authority sources rather than from collaborative inquiry activities. This form of instruction is criticized for encouraging students to acquire inert knowledge that can be used in instructional settings at best but cannot be transferred into real-life complex problem settings. In order to overcome this critical gap between current educational goals and instructional methods, the LLAF consortium (including 16 members from 8 countries) aims at developing updated instructional practices that put a premium on adaptability to the emerging requirements of present society. LLAF has created a practical guide for teachers containing updated pedagogical strategies and assessment tools based on the constructivist approach to learning. This presentation is limited to teacher education and to the contribution of the project in providing a scale designed to measure the extent to which constructivist activities are efficiently applied in the learning environment. A mixed-method approach was implemented in two phases to construct the scale. The first phase included a qualitative content analysis involving both deductive and inductive category applications of students' observations. The results foregrounded eight categories: knowledge construction, authenticity, multiple perspectives, prior knowledge, in-depth learning, teacher-student interaction, social interaction, and cooperative dialogue. The students' descriptions of their classes were formulated as 36 items. The second phase employed structural equation modeling (SEM). The scale was administered to 597 undergraduate students. The goodness of fit of the data to the structural model yielded sufficient fit results. This research elaborates on the body of literature by adding a category of in-depth learning that emerged from the content analysis. Moreover, the theoretical category of social activity has been extended to include two distinct factors: cooperative dialogue and social interaction. Implications of these findings for the LLAF project are discussed.
Keywords: constructivist learning, higher education, mix-methodology, structural equation modeling
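A measurement model of this kind can be specified compactly in lavaan-style syntax and estimated in Python with the semopy package; this is an assumption for illustration, since the original analysis may well have used other SEM software. Factor and item names below are invented, and only two of the eight factors are spelled out to keep the sketch short.

```python
import numpy as np
import pandas as pd
import semopy  # pip install semopy

# Simulated stand-in for the 597 x 36 item-response matrix.
rng = np.random.default_rng(0)
items = [f"item{i}" for i in range(1, 9)]
data = pd.DataFrame(rng.normal(size=(597, len(items))), columns=items)

# Two of the eight hypothesized factors, lavaan-style; the full model
# would list all eight (knowledge construction, authenticity, ...).
desc = """
KnowledgeConstruction =~ item1 + item2 + item3 + item4
CooperativeDialogue   =~ item5 + item6 + item7 + item8
"""

model = semopy.Model(desc)
model.fit(data)
print(model.inspect())            # loadings and factor covariances
print(semopy.calc_stats(model))   # fit indices such as CFI and RMSEA
```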
Procedia PDF Downloads 314
642 Epigenetic and Archeology: A Quest to Re-Read Humanity
Authors: Salma A. Mahmoud
Abstract:
Epigenetics, or alteration in gene expression influenced by extragenetic factors, has emerged as one of the most promising areas for addressing some of the gaps in our current knowledge of patterns of human variation. In the last decade, research investigating epigenetic mechanisms in many fields has flourished and witnessed significant progress. It paved the way for a new era of integrated research, especially between anthropology/archeology and the life sciences. Skeletal remains are considered the most significant source of information for studying human variation across history, and by utilizing these valuable remains, we can interpret past events, cultures and populations. In addition to their archeological, historical and anthropological importance, studying bones has great implications in other fields such as medicine and science. Bones can also hold within them the secrets of the future, as they can act as predictive tools for health, societal characteristics and dietary requirements. Bones in their basic form are composed of cells (osteocytes) that are affected by both genetic and environmental factors, which can only explain a small part of their variability. The primary objective of this project is to examine the epigenetic landscape/signature within the bones of archeological remains as a novel marker that could reveal new ways to conceptualize chronological events, gender differences, social status and ecological variations. We attempt here to address discrepancies in common variants such as the methylome, as well as novel epigenetic regulators such as chromatin remodelers, which to our best knowledge have not yet been investigated by anthropologists/paleoepigenetists, using a plethora of techniques (biological, computational, and statistical). Moreover, extracting epigenetic information from bones will highlight the importance of osseous material as a vector to study human beings in several contexts (social, cultural and environmental) and strengthen its essential role in model systems that can be used to investigate and construct various cultural, political and economic events. We also address all the steps required to plan and conduct an epigenetic analysis from bone materials (modern and ancient), as well as discussing the key challenges facing researchers aiming to investigate this field. In conclusion, this project will serve as a primer for bioarcheologists/anthropologists and human biologists interested in incorporating epigenetic data into their research programs. Understanding the roles of epigenetic mechanisms in bone structure and function will be very helpful for a better comprehension of bone biology and will highlight its essentiality as an interdisciplinary vector and a key material in archeological research.
Keywords: epigenetics, archeology, bones, chromatin, methylome
Procedia PDF Downloads 105
641 A 1T1R Nonvolatile Memory with Al/TiO₂/Au and Sol-Gel Processed Barium Zirconate Nickelate Gate in Pentacene Thin Film Transistor
Authors: Ke-Jing Lee, Cheng-Jung Lee, Yu-Chi Chang, Li-Wen Wang, Yeong-Her Wang
Abstract:
To avoid the cross-talk issue of a resistive random access memory (RRAM)-only cell array, a one-transistor, one-resistor (1T1R) architecture with a TiO₂-based RRAM cell connected to a solution-processed barium zirconate nickelate (BZN) organic thin film transistor (OTFT) device is successfully demonstrated. The OTFT was fabricated on a glass substrate. Aluminum (Al) as the gate electrode was deposited via a radio-frequency (RF) magnetron sputtering system. The barium acetate, zirconium n-propoxide, and nickel(II) acetylacetonate were synthesized using the sol-gel method. After the BZN solution was completely prepared using the sol-gel process, it was spin-coated onto the Al/glass substrate as the gate dielectric. The BZN layer was baked at 100 °C for 10 minutes under ambient air conditions. The pentacene thin film was thermally evaporated onto the BZN layer at a deposition rate of 0.08 to 0.15 nm/s. Finally, a gold (Au) electrode was deposited using an RF magnetron sputtering system and defined through shadow masks as both the source and drain. The channel length and width of the transistors were 150 and 1500 μm, respectively. As for the fabrication of the 1T1R configuration, the RRAM device was fabricated directly on the drain electrode of the TFT device. A simple metal/insulator/metal structure, consisting of an Al/TiO₂/Au stack, was fabricated: first, Au was deposited as the bottom electrode of the RRAM device by the RF magnetron sputtering system; then, the TiO₂ layer was deposited on the Au electrode by sputtering; finally, Al was deposited as the top electrode. The electrical performance of the BZN OTFT was studied, showing superior transfer characteristics with a low threshold voltage of −1.1 V, good saturation mobility of 5 cm²/V·s, and a low subthreshold swing of 400 mV/decade. The integration of the BZN OTFT and TiO₂ RRAM devices was finally completed to form a 1T1R configuration with low power consumption of 1.3 μW, a low operation current of 0.5 μA, and reliable data retention. Based on the I-V characteristics, the different polarities of bipolar switching are found to be determined by the compliance current together with the different distributions of the internal oxygen vacancies in the RRAM and 1T1R devices. This phenomenon is well explained by the proposed mechanism model. These results make the 1T1R promising for practical applications in low-power active-matrix flat-panel displays.
Keywords: one transistor and one resistor (1T1R), organic thin-film transistor (OTFT), resistive random access memory (RRAM), sol-gel
Procedia PDF Downloads 353
640 A Concept in Addressing the Singularity of the Emerging Universe
Authors: Mahmoud Reza Hosseini
Abstract:
The universe is in a continuous expansion process, resulting in the reduction of its density and temperature. By extrapolating back from its current state, the universe at its early times has been studied, giving rise to the big bang theory. According to this theory, moments after creation, the universe was an extremely hot and dense environment; however, its rapid expansion led to a reduction in its temperature and density. This is evidenced by the cosmic microwave background and the structure of the universe at large scales. However, extrapolating back further from this early state reaches a singularity, which cannot be explained by modern physics, and there the big bang theory is no longer valid. In addition, one would expect a nonuniform energy distribution across the universe from a sudden expansion; however, highly accurate measurements reveal an equal temperature mapping across the universe, which contradicts big bang principles. To resolve this issue, it is believed that cosmic inflation occurred at the very early stages of the birth of the universe. According to the cosmic inflation theory, the elements which formed the universe underwent a phase of exponential growth due to the existence of a large cosmological constant. The inflation phase allowed the uniform distribution of energy, so that an equal maximum temperature could be achieved across the early universe. Also, the evidence of quantum fluctuations at this stage provides a means for studying the kinds of imperfections the universe would begin with. Although well-established theories such as cosmic inflation and the big bang together provide a comprehensive picture of the early universe and how it evolved into its current state, they are unable to address the singularity paradox at the time of the universe's creation. Therefore, a practical model capable of describing how the universe was initiated is needed. This research series aims at addressing the singularity issue by introducing an energy conversion mechanism. This is accomplished by establishing a state of energy called a “neutral state”, with an energy level referred to as “base energy”, capable of converting into other states. Although it follows the same principles, the unique quantum state of the base energy allows it to be distinguished from other states and to have a uniform distribution at the ground level. Although the concept of base energy can be utilized to address the singularity issue, to establish a complete picture, the origin of the base energy should also be identified. This matter is the subject of the first study in the series, “A Conceptual Study for Investigating the Creation of Energy and Understanding the Properties of Nothing”, which is discussed in detail. The proposed concept in this research series thus provides a road map for enhancing our understanding of the universe's creation from nothing and its evolution, and discusses the possibility of base energy being one of the main building blocks of this universe.
Keywords: big bang, cosmic inflation, birth of universe, energy creation
Procedia PDF Downloads 88
639 The Effectiveness of a Six-Week Yoga Intervention on Body Awareness, Warnings of Relapse, and Emotion Regulation among Incarcerated Females
Authors: James Beauchemin
Abstract:
Introduction: The incarceration of people with mental illness and substance use disorders is a major public health issue, with social, clinical, and economic implications. Yoga participation has been associated with numerous psychological benefits; however, there is a paucity of research examining the impacts of yoga with incarcerated populations. The purpose of this study was to evaluate the effectiveness of a six-week yoga intervention on several mental health-related variables, including emotion regulation, body awareness, and warnings of substance relapse among incarcerated females. Methods: This study utilized a pre-post, three-arm design, with participants assigned to intervention, therapeutic community, or general population groups. A between-groups analysis of covariance (ANCOVA) was conducted across groups to assess intervention effectiveness using the Difficulties in Emotion Regulation Scale (DERS), Scale of Body Connection (SBC), and Warnings of Relapse (AWARE) Questionnaire. Results: ANCOVA results for warnings of relapse (AWARE) revealed significant between-group differences (F(2, 80) = 7.15, p = .001; ηp² = .152), with significant pairwise comparisons between the intervention group and both the therapeutic community (p = .001) and the general population (p = .005) groups. Similarly, significant differences were found for emotion regulation (DERS) (F(2, 83) = 10.521, p < .001; ηp² = .278). Pairwise comparisons indicated a significant difference between the intervention and general population groups (p = .01). Finally, significant differences between the intervention and control groups were found for body awareness (SBC) (F(2, 84) = 3.69, p = .029; ηp² = .081). Between-group differences were clarified via pairwise comparisons, indicating significant differences between the intervention group and both the therapeutic community (p = .028) and general population (p = .020) groups. Implications: Study results suggest that yoga may be an effective addition to integrative mental health and substance use treatment for incarcerated women and contribute to increasing evidence that holistic interventions may be an important component of treatment for this population. Specifically, given the prevalence of mental health and substance use disorders, the findings reveal that changes in body awareness and emotion regulation resulting from yoga participation may be particularly beneficial for incarcerated populations with substance use challenges. From a systemic perspective, this proactive approach may have long-term implications for both the physical and psychological well-being of the incarcerated population as a whole, thereby decreasing the need for traditional treatment. By integrating a more holistic, salutogenic model that emphasizes prevention, interventions like yoga may work to improve the wellness of this population while providing an alternative or complementary treatment option for those with current symptoms.
Keywords: yoga, mental health, incarceration, wellness
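For readers unfamiliar with the analysis, a between-groups ANCOVA of this shape (post-test scores adjusted for a pre-test covariate) can be reproduced in a few lines with statsmodels. The data below are simulated and the group effect sizes are arbitrary; they are not the study's scores.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 29                                   # per-group size (illustrative)
df = pd.DataFrame({
    "group": np.repeat(["yoga", "therapeutic", "general"], n),
    "aware_pre": rng.normal(100.0, 15.0, 3 * n),
})
shift = df["group"].map({"yoga": -8.0, "therapeutic": 0.0, "general": 0.0})
df["aware_post"] = df["aware_pre"] + shift + rng.normal(0.0, 10.0, 3 * n)

# ANCOVA: post-test AWARE scores adjusted for the pre-test covariate.
fit = smf.ols("aware_post ~ aware_pre + C(group)", data=df).fit()
print(sm.stats.anova_lm(fit, typ=2))     # F test for the group factor
```

Pairwise group contrasts with a multiplicity correction would then follow the omnibus F test, mirroring the comparisons reported above.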
Procedia PDF Downloads 138
638 Sweet to Bitter Perception Parageusia: Case of Posterior Inferior Cerebellar Artery Territory Diaschisis
Authors: I. S. Gandhi, D. N. Patel, M. Johnson, A. R. Hirsch
Abstract:
Although distortion of taste perception following a cerebrovascular event may seem a frivolous consequence of a classic stroke presentation, altered taste perception places patients at increased risk for malnutrition, weight loss, and depression, all of which negatively impact quality of life. Impaired taste perception can result from a wide variety of cerebrovascular lesions at various locations, including the pons, the insular cortices, and the ventral posteromedial nucleus of the thalamus. Wallenberg syndrome, also known as lateral medullary syndrome, has been described to impact taste; however, specific sweet-to-bitter dysgeusia from a territory infarction is an infrequent event; as such, a case is presented. One year prior to presentation, this 64-year-old right-handed woman suffered a right posterior inferior cerebellar artery aneurysm rupture with resultant infarction, culminating in ventriculoperitoneal shunt placement. One and a half months after this event, she noticed the gradual onset of an inability to taste sweet, eventually progressing to all sweet food tasting bitter. Since the onset of her chemosensory problems, the patient has lost 60 pounds. Upon gustatory testing, the patient's taste thresholds showed ageusia to sucrose and hydrochloric acid, and normogeusia to sodium chloride, urea, and phenylthiocarbamide. The gustatory cortex is formed in part by the right insular cortex as well as the right anterior operculum, which are primarily involved in the sensory taste modalities. In this model, sweet is localized in the posterior-most and rostral aspect of the right insular cortex, notably adjacent to the region responsible for bitter taste. The sweet-to-bitter dysgeusia in our patient suggests the presence of a lesion at this localization. Although the primary lesion in this patient was located in the right medulla of the brainstem, neurodegeneration in the rostral and posterior-most aspect of the right insular cortex may have occurred due to diaschisis. Diaschisis describes neurophysiological changes that occur in regions remote from a focal brain lesion. Although hydrocephalus and vasospasm due to aneurysmal rupture may explain the distal foci of impairment, the gradual onset of dysgeusia is more indicative of diaschisis. The perception of sweet now tasting bitter suggests that, in the absence of sweet taste reception, the intrinsic bitter taste of food is being stimulated rather than sweet. In the evaluation and treatment of taste parageusia secondary to cerebrovascular injury, prophylactic neuroprotective measures may be worthwhile. Further investigation is warranted.
Keywords: diaschisis, dysgeusia, stroke, taste
Procedia PDF Downloads 178
637 A Brief Review on the Relationship between Pain and Sociology
Authors: Hanieh Sakha, Nader Nader, Haleh Farzin
Abstract:
Introduction: Throughout history, pain theories have been proposed by biomedicine, especially regarding diagnostic and treatment aspects. Yet the feeling of pain is not only a personal experience; it is affected by social background and therefore involves extensive systems of signals. The challenges in the emotional and sentimental dimensions of pain originate from scientific medicine (its dominant theory is also referred to as the specificity theory); however, this theory underwent some alterations with the emergence of physiology. Fifty years after the specificity theory, Von Frey suggested the theory of cutaneous senses (i.e., Muller's concept: the common sensation of the four major skin receptors combining to produce a given sensation). The pain pathway was held to be composed of the spinothalamic tracts and the thalamus, with an inhibitory effect on the cortex. Pain is referred to as a series of unique experiences with various causes and qualities. Despite the gate control theory, the biological aspect overshadows the social aspect. Vrancken provided a more extensive definition of pain and identified five approaches: the somatico-technical, dualistic body-oriented, behaviorist, phenomenological, and consciousness approaches. The Western model combines the physical, emotional, and existential aspects of the human body. On the other hand, Kotarba questioned the basic origins of chronic pain. Freund engaged and argued with the Durkheimian tradition about the sociological approach to emotions. Lynch provided evidence of the correlation between cardiovascular disease and emotionally life-threatening occurrences. Helman proposed a distinction between private and public pain. Conclusion: Consideration of the emotional aspect of pain could lead to effective emotional and social responses to pain. In parallel, the theory of embodiment is based on the sociological view of health and illness. Social epidemiology shows an imbalanced distribution of health, illness, and disability among various social groups, and social support and socio-cultural level can result in different types of pain; the status of athletes, for example, may define their pain experiences. Gender is one of the important contributing factors affecting the experience of pain (i.e., females are more likely to seek health services for pain relief). Chronic non-cancer pain (CNCP) has become a serious public health issue affecting more than 70 million people globally, compounded by the lack of awareness about chronic pain management among the general population.
Keywords: pain, sociology, sociological, body
Procedia PDF Downloads 70
636 Immuno-Protective Role of Mucosal Delivery of Lactococcus lactis Expressing Functionally Active JlpA Protein on Campylobacter jejuni Colonization in Chickens
Authors: Ankita Singh, Chandan Gorain, Amirul I. Mallick
Abstract:
Successful adherence to mucosal epithelial cells is the key early step in Campylobacter jejuni (C. jejuni) pathogenesis. A set of Surface Exposed Colonization Proteins (SECPs) are among the major factors involved in host cell adherence and invasion by C. jejuni. Among them, the constitutively expressed surface-exposed lipoprotein adhesin of C. jejuni, JlpA, interacts with intestinal heat shock protein 90 (hsp90α) and contributes to disease progression by triggering a pro-inflammatory response via activation of the NF-κB and p38 MAP kinase pathways. Together with its surface expression, high sequence conservation, and the predicted predominance of several B cell epitopes, JlpA retains the potential to become an effective vaccine candidate against a wide range of Campylobacter species, including C. jejuni. Given that chickens are the primary source of C. jejuni and that persistent gut colonization remains a major cause of foodborne disease in humans, the present study explicitly used chickens as a model to test the immunoprotective efficacy of the JlpA protein. Taking into account that the gastrointestinal tract is the focal site of C. jejuni colonization, and to exploit the benefit of mucosal (intragastric) delivery of JlpA, a food-grade, nisin-inducible lactic acid-producing bacterium, Lactococcus lactis (L. lactis), was engineered to express recombinant JlpA (rJlpA) on its surface. Following evaluation of optimal surface expression and the functionality of the recombinant JlpA protein expressed by recombinant L. lactis (rL. lactis), the immunoprotective role of intragastric administration of live rL. lactis was assessed in commercial broiler chickens. In addition to significant elevation of antigen-specific mucosal immune responses in the intestine of chickens that received three doses of rL. lactis, marked upregulation of Toll-like receptor 2 (TLR2) gene expression in association with mixed pro-inflammatory responses (both Th1 and Th17 type) was observed. Furthermore, intragastric delivery of rJlpA expressed by rL. lactis, but not the injectable form, resulted in a significant reduction in C. jejuni colonization in chickens, suggesting that mucosal delivery of live rL. lactis expressing JlpA serves as a promising vaccine platform to induce strong immunoprotective responses against C. jejuni in chickens.
Keywords: chickens, lipoprotein adhesion of Campylobacter jejuni, immuno-protection, Lactococcus lactis, mucosal delivery
Procedia PDF Downloads 137
635 Quantifying Processes of Relating Skills in Learning: The Map of Dialogical Inquiry
Authors: Eunice Gan Ghee Wu, Marcus Goh Tian Xi, Alicia Chua Si Wen, Helen Bound, Lee Liang Ying, Albert Lee
Abstract:
The Map of Dialogical Inquiry provides a conceptual basis for learning processes. According to the Map, dialogical inquiry motivates complex thinking, dialogue, reflection, and learner agency. For instance, classrooms that incorporated dialogical inquiry enabled learners to construct more meaning in their learning, to engage in self-reflection, and to challenge their ideas with different perspectives. While the Map contributes to the psychology of learning, its qualitative approach makes it hard to track and compare learning processes over time for both teachers and learners: qualitative approaches typically rely on open-ended responses, which can be time-consuming and resource-intensive. With these concerns in mind, the present research aimed to develop and validate a quantifiable measure for the Map. Specifically, the Map of Dialogical Inquiry reflects eight different learning processes and perspectives employed during a learner's experience. Focusing on interpersonal and emotional learning processes, the purpose of the present study is to construct and validate a scale measuring the "Relating" aspect of learning. According to the Map, the Relating aspect of learning contains four conceptual components: using intuition and empathy, seeking personal meaning, building relationships and meaning with others, and liking stories and metaphors. All components have been shown to benefit learning in past research. This research began with a literature review aimed at identifying relevant scales in the literature. These scales were used as a basis for item development, guided by the four conceptual dimensions of the "Relating" aspect, resulting in a pool of 47 preliminary items. Then, all items were administered to 200 American participants via an online survey, along with other scales of learning. The dimensionality, reliability, and validity of the "Relating" scale were assessed. Data were submitted to a confirmatory factor analysis (CFA), revealing four distinct components and their items. Items with lower factor loadings were removed in an iterative manner, resulting in 34 items in the final scale. The CFA also showed that the "Relating" scale was a four-factor model, following the four distinct components described in the Map of Dialogical Inquiry. In sum, this research developed a quantitative scale for the "Relating" aspect of the Map of Dialogical Inquiry. By representing learning as numbers, users such as educators and learners can track, evaluate, and compare learning processes over time in an efficient manner. More broadly, this scale may also be used as a learning tool in lifelong learning.
Keywords: lifelong learning, scale development, dialogical inquiry, relating, social and emotional learning, socio-affective intuition, empathy, narrative identity, perspective taking, self-disclosure
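The iterative item-removal step can be expressed as a simple loop: fit a factor model, find the item whose strongest loading is weakest, drop it, refit. The sketch below uses scikit-learn's exploratory FactorAnalysis as a stand-in for the authors' CFA procedure, with an assumed 0.40 salience threshold and random placeholder data in place of the 200 survey responses.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis

def prune_items(data, n_factors=4, min_loading=0.40):
    # Fit, find the item whose strongest loading is weakest, drop it,
    # refit -- until every retained item loads saliently somewhere.
    items = list(data.columns)
    while True:
        fa = FactorAnalysis(n_components=n_factors).fit(data[items])
        loadings = pd.DataFrame(fa.components_.T, index=items)
        strongest = loadings.abs().max(axis=1)
        if strongest.min() >= min_loading or len(items) <= n_factors:
            return items, loadings
        items.remove(strongest.idxmin())

# Placeholder data: 200 respondents x 47 candidate items.
rng = np.random.default_rng(4)
survey = pd.DataFrame(rng.normal(size=(200, 47)),
                      columns=[f"item{i}" for i in range(1, 48)])
kept, final_loadings = prune_items(survey)
print(len(kept), "items retained")
```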
Procedia PDF Downloads 141
634 Aesthetics and Semiotics in Theatre Performance
Authors: Păcurar Diana Istina
Abstract:
Structured in three chapters, the article attempts an X-ray of theatrical aesthetics, correctly understood through the emotions generated in the intimate structure of the spectator that precede the triggering of the viewer's perception, and not through the unfortunately common conflation of the notion of aesthetics with the style in which a theater show is built. The first chapter contains a brief history of the appearance of the word aesthetics, the formulation of definitions for this new term, and its connections with the notions of semiotics, in particular with the perception of the transmitted message. The interventions of thinkers from Aristotle and Plato to Magritte should not be interpreted in the sense that the two scientific concepts can merge into one discipline. Perception, which is the object of everyone's analysis, the understanding of meaning, the decoding of the messages sent, and the triggering of feelings that culminate in pleasure, shaping the aesthetic vision, are some of the elements that keep semiotics and aesthetics distinct, even though they share many methods of analysis. The compositional processes of aesthetic representation and symbolic formation are analyzed in the second part of the paper from perspectives that may or may not include historical, cultural, social, and political processes. Aesthetics and the organization of its symbolic process are treated with expressive activity taken into account. The last part of the article explores the notion of aesthetics in applied theater, more specifically in the theater show. Taking the postmodern approach that aesthetics applies to both the creation of an artifact and the reception of that artifact, the intervention of these elements in the theatrical system must be emphasized, that is, the analysis of the problems arising in the stages of the creation, presentation, and reception, by the public, of the theater performance. The aesthetic process is triggered involuntarily, simultaneously with or before the moment when people perceive the meaning of the messages transmitted by the work of art. This finding makes the mental process of aesthetics similar or related to that of semiotics. However individually beauty is perceived, the mechanism of its production can be reduced to two steps. The first step presents similarities to Peirce's model, but the process between signifier and signified additionally stimulates the related memory of the evaluation of beauty, adding to the meanings related to the signification itself. Then follows the second step, a process of comparison, in which one examines whether the object being looked at matches the accumulated memory of beauty. Therefore, even though aesthetics derives from the conceptual part, the judgment of beauty and, more than that, moral judgment come to be so important to the social activities of human beings that they evolve as a visible process independent of other conceptual contents.
Keywords: aesthetics, semiotics, symbolic composition, subjective joints, signifying, signified
Procedia PDF Downloads 109
633 The Moderating Role of Test Anxiety in the Relationships Between Self-Efficacy, Engagement, and Academic Achievement in College Math Courses
Authors: Yuqing Zou, Chunrui Zou, Yichong Cao
Abstract:
Previous research has revealed relationships between self-efficacy (SE), engagement, and academic achievement among students in Western countries, but these relationships remain unexamined in college math courses among college students in China. In addition, previous research has shown that test anxiety has a direct effect on engagement and academic achievement; however, how test anxiety affects the relationships between SE, engagement, and academic achievement is still unknown. In this study, the authors aimed to explore the mediating roles of behavioral engagement (BE), emotional engagement (EE), and cognitive engagement (CE) in the association between SE and academic achievement, and the moderating role of test anxiety, in college math courses. Our hypotheses were that the association between SE and academic achievement is mediated by engagement and that test anxiety plays a moderating role in the association. To explore these research questions, the authors collected data through self-reported surveys among 147 students at a northwestern university in China. The Motivated Strategies for Learning Questionnaire (MSLQ) (Pintrich, 1991), the metacognitive strategies questionnaire (Wolters, 2004), and the Engagement versus Disaffection with Learning scale (Skinner et al., 2008) were used to assess SE, CE, and BE and EE, respectively. R software was used to analyze the data. The main analyses were reliability and validity analysis of the scales, descriptive statistics of the measured variables, correlation analysis, regression analysis, structural equation modeling (SEM), and moderated mediation analysis to examine the structural relationships between variables simultaneously. The SEM analysis indicated that student SE was positively related to BE, EE, and CE and to academic achievement, and BE, EE, and CE were all positively associated with academic achievement. That is, as the authors expected, higher levels of SE led to higher levels of BE, EE, and CE and greater academic achievement, and higher levels of BE, EE, and CE led to greater academic achievement. In addition, the moderated mediation analysis found that the path from SE to academic achievement in the model was significant, as expected, as was the moderating effect of test anxiety on the SE-achievement association. Specifically, test anxiety was found to moderate the association between SE and BE, the association between SE and CE, and the association between EE and achievement. The authors investigated possible mediating effects of BE, EE, and CE in the association between SE and academic achievement, and all indirect effects were found to be significant. As for the magnitude of the mediations, behavioral engagement was the most important mediator in the SE-achievement association. This study has implications for college teachers, educators, and students in China regarding ways to promote academic achievement in college math courses, including increasing self-efficacy and engagement and lessening test anxiety toward math.
Keywords: academic engagement, self-efficacy, test anxiety, academic achievement, college math courses, behavioral engagement, cognitive engagement, emotional engagement
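A moderated mediation of this shape (test anxiety moderating the SE-to-engagement path, engagement carrying the indirect effect through to achievement) reduces to two regressions plus a product of coefficients. The sketch below uses statsmodels on simulated data with invented effect sizes; in practice, bootstrap confidence intervals would accompany the conditional indirect effects, and the study itself fitted a full SEM in R rather than piecewise regressions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 147                                   # sample size from the study
df = pd.DataFrame({"SE": rng.normal(size=n), "TA": rng.normal(size=n)})
df["BE"] = 0.5 * df.SE - 0.2 * df.SE * df.TA + rng.normal(size=n)
df["ach"] = 0.4 * df.BE + 0.2 * df.SE + rng.normal(size=n)

a = smf.ols("BE ~ SE * TA", data=df).fit()    # anxiety moderates SE -> BE
b = smf.ols("ach ~ BE + SE", data=df).fit()   # outcome model

# Conditional indirect effect of SE on achievement via BE, at low and
# high test anxiety (+/- 1 SD around the standardized mean).
for z in (-1.0, 1.0):
    a_path = a.params["SE"] + a.params["SE:TA"] * z
    print(f"TA = {z:+.0f} SD: indirect = {a_path * b.params['BE']:.3f}")
```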
Procedia PDF Downloads 92
632 Criteria to Access Justice in Remote Criminal Trial Implementation
Authors: Inga Žukovaitė
Abstract:
This work presents postdoc research on remote criminal proceedings in court, aimed at streamlining the proceedings while ensuring the effective participation of the parties in criminal proceedings and the court's obligation to administer substantive and procedural justice. This study tests the hypothesis that remote criminal proceedings do not in themselves violate the fundamental principles of criminal procedure; however, their implementation must ensure the right of the parties to effective legal remedies and a fair trial, and only then may it address issues of procedural economy, speed, and the flexibility/functionality of the application of technologies. In order to ensure that changes in the regulation of criminal proceedings are in line with fair trial standards, this research answers the questions of what conditions, first of all legal and only then organisational, are required for remote criminal proceedings to ensure respect for the parties and enable their effective participation in public proceedings, to create conditions for quality legal defence and its accessibility, and to give the party the correct impression that they are heard and that the court is impartial and fair. It also presents the results of empirical research in the courts of Lithuania conducted using the interview method. The research will serve as a basis for developing a theoretical model for remote criminal proceedings in the EU that ensures a balance between the intention to have innovative, cost-effective, and flexible criminal proceedings and the positive obligation of the State to ensure the rights of participants in proceedings to just and fair criminal proceedings. Moreover, developments in criminal proceedings also keep changing the image of the court itself; the paper therefore creates preconditions for future research on the impact of remote criminal proceedings on trust in courts. The study aims at laying down the fundamentals for theoretical models of remote hearings in criminal proceedings and at making recommendations for safeguarding human rights, in particular the rights of the accused, in such proceedings. The following criteria are relevant for the remote form of criminal proceedings: the purpose of the judicial instance, the legal position of the participants in proceedings, their vulnerability, and the nature of the required legal protection. The content of the study consists of: 1. Identification of the factual and legal prerequisites for a decision to organise the entire criminal proceedings by remote means or to carry out one or several procedural actions by remote means. 2. After analysing the legal regulation and practice concerning the application of the elements of remote criminal proceedings, distinguishing the main legal safeguards for the protection of the rights of the accused to ensure: (a) the right to participate effectively in a court hearing; (b) the right to confidential consultation with defence counsel; (c) the right to participate in the examination of evidence, in particular material evidence, as well as the right to question witnesses; and (d) the right to a public trial.
Keywords: remote criminal proceedings, fair trial, right to defence, technology progress
Procedia PDF Downloads 71
631 Prediction of Sound Transmission Through Framed Façade Systems
Authors: Fangliang Chen, Yihe Huang, Tejav Deganyar, Anselm Boehm, Hamid Batoul
Abstract:
With growing population density and further urbanization, the average noise level in cities is increasing. Excessive noise is not only annoying but also has a negative impact on human health. To deal with increasing city noise, environmental regulations impose higher standards of acoustic comfort in buildings by requiring mitigation of noise transmission from the building envelope exterior to the interior. Framed window, door and façade systems are the leading choice for modern fenestration construction, providing demonstrated weathering reliability, environmental efficiency, and ease of installation. The overall sound insulation of such systems depends on both the glass and the frames; glass usually covers the majority of the exposed surface and is thus the main path of sound energy transmission. Frames in modern façade systems, meanwhile, are becoming slimmer for aesthetic reasons and contribute only a minimal percentage of the exposed surface. Nevertheless, frames may provide substantial transmission paths for sound to travel through, because much less mass crosses the path, and so they become more critical in limiting the acoustic performance of the whole system. There are various methodologies and numerical programs that can accurately predict the acoustic performance of either glass or frames. However, due to the vast variance in size and dimension between frame and glass in the same system, there is no satisfactory theoretical approach or affordable simulation tool in current practice to assess the overall acoustic performance of a whole façade system. For this reason, laboratory testing turns out to be the only reliable source. However, laboratory testing is very time-consuming and highly costly; moreover, different labs may provide slightly different test results because of variations in test chambers, sample mounting, and test operations, which significantly constrains the early-phase design of framed façade systems. To address this dilemma, this study provides an effective analytical methodology to predict the acoustic performance of framed façade systems, based on a vast amount of acoustic test results on glass, frames, and whole façade systems consisting of both. Further test results validate that the current model is able to accurately predict the overall sound transmission loss of a framed system as long as the acoustic behavior of the frame is available. Though the presented methodology was mainly developed from façade systems with aluminum frames, it can easily be extended to systems with frames of other materials such as steel, PVC or wood.
Keywords: city noise, building facades, sound mitigation, sound transmission loss, framed façade system
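The paper's analytical model is built from test data, but the underlying principle of combining element performances can be illustrated with the standard area-weighted composite transmission loss formula: each element's transmission coefficient is tau_i = 10^(-TL_i/10), and the composite is TL = -10*log10(sum(S_i*tau_i)/S_total). The areas and TL values below are invented for illustration and are not the study's measurements.

```python
import numpy as np

def composite_tl(areas, tls):
    # Area-weighted combination of element transmission losses (dB).
    areas = np.asarray(areas, float)
    tls = np.asarray(tls, float)
    tau = 10.0 ** (-tls / 10.0)              # transmission coefficients
    return -10.0 * np.log10((areas * tau).sum() / areas.sum())

# Example: 90% glass at 38 dB, 10% frame at 30 dB (values are assumptions).
print(composite_tl([9.0, 1.0], [38.0, 30.0]))
```

Running this gives roughly 36.8 dB: even a small frame area with a lower TL pulls the whole system noticeably below the glass performance, which is precisely why slim frames can dominate the acoustic rating.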
Procedia PDF Downloads 59630 Using Chatbots to Create Situational Content for Coursework
Authors: B. Bricklin Zeff
Abstract:
This research explores the development and application of a specialized chatbot tailored for a nursing English course, with the primary objective of increasing student engagement through situational content and responsiveness to key expressions and vocabulary. The study first introduces the chatbot, explains its purpose, and outlines its functionality, providing the foundation for the subsequent evaluations of its impact on student engagement and language learning in nursing education. The exploration of the language model development process then underscores the fusion of scientific methodology and artistic consideration in this application of artificial intelligence (AI). Practical principles tailored for educators and curriculum developers in nursing, and extending beyond AI and education, are considered, along with insights into leveraging technology for enhanced language learning in specialized fields and potential applications of similar chatbots in other professional English courses. The overarching vision is to illuminate how AI can transform language learning, rendering it more interactive and contextually relevant; the presented chatbot is a tangible example, equipping educators with a practical tool to enhance their teaching practices. The methodology employed surveys and discussions to gather feedback on the chatbot's usability, effectiveness, and potential improvements, after the chatbot system was integrated into a nursing English course. Significant findings underscore the chatbot's effectiveness in encouraging more verbal practice of the target expressions and vocabulary needed for role-play assessments, emphasizing the practical implications of integrating AI into language education in specialized fields. The research is significant for educators and curriculum developers in the nursing field, offering insights into integrating technology for enhanced English language learning, and its major findings contribute valuable perspectives on the chatbot's practical impact on student interaction and verbal practice. Ultimately, the research sheds light on the transformative potential of AI in making language learning more interactive and contextually relevant, particularly within specialized domains like nursing.Keywords: chatbot, nursing, pragmatics, role-play, AI
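The abstract does not describe how the chatbot is implemented. Purely as an illustration of "responsiveness to key expressions", a situational patient bot can be sketched in a few lines; the scenario, phrases, and matching logic below are all assumptions, not the course chatbot's actual design:

    # Minimal sketch of a situational bot that reacts to key expressions.
    RESPONSES = {
        "how are you feeling": "I have a sharp pain in my lower back, nurse.",
        "take your blood pressure": "Of course. Should I roll up my sleeve?",
        "any allergies": "Yes, I'm allergic to penicillin.",
    }

    def patient_bot(utterance: str) -> str:
        text = utterance.lower()
        for key_expression, reply in RESPONSES.items():
            if key_expression in text:
                return reply
        # fallback keeps the role-play going when no key expression matches
        return "I'm sorry, could you say that again more slowly?"

    print(patient_bot("Good morning! How are you feeling today?"))

A production system would replace the dictionary lookup with a language model, but the pedagogical loop (student utterance in, situational reply out) is the same.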
Procedia PDF Downloads 63629 Sweepline Algorithm for Voronoi Diagram of Polygonal Sites
Authors: Dmitry A. Koptelov, Leonid M. Mestetskiy
Abstract:
The Voronoi diagram (VD) of a finite set of disjoint simple polygons, called sites, is a partition of the plane into loci, one region per site, each consisting of the points that are closer to that site than to any other. A set of polygons is a universal model for many applications in engineering, geoinformatics, design, computer vision, and graphics. Construction of the VD of polygons is usually done by reduction to the task of constructing the VD of segments, for which effective O(n log n) algorithms exist for n segments. The reduction includes preprocessing, which constructs segments from the polygons' sides, and postprocessing, which builds each polygon's locus by merging the loci of its sides. This approach does not take into account two specific properties of the resulting segment sites. Firstly, all these segments are connected in pairs at the vertices of the polygons. Secondly, on one side of each segment lies the interior of the polygon, and the polygon is obviously included in its own locus. Using these properties in the construction algorithm is a resource for reducing computation. This article proposes an algorithm for the direct construction of the VD of polygonal sites. The algorithm is based on the sweepline paradigm, which allows these properties to be exploited effectively. The solution is again performed via reduction. Preprocessing constructs the set of sites from the vertices and edges of the polygons; each site is oriented so that the interior of the polygon lies to its left. The proposed algorithm constructs the VD for this set of oriented sites with the sweepline paradigm. Postprocessing selects the VD edges formed by the centers of empty circles touching different polygons. The improved efficiency of the proposed sweepline algorithm over the general Fortune algorithm is achieved by the following fundamental choices: 1. The algorithm constructs only those VD edges that lie outside the polygons; the concept of oriented sites makes it possible to avoid constructing VD edges located inside the polygons. 2. The event list in the sweepline algorithm has a special property: the majority of events are associated with "medium" polygon vertices, where one incident polygon side lies behind the sweepline and the other in front of it. The proposed algorithm processes such events in constant time, rather than in logarithmic time as in the general Fortune algorithm. The proposed algorithm is fully implemented and tested on a large number of examples, and its high reliability and efficiency are confirmed by computational experiments with complex sets of several thousand polygons. It should be noted that, despite the considerable time that has passed since the publication of Fortune's algorithm in 1986, no full-scale implementation of that algorithm for an arbitrary set of segment sites has been made. The proposed algorithm fills this gap for an important special case: a set of sites formed by polygons.Keywords: Voronoi diagram, sweepline, polygon sites, Fortune's algorithm, segment sites
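The full sweepline algorithm is too long to reproduce, but the two preprocessing ideas the abstract relies on, oriented sites and the classification of vertices relative to the sweepline, can be sketched briefly. The data structures and the classification rule below are illustrative assumptions; the paper's actual representations are not given in the abstract:

    from dataclasses import dataclass

    @dataclass
    class OrientedEdge:
        start: tuple  # (x, y)
        end: tuple    # polygon interior lies to the LEFT of start->end

    def oriented_sites(polygon):
        """polygon: list of (x, y) vertices in counter-clockwise order,
        so that each edge keeps the interior on its left."""
        n = len(polygon)
        return [OrientedEdge(polygon[i], polygon[(i + 1) % n]) for i in range(n)]

    def classify_vertex(pt, neighbors):
        """Classify a polygon vertex for a left-to-right sweep by how many
        incident edges still lie ahead of the sweepline. 'medium' vertices
        (one edge behind, one ahead) are the common case the proposed
        algorithm handles in O(1) rather than O(log n)."""
        ahead = sum(1 for nb in neighbors if nb[0] > pt[0])
        if ahead == 2:
            return "start"   # both incident edges ahead of the sweepline
        if ahead == 0:
            return "end"     # both incident edges already behind
        return "medium"

    triangle = [(0, 0), (2, 1), (1, 3)]
    sites = oriented_sites(triangle)
    print(classify_vertex((1, 3), [(2, 1), (0, 0)]))  # 'medium'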
Procedia PDF Downloads 174628 Analysis of Reduced Mechanisms for Premixed Combustion of Methane/Hydrogen/Propane/Air Flames in Geometrically Modified Combustor and Its Effects on Flame Properties
Authors: E. Salem
Abstract:
Combustion has long been used as a means of energy extraction. In recent years, however, air pollution has further increased through pollutants such as nitrogen oxides and acids. Solving this problem requires reducing carbon and nitrogen oxides through lean burning, modified combustors, and fuel dilution. A numerical investigation was carried out to assess the effectiveness of several reduced mechanisms, in terms of computational time and accuracy, for the combustion of hydrocarbon/air mixtures, neat or diluted with hydrogen, in a micro combustor. The simulations were carried out using ANSYS Fluent 19.1. To validate the results, the PREMIX and CHEMKIN codes were used to calculate 1D premixed flames based on the temperature and composition of the burned and unburned gas mixtures. Numerical calculations were performed for several hydrocarbons by varying the equivalence ratio and adding small amounts of hydrogen to the fuel blends, then analysing the flammability limit and the reduction in NOx and CO emissions, and comparing the results to experimental data. By solving the conservation equations, several global reduced mechanisms (2-, 9-, and 12-step) were obtained. These reduced mechanisms were simulated on a 2D cylindrical tube 40 cm in length and 2.5 cm in diameter. The model used a suitably fine quad mesh within the first 7 cm of the tube and around the walls. After developing a proper boundary layer, several simulations were performed on hydrocarbon/air blends to visualize the flame characteristics, which were then compared with experimental data. Once the results were within an acceptable range, the geometry of the combustor was modified by changing the length and diameter, adding hydrogen by volume, and varying the equivalence ratio from lean to rich in the fuel blends, and the effects on flame temperature, shape, velocity, and the concentrations of radicals and emissions were observed. The reduced mechanisms were found to give results within an acceptable range. Varying the inlet velocity and the geometry of the tube led to an increase in temperature and CO2 emissions; the highest temperatures were obtained under lean conditions, at equivalence ratios of 0.5-0.9. Adding hydrogen to the combustor fuel blends reduced CO and NOx emissions and expanded the flammability limit, under the condition of the same laminar flow and varying equivalence ratios with hydrogen addition. NO production is reduced because combustion occurs in a leaner state, which helps address environmental problems.Keywords: combustor, equivalence-ratio, hydrogenation, premixed flames
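The 1D premixed-flame validation described above was done with PREMIX/CHEMKIN. As a minimal open-source sketch of the same kind of calculation (Cantera and the GRI-Mech 3.0 mechanism are stand-ins here, not the authors' toolchain or their reduced mechanisms):

    import cantera as ct

    phi = 0.8                      # equivalence ratio (lean)
    h2_fraction = 0.1              # mole fraction of H2 in the fuel blend
    fuel = f"CH4:{1 - h2_fraction}, H2:{h2_fraction}"

    gas = ct.Solution("gri30.yaml")
    gas.set_equivalence_ratio(phi, fuel, "O2:1.0, N2:3.76")
    gas.TP = 300.0, ct.one_atm

    # freely propagating 1D premixed flame on a 3 cm domain
    flame = ct.FreeFlame(gas, width=0.03)
    flame.solve(loglevel=0, auto=True)

    print(f"laminar flame speed: {flame.velocity[0]:.3f} m/s")
    print(f"peak temperature:    {flame.T.max():.0f} K")

Sweeping phi and h2_fraction in a loop reproduces the kind of equivalence-ratio and hydrogen-dilution study the abstract describes, with flame speed and peak temperature as the compared quantities.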
Procedia PDF Downloads 113627 Instruction Program for Human Factors in Maintenance, Addressed to the People Working in Colombian Air Force Aeronautical Maintenance Area to Strengthen Operational Safety
Authors: Rafael Andres Rincon Barrera
Abstract:
Safety in global aviation plays a preponderant role in organizations that seek to avoid accidents and thereby preserve their most precious assets: people and machines. Human factors-based programs have been shown to be effective in managing human-generated risk. The importance of training on human factors in maintenance has not been lost on the Colombian Air Force (COLAF). This research, which follows a mixed quantitative, qualitative, and descriptive approach, addresses the COLAF's lack of a structured instruction program in human factors in aeronautical maintenance, a program that would serve as a tool for improving operational safety in the COLAF's military air units. The research shows the trends and evolution of human factors programs in aeronautical maintenance through the analysis of a data matrix of 33 sources, drawn from different databases, on the incorporation of such programs in the aeronautical industry over the last 20 years and on the improvements in operational safety reported after their implementation. Likewise, it compiles the normative guides in force from world aeronautical authorities on training in these programs, establishing a matrix of methodologies applicable to the development of a human factors training program for maintenance. Subsequently, it presents the design, validation, and deployment of an instrument for measuring human factors knowledge in maintenance at the COLAF, covering human factors (HF), the safety management system (SMS), and COLAF aeronautical maintenance regulations. The statistical analysis of the data obtained identifies the staff's knowledge strengths and weaknesses as input for the instruction program. Triangulating the applicable methodologies against the weakest aspects found among maintenance personnel, using colour-coded cross-referencing, yields the contents of a training program in human factors in aeronautical maintenance, adjusted to the competencies expected of the staff and laid out in a curricular format established by the COLAF. Among the most important findings: the authors dealing with human factors in maintenance agree that there is no standard model for its instruction and implementation, which must instead be adapted to the needs of each organization; the safety culture of companies that incorporated human factors programs in maintenance improved; the knowledge measurement instrument placed the level of knowledge at MEDIUM-LOW, with a score of 61.79%; and there is an opportunity to improve operational safety at the COLAF through the implementation of the human factors in maintenance training program for the technicians working in this area.Keywords: Colombian air force, human factors, safety culture, safety management system, triangulation
Procedia PDF Downloads 134626 Khiaban (the Street) as an Ancient Percept of the Iranian Urban Landscape: An Aesthetic Reading of Lalehzar Street, the First Modern Khiaban in Iran
Authors: Mohammad Atashinbar
Abstract:
Lalehzar was one of the main streets of central Tehran in the late Qajar and first Pahlavi periods (1880-1940) and a center of government attention. It was a favoured promenade during the last decade of the reign of Nasser al-Din Shah (1880-1895). Under the second Pahlavi, however, the street lost its status and devolved from a modern cultural street into a commercial corridor. Lalehzar's decline resulted from the emigration of the upper class from the inner city to the northern districts and the consequent transfer of amenities and luxury goods with them. It seems that during Lalehzar's six decades of prosperity, this khiâbân received an aesthetic treatment that made it enjoyable and appreciated by the people of Tehran. Various post-revolutionary urban management measures have been taken to revive Lalehzar and improve the quality of its urban life. From the beginning of the Safavid era, the khiâbân was associated with the concept of urban space, its characteristics explained by reference to the main axis of the Persian Garden, with rows of trees, streams, and a line of flowers on both sides. The construction of a street inside the city as an urban space drew on a mental concept of a spiritual and exciting space, especially in the forms common in the Persian Garden. Before that, the khiâbân was a religious and mythical concept; one can even say that the dominance of this concept led to its appearance in the garden. In Tehran, Lalehzar Street is a gateway to modernity. The aesthetic changes in Lalehzar Street, inspired by Nasser al-Din Shah's journey to Europe around 1870, coincided with changes in architectural and urban landscape movements around the world between 1880 and 1940. The Shah was impressed by modernist urbanism and, in particular, by the Champs-Élysées in Paris. A tree-lined promenade bearing the hallmarks of the Persian Garden matched Nasser al-Din Shah's mental image of beauty: in that image, the main axis of the Persian Garden has the characteristics of a promenade. The origins of the aesthetic of Lalehzar Street therefore lie in the aesthetics of the khiâbân. Granting that the Champs-Élysées served as a model for Lalehzar, it seems the Shah wanted to associate the Champs-Élysées with Lalehzar and to highlight its landscape aspects by building this street. On the premise that percepts have their own aesthetics, this proposal analyzes the aesthetic evolution of the khiâbân as a percept towards the street as a component of the urban landscape in Lalehzar. The research reviews the aesthetic aspects of Lalehzar between 1880 and 1940 through iconographic analysis of the available historical data, in order to identify the street's leading aesthetic principles. The aesthetic view of Lalehzar as an artwork is one of the main achievements of this study.Keywords: Lalehzar, aesthetics, percept, Tehran, street
Procedia PDF Downloads 149625 A Feature Clustering-Based Sequential Selection Approach for Color Texture Classification
Authors: Mohamed Alimoussa, Alice Porebski, Nicolas Vandenbroucke, Rachid Oulad Haj Thami, Sana El Fkihi
Abstract:
Color and texture are highly discriminant visual cues that provide essential information in many types of images. Color texture representation and classification is therefore one of the most challenging problems in computer vision and image processing applications. Color textures can be represented in different color spaces by using multiple image descriptors, which generate a high-dimensional set of texture features. To reduce the dimensionality of the feature set, feature selection techniques can be used. The goal of feature selection is to find a relevant subset of the original feature space that improves the accuracy and efficiency of a classification algorithm. Traditionally, feature selection focuses on removing irrelevant features, neglecting possible redundancy between relevant ones. This is why some feature selection approaches use feature clustering analysis to aid and guide the search. These techniques can be divided into two categories. i) Feature clustering-based ranking algorithms use feature clustering as an analysis step that comes before feature ranking: after dividing the feature set into groups, these approaches perform a feature ranking in order to select the most discriminant feature of each group. ii) Feature clustering-based subset search algorithms can use feature clustering following one of three strategies: as an initial step before the search, combined with the search, or as an alternative and replacement for the search. In this paper, we propose a new feature clustering-based sequential selection approach for color texture representation and classification. Our approach is a three-step algorithm. First, irrelevant features are removed from the feature set by means of a class-correlation measure. Then, using a new automatic feature clustering algorithm, the feature set is divided into several feature clusters. Finally, a sequential search algorithm, based on a filter model and a separability measure, builds a relevant and non-redundant feature subset: at each step, a feature is selected, and the features of the same cluster are removed and not considered thereafter. This significantly speeds up the selection process, since a large number of redundant features are eliminated at each step. The proposed algorithm uses clustering combined with the search. Experiments using a combination of two well-known texture descriptors, namely Haralick features extracted from Reduced Size Chromatic Co-occurrence Matrices (RSCCMs) and features extracted from Local Binary Pattern (LBP) image histograms, on five color texture data sets (Outex, NewBarktex, Parquet, Stex and USPtex) demonstrate the efficiency of our method compared with seven state-of-the-art methods in terms of accuracy and computation time.Keywords: feature selection, color texture classification, feature clustering, color LBP, chromatic co-occurrence matrix
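The three-step scheme can be sketched compactly. The thresholds, the clustering method (KMeans), and the relevance score (ANOVA F) below are illustrative stand-ins for the paper's class-correlation and separability measures, which the abstract does not specify:

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.feature_selection import f_classif

    def cluster_sequential_select(X, y, n_clusters=10, relevance_quantile=0.5):
        # step 1: drop irrelevant features via a class-relevance score
        scores, _ = f_classif(X, y)
        keep = np.where(scores >= np.quantile(scores, relevance_quantile))[0]

        # step 2: cluster the surviving features (each feature = one sample)
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X[:, keep].T)

        # step 3: sequential search; picking a feature discards its cluster-mates
        selected, remaining = [], set(range(len(keep)))
        while remaining:
            best = max(remaining, key=lambda i: scores[keep[i]])
            selected.append(keep[best])
            remaining -= {i for i in remaining if labels[i] == labels[best]}
        return selected

    X = np.random.rand(100, 50)          # 100 samples, 50 texture features
    y = np.random.randint(0, 3, 100)     # 3 texture classes
    print(cluster_sequential_select(X, y))

Discarding a whole cluster per step is what gives the speed-up claimed in the abstract: the candidate pool shrinks by a cluster, not by a single feature, at every iteration.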
Procedia PDF Downloads 134624 Shark Detection and Classification with Deep Learning
Authors: Jeremy Jenrette, Z. Y. C. Liu, Pranav Chimote, Edward Fox, Trevor Hastie, Francesco Ferretti
Abstract:
Suitable shark conservation depends on well-informed population assessments. Direct methods such as scientific surveys and fisheries monitoring are adequate for defining population statuses, but species-specific indices of abundance and distribution from these sources are rare for most shark species. We can rapidly fill these information gaps by boosting media-based remote monitoring efforts with machine learning and automation. We created a database of shark images by sourcing 24,546 images covering 219 species of sharks from the web application spark pulse and the social network Instagram. We used object detection to extract shark features and inflate this database to 53,345 images. We packaged object-detection and image classification models into a Shark Detector bundle, developed to recognize and classify sharks from videos and images using transfer learning and convolutional neural networks (CNNs). We applied these models to common shark data-generation approaches: boosting training datasets, processing baited remote camera footage and online videos, and data-mining Instagram. We examined the accuracy of each model and tested how genus and species prediction correctness varied with training data quantity. The Shark Detector located sharks in baited remote footage and YouTube videos with an average accuracy of 89%, and classified located subjects to the species level with 69% accuracy (n = 8 species). The Shark Detector sorted heterogeneous datasets of images sourced from Instagram with 91% accuracy and classified species with 70% accuracy (n = 17 species). Data-mining Instagram can inflate training datasets and increase the Shark Detector's accuracy, as well as facilitate the archiving of historical and novel shark observations. The base accuracy of genus prediction was 68% across 25 genera, and the average base accuracy of species prediction within each genus class was 85%. The Shark Detector can classify 45 species. All data-generation methods were processed without manual interaction. As media-based remote monitoring comes to dominate methods for observing sharks in nature, we developed an open-source Shark Detector to facilitate common identification applications. The prediction accuracy of the software pipeline increases as more images are added to the training dataset. We provide public access to the software on our GitHub page.Keywords: classification, data mining, Instagram, remote monitoring, sharks
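A minimal sketch of the kind of transfer-learning species classifier the Shark Detector bundles: a pretrained CNN backbone with a new classification head. The backbone choice (MobileNetV2), image size, and training details are assumptions; the actual architecture is documented in the project's repository:

    import tensorflow as tf

    NUM_SPECIES = 45  # the Shark Detector classifies 45 species

    # frozen ImageNet-pretrained backbone supplies generic visual features
    backbone = tf.keras.applications.MobileNetV2(
        input_shape=(224, 224, 3), include_top=False, weights="imagenet")
    backbone.trainable = False

    # only the new head is trained on the shark image database
    model = tf.keras.Sequential([
        backbone,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(NUM_SPECIES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()

Freezing the backbone is what makes the approach viable with the modest per-species image counts typical of sourced media: only the final dense layer's weights must be learned from shark data.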
Procedia PDF Downloads 120623 Efficacy and Safety of Computerized Cognitive Training Combined with SSRIs for Treating Cognitive Impairment Among Patients with Late-Life Depression: A 12-Week, Randomized Controlled Study
Authors: Xiao Wang, Qinge Zhang
Abstract:
Background: This randomized, open-label study examined the therapeutic effects of computerized cognitive training (CCT) combined with selective serotonin reuptake inhibitors (SSRIs) on cognitive impairment among patients with late-life depression (LLD). Method: Study data were collected from May 5, 2021, to April 21, 2023. Outpatients who met the diagnostic criteria for major depressive disorder according to the fifth revision of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), with a total score on the 17-item Hamilton Depression Rating Scale (HAMD-17) ≥ 18 and a total score on the Montreal Cognitive Assessment scale (MoCA) < 26, were randomly assigned to receive up to 12 weeks of CCT and SSRIs (n=57) or SSRIs and control treatment (n=61). The primary outcome was the change in Alzheimer's Disease Assessment Scale-Cognitive Subscale (ADAS-Cog) scores from baseline to week 12 between the two groups. The secondary outcomes included changes in the HAMD-17 score, the Hamilton Anxiety Scale (HAMA) score, and the Neuropsychiatric Inventory (NPI) score. Mixed model repeated measures (MMRM) analysis was performed on the modified intention-to-treat (mITT) and completer populations. Results: The full analysis set (FAS) included 118 patients (CCT and SSRIs group, n=57; SSRIs and control group, n=61). Over the 12-week study period, the reduction in the ADAS-Cog total score was significant (P < 0.001) in both groups, while MMRM analysis revealed a significantly greater reduction in ADAS-Cog total scores (i.e., greater cognitive improvement) from baseline to post-treatment in the CCT and SSRIs group than in the SSRIs and control group (F(1,115) = 13.65; least-squares mean difference [95% CI]: −2.77 [−3.73, −1.81]; p < 0.001). There were significantly greater improvements in depressive symptoms (measured by the HAMD-17) in the CCT and SSRIs group than in the control group (MMRM, estimated mean difference between groups −3.59 [−5.02, −2.15], p < 0.001). The least-squares mean changes in the HAMA and NPI scores between baseline and week 8 were also greater in the CCT and SSRIs group than in the control group (all P < 0.05). There was no significant difference between groups in response and remission rates using the last-observation-carried-forward (LOCF) method (all P > 0.05). The most frequent adverse events (AEs) in both groups were dry mouth, somnolence, and constipation, with no significant difference in the incidence of adverse events between the two groups. Conclusions: CCT combined with SSRIs was efficacious and well tolerated in LLD patients with cognitive impairment.Keywords: late-life depression, cognitive function, computerized cognitive training, SSRIs
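The MMRM analysis can be sketched with a mixed-effects model. A random intercept per patient (statsmodels' MixedLM) only approximates a true MMRM, which fits an unstructured residual covariance, and the trial's actual model specification is not given in the abstract; the synthetic data, column names, and effect sizes below are invented:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # synthetic long-format data: one row per patient per visit
    rng = np.random.default_rng(0)
    patients = np.repeat(np.arange(118), 3)
    week = np.tile([0, 4, 12], 118)
    group = np.repeat(rng.integers(0, 2, 118), 3)  # 0 = control, 1 = CCT+SSRIs
    adas = 25 - 0.15 * week - 0.2 * group * week + rng.normal(0, 2, len(week))
    df = pd.DataFrame({"patient": patients, "week": week,
                       "group": group, "adas_cog": adas})

    # treatment-by-visit interaction; random intercept captures the
    # within-patient correlation of the repeated measures
    model = smf.mixedlm("adas_cog ~ group * C(week)", df, groups=df["patient"])
    result = model.fit(reml=True)
    print(result.summary())  # group:C(week)[T.12] estimates the week-12 contrast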
Procedia PDF Downloads 50622 Insect Cell-Based Models: Australian Sheep Blowfly Lucilia Cuprina Embryo Primary Cell Line Establishment and Transfection
Authors: Yunjia Yang, Peng Li, Gordon Xu, Timothy Mahony, Bing Zhang, Neena Mitter, Karishma Mody
Abstract:
Sheep flystrike is one of the most economically important diseases affecting the Australian sheep and wool industry (>356M annually). Control of Lucilia cuprina currently relies almost exclusively on chemical controls, and the parasite has developed resistance to nearly all control chemicals used in the past. It is therefore critical to develop an alternative solution for the sustainable control and management of flystrike. RNA interference (RNAi) technologies have been successfully explored in multiple animal industries for developing parasite controls. This research project aims to develop an RNAi-based biological control for the sheep blowfly. Double-stranded RNA (dsRNA) has already proven successful against viruses, fungi, and insects. However, the environmental instability of dsRNA is a major bottleneck for successful RNAi. Bentonite polymer (BenPol) technology can overcome this problem, as it can be tuned for the controlled release of dsRNA in the challenging pH environment of the blowfly larval gut, prolonging the dsRNA's exposure time to, and uptake by, target cells. To investigate the potential of BenPol technology for dsRNA delivery, four different BenPol carriers were tested for their dsRNA loading capabilities; three were found capable of conferring dsRNA stability at multiple temperatures (4°C, 22°C, 40°C, 55°C) in sheep serum. Based on these stability results, dsRNA from candidate target genes was loaded onto BenPol carriers and tested in larval feeding assays, with three genes showing knockdown. Meanwhile, a primary blowfly embryo cell line (BFEC) derived from L. cuprina embryos was successfully established, providing an insect cell model for preliminary assessment and screening of RNAi efficacy. The results of this study establish that dsRNA is stable when loaded on BenPol particles, unlike naked dsRNA, which is rapidly degraded in sheep serum. The stable nanoparticle delivery system offered by BenPol technology can protect dsRNA molecules and increase their inherent stability at higher temperatures in a complex biological fluid like serum, showing promise for future use in enhancing animal protection.Keywords: Lucilia cuprina, primary cell line establishment, RNA interference, insect cell transfection
Procedia PDF Downloads 72621 Predictive Modelling of Curcuminoid Bioaccessibility as a Function of Food Formulation and Associated Properties
Authors: Kevin De Castro Cogle, Mirian Kubo, Maria Anastasiadi, Fady Mohareb, Claire Rossi
Abstract:
Background: The bioaccessibility of bioactive compounds is a critical determinant of the nutritional quality of various food products. Despite its importance, comprehensive studies assessing how the composition of a food matrix influences the bioaccessibility of a compound of interest remain limited. This knowledge gap has prompted a growing need to investigate the intricate relationship between food matrix formulations and the bioaccessibility of bioactive compounds. One class of bioactive compounds that has attracted considerable attention is the curcuminoids. These naturally occurring phytochemicals, extracted from the roots of Curcuma longa, have gained popularity owing to their purported health benefits and are also well known for their poor bioaccessibility. Project aim: The primary objective of this research is to systematically assess the influence of matrix composition on the bioaccessibility of curcuminoids. Additionally, the study aims to develop a series of predictive models of bioaccessibility, providing valuable insights for optimising functional food formulas and giving potential consumers more descriptive nutritional information. Methods: Food formulations enriched with curcuminoids were subjected to simulated in vitro digestion, and their bioaccessibility was characterized with chromatographic and spectrophotometric techniques. The resulting data served as the foundation for developing predictive models capable of estimating bioaccessibility from specific physicochemical properties of the food matrices. Results: One striking finding was the strong correlation between the concentration of macronutrients in the food formulations and the bioaccessibility of curcuminoids. Macronutrient content emerged as a highly informative explanatory variable of bioaccessibility and was used, alongside other variables, as a predictor in a Bayesian hierarchical model that predicted curcuminoid bioaccessibility accurately (R² of 0.97 on the optimisation set) for the majority of cross-validated test formulations (R² of 0.92 under leave-one-out cross-validation). These preliminary results open the door to further exploration, enabling researchers to investigate a broader spectrum of food matrix types and additional properties that may influence bioaccessibility. Conclusions: This research sheds light on the intricate interplay between food matrix composition and the bioaccessibility of curcuminoids, laying a foundation for future investigations and offering a promising avenue for advancing our understanding of bioactive compound bioaccessibility and its implications for the food industry and informed consumer choices.Keywords: bioactive bioaccessibility, food formulation, food matrix, machine learning, probabilistic modelling
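A Bayesian hierarchical regression of the kind named above can be sketched with PyMC. The priors, the grouping by matrix type, the predictors, and all data values below are assumptions for illustration; the paper's actual model structure is not given in the abstract:

    import numpy as np
    import pymc as pm

    # synthetic stand-in data: macronutrient content and measured
    # bioaccessibility for formulations of several matrix types
    rng = np.random.default_rng(1)
    n, n_types = 60, 4
    matrix_type = rng.integers(0, n_types, n)
    fat, protein = rng.uniform(0, 30, n), rng.uniform(0, 20, n)
    bioacc = 5 + 0.8 * fat + 0.2 * protein + rng.normal(0, 2, n)

    with pm.Model() as model:
        # partial pooling: each matrix type gets its own intercept drawn
        # from a shared population distribution
        mu_a = pm.Normal("mu_a", 0.0, 10.0)
        sigma_a = pm.HalfNormal("sigma_a", 5.0)
        a = pm.Normal("a", mu_a, sigma_a, shape=n_types)

        b_fat = pm.Normal("b_fat", 0.0, 5.0)
        b_prot = pm.Normal("b_prot", 0.0, 5.0)
        sigma = pm.HalfNormal("sigma", 5.0)

        mu = a[matrix_type] + b_fat * fat + b_prot * protein
        pm.Normal("obs", mu, sigma, observed=bioacc)
        idata = pm.sample(1000, tune=1000, progressbar=False)

The hierarchical intercepts let formulations of a rare matrix type borrow strength from the others, which matters when few formulations per matrix type are available, as is typical of digestion experiments.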
Procedia PDF Downloads 66620 Exploring the Entrepreneur-Function in Uncertainty: Towards a Revised Definition
Authors: Johan Esbach
Abstract:
The entrepreneur has traditionally been defined through various historical lenses emphasising individual traits, risk-taking, speculation, innovation, and firm creation. These definitions, however, often fail to capture the dynamic nature of the modern entrepreneurial function, which responds to unpredictable uncertainties and gives way to routine management as certainty is achieved. This paper proposes a revised definition, positioning the entrepreneur as a dynamic function, rather than a human construct, that emerges to address specific uncertainties in economic systems and fades once the uncertainty is resolved. By examining historical definitions and their limitations, including the works of Cantillon, Say, Schumpeter, and Knight, the paper identifies a gap in the literature and develops a generalised definition of the entrepreneur. The revised definition challenges conventional thought by shifting the focus from static attributes, such as alertness, traits, and firm creation, to a dynamic role encompassing reliability, adaptation, scalability, and adaptability. The methodology employs a mixed approach, combining theoretical analysis with case study examination to explore the dynamic nature of the entrepreneurial function in relation to uncertainty. The case studies include companies such as Airbnb, Uber, Netflix, and Tesla, firms that demonstrate a clear transition from entrepreneurial uncertainty to routine certainty. The case study data are analysed qualitatively, focusing on the patterns of the entrepreneurial function across the selected companies, and the results are then validated quantitatively using an independent survey. The primary finding validates the entrepreneur as a dynamic function rather than a static, human-centric role. In tracing the transition from uncertainty to certainty in companies like Airbnb, Uber, Netflix, and Tesla, the study shows that the entrepreneurial function emerges explicitly to address market, technological, or social uncertainties; once these are resolved and certainty in the operating environment is established, the need for the entrepreneurial function ceases, giving way to routine management and business operations. The paper emphasises the need for a definitive model that responds to the temporal and contextualised nature of the entrepreneur. Under the revised definition, the entrepreneur plays a crucial role in the reduction of uncertainties within economic systems; once the uncertainties are addressed, certainty is manifested in new combinations or new firms. Finally, the paper outlines policy implications for fostering environments that enable the entrepreneurial function and the transition theory.Keywords: dynamic function, uncertainty, revised definition, transition
Procedia PDF Downloads 19