Search results for: linear and body measurements
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9393

603 Assessing P0.1 and Occlusion Pressures in Brain-Injured Patients on Pressure Support Ventilation: A Study Protocol

Authors: S. B. R. Slagmulder

Abstract:

Monitoring inspiratory effort and dynamic lung stress in patients on pressure support ventilation in the ICU is important for protecting against patient self-inflicted lung injury (P-SILI) and diaphragm dysfunction. Strategies that address the detrimental effects of excessive respiratory drive and effort can lead to improved patient outcomes. Two non-invasive estimation methods, occlusion pressure (Pocc) and P0.1, have been proposed for achieving lung- and diaphragm-protective ventilation. However, their relationship and interpretation in neuro-ICU patients are not well understood. P0.1 is the airway pressure measured during a 100-millisecond occlusion of the inspiratory port. It reflects the neural drive from the respiratory centers to the diaphragm and respiratory muscles, indicating the patient's respiratory drive at the initiation of each breath. Occlusion pressure, measured during a brief inspiratory pause against a closed airway, provides information about the inspiratory muscles' strength and the system's total resistance and compliance. Research Objective: Understanding the relationship between Pocc and P0.1 in brain-injured patients can provide insights into the interpretation of these values in pressure support ventilation. This knowledge can contribute to determining extubation readiness and optimizing ventilation strategies to improve patient outcomes. The central goal is to assess a study protocol for determining the relationship between Pocc and P0.1 in brain-injured patients on pressure support ventilation and their ability to predict successful extubation. Additionally, comparing these values between brain-damaged and non-brain-damaged patients may provide valuable insights. Key Areas of Inquiry: 1. How do Pocc and P0.1 values correlate within brain injury patients undergoing pressure support ventilation? 2. To what extent can Pocc and P0.1 values serve as predictive indicators for successful extubation in patients with brain injuries? 3.
What differentiates the Pocc and P0.1 values between patients with brain injuries and those without? Methodology: P0.1 and occlusion pressures are standard measurements for pressure support ventilation patients, taken by attending doctors as per protocol. We utilize electronic patient records for existing data. An unpaired t-test will be conducted to compare P0.1 and Pocc values between the two study groups. Associations between P0.1 and Pocc and other study variables, such as extubation, will be explored with simple regression and correlation analysis. Depending on how the data evolve, subgroup analysis will be performed for patients with and without extubation failure. Results: While it is anticipated that neuro patients may exhibit high respiratory drive, the link between such elevation, quantified by P0.1, and successful extubation remains unknown. The analysis will focus on determining the ability of these values to predict successful extubation and their potential impact on ventilation strategies. Conclusion: Further research is needed to fully understand the potential of these indices and their impact on mechanical ventilation in different patient populations and clinical scenarios. Understanding these relationships can aid in determining extubation readiness and tailoring ventilation strategies to improve patient outcomes in this specific patient population. Additionally, it is vital to account for the influence of sedatives, neurological scores, and BMI on respiratory drive and occlusion pressures to ensure a comprehensive analysis.
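The planned between-group comparison can be sketched as an unpaired (Welch) t-test. The sketch below uses only the Python standard library; the P0.1 values (in cmH2O) are entirely hypothetical and are not the study's data or analysis code.

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Unpaired (Welch) t-statistic and Welch-Satterthwaite degrees of freedom."""
    ma, mb = mean(a), mean(b)
    va, vb = variance(a), variance(b)   # sample variances (n-1 denominator)
    na, nb = len(a), len(b)
    se2 = va / na + vb / nb
    t = (ma - mb) / math.sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical P0.1 values for brain-injured vs. non-brain-injured patients
brain = [2.1, 2.8, 3.5, 2.9, 3.1, 2.4]
control = [1.2, 1.6, 1.9, 1.4, 1.7, 1.5]
t, df = welch_t(brain, control)
```

The t statistic would then be compared against the t distribution with `df` degrees of freedom to obtain a p-value.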

Keywords: brain damage, diaphragm dysfunction, occlusion pressure, p0.1, respiratory drive

Procedia PDF Downloads 49
602 Geoinformation Technology of Agricultural Monitoring Using Multi-Temporal Satellite Imagery

Authors: Olena Kavats, Dmitry Khramov, Kateryna Sergieieva, Vladimir Vasyliev, Iurii Kavats

Abstract:

Geoinformation technologies of space agromonitoring are a means of operative decision-making support in the tasks of managing the agricultural sector of the economy. Existing technologies use satellite images in the optical range of the electromagnetic spectrum. Time series of optical images often contain gaps due to the presence of clouds and haze. A geoinformation technology has been created that fills gaps in time series of optical images (Sentinel-2, Landsat-8, PROBA-V, MODIS) with radar survey data (Sentinel-1) and uses information about the agrometeorological conditions of the growing season for individual monitoring years. The technology supports crop classification and mapping for the spring-summer (winter and spring crops) and autumn-winter (winter crops) vegetation periods, monitoring of the dynamics of seasonal changes in crop state, and crop yield forecasting. Crop classification is based on supervised classification algorithms and takes into account the peculiarities of crop growth at different vegetation stages (dates of sowing, emergence, active vegetation, and harvesting) and agricultural land state characteristics (row spacing, seedling density, etc.). A catalog of samples of the main agricultural crops (Ukraine) was created, and crop spectral signatures were calculated with the preliminary removal of row spacing, cloud cover, and cloud shadows in order to construct time series of crop growth characteristics. The obtained data are used in grain crop growth tracking and in timely detection of deviations of growth trends from reference samples of a given crop for a selected date. Statistical models of crop yield forecasting are created in the form of linear and nonlinear relationships between crop yield indicators and crop state characteristics (temperature, precipitation, vegetation indices, etc.). Predicted values of grain crop yield are evaluated with an accuracy of up to 95%.
The developed technology was used for monitoring agricultural areas in a number of regions of Great Britain and Ukraine using the EOS Crop Monitoring Platform (https://crop-monitoring.eos.com). The obtained results allow us to conclude that joint use of Sentinel-1 and Sentinel-2 images improves the separation of winter crops (rapeseed, wheat, barley) in the early stages of vegetation (October-December). It also makes it possible to separate soybean, corn, and sunflower sowing areas, which are quite similar in their spectral characteristics.
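The gap-filling idea can be illustrated minimally: cloud-contaminated observations in a vegetation-index time series are replaced by values interpolated between the nearest valid neighbours. This is only an illustration of the concept; the actual technology fills gaps with Sentinel-1 radar data rather than by interpolation, and the NDVI values below are hypothetical.

```python
def fill_gaps(series):
    """Linearly interpolate None entries (cloud/haze gaps) in a time series.
    Assumes the first and last observations are valid."""
    out = list(series)
    i = 0
    while i < len(out):
        if out[i] is None:
            j = i
            while out[j] is None:   # find the next valid observation
                j += 1
            left, right = out[i - 1], out[j]
            span = j - (i - 1)
            for k in range(i, j):   # fill the gap proportionally
                out[k] = left + (right - left) * (k - (i - 1)) / span
            i = j
        i += 1
    return out

# Hypothetical NDVI series with two cloud gaps
ndvi = [0.20, 0.35, None, None, 0.65, 0.70, None, 0.60]
filled = fill_gaps(ndvi)
```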

Keywords: geoinformation technology, crop classification, crop yield prediction, agricultural monitoring, EOS Crop Monitoring Platform

Procedia PDF Downloads 420
601 Soybean Lecithin Based Reverse Micellar Extraction of Pectinase from Synthetic Solution

Authors: Sivananth Murugesan, I. Regupathi, B. Vishwas Prabhu, Ankit Devatwal, Vishnu Sivan Pillai

Abstract:

Pectinase is an important enzyme which has a wide range of applications, including textile processing and bioscouring of cotton fibers, coffee and tea fermentation, purification of plant viruses, oil extraction, etc. Selective separation and purification of pectinase from fermentation broth, and recovery of the enzyme from the process stream for reuse, are costly steps in most enzyme-based industries. It is difficult to identify a suitable medium that enhances enzyme activity while retaining the enzyme's characteristics during such processes. Cost-effective, selective separation of enzymes through modified liquid-liquid extraction is of current research interest worldwide. Reverse micellar extraction, a widely used liquid-liquid extraction technique, is well known for separating and purifying solutes from the feed; it offers high solute specificity and partitioning, ease of operation, and recycling of the extractants used. Surfactants added to an apolar solvent at concentrations above the critical micelle concentration form micelles, and addition of the micellar phase to water in turn forms reverse micelles, or water-in-oil emulsions. Electrostatic interactions play a major role in the separation/purification of solutes using reverse micelles; these interactions can be altered by changing the pH or by adding cosolvents, surfactants, and electrolytes or non-electrolytes. Although many chemical-based commercial surfactants have been utilized for this purpose, biosurfactants are more suitable for purifying enzymes intended for food applications. The present work focused on the partitioning of pectinase from a synthetic aqueous solution into the reverse micelle phase formed by a biosurfactant, soybean lecithin, dissolved in chloroform. The critical micelle concentration of the soybean lecithin/chloroform solution was identified through refractive index and density measurements.
The effect of surfactant concentrations above and below the critical micelle concentration on enzyme activity and on enzyme partitioning within the reverse micelle phase was studied. The effect of pH and electrolyte salts on the partitioning behavior was studied by varying the system pH and the concentration of different salts during the forward and back extraction steps. It was observed that lower concentrations of soybean lecithin enhanced the enzyme activity within the water core of the reverse micelle while maximizing extraction efficiency. A maximum pectinase yield of 85% with a partitioning coefficient of 5.7 was achieved at pH 4.8 during forward extraction, and an 88% yield with a partitioning coefficient of 7.1 was observed during back extraction at pH 5.0. However, addition of salt decreased the enzyme activity, and at higher salt concentrations enzyme activity declined drastically during both the forward and back extraction steps. The results proved that reverse micelles formed by soybean lecithin and chloroform may be used for the extraction of pectinase from aqueous solution. Further, the reverse micelles can be considered as nanoreactors to enhance enzyme activity and maximize utilization of the substrate at optimized conditions, paving a way to process intensification and scale-down.
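The reported figures are internally consistent: assuming equal phase volumes (our simplification, for illustration only), an 85% transfer of enzyme into the micellar phase corresponds to a partition coefficient of 0.85/0.15 ≈ 5.7, matching the abstract. A sketch of that arithmetic:

```python
def partition(c_initial, c_aqueous_after):
    """Partition coefficient K and extraction yield (%) from aqueous-phase
    concentrations before and after contact with the reverse-micelle phase,
    assuming equal phase volumes (an illustrative simplification)."""
    c_micelle = c_initial - c_aqueous_after        # what left the aqueous phase
    K = c_micelle / c_aqueous_after                # micellar / aqueous ratio
    yield_pct = 100.0 * c_micelle / c_initial      # fraction extracted
    return K, yield_pct

# 85% of the pectinase transferred, as reported for forward extraction
K, y = partition(1.0, 0.15)
```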

Keywords: pectinase, reverse micelles, soybean lecithin, selective partitioning

Procedia PDF Downloads 350
600 Estimation of the Effect of Initial Damping Model and Hysteretic Model on Dynamic Characteristics of Structure

Authors: Shinji Ukita, Naohiro Nakamura, Yuji Miyazu

Abstract:

In considering the dynamic characteristics of a structure, the natural frequency and damping ratio are useful indicators. When performing dynamic design, it is necessary to select an appropriate initial damping model and hysteretic model. In the linear region, the choice of initial damping model influences the response; in the nonlinear region, the combination of initial damping model and hysteretic model influences the response. However, the dynamic characteristics of structures in the nonlinear region remain unclear. In this paper, we studied the effect of the initial damping model and hysteretic model settings on the dynamic characteristics of a structure. For the initial damping model, initial-stiffness-proportional, tangent-stiffness-proportional, and Rayleigh-type damping were used. For the hysteretic model, the TAKEDA model and the Normal-trilinear model were used. As a study method, dynamic analysis was performed using a base-fixed lumped-mass model. During the analysis, the maximum acceleration of the input earthquake motion was gradually increased from 1 to 600 gal. The dynamic characteristics were calculated using the ARX model, and the behavior of the 1st and 2nd natural frequencies and the 1st damping ratio was evaluated. The input earthquake motion was a simulated wave published by the Building Center of Japan. For the building model, an RC building with a 30×30 m plan on each floor was assumed. The story height was 3 m and the total height was 18 m. The unit weight of each floor was 1.0 t/m2. The building's natural period was set to 0.36 sec, and the initial stiffness of each floor was calculated by assuming the 1st mode to be an inverted triangle. First, we investigated how the dynamic characteristics differ depending on the initial damping model setting. With the increase in the maximum acceleration of the input earthquake motions, the 1st and 2nd natural frequencies decreased and the 1st damping ratio increased.
Then, in the natural frequency, the difference due to the initial damping model setting was small, but in the damping ratio a significant difference was observed (initial stiffness proportional ≒ Rayleigh type > tangent stiffness proportional). The acceleration and displacement of the earthquake response were largest for the tangent-stiffness-proportional model. In the range where the acceleration response increased, the damping ratio was constant; in the range where the acceleration response was constant, the damping ratio increased. Next, we investigated how the dynamic characteristics differ depending on the hysteretic model setting. With the increase in the maximum acceleration of the input earthquake motions, the natural frequency decreased in the TAKEDA model, but in the Normal-trilinear model the natural frequency did not change. The damping ratio increased in both models, but it was higher in the TAKEDA model than in the Normal-trilinear model. In conclusion, among the initial damping model settings, the tangent-stiffness-proportional model was evaluated most highly, and among the hysteretic model settings, the TAKEDA model was considered more appropriate than the Normal-trilinear model in the nonlinear region. Our results provide a useful indicator for dynamic design.
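The study identifies damping from response records with an ARX model; as a much simpler stand-in (not the authors' method), the damping ratio of a lightly damped mode can be estimated from successive free-vibration peak amplitudes via the logarithmic decrement. The peak values below are hypothetical.

```python
import math

def log_decrement_damping(peaks):
    """Estimate the damping ratio from successive free-vibration peak
    amplitudes using the logarithmic decrement
    delta = ln(x_i / x_{i+1}), zeta = delta / sqrt(4*pi^2 + delta^2)."""
    deltas = [math.log(peaks[i] / peaks[i + 1]) for i in range(len(peaks) - 1)]
    delta = sum(deltas) / len(deltas)          # average over available cycles
    return delta / math.sqrt(4 * math.pi ** 2 + delta ** 2)

# Peaks decaying by a roughly constant ratio imply a fixed damping ratio
zeta = log_decrement_damping([1.0, 0.73, 0.533, 0.389])
```

For these hypothetical peaks the estimate comes out near 5% of critical damping, a typical order of magnitude for RC structures.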

Keywords: initial damping model, damping ratio, dynamic analysis, hysteretic model, natural frequency

Procedia PDF Downloads 159
599 The Beneficial Effects of Inhibition of Hepatic Adaptor Protein Phosphotyrosine Interacting with PH Domain and Leucine Zipper 2 on Glucose and Cholesterol Homeostasis

Authors: Xi Chen, King-Yip Cheng

Abstract:

Hypercholesterolemia, characterized by high low-density lipoprotein cholesterol (LDL-C), raises the risk of cardiovascular events in patients with type 2 diabetes (T2D). Although several drugs, such as statins and PCSK9 inhibitors, are available for the treatment of hypercholesterolemia, they exert detrimental effects on glucose metabolism and hence increase the risk of T2D. On the other hand, the drugs used to treat T2D have minimal effect on improving the lipid profile. Therefore, there is an urgent need to develop treatments that can simultaneously improve glucose and lipid homeostasis. Adaptor protein phosphotyrosine interacting with PH domain and leucine zipper 2 (APPL2) causes insulin resistance in the liver and skeletal muscle by inhibiting insulin and adiponectin actions in animal models. Single-nucleotide polymorphisms in the APPL2 gene have been associated with LDL-C, non-alcoholic fatty liver disease, and coronary artery disease in humans. The aim of this project is to investigate whether an APPL2 antisense oligonucleotide (ASO) can alleviate dietary-induced T2D and hypercholesterolemia. A high-fat diet (HFD) was used to induce obesity and insulin resistance in mice. GalNAc-conjugated APPL2 ASO (GalNAc-APPL2-ASO) was used to selectively silence hepatic APPL2 expression in C57BL/6J mice. Glucose, lipid, and energy metabolism were monitored. Immunoblotting and quantitative PCR analysis showed that GalNAc-APPL2-ASO treatment reduced APPL2 expression in the liver but not in other tissues, such as adipose tissue, kidney, muscle, and heart. Glucose tolerance and insulin sensitivity tests revealed that GalNAc-APPL2-ASO progressively improved glucose tolerance and insulin sensitivity. Blood chemistry analysis revealed that mice treated with GalNAc-APPL2-ASO had significantly lower circulating levels of total cholesterol and LDL cholesterol.
However, there was no difference in circulating levels of high-density lipoprotein (HDL) cholesterol, triglyceride, or free fatty acid between the mice treated with GalNAc-APPL2-ASO and those treated with GalNAc-Control-ASO. GalNAc-APPL2-ASO treatment had no obvious effect on food intake, body weight, or liver injury markers, supporting its tolerability and safety. We showed that selectively silencing hepatic APPL2 alleviated insulin resistance and hypercholesterolemia and improved energy metabolism in a dietary-induced obese mouse model, indicating that APPL2 is a therapeutic target for metabolic diseases.

Keywords: APPL2, antisense oligonucleotide, hypercholesterolemia, type 2 diabetes

Procedia PDF Downloads 45
598 Disclosure on Adherence of the King Code's Audit Committee Guidance: Cluster Analyses to Determine Strengths and Weaknesses

Authors: Philna Coetzee, Clara Msiza

Abstract:

In modern society, audit committees are seen as the custodians of accountability and the conscience of management and the board. But who holds the audit committee accountable for its actions or non-actions, and how do we know what it is supposed to be doing and what it is actually doing? The purpose of this article is to provide greater insight into the latter part of this problem, namely, to determine what the best practices for audit committees are and what the disclosed realities are. In countries where governance is well established, the roles and responsibilities of the audit committee are mostly clearly set out in legislation and/or guidance documents, with countries increasingly providing guidance on this topic. Given the high cost involved in adhering to governance guidelines, the public (for public organisations) and shareholders (for private organisations) expect to see the value of their ‘investment’. For audit committees, the dividends on the investment should be reflected in fewer fraudulent activities, less corruption, higher efficiency and effectiveness, improved social and environmental impact, and increased profits, to name a few. If this is not the case (as reflected in the number of fraudulent activities in both the private and the public sector), stakeholders have the right to ask: where was the audit committee? Therefore, the objective of this article is to contribute to the body of knowledge by comparing the adherence of audit committees to the best-practice guidelines stipulated in the King Report across publicly listed companies, national and provincial government departments, state-owned enterprises, and local municipalities. After constructs were formed, based on the literature, factor analyses were conducted to reduce the number of variables in each construct.
Thereafter, cluster analysis, an exploratory technique that classifies a set of objects so that more similar objects are grouped together, was conducted. The SPSS TwoStep Clustering Component, which is capable of handling both continuous and categorical variables, was used. In the first step, a pre-clustering procedure clusters the objects into small sub-clusters, after which these sub-clusters are clustered into the desired number of clusters. The cluster analysis was conducted for each construct, and the outcome measure, namely the audit opinion as listed in the external audit report, was included. Analysing 228 organisations' information, the results indicate a clear distinction between the four spheres of business included in the analyses, revealing certain strengths and certain weaknesses within each sphere. The results may provide the overseers of audit committees with insight into where a specific sector's strengths and weaknesses lie. Audit committee chairs will be able to improve the areas where their audit committee is lagging behind. The strengthening of audit committees should result in improved accountability of boards, leading to less fraud and corruption.
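The grouping idea can be illustrated with a minimal one-dimensional k-means sketch on hypothetical adherence scores. SPSS's TwoStep procedure (which additionally handles categorical variables and chooses the cluster count automatically) is not reproduced here; this only shows how similar objects fall into the same cluster.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal 1-D k-means: assign each point to the nearest centre,
    then move each centre to the mean of its assigned points."""
    rng = random.Random(seed)
    centres = rng.sample(points, k)            # initialise from the data
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda i: abs(p - centres[i]))].append(p)
        centres = [sum(g) / len(g) if g else centres[i]
                   for i, g in enumerate(groups)]
    return sorted(centres)

# Hypothetical audit-committee adherence scores: two clearly separated groups
scores = [0.2, 0.25, 0.3, 0.8, 0.85, 0.9]
centres = kmeans(scores, 2)
```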

Keywords: audit committee disclosure, cluster analyses, governance best practices, strengths and weaknesses

Procedia PDF Downloads 141
597 Active Vibration Reduction for a Flexible Structure Bonded with Sensor/Actuator Pairs on Efficient Locations Using a Developed Methodology

Authors: Ali H. Daraji, Jack M. Hale, Ye Jianqiao

Abstract:

With the extensive use of high-specific-strength structures to optimise loading capacity and material cost in aerospace and most engineering applications, much effort has been expended on developing intelligent structures for active vibration reduction and structural health monitoring. These structures are highly flexible, have inherently low internal damping, and are associated with large vibrations and long decay times. Modifying such structures by adding lightweight piezoelectric sensors and actuators at efficient locations, integrated with an optimal control scheme, is considered an effective solution for structural vibration monitoring and control. The size and location of sensors and actuators are important research topics, since they affect the level of vibration detection and reduction and the amount of energy required by a controller. Several methodologies have been presented to determine the optimal location of a limited number of sensors and actuators for small-scale structures. However, these studies have tackled the problem directly, measuring the fitness function based on eigenvalues and eigenvectors achieved with numerous combinations of sensor/actuator pair locations and converging on an optimal set using heuristic optimisation techniques such as genetic algorithms. This is computationally expensive for both small- and large-scale structures when a number of sensor/actuator (s/a) pairs must be optimised to suppress multiple vibration modes. This paper proposes an efficient method to determine optimal locations for a limited number of sensor/actuator pairs for active vibration reduction of a flexible structure, based on the finite element method and Hamilton's principle.
The current work takes the simplified approach of modelling a structure with sensors at all locations, subjecting it to an external force to excite the various modes of interest, and noting the locations of the sensors giving the largest average percentage sensor effectiveness, measured by dividing each sensor's output voltage by the maximum for each mode. The methodology was implemented for a cantilever plate under external force excitation to find the optimal distribution of six sensor/actuator pairs to suppress the first six modes of vibration. The resulting optimal sensor locations agree well with published optimal locations, but are obtained with very much reduced computational effort and higher effectiveness. Furthermore, it is shown that collocated sensor/actuator pairs placed at these locations give very effective active vibration reduction using an optimal linear quadratic control scheme.
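The selection index described above (each sensor's output voltage normalised by the per-mode maximum, then averaged across modes) can be sketched directly. The voltages below are hypothetical, for two modes and three candidate locations.

```python
def sensor_effectiveness(voltages):
    """Average percentage effectiveness per sensor location.
    voltages[m][s] is sensor s's output voltage for mode m: each value is
    divided by the maximum for that mode, converted to percent, and the
    result is averaged over all modes."""
    n_sensors = len(voltages[0])
    scores = [0.0] * n_sensors
    for mode in voltages:
        vmax = max(mode)
        for s, v in enumerate(mode):
            scores[s] += 100.0 * v / vmax
    return [sc / len(voltages) for sc in scores]

# Hypothetical output voltages: 2 modes x 3 candidate locations
eff = sensor_effectiveness([[0.5, 1.0, 0.25],
                            [0.8, 0.4, 0.8]])
```

The locations with the highest average scores would be chosen for the collocated sensor/actuator pairs.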

Keywords: optimisation, plate, sensor effectiveness, vibration control

Procedia PDF Downloads 212
596 Effect of Progressive Muscle Relaxation on the Postpartum Depression and General Comfort Levels

Authors: İlknur Gökşin, Sultan Ayaz Alkaya

Abstract:

Objective: Progressive muscle relaxation (PMR) involves the deliberate stretching and relaxation of the major muscle groups of the human body. This study was conducted to evaluate the effect of PMR applied to women on postpartum depression and the general comfort level. Methods: The study population of this quasi-experimental study with pre-test, post-test, and control group consisted of primiparous women who had a vaginal delivery in the obstetric service of a university hospital. The experimental and control groups consisted of 35 women each. The data were collected by questionnaire, the Edinburgh Postnatal Depression Scale (EPDS), and the General Comfort Questionnaire (GCQ). The women were matched according to their age and education level and divided into the experimental and control groups by simple random selection. Postpartum depression risk and general comfort were evaluated on the 2nd and 5th days, the 10th and 15th days, and in the fourth and eighth weeks after birth. The experimental group was visited at home and PMR was applied. After the first visit, the women were asked to apply PMR regularly three times a week for eight weeks. During the application period, the researcher called the participants twice a week to follow up on the continuity of the practice. No intervention was performed in the control group. For data analysis, descriptive statistics such as number, percentage, mean, and standard deviation, the significance test of the difference between two means, and ANOVA were used. Approval of the ethics committee and permission of the institution were obtained for the study. Results: There were no significant differences between the women in the experimental and control groups in terms of age, education status, and employment status (p>0.05). There was no statistically significant difference between the experimental and control groups in terms of EPDS pre-test, 1st, 2nd, and 3rd follow-up mean scores (p>0.05).
There was a statistically significant difference between the EPDS pre-test and 3rd follow-up scores of the experimental group (p<0.05), whereas there was no such difference in the control group (p>0.05). There was no statistically significant difference between the experimental and control groups in terms of mean GCQ pre-test scores (p>0.05), whereas in the 1st, 2nd, and 3rd follow-ups there was a statistically significant difference between the mean GCQ scores (p<0.05). There was a significant increase in the GCQ physical, psychospiritual, and sociocultural comfort sub-scales and in the relief and relaxation levels of the experimental group between the pre-test and 3rd follow-up scores (p<0.05). In contrast, a significant decrease was found between the pre-test and 3rd follow-up GCQ psychospiritual, environmental, and sociocultural comfort sub-scale, relief, relaxation, and superiority levels (p<0.05). Conclusion: Progressive muscle relaxation was effective in reducing postpartum depression risk and increasing general comfort. It is recommended that progressive muscle relaxation training be provided to women in the postpartum period and that the continuity of this practice be ensured.

Keywords: general comfort, postpartum depression, postpartum period, progressive muscle relaxation

Procedia PDF Downloads 242
595 Reservoir-Triggered Seismicity of Water Level Variation in the Lake Aswan

Authors: Abdel-Monem Sayed Mohamed

Abstract:

Lake Aswan is one of the largest man-made reservoirs in the world. The reservoir began to fill in 1964, and the level rose gradually, with annual irrigation cycles, until it reached a maximum water level of 181.5 m in November 1999, with a capacity of 160 km3. The filling of such a large reservoir changes the stress system, either by increasing the vertical compressional stress through loading and/or by increasing the pore pressure, which decreases the effective normal stress. The resulting effect on the stability of fault zones depends strongly on the orientation of the pre-existing stress and the geometry of the reservoir/fault system. The main earthquake occurred on November 14, 1981, with magnitude 5.5. This event occurred 17 years after the reservoir began to fill, along the active part of the Kalabsha fault, not far from the High Dam. Numerous small earthquakes followed this event and continue to the present. For this reason, 13 seismograph stations (a radio-telemetry network of short-period seismometers) were installed around the northern part of Lake Aswan. The main purpose of the network is to monitor the earthquake activity continuously within the Aswan region. The data described here are obtained from the continuous record of earthquake activity and lake-water level variation over the period from 1982 to 2015. The seismicity is concentrated in the Kalabsha area, where the easterly trending Kalabsha fault intersects the northerly trending faults. The earthquake foci are distributed in two seismic zones, shallow and deep in the crust. Shallow events have focal depths of less than 12 km, while deep events extend from 12 to 28 km. Correlation between the seismicity and the water level variation in the lake strongly suggests that the micro-earthquakes, particularly those in the shallow seismic zone, belong in the reservoir-triggered seismicity category.
Water loading is one of several factors that act as an activating medium in triggering earthquakes. The common factors in all cases of induced seismicity appear to be the presence of specific geological conditions, the tectonic setting, and water loading. Water loading acts as a supplementary source of earthquake events: the earthquake activity in the area originated tectonically (ML ≥ 4), while the water factor works as an activating medium in triggering small earthquakes (ML ≤ 3). Study of the seismicity induced by water level variation in Lake Aswan is therefore of great importance for the safety of the High Dam and its economic resources.
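The correlation between lake level and seismicity mentioned above reduces to computing a correlation coefficient between two time series. The sketch below uses the standard Pearson formula on hypothetical monthly lake levels and shallow-event counts; it is not the study's data.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical monthly lake levels (m) and shallow-event counts
level = [175.0, 177.5, 180.0, 181.5, 179.0, 176.0]
events = [4, 6, 9, 11, 8, 5]
r = pearson(level, events)
```

A strongly positive r for the shallow zone, and a weak one for the deep zone, would support the reservoir-triggered interpretation.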

Keywords: Aswan lake, Aswan seismic network, seismicity, water level variation

Procedia PDF Downloads 350
594 Immune Dysregulation in Inflammatory Skin Diseases with Comorbid Metabolic Disorders

Authors: Roman Khanferyan, Levon Gevorkyan, Ivan Radysh

Abstract:

Skin barrier dysfunction underlies multiple inflammatory skin diseases. Epidemiological studies clearly support the link between most dermatological pathologies, immune disorders, and metabolic disorders. Among the most common of these diseases are psoriasis (PS) and atopic dermatitis (AD). Psoriasis is a chronic immune-mediated inflammatory skin disease that affects 1.5 to 3.0% of the world's population. Comorbid metabolic disorders also play an important role in the progression of PS and AD. It is well known that PS, AD, and overweight/obesity share common pathophysiological mechanisms of mild chronic inflammation. The goal of this study was to examine the immune disturbances in patients with PS, AD, and comorbid metabolic disorders. To study the prevalence of comorbidity of PS and AD, data from 1,406 patients' case histories were analyzed. The severity of the disease was assessed using the PASI (Psoriasis Area and Severity Index). 59 patients with psoriasis of different lesion localizations and severity, as well as different body mass indices (BMI), were examined. The concentrations of pro-inflammatory cytokines (IL-6, IL-8, IFNγ, IL-17, IL-18, and TNFα) and chemokines (RANTES, IP-10, MCP-1, and Eotaxin) in sera and in supernatants of 48 h-cultivated peripheral blood mononuclear cells (PBMC) of psoriasis patients and healthy volunteers (36 adults) were determined by multiplex assay (Luminex Corporation, USA). It was demonstrated that 42% of PS patients had comorbidity with different types of atopy, the most common being bronchial asthma and allergic rhinitis. At the same time, the prevalence of AD among PS patients was 8.7%. Serum levels of all studied cytokines (IL-6, IL-8, IFNγ, IL-17, IL-18, and TNFα) were higher in most PS patients than in those with AD and in healthy controls (p<0.05).
In vitro synthesis of IL-6 and IFNγ by PBMC demonstrated results similar to those determined in blood sera. There was a high correlation between BMI, immune mediators, and the concentrations of adipokines and chemokines (p<0.05). The concentrations of leptin and resistin in obese psoriatic patients were greater by 28.6% and 17%, respectively, compared to non-obese psoriatic patients. In obese patients with psoriasis, the serum levels of adiponectin were decreased up to 1.3-fold. The mean serum RANTES, IP-10, MCP-1, and Eotaxin levels in obese psoriatic patients were decreased by up to 13.1%, 21.9%, 40.4%, and 28.2%, respectively. Similar results were demonstrated in AD patients with comorbid overweight and obesity. Thus, the study demonstrated the important role of cytokine and chemokine dysregulation in inflammatory skin diseases, especially in patients with comorbid obesity and overweight. Metabolic disorders promote the severity of PS and AD, markedly increase immune dysregulation, and increase the synthesis of adipokines, which correlates with the production of pro-inflammatory immune mediators in comorbid obesity and overweight.

Keywords: psoriasis, atopic dermatitis, pro-inflammatory cytokines, chemokines, comorbid obesity

Procedia PDF Downloads 11
593 The One, the Many, and the Doctrine of Divine Simplicity: Variations on Simplicity in Essentialist and Existentialist Metaphysics

Authors: Mark Wiebe

Abstract:

One of the tasks contemporary analytic philosophers have focused on (e.g., Wolterstorff, Alston, Plantinga, Hasker, and Crisp) is the analysis of certain medieval metaphysical frameworks. This growing body of scholarship has helped clarify and prevent distorted readings of medieval and ancient writers. However, as scholars like Dolezal, Duby, and Brower have pointed out, these analyses have in some instances been incomplete or inaccurate, e.g., with regard to analogical speech or the doctrine of divine simplicity (DDS). Additionally, contributors to this work frequently express opposing claims or fail to note substantial differences between ancient and medieval thinkers. This is the case regarding the comparison between Thomas Aquinas and others. Anton Pegis and Étienne Gilson have argued along this line that Thomas’ metaphysical framework represents a fundamental shift. Gilson describes Thomas’ metaphysics as a turn from a form of “essentialism” to “existentialism.” It can be argued that this shift distinguishes Thomas from many analytic philosophers as well as from other classical defenders of the DDS. Moreover, many of the objections analytic philosophers make against Thomas presume the same metaphysical principles undergirding the above-mentioned form of essentialism, which weakens their force against Thomas’ positions. In order to demonstrate these claims, it is helpful to consider Thomas’ metaphysical outlook alongside that of two other prominent figures: Augustine and Ockham. One area of their thinking which brings their differences to the surface is how each relates to Platonic and Neo-Platonic thought. More specifically, it is illuminating to consider whether and how each distinguishes or conceives essence and existence. It is also useful to see how each approaches the Platonic conflicts between essence and individuality, and between unity and intelligibility. In both of these areas, Thomas stands out from Augustine and Ockham. 
Although Augustine and Ockham diverge in many ways, both ultimately identify being with particularity and pit particularity against both unity and intelligibility. By contrast, Thomas argues that being is distinct from and prior to essence. Being (i.e., Being in itself), rather than essence or form, must therefore serve as the ground and ultimate principle for the existence of everything in which being and essence are distinct. Additionally, since change, movement, and addition improve and give definition to finite being, multitude and distinction are principles of being rather than of non-being. Consequently, each creature imitates and participates in God’s perfect Being in its own way; the perfection of each genus exists pre-eminently in God without being at odds with God’s simplicity; God has knowledge, power, and will; and these and the many other terms assigned to God refer truly to the being of God without being either meaningless or synonymous. The existentialist outlook at work in these claims distinguishes Thomas in a noteworthy way from his contemporaries and predecessors as much as it does from many of the analytic philosophers who have objected to his thought. This suggests that at least these kinds of objections do not apply to Thomas’ thought.

Keywords: theology, philosophy of religion, metaphysics, philosophy

Procedia PDF Downloads 54
592 Coping with Incompatible Identities in Russia: Case of Orthodox Gays

Authors: Siuzan Uorner

Abstract:

The era of late modernity is characterized, on the one hand, by social disintegration and by values of personal freedom, tolerance, and self-expression. Boundaries between the accessible and the elitist, the normal and the abnormal, are blurring. On the other hand, traditional social institutions such as religion (especially the Russian Orthodox Church) persist, criticizing lifestyles and worldviews that depart from conventionally structured canons. Despite the declared values and opportunities of late modern society, people's freedom is ambivalent. Personal identity and its aspects are becoming a subject of choice. Hence, combinations of identity aspects can be incompatible. Our theoretical framework is based on P. Ricoeur's concept of narrative identity and hermeneutics, E. Goffman’s theory of social stigma, self-presentation, and discrepant roles, and W. James's lectures on the varieties of religious experience. This paper aims to reconstruct the ways Orthodox gays cope with incompatible identities (an extreme sampling of a combination of sexual orientation and religious identity in a heteronormative society). The study focuses on the discourse of Orthodox gay parishioners and ROC gay priests in Russia (a ‘hard to reach’ population because of the secrecy of the gay community in the ROC and the sensitivity of the topic itself). We used a qualitative research design with in-depth, semi-structured personal online interviews. Informants were recruited through a post inviting participation in the research on the VKontakte page of 'Nuntiare et Recreare' (a Russian religious LGBT movement). We analyzed interview transcripts using axial coding, choosing the Grounded Theory methodology to construct a theory from empirical data and contribute to the growing body of knowledge on ways of harmonizing incompatible identities in late modern societies. 
The research found that Orthodox gays encounter two types of conflict: canonical contradictions (the postulates of Scripture and its interpretations) and problems in social interaction, mainly with ROC priests and Orthodox parishioners. We revealed the semantic meanings of the words most commonly used in the narratives (such as ‘love’, ‘sin’, and ‘religion’). Finally, we reconstructed biographical patterns of involvement in LGBT social movements. This paper argues that all these incompatibilities are harmonized in the narrative itself. As Ricoeur suggested, narrative configuration allows the speaker to gather facts and events together and to compose causal relationships between them. Sexual orientation and religious identity are thus reconciled and harmonized within the narrative.

Keywords: gay priests, incompatible identities, narrative identity, Orthodox gays, religious identity, ROC, sexual orientation

Procedia PDF Downloads 114
591 Reducing Falls in Memory Care through Implementation of the Stopping Elderly Accidents, Deaths, and Injuries Program

Authors: Cory B. Lord

Abstract:

Falls among the elderly population have become an area of concern in healthcare today. The negative impacts of falls lead to increased morbidity, mortality, and financial burdens for both patients and healthcare systems. Falls in the United States are reported at an annual rate of 36 million among those aged 65 and older. Each year, one out of four people in this age group will suffer a fall, with 20% of these falls causing injury. The setting for this Doctor of Nursing Practice (DNP) project was a memory care unit in an assisted living community, as these facilities house cognitively impaired older adults. Such communities often lack fall prevention programs; therefore, the need exists to add to the body of knowledge to positively impact this population. The objective of this project was to reduce fall rates through the implementation of the Centers for Disease Control and Prevention (CDC) STEADI (Stopping Elderly Accidents, Deaths, and Injuries) program. The DNP project was a quality improvement pilot study with a pre- and post-test design. The program was implemented in the memory care setting over 12 weeks and included an educational session for staff and a fall risk assessment with appropriate resident referrals. The three aims of the DNP project were to reduce fall rates among adults aged 65 and older who reside in the memory care unit, increase staff knowledge of STEADI fall prevention measures after an educational session, and assess the willingness of memory care unit staff to adopt an evidence-based fall prevention program. The Donabedian model was used as the guiding conceptual framework. Fall rate data for the 12 months before the intervention were evaluated and compared to post-intervention fall rates. The educational session comprised a pre- and post-test to assess staff knowledge of the fall prevention program and staff willingness to adopt it. 
The overarching goal was to reduce falls among the elderly population living in memory care units. The results showed that, on average, the fall rate during the STEADI implementation period (μ=6.79) was significantly lower than in the prior 12 months (μ=9.50) (p=0.02, α=0.05). Mean staff knowledge scores improved from pre-test (μ=77.74%) to post-test (μ=87.42%) (p=0.00, α=0.05) after the education session. Willingness to adopt a fall prevention program was scored at 100%. In summary, implementing the STEADI fall prevention program can help reduce fall rates for residents aged 65 and older who reside in a memory care setting.
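The comparison above uses monthly mean fall counts; in long-term care, fall rates are conventionally expressed per 1,000 resident-days. A minimal sketch of that calculation and of the relative reduction implied by the reported means (the unit size and monthly fall count below are hypothetical; only the two means come from the study):

```python
def fall_rate_per_1000(falls, residents, days):
    """Standard long-term-care metric: falls per 1,000 resident-days."""
    return falls / (residents * days) * 1000

# Hypothetical month for a 20-bed memory care unit: 6 falls in 30 days
print(round(fall_rate_per_1000(6, 20, 30), 2))  # 10.0

# Relative reduction implied by the reported means (9.50 -> 6.79)
reduction = (9.50 - 6.79) / 9.50
print(f"{reduction:.1%}")  # 28.5%
```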

Keywords: dementia, elderly, falls, STEADI

Procedia PDF Downloads 109
590 Definition of Aerodynamic Coefficients for Microgravity Unmanned Aerial System

Authors: Gamaliel Salazar, Adriana Chazaro, Oscar Madrigal

Abstract:

The evolution of Unmanned Aerial Systems (UAS) has made it possible to develop new vehicles capable of performing microgravity experiments which, due to their cost and complexity, were previously beyond the reach of many institutions. In this study, the aerodynamic behavior of a UAS is studied through its deceleration stage, which follows an initial free-fall phase (where the microgravity effect is generated), using Computational Fluid Dynamics (CFD). Because the payload is analyzed under a microgravity environment, and because of the nature of the payload itself, the speed of the UAS must be reduced smoothly. Moreover, the terminal speed of the vehicle should be low enough to preserve the integrity of the payload and vehicle during the landing stage. The UAS model consists of a study pod, control surfaces with fixed and mobile sections, landing gear, and two semicircular wing sections. The speed of the vehicle is decreased by increasing the angle of attack (AoA) of each wing section from 2° (where the S1091 airfoil has its greatest aerodynamic efficiency) to 80°, creating a circular wing geometry. Drag coefficients (Cd) and drag forces (Fd) are obtained through CFD analysis. A simplified 3D model of the vehicle is analyzed using Ansys Workbench 16. The distance between the object of study and the walls of the control volume is eight times the length of the vehicle. The domain is discretized using an unstructured mesh based on tetrahedral elements. The mesh is refined by setting an element size of 0.004 m on the wing and control surfaces in order to resolve the fluid behavior in the most important zones and obtain accurate approximations of Cd. The k-epsilon turbulence model is selected to solve the governing equations of the fluid, while monitors are placed on both the wing and the whole vehicle body to visualize the variation of the coefficients during the simulation process. 
Employing response surface methodology, a statistical approximation, the case study is parametrized with the AoA of the wing as the input parameter and Cd and Fd as output parameters. Based on a Central Composite Design (CCD), Design Points (DP) are generated so that Cd and Fd can be estimated for each DP. Applying a second-degree polynomial approximation, the drag coefficients for every AoA are determined. Using these values, the terminal speed at each position is calculated for the corresponding Cd. Additionally, the distance required to reach terminal velocity at each AoA is calculated, so the minimum distance for the entire deceleration stage can be determined without compromising the payload. The maximum Cd of the vehicle is 1.18, so its maximum drag is comparable to that generated by a parachute. This guarantees that the vehicle can be braked aerodynamically, so it could be utilized for several missions, allowing repeatability of microgravity experiments.
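The terminal-speed step described above follows from equating weight and drag, m·g = ½·ρ·v²·A·Cd. A minimal sketch of that calculation (the mass, reference area, and intermediate Cd values are hypothetical; only Cd max = 1.18 comes from the abstract):

```python
import math

def terminal_speed(mass_kg, cd, area_m2, rho=1.225, g=9.81):
    """Speed at which drag balances weight: m*g = 0.5*rho*v^2*A*Cd."""
    return math.sqrt(2 * mass_kg * g / (rho * area_m2 * cd))

mass, area = 3.0, 0.5  # hypothetical vehicle parameters
for aoa, cd in [(2, 0.15), (40, 0.70), (80, 1.18)]:  # Cd max from the abstract
    print(f"AoA {aoa:2d} deg: Vt = {terminal_speed(mass, cd, area):.1f} m/s")
```

Increasing the AoA raises Cd, which lowers the terminal speed; that is the braking mechanism the abstract describes.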

Keywords: microgravity effect, response surface, terminal speed, unmanned system

Procedia PDF Downloads 153
589 Measuring the Impact of Implementing an Effective Practice Skills Training Model in Youth Detention

Authors: Phillipa Evans, Christopher Trotter

Abstract:

Aims: This study aims to examine the effectiveness of a practice skills framework implemented in three youth detention centres run by Juvenile Justice in New South Wales (NSW), Australia. The study is supported by a grant from the Australian Research Council and NSW Juvenile Justice. Recent years have seen a number of incidents in youth detention centres in Australia and elsewhere. These have led to inquiries and reviews, some suggesting that detention centres often do not meet even basic human rights standards and do little to provide opportunities for the rehabilitation of residents. While there is an increasing body of research suggesting that community-based supervision can be effective in reducing recidivism if appropriate skills are used by supervisors, there has been less work considering worker skills in youth detention settings. The research that has been done, however, suggests that teaching interpersonal skills to youth officers may be effective in enhancing the rehabilitation culture of centres. Positive outcomes have been seen, for example, in a UK detention centre after staff were taught to deliver five-minute problem-solving interventions. The aim of this project is to examine the effectiveness of training and coaching youth detention staff in three NSW detention centres in interpersonal practice skills. Effectiveness is defined in terms of reductions in the frequency of critical incidents and improvements in the well-being of staff and young people. The research is important because the results may lead to the development of more humane and rehabilitative experiences for young people. Method: The study involves training staff in core effective practice skills and supporting staff in the use of those skills through supervision and de-briefing. The core effective practice skills include role clarification, pro-social modelling, brief problem solving, and relationship skills. The training also addresses some of the background to criminal behaviour, including trauma. 
Data on critical incidents and well-being before and after the program implementation are being collected through interviews with staff and young people, the completion of well-being scales, and examination of departmental records. In addition to the before-and-after comparison, a matched control group that is not offered the intervention is also being used. The study includes more than 400 young people and 100 youth officers across six centres, including the control sites. Critical incident data include assaults, the use of lock-ups and confinement, and school attendance. Data collection also includes analysing video-tapes of centre activities for changes in the use of staff skills. Results: The project is currently underway, with ongoing training and supervision. Early results will be available for the conference.

Keywords: custody, practice skills, training, youth workers

Procedia PDF Downloads 80
588 Metacognitive Processing in Early Readers: The Role of Metacognition in Monitoring Linguistic and Non-Linguistic Performance and Regulating Students' Learning

Authors: Ioanna Taouki, Marie Lallier, David Soto

Abstract:

Metacognition refers to the capacity to reflect upon our own cognitive processes. Although there is an ongoing discussion in the literature on the role of metacognition in learning and academic achievement, little is known about its neurodevelopmental trajectories in early childhood, when children begin to receive formal education in reading. Here, we evaluate the metacognitive ability, estimated under a recently developed Signal Detection Theory model, of a cohort of children aged 6 to 7 (N=60), who performed three two-alternative forced-choice tasks (two linguistic: a lexical decision task and a visual attention span task; one non-linguistic: an emotion recognition task), each including trial-by-trial confidence judgements. Our study has three aims. First, we investigated how metacognitive ability (i.e., how well confidence ratings track accuracy in the task) relates to performance on standardized tasks of reading and general cognitive ability, using Spearman's and Bayesian correlation analyses. Second, we assessed whether young children recruit common mechanisms supporting metacognition across the different task domains or whether there is evidence for domain-specific metacognition at this early stage of development. This was done by examining correlations in metacognitive measures across task domains and evaluating cross-task covariance with a hierarchical Bayesian model. Third, using robust linear regression and Bayesian regression models, we assessed whether metacognitive ability at this early stage is related to children's longitudinal learning in a linguistic and a non-linguistic task. Notably, we did not observe any association between students’ reading skills and metacognitive processing at this early stage of reading acquisition. 
Some evidence consistent with domain-general metacognition was found, with significant positive correlations between metacognitive efficiency in the lexical and emotion recognition tasks and substantial covariance indicated by the Bayesian model. However, no reliable correlations were found between metacognitive performance in the visual attention span task and the remaining tasks. Remarkably, metacognitive ability significantly predicted children's learning in linguistic and non-linguistic domains a year later. These results suggest that metacognitive skill may be dissociated to some extent from general (i.e., language and attention) abilities and further stress the importance of creating educational programs that foster students’ metacognitive ability as a tool for long-term learning. More research is needed to understand whether such programs can enhance metacognitive ability as a transferable skill across distinct domains or whether individual domains should be targeted separately.
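Metacognitive sensitivity here means how well trial-by-trial confidence tracks accuracy. The study uses an SDT-based model; a simpler, model-free measure of the same idea is the type-2 AUROC, sketched below on made-up trials (an illustration, not the study's data or method):

```python
def type2_auroc(correct, confidence):
    """Probability that a randomly chosen correct trial received higher
    confidence than a randomly chosen incorrect one (ties count 0.5)."""
    hits = [c for ok, c in zip(correct, confidence) if ok]
    misses = [c for ok, c in zip(correct, confidence) if not ok]
    greater = sum(h > m for h in hits for m in misses)
    ties = sum(h == m for h in hits for m in misses)
    return (greater + 0.5 * ties) / (len(hits) * len(misses))

acc  = [1, 1, 0, 1, 0, 1, 0, 1]   # trial accuracy (1 = correct)
conf = [4, 3, 2, 4, 1, 3, 2, 2]   # confidence ratings, scale 1-4
print(round(type2_auroc(acc, conf), 3))  # 0.933
```

A value of 0.5 indicates no metacognitive insight (confidence unrelated to accuracy) and 1.0 indicates perfect insight, which is what "confidence ratings track accuracy" quantifies.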

Keywords: confidence ratings, development, metacognitive efficiency, reading acquisition

Procedia PDF Downloads 128
587 Decreasing the Oxidative Stress in Autistic Children: A Randomized Double-Blind Controlled Study With Palm Dates Fruit

Authors: Ammal Mokhtar Metwally, Amal Elsaied, Ghada A. Abdel-Latef, Ebtissam M. Salah El-Din, Hanaa R. M. Attia

Abstract:

The link between various diet therapies and autism is controversial, and the evidence is limited. Nutritional interventions aim to increase antioxidant levels, suggesting a positive effect on the improvement of autism severity. In this study, the effectiveness of 90 days of date fruit consumption (a non-pharmacological and risk-free option) in alleviating the severity of autism symptoms in individuals with ASD was investigated. The study also examined whether baseline values of, or improvements in, certain clinical and laboratory characteristics of the subjects affected their response to date fruit intake. Methodology: This was a randomized controlled, double-blind, 3-month trial of date fruit intake. 131 Egyptian children aged 3-12 years with confirmed ASD were enrolled. Cases were randomized into one of three groups: Group I, 3 dates/day (47 cases); Group II, 5 dates/day (42 cases); and Group III, no dates (42 cases). ASD severity was assessed using both the Diagnostic and Statistical Manual of Mental Disorders, 5th ed. (DSM-5) criteria and the Childhood Autism Rating Scale (CARS). The following measures were assessed before and after the regimens: blood levels of three oxidative markers, malondialdehyde (MDA), glutathione peroxidase (GPX1), and superoxide dismutase (SOD); nutritional and dietary assessment; and anthropometric measurements. Results: A significant reduction in the mean autism score was detected, based on CARS, for those on the date regimens compared to those on no dates (p < 0.01). Participants on 5 dates/day for three months showed the greatest improvement in autism severity based on both CARS and DSM-5, compared to those in the 3 dates/day and no-dates groups. 
Response to date fruit intake, as reflected in improvement of autism severity, was detected in 78.7% and 62.9% of participants based on CARS and DSM-5 diagnosis, respectively. Responders had significant improvement in BMI z-score and in the ratios of both MDA/SOD and MDA/GPX. Conclusion: The positive results of this study suggest that palm date fruits could be recommended for children with ASD as an adjuvant therapy on a regular daily basis to achieve consistent improvement of autism symptoms.

Keywords: autism spectrum disorders, palm dates fruits, CARS, DSM5, oxidative markers

Procedia PDF Downloads 66
586 A Qualitative Exploration of the Beliefs and Experiences of HIV-Related Self-Stigma Amongst Young Adults Living with HIV in Zimbabwe

Authors: Camille Rich, Nadine Ferris France, Ann Nolan, Webster Mavhu, Vongai Munatsi

Abstract:

Background and Aim: Zimbabwe has one of the highest HIV rates in the world, with a 12.7% adult prevalence rate. Young adults are a key group affected by HIV, and one-third of all new infections in Zimbabwe are among people aged 18-24 years. Stigma remains one of the main barriers to managing and reducing the HIV crisis, especially for young adults. There are several types of stigma, including enacted stigma (outward discrimination towards someone) and self-stigma (the negative self-judgments one holds about oneself). Self-stigma can have severe consequences, including feelings of worthlessness, shame, suicidal thoughts, and avoidance of medical help, with detrimental effects on those living with HIV. However, the particular beliefs and impacts of self-stigma among key groups living with HIV have not yet been explored. Therefore, this study focuses on the beliefs and experiences of HIV-related self-stigma as experienced by young adults living in Harare, Zimbabwe. Research Methods: A qualitative approach was taken, using sixteen semi-structured interviews with young adults (18-24 years) living with HIV in Harare. Participants were conveniently and purposefully sampled as members of Africa, an organization dedicated to young people living with HIV. Interviews were conducted over Zoom due to the COVID-19 pandemic, recorded, and then coded using the software NVivo. The data were analyzed using both inductive and deductive thematic analysis to find common themes. Results: All of the participants experienced HIV-related self-stigma, and both beliefs and experiences were explored. Negative self-perceptions included beliefs of worthlessness, hopelessness, and negative body image. The young adults described believing they were not good enough to be around HIV-negative people or that they could never be loved because of their HIV status. 
Self-stigmatizing thoughts developed from internalizing negative cultural values, stereotypes about people living with HIV, and adverse experiences. Three main themes of self-stigmatizing experience emerged: disclosure difficulties, relationship complications, and isolation. Fear of telling someone their status, rejection in a relationship, and exclusion by others because of their HIV status all contributed to self-stigma. These experiences caused feelings of loneliness, sadness, shame, fear, and low self-worth. Conclusions: This study explored the beliefs and experiences of HIV-related self-stigma among these young adults. The emergence of negative self-perceptions demonstrated deep-rooted beliefs of HIV-related self-stigma that adversely impact the participants. The negative self-perceptions and self-stigmatizing experiences caused the participants to feel worthless, hopeless, ashamed, and alone, negatively impacting their physical and mental health, personal relationships, and sense of self-identity. These results can now be used to develop interventions that target the specific beliefs and experiences of young adults living with HIV and reduce the adverse consequences of self-stigma.

Keywords: beliefs, HIV, self-stigma, stigma, Zimbabwe

Procedia PDF Downloads 93
585 The Interventricular Septum as a Site for Implantation of Electrocardiac Devices - Clinical Implications of Topography and Variation in Position

Authors: Marcin Jakiel, Maria Kurek, Karolina Gutkowska, Sylwia Sanakiewicz, Dominika Stolarczyk, Jakub Batko, Rafał Jakiel, Mateusz K. Hołda

Abstract:

Proper imaging of the interventricular septum during endocavitary lead implantation is essential for a successful procedure. The interventricular septum lies obliquely to the three main body planes, forming angles of 44.56° ± 7.81°, 45.44° ± 7.81°, and 62.49° (IQR 58.84° - 68.39°) with the sagittal, frontal, and transverse planes, respectively. The optimal left anterior oblique (LAO) projection, in which the septum is aligned along the radiation beam, is obtained at an angle of 53.24° ± 9.08°, while the best visualization of the septal surface in the right anterior oblique (RAO) projection is obtained at an angle of 45.44° ± 7.81°. In addition, the RAO angle (p=0.003) and the septal inclination to the transverse plane (p=0.002) are larger in the male group, while the LAO angle (p=0.003) and the dihedral angle the septum forms with the sagittal plane (p=0.003) are smaller, compared to the female group. Analyzing the optimal RAO angle in cross-sections at the level of the anterior and posterior junctions of the septum with the free wall of the right ventricle yields slightly smaller angles: 41.11° ± 8.51° and 43.94° ± 7.22°, respectively. As the septum is directed leftward in the apical region, the optimal RAO angle for this area decreases (16.49° ± 7.07°) and shows no significant difference between the male and female groups (p=0.23). Within the right ventricular apex there is a recess, formed by the apical segment of the interventricular septum and the free wall of the right ventricle, with a depth of 12.35 mm (IQR 11.07 mm - 13.51 mm). The length of the septum measured in the four-chamber longitudinal section is 73.03 mm ± 8.06 mm. In the apical region, the left ventricular septal wall formed by the interventricular septum already lies outside the right ventricle along a length of 10.06 mm (IQR 8.86 - 11.07 mm). Both of these lengths are significantly greater in the male group (p<0.001). 
For proper imaging of the septum from the right ventricular side, oblique positioning of the imaging equipment is necessary. Correct determination of the RAO and LAO angles during the procedure improves its performance, and appropriate modification of the viewing angle when moving towards the anterior, posterior, or apical parts of the septum helps avoid complications. Overlooking the change in direction of the interventricular septum in the apical region, with its marked decrease in the optimal RAO angle, can result in implantation of the lead into the free wall of the right ventricle, with less effective pacing and even complications such as wall perforation and cardiac tamponade. The demonstrated gender differences can also be helpful in selecting the right projections. A necessary addition to this analysis will be a description of the area of the ventricular septum, on which we are currently working using autopsy material.

Keywords: anatomical variability, angle, electrocardiological procedure, interventricular septum

Procedia PDF Downloads 83
584 Application of Harris Hawks Optimization Metaheuristic Algorithm and Random Forest Machine Learning Method for Long-Term Production Scheduling Problem under Uncertainty in Open-Pit Mines

Authors: Kamyar Tolouei, Ehsan Moosavi

Abstract:

In open-pit mines, the long-term production scheduling optimization problem (LTPSOP) is a complicated problem involving many constraints, large datasets, and uncertainties. Uncertainty in the output is caused by several geological, economic, or technical factors. Due to its dimensions and NP-hard nature, it is usually difficult to find an ideal solution to the LTPSOP. The optimal schedule generally constrains the ore, metal, and waste tonnages, average grades, and cash flows of each period. Past decades have witnessed important advances in long-term production scheduling and optimization algorithms as researchers have become highly cognizant of the issue. Even so, the LTPSOP cannot yet be considered a well-solved problem. Traditional production scheduling methods in open-pit mines apply a single estimated orebody model to produce optimal schedules. The smoothing effect of some geostatistical estimation procedures causes most such mine schedules and production predictions to be unrealistic and imperfect. With the expansion of simulation procedures, the risks from grade uncertainty in ore reserves can be evaluated and organized through a set of equally probable orebody realizations. In this paper, to incorporate grade uncertainty into the strategic mine schedule, a stochastic integer programming framework is presented for the LTPSOP. The objective function of the model is to maximize the net present value while simultaneously minimizing the risk of deviation from the production targets under grade uncertainty, subject to all technical constraints and operational requirements. Instead of applying one estimated orebody model as input to optimize the production schedule, a set of equally probable orebody realizations is applied, producing a more profitable and risk-aware production schedule. 
A mixture of metaheuristic procedures and mathematical methods paves the way to an appropriate solution. This paper introduces a hybrid model combining the augmented Lagrangian relaxation (ALR) method with a metaheuristic algorithm, Harris Hawks Optimization (HHO), to solve the LTPSOP under grade uncertainty. In this study, the HHO is employed to update the Lagrange coefficients. In addition, a machine learning method, Random Forest, is applied to estimate the gold grade in a mineral deposit. The Monte Carlo method is used for simulation, with 20 realizations. The results indicate that the proposed versions improve considerably on traditional methods. The outcomes were also compared with the ALR-genetic algorithm and ALR-subgradient methods. To demonstrate the applicability of the model, a case study of an open-pit gold mining operation was carried out. The framework demonstrates the capability to minimize risk and to improve the expected net present value and financial profitability of the LTPSOP, and it controls geological risk more effectively than the traditional procedure by accounting for grade uncertainty within the hybrid model.
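The objective described above — maximize expected NPV while penalizing deviation from production targets across equally probable orebody realizations — can be sketched in miniature. This toy version enumerates block orders for a 4-block deposit instead of running ALR/HHO, and all numbers are hypothetical:

```python
import itertools
import random
import statistics

def schedule_value(schedule, realizations, target, penalty, rate=0.10):
    """Expected (NPV - penalty * deviation from target), averaged over
    equally probable simulated orebody realizations."""
    values = []
    for grades in realizations:              # one Monte Carlo realization
        npv = dev = 0.0
        for t, block in enumerate(schedule):
            metal = grades[block]            # metal recovered in period t
            npv += metal / (1 + rate) ** t   # discounted cash-flow proxy
            dev += abs(metal - target)       # production-target deviation
        values.append(npv - penalty * dev)
    return statistics.mean(values)

random.seed(0)
# Hypothetical data: 4 blocks, 20 equally probable grade realizations
realizations = [[random.uniform(0.5, 1.5) for _ in range(4)] for _ in range(20)]
best = max(itertools.permutations(range(4)),
           key=lambda s: schedule_value(s, realizations, target=1.0, penalty=0.2))
print(best)
```

Discounting pushes higher-expected-grade blocks earlier in the best schedule; the real LTPSOP replaces this exhaustive search with the ALR-HHO hybrid because realistic block counts make enumeration infeasible.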

Keywords: grade uncertainty, metaheuristic algorithms, open-pit mine, production scheduling optimization

Procedia PDF Downloads 82
583 Drug Delivery Cationic Nano-Containers Based on Pseudo-Proteins

Authors: Sophio Kobauri, Temur Kantaria, Nina Kulikova, David Tugushi, Ramaz Katsarava

Abstract:

The elaboration of effective drug delivery vehicles remains topical, since targeted drug delivery is one of the most important challenges of modern nanomedicine. The last decade has witnessed enormous research focused on synthetic cationic polymers (CPs) owing to their flexible properties: facile synthesis, robustness, absence of oncogenicity, and proven efficiency, in particular as non-viral gene delivery systems. However, toxicity is still an obstacle to their application in pharmacotherapy. To overcome this problem, the creation of new cationic compounds, including polymeric nano-sized particles, i.e., nano-containers (NCs) loaded with different pharmaceuticals and biologicals, remains relevant. In this regard, a variety of NC-based drug delivery systems have been developed. We have found that amino acid-based biodegradable polymers called pseudo-proteins (PPs), which can be cleared from the body after fulfilling their function, are highly suitable for designing pharmaceutical NCs. Among them, some of the most promising are NCs made of biodegradable cationic PPs (CPPs). For preparing new cationic NCs (CNCs), we used CPPs composed of the positively charged amino acid L-arginine (R). The CNCs were fabricated by two approaches using: (1) R-based homo-CPPs; (2) blends of R-based CPPs with regular (neutral) PPs. According to the first approach, NCs were prepared from CPPs 8R3 (composed of R, sebacic acid, and 1,3-propanediol) and 8R6 (composed of R, sebacic acid, and 1,6-hexanediol). The NCs prepared from these CPPs were 72-101 nm in size with a zeta potential within +30 to +35 mV at a concentration of 6 mg/mL. According to the second approach, CPP 8R6 was blended in the organic phase with the neutral PP 8L6 (composed of leucine, sebacic acid, and 1,6-hexanediol). The NCs prepared from the blends were 130-140 nm in size with a zeta potential within +20 to +28 mV, depending on the 8R6/8L6 ratio. 
Stability studies of the fabricated NCs showed no substantial change in particle size or distribution and no formation of large particles after three months of storage. An in vitro biocompatibility study of the obtained NCs with four different stable cell lines, A549 (human), U-937 (human), RAW264.7 (murine), and Hepa 1-6 (murine), showed that both types of cationic NCs are biocompatible. These data allow us to conclude that the obtained CNCs are promising for application as biodegradable drug delivery vehicles. This work was supported by the joint grant from the Science and Technology Center in Ukraine and Shota Rustaveli National Science Foundation of Georgia #6298 'New biodegradable cationic polymers composed of arginine and spermine-versatile biomaterials for various biomedical applications'.

Keywords: biodegradable polymers, cationic pseudo-proteins, nano-containers, drug delivery vehicles

Procedia PDF Downloads 133
582 Plotting of an Ideal Logic versus Resource Outflow Graph through Response Analysis on a Strategic Management Case Study Based Questionnaire

Authors: Vinay A. Sharma, Shiva Prasad H. C.

Abstract:

The initial stages of any project are often observed to be in a mixed set of conditions. Setting up the project is a tough task, but taking the initial decisions is rather less complex, as some of the critical factors are yet to be introduced into the scenario. These simple initial decisions potentially shape the timeline and the subsequent events that might later be plotted on it. Proceeding towards a solution for the problem is the primary objective in the initial stages. Optimization of the solutions can come later, and hence the resources deployed towards attaining the solution are higher than they would be in the optimized versions. A ‘logic’ that counters the problem is essentially the core of the desired solution. Thus, if the problem is solved, the deployment of resources has led to the required logic being attained. As the project proceeds, the individuals working on it face fresh challenges as a team and become better accustomed to their surroundings. The developed, optimized solutions are then considered for implementation, as the individuals are now experienced and better understand the causes and consequences of possible failure, and thus integrate adequate tolerances wherever required. Furthermore, as the team grows in strength, accumulates knowledge, and begins to transfer it efficiently, the individuals in charge of the project, along with the managers, focus more on the optimized solutions rather than the traditional ones to minimize the required resources. Hence, as time progresses, the authorities prioritize attainment of the required logic at a lower amount of dedicated resources. For empirical analysis of the stated theory, leaders and key figures in organizations are surveyed for their ideas on the appropriate logic required for tackling a problem. Key pointers spotted in successfully implemented solutions are noted from the analysis of the responses, and a metric for measuring logic is developed. 
A graph is plotted with the quantifiable logic on the Y-axis and the resources dedicated to the solutions of various problems on the X-axis. The dedicated resources are plotted over time, and hence the X-axis is also a measure of time. In the initial stages of the project, the graph is rather linear, as the required logic is attained but the consumed resources are also high. With time, the authorities begin focusing on optimized solutions, since the logic attained through them is higher while the resources deployed are comparatively lower. Hence, the difference between consecutive plotted ‘resources’ decreases, and as a result, the slope of the graph gradually increases. Overall, the graph takes a parabolic shape (beginning at the origin), as with each resource investment, ideally, the difference keeps decreasing and the logic attained through the solution keeps increasing. Even if a resource investment is higher, the managers and authorities ideally make sure that the investment is being made on a proportionally higher logic for a larger problem; that is, ideally, the slope of the graph increases with the plotting of each point.
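The geometry of the ideal curve can be made concrete with a small numerical sketch. The numbers below are purely illustrative; in particular, a geometrically shrinking resource increment per attained unit of logic is an assumption for the sketch, not a result from the survey data:

```python
import numpy as np

# Hypothetical illustration: each successive solution adds one unit of "logic",
# while the resources needed for each new solution shrink as the team learns.
n_solutions = 10
resource_increments = 100.0 * 0.8 ** np.arange(n_solutions)  # geometrically decreasing
cumulative_resources = np.cumsum(resource_increments)        # X-axis (also a proxy for time)
logic = np.arange(1, n_solutions + 1, dtype=float)           # Y-axis: attained logic

# The slope between consecutive points rises because each unit of logic
# costs fewer resources than the last, which is what gives the plotted
# curve its parabolic look when read from the origin.
slopes = np.diff(logic) / np.diff(cumulative_resources)
print("slopes:", np.round(slopes, 3))
```

Any monotonically decreasing increment sequence produces the same qualitative picture: an X-axis that advances by ever-smaller steps and a slope that strictly increases, as the abstract describes.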

Keywords: decision-making, leadership, logic, strategic management

Procedia PDF Downloads 90
581 Study on the Geometric Similarity in Computational Fluid Dynamics Calculation and the Requirement of Surface Mesh Quality

Authors: Qian Yi Ooi

Abstract:

At present, airfoil parameters are still designed and optimized according to the scale of conventional aircraft, which introduces slight deviations arising from scale differences. Insufficient parameters or poor surface mesh quality is likely to occur if these small deviations are embedded in a future civil aircraft whose size differs greatly from conventional aircraft, such as a blended-wing-body (BWB) aircraft with future potential, resulting in large deviations in geometric similarity in computational fluid dynamics (CFD) simulations. To avoid this situation, a study of the geometric similarity of airfoil parameters in CFD calculations and of surface mesh quality is conducted to assess how different parameterization methods perform at different airfoil scales. The research objects are three airfoil scales, comprising the wing root and wingtip of a conventional civil aircraft and the wing root of the giant hybrid wing, parameterized by three methods to compare the calculation differences between different sizes of airfoils. In this study, the constants are NACA 0012, a Reynolds number of 10 million, an angle of attack of zero, a C-grid for meshing, and the k-epsilon (k-ε) turbulence model. The experimental variables are three airfoil parameterization methods: the point cloud method, the B-spline curve method, and the class function/shape function transformation (CST) method. The airfoil dimensions are set to 3.98 meters, 17.67 meters, and 48 meters, respectively. In addition, this study also uses different numbers of edge mesh divisions and the same bias factor in the CFD simulation. The results show that as the airfoil scale changes, different parameterization methods, numbers of control points, and numbers of mesh divisions should be used to maintain the accuracy of the wing's aerodynamic performance. 
As the airfoil scale increases, the most basic point cloud parameterization method requires more and larger data to support the accuracy of the airfoil’s aerodynamic performance, which faces the severe test of insufficient computer capacity. When using the B-spline curve method, the number of control points and mesh divisions must be set appropriately to obtain higher accuracy; however, this quantitative balance cannot be defined directly and must instead be found iteratively by adding and subtracting points. Lastly, when using the CST method, it is found that a limited number of control points is enough to accurately parameterize the larger-sized wing; a high degree of accuracy and stability can be obtained even on a lower-performance computer.
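A rough illustration of why the CST method stays compact at any scale: the sketch below parameterizes a normalized airfoil surface with a handful of Bernstein weights. The weights are arbitrary placeholders for illustration, not a fit to NACA 0012:

```python
import numpy as np
from math import comb

def cst_surface(x, weights, n1=0.5, n2=1.0, dz_te=0.0):
    """Class/shape function transformation (CST) airfoil surface.

    x: chordwise coordinates in [0, 1]; weights: Bernstein coefficients.
    n1=0.5, n2=1.0 is the standard round-nose / sharp-tail airfoil class.
    """
    n = len(weights) - 1
    class_fn = x ** n1 * (1.0 - x) ** n2
    shape_fn = sum(
        w * comb(n, i) * x ** i * (1.0 - x) ** (n - i)
        for i, w in enumerate(weights)
    )
    return class_fn * shape_fn + x * dz_te  # dz_te: trailing-edge thickness term

x = np.linspace(0.0, 1.0, 101)
# Four weights already describe a smooth upper surface; because coordinates
# are normalized by chord, the same weight vector serves a 3.98 m, 17.67 m,
# or 48 m airfoil alike.
upper = cst_surface(x, weights=[0.17, 0.16, 0.15, 0.14])
print("mid-chord ordinate:", upper[50])
```

Only the mesh, not the parameterization, has to grow with the airfoil scale, which is consistent with the finding that the CST method remains accurate on modest hardware.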

Keywords: airfoil, computational fluid dynamics, geometric similarity, surface mesh quality

Procedia PDF Downloads 201
580 Kitchen Bureaucracy: The Preparation of Banquets for Medieval Japanese Royalty

Authors: Emily Warren

Abstract:

Despite the growing body of research on Japanese food history, little has been written about the attitudes and perspectives premodern Japanese people held about their food, even on special celebratory days. In fact, the overall image that arises from the literature is one of ambivalence: that the medieval nobility of the Heian and Kamakura periods (794-1333) did not much care about what they ate, and for that reason food seems relatively scarce in certain historical records. This study challenges this perspective by analyzing the manuals written to guide palace management and feast preparation for royals, introducing two of the sources into English for the first time. This research is primarily based on three manuals that address different aspects of royal food culture and preparation. The Chûjiruiki, or Record of the Palace Kitchens (1295), is a fragmentary manual written by a bureaucrat in charge of the main palace kitchen office. This document collection details the utensils, furnishings, and courses that officials organized for the royals’ two daily meals, in the morning (asagarei gozen) and in the afternoon (hiru gozen), when they enjoyed seven courses, each one carefully cooked and plated. The orchestration of daily meals and frequent banquets would have been a complicated affair for those preparing the tableware and food, thus requiring texts like the Chûjiruiki, as well as another manual, the Nicchûgyôji (11th c.), or The Daily Functions. Because of the complex coordination between various kitchen-related bureaucratic offices, kitchen officials endeavored to standardize the menus and place settings depending on the time of year, religious abstinence days, and the available ingredients flowing into the capital as taxes. For the most important annual banquets and rites celebrating deities and the royal family, kitchen officials would likely refer to the Engi Shiki (927), or Protocols of the Engi Era, for details on offerings, servant payments, and menus. 
This study proposes that many of the great feast events, and indeed even the daily meals at the palace, were so standardized and carefully planned for repetition that there would have been little need for the contents of such feasts to be detailed in diaries or novels, the places where historians have noted a lack of food descriptions. The absence of these descriptions does not reflect a lack of interest on the part of the nobility; rather, knowledge of what would be served at banquets and feasts was a matter of course, much as a modern American would likely not need to state the menu of a traditional Thanksgiving meal to an American audience. Where food was concerned, novelty more than tradition prompted a response in personal records like diaries.

Keywords: banquets, bureaucracy, Engi shiki, Japanese food

Procedia PDF Downloads 93
579 Fully Autonomous Vertical Farm to Increase Crop Production

Authors: Simone Cinquemani, Lorenzo Mantovani, Aleksander Dabek

Abstract:

New technologies in agriculture are opening new challenges and new opportunities. Among these, robotics, vision, and artificial intelligence are certainly the ones that will make possible a significant leap beyond traditional agricultural techniques. In particular, the indoor farming sector will be the one that benefits the most from these solutions. Vertical farming is a new field of research where mechanical engineering can bring knowledge and know-how to transform a highly labor-based business into a fully autonomous system. The aim of the research is to develop a multi-purpose, modular, and perfectly integrated platform for crop production in indoor vertical farming. Activities will be based both on hardware development, such as automatic tools to perform different activities on soil and plants, and on research to introduce the extensive use of monitoring techniques based on machine learning algorithms. This paper presents the preliminary results of a research project on a vertical farm living lab designed to (i) develop and test vertical farming cultivation practices, (ii) introduce a very high degree of mechanization and automation that makes all processes replicable, fully measurable, standardized, and automated, (iii) develop a coordinated control and management environment for autonomous multiplatform or tele-operated robots with the aim of carrying out complex tasks in the presence of environmental and cultivation constraints, and (iv) integrate AI-based algorithms as a decision support system to improve production quality. The coordinated management of multiplatform systems still presents innumerable challenges that require a strongly multidisciplinary approach right from the design, development, and implementation phases. 
The methodology is based on (i) the development of models capable of describing the dynamics of the various platforms and their interactions, (ii) the integrated design of mechatronic systems able to respond to the needs of the context and to exploit the strengths highlighted by the models, and (iii) implementation and experimental tests performed to assess the real effectiveness of the systems created and to evaluate any weaknesses, so as to proceed with targeted development. To these ends, a fully automated laboratory for growing plants in vertical farming has been developed and tested. The living lab makes extensive use of sensors to determine the overall state of the structure, crops, and systems used. The possibility of having specific measurements for each element involved in the cultivation process makes it possible to evaluate the effects of each variable of interest and allows for the creation of a robust model of the system as a whole. The automation of the laboratory is completed by the use of robots to carry out all the necessary operations, from sowing to handling to harvesting. These systems work synergistically thanks to detailed models developed from the information collected, which deepen the knowledge of these types of crops and guarantee the possibility of tracing every action performed on each single plant. To this end, artificial intelligence algorithms have been developed to allow the synergistic operation of all systems.

Keywords: automation, vertical farming, robot, artificial intelligence, vision, control

Procedia PDF Downloads 17
578 Development of an Artificial Neural Network to Measure Science Literacy Leveraging Neuroscience

Authors: Amanda Kavner, Richard Lamb

Abstract:

Faster growth in science and technology in other nations may make it more difficult for the US to stay globally competitive without shifting focus to how science is taught in US classes. An integral part of learning science involves visual and spatial thinking, since complex, real-world phenomena are often expressed in visual, symbolic, and concrete modes. The primary barrier to spatial thinking and visual literacy in Science, Technology, Engineering, and Math (STEM) fields is representational competence, which includes the ability to generate, transform, analyze, and explain representations, as opposed to generic spatial ability. Although the relationship between foundational visual literacy and domain-specific science literacy is known, science literacy as a function of science learning is still not well understood. Moreover, a more reliable measure is necessary to design resources that enhance the fundamental visuospatial cognitive processes behind scientific literacy. To support the improvement of students’ representational competence, the visualization skills necessary to process these science representations first needed to be identified, which necessitates the development of an instrument to quantitatively measure visual literacy. With such a measure, schools, teachers, and curriculum designers can target the individual skills necessary to improve students’ visual literacy, thereby increasing science achievement. This project details the development of an artificial neural network capable of measuring science literacy using functional Near-Infrared Spectroscopy (fNIR) data. This data was previously collected by Project LENS (Leveraging Expertise in Neurotechnologies), a Science of Learning Collaborative Network (SL-CN) of STEM Education scholars from three US universities (NSF award 1540888), utilizing mental rotation tasks to assess student visual literacy. 
Hemodynamic response data from fNIRSoft was exported as an Excel file, with 80 samples each of 2D Wedge and Dash models (dash) and 3D Stick and Ball models (BL). Complexity data were in an Excel workbook separated by participant (ID), containing information for both types of tasks. After converting strings to numbers for analysis, spreadsheets with the measurement data and complexity data were uploaded to RapidMiner’s TurboPrep and merged. Using RapidMiner Studio, a Gradient Boosted Trees artificial neural network (ANN) consisting of 140 trees with a maximum depth of 7 branches was developed, and 99.7% of the ANN's predictions were accurate. The ANN determined that the biggest predictors of a successful mental rotation are the individual problem number, the response time, and fNIR optode #16, located along the right prefrontal cortex, which is important in processing visuospatial working memory and episodic memory retrieval, both vital for science literacy. With an unbiased measurement of science literacy provided by psychophysiological measurements and an ANN for analysis, educators and curriculum designers will be able to create targeted classroom resources to help improve student visuospatial literacy, therefore improving science literacy.
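The gradient-boosted-trees idea behind the model can be sketched in a few lines: each new tree fits the residual left by the ensemble so far. The data below are synthetic stand-ins for the fNIR features named above (response time, problem number, an optode signal), and depth-1 stumps replace the 140 depth-7 trees of the RapidMiner model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic, hypothetical stand-in data: 3 features predicting task success.
n = 400
X = rng.normal(size=(n, 3))
y = (X[:, 0] + 0.5 * X[:, 2] + 0.3 * rng.normal(size=n) > 0).astype(float)

def fit_stump(X, residual):
    """Best single-feature threshold split minimizing squared error."""
    best = None
    for j in range(X.shape[1]):
        for t in np.quantile(X[:, j], np.linspace(0.1, 0.9, 9)):
            left = X[:, j] <= t
            pred = np.where(left, residual[left].mean(), residual[~left].mean())
            err = ((residual - pred) ** 2).sum()
            if best is None or err < best[0]:
                best = (err, j, t, residual[left].mean(), residual[~left].mean())
    return best[1:]

# Gradient boosting: each stump fits the residual of the current ensemble,
# and its (shrunken) prediction is added to the running total.
prediction = np.full(n, y.mean())
stumps = []
for _ in range(50):
    j, t, lval, rval = fit_stump(X, y - prediction)
    prediction += 0.3 * np.where(X[:, j] <= t, lval, rval)
    stumps.append((j, t, lval, rval))

accuracy = ((prediction > 0.5) == (y > 0.5)).mean()
print(f"training accuracy: {accuracy:.3f}")
```

A production model (as in RapidMiner) additionally tunes tree depth, learning rate, and tree count, and reports predictor importance from how often and how profitably each feature is chosen for splits.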

Keywords: artificial intelligence, artificial neural network, machine learning, science literacy, neuroscience

Procedia PDF Downloads 104
577 Impact of Emotional Intelligence and Cognitive Intelligence on Radio Presenter's Performance in All India Radio, Kolkata, India

Authors: Soumya Dutta

Abstract:

This research paper investigates the impact of emotional intelligence and cognitive intelligence on radio presenters’ performance at All India Radio, Kolkata (India’s public service broadcaster). The ancient concept of productivity is the ratio of what is produced to what is required to produce it, but the father of modern management, Peter F. Drucker (1909-2005), defined the productivity of knowledge work and knowledge workers in a new form. The concept of Emotional Intelligence (EI), on the other hand, originated back in the 1920s, when Thorndike (1920) first divided intelligence into three dimensions: abstract intelligence, mechanical intelligence, and social intelligence. The contribution of Salovey and Mayer (1990) is substantive, as they proposed a model for emotional intelligence that defines EI as part of social intelligence, measuring the ability of an individual to regulate his or her own and others’ emotions and feelings. Cognitive intelligence illustrates the specialization of general intelligence in the domain of cognition in ways that draw on experience and learning about cognitive processes such as memory. The outcomes of past research on emotional intelligence show that emotional intelligence has a positive effect on the socio-mental factors of human resources; that emotional intelligence has positive effects on leaders and followers in terms of performance, results, and work satisfaction; and that emotional intelligence has a positive and significant relationship with teachers' job performance. In this paper, we construct a conceptual framework based on the theories of emotional intelligence proposed by Salovey and Mayer (1989-1990) and on the compensatory model of emotional intelligence, cognitive intelligence, and job performance proposed by Stephen Cote and Christopher T. H. Miners (2006). 
To investigate the impact of emotional intelligence and cognitive intelligence on radio presenters’ performance, the sample consists of 59 radio presenters (considering gender, academic qualification, instructional mood, age group, etc.) from the All India Radio, Kolkata station. Questionnaires were prepared based on cognitive intelligence (henceforth called C-based and represented by C1, C2, ..., C5) as well as emotional intelligence (henceforth called E-based and represented by E1, E2, ..., E20). These were sent to the 59 respondents (presenters) to obtain their responses. Performance scores were collected from the report of the programme executive of All India Radio, Kolkata. A linear regression has been carried out using all the E-based and C-based variables as predictor variables. The possible problem of autocorrelation has been tested with the Durbin-Watson (DW) statistic; values of this statistic, almost all within the range of 1.80-2.20, indicate the absence of any significant autocorrelation. The possible problem of multicollinearity has been tested with the Variance Inflation Factor (VIF); values of this statistic, around 2, indicate the absence of any significant multicollinearity. It is inferred that the performance scores can be statistically regressed linearly on the E-based and C-based scores, which can explain 74.50% of the variation in performance.
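For readers unfamiliar with the two diagnostics, the sketch below runs an ordinary least squares fit on synthetic scores (the 59-presenter data are not reproduced here, so the predictors are randomly generated stand-ins) and computes the Durbin-Watson statistic and Variance Inflation Factors by hand:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical stand-ins for questionnaire scores of 59 presenters; `perf`
# plays the role of the programme executive's performance score.
n, k = 59, 4
X = rng.normal(size=(n, k))
perf = X @ np.array([0.8, 0.5, 0.3, 0.2]) + 0.4 * rng.normal(size=n)

# Ordinary least squares via lstsq, with an intercept column.
A = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(A, perf, rcond=None)
resid = perf - A @ beta

# Durbin-Watson: values near 2 indicate no significant autocorrelation.
dw = np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

def vif(X, j):
    """Variance Inflation Factor: regress predictor j on the others;
    VIF = 1/(1 - R^2). Values near 1-2 indicate little multicollinearity."""
    others = np.column_stack([np.ones(len(X)), np.delete(X, j, axis=1)])
    fit, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
    r2 = 1 - np.sum((X[:, j] - others @ fit) ** 2) / np.sum((X[:, j] - X[:, j].mean()) ** 2)
    return 1.0 / (1.0 - r2)

r_squared = 1 - np.sum(resid ** 2) / np.sum((perf - perf.mean()) ** 2)
print(f"R^2={r_squared:.3f}, DW={dw:.2f}, VIFs={[round(vif(X, j), 2) for j in range(k)]}")
```

With independently generated predictors the VIFs land near 1 and the DW statistic near 2, mirroring the diagnostic ranges the abstract reports for the real data.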

Keywords: cognitive intelligence, emotional intelligence, performance, productivity

Procedia PDF Downloads 138
576 On the Possibility of Real Time Characterisation of Ambient Toxicity Using Multi-Wavelength Photoacoustic Instrument

Authors: Tibor Ajtai, Máté Pintér, Noémi Utry, Gergely Kiss-Albert, Andrea Palágyi, László Manczinger, Csaba Vágvölgyi, Gábor Szabó, Zoltán Bozóki

Abstract:

To the best of the authors' knowledge, here we experimentally demonstrate for the first time a quantified correlation between the real-time measured optical features of the ambient aerosol and off-line measured toxicity data. Using these correlations, we present a novel methodology for real-time characterisation of ambient toxicity based on multi-wavelength aerosol-phase photoacoustic measurement. Ambient carbonaceous particulate matter is one of the most intensively studied atmospheric constituents in climate science nowadays. Beyond its climatic impact, atmospheric soot also plays an important role as an air pollutant that harms human health. Moreover, according to the latest scientific assessments, ambient soot is the second most important anthropogenic emission source, and in health terms it is one of the most harmful atmospheric constituents as well. Despite its importance, a generally accepted standard methodology for the quantitative determination of ambient toxicity is not yet available. Ambient toxicity measurement is predominantly based on the posterior analysis of filter-accumulated aerosol with limited time resolution. Most toxicological studies are based on operational definitions using different measurement protocols; therefore, comprehensive analysis of the existing data sets is very limited in many cases. The situation is further complicated by the fact that even during its relatively short residence time, the physicochemical features of the aerosol can be masked significantly by the actual ambient factors. Therefore, improving the time resolution of the existing methodology and developing real-time methodology for air quality monitoring are pressing issues in air pollution research. 
During the last decades, many experimental studies have verified that there is a relation between the chemical composition and the absorption features, quantified by the Absorption Angström Exponent (AAE), of carbonaceous particulate matter. Although the scientific community agrees that PhotoAcoustic Spectroscopy (PAS) is so far the only methodology that can measure light absorption by aerosol accurately and reliably, multi-wavelength PAS instruments able to selectively characterise the wavelength dependency of absorption have become available only in the last decade. In this study, the first results of an intensive measurement campaign focusing on the physicochemical and toxicological characterisation of ambient particulate matter are presented. Here we demonstrate the complete microphysical characterisation of wintertime urban ambient aerosol, including optical absorption and scattering as well as size distribution, using our recently developed state-of-the-art multi-wavelength photoacoustic instrument (4λ-PAS), an integrating nephelometer (Aurora 3000), and a scanning mobility particle sizer with optical particle counter (SMPS+C). Beyond this on-line characterisation of the ambient aerosol, we also demonstrate the results of eco-, cyto-, and genotoxicity measurements of the ambient aerosol based on the posterior analysis of filter-accumulated aerosol with 6 h time resolution. We demonstrate a diurnal variation of toxicities and of AAE data deduced directly from the multi-wavelength absorption measurement results.
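The AAE itself is just the exponent of a power-law fit to the measured absorption spectrum, so it can be computed from any pair of wavelengths. A minimal sketch, with illustrative numbers rather than values from the campaign:

```python
import math

def aae(b_abs_1, b_abs_2, wl_1, wl_2):
    """Absorption Angstrom Exponent from absorption coefficients b_abs
    (e.g. in Mm^-1) measured at two wavelengths wl (in nm), assuming
    the power law b_abs ∝ wavelength^(-AAE)."""
    return -math.log(b_abs_1 / b_abs_2) / math.log(wl_1 / wl_2)

# Hypothetical example: absorption falling by a factor of 4 from 266 nm to
# 1064 nm gives AAE = 1, the value typically associated with soot-like
# (black carbon) absorbers; organic "brown carbon" pushes the AAE higher.
print(round(aae(48.0, 12.0, 266.0, 1064.0), 3))  # prints 1.0
```

In a multi-wavelength instrument the exponent is usually obtained from a log-log regression over all available wavelengths rather than a single pair, which makes the estimate more robust to noise at any one channel.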

Keywords: photoacoustic spectroscopy, absorption Angström exponent, toxicity, Ames-test

Procedia PDF Downloads 281
575 Newspaper Headlines as Tool for Political Propaganda in Nigeria: Trend Analysis of Implications on Four Presidential Elections

Authors: Muhammed Jamiu Mustapha, Jamiu Folarin, Stephen Obiri Agyei, Rasheed Ademola Adebiyi, Mutiu Iyanda Lasisi

Abstract:

The role of the media in political discourse cannot be overemphasized, as they form an important part of societal development. The media institution is considered the fourth estate of the realm because it serves as a check and balance on the arms of government (Executive, Legislature, and Judiciary), especially in a democratic setup, and makes public office holders accountable to the people. The media scrutinize political candidates and conduct a holistic analysis of the achievements of the government in order to make the people’s representatives accountable to the electorate. The media in Nigeria play a seminal role in shaping how people vote during elections. Newspaper headlines are catchy phrases that easily capture the attention of the audience and call them to action. Research conducted on newspaper headlines has looked at the linguistic aspect and how the tenses used affect people’s attitudes and behaviour. Communication scholars have also conducted studies that interrogate whether newspaper headlines influence people's voting patterns and decisions. Propaganda and negative stories about political opponents are staple features of electioneering campaigns. Nigerian newspaper readers characteristically scan newspaper headlines, and the question is whether politicians have effectively played into this tendency to brand opponents negatively, based on half-truths and inadequate information. This study illustrates major trends in the Nigerian political landscape by looking at the past four presidential elections and frames the progress of the research within the extant body of political propaganda research in Africa. The study will use quantitative content analysis of newspaper headlines from 2007 to 2019 to ascertain whether newspaper headlines had any effect on the results of the presidential elections during these years. 
This will be supplemented by key informant interviews with political scientists and other experts to draw further inferences from the quantitative data. Drawing on headlines of selected Nigerian newspapers that have a political propaganda angle for the presidential elections, the analysis will correspond to and complement extant descriptions of how the field of political propaganda has developed in Nigeria, providing evidence from four presidential elections that have shaped Nigerian politics. Understanding the development of behavioural change among the electorate provides useful context for trend analysis in political propaganda communication. The findings will contribute to understanding how newspaper headlines are used, partly or wholly, to decide the outcome of presidential elections in Nigeria.

Keywords: newspaper headlines, political propaganda, presidential elections, trend analysis

Procedia PDF Downloads 212
574 Transformation of the Relationship Between Tourism Activities and Residential Environment in the Center of a Historical Suburban City of a Tourism Metropolis: A Case Study of Naka-Uji Area, Uji City, Kyoto Prefecture

Authors: Shuailing Cui, Nakajima Naoto

Abstract:

The tourism industry has experienced significant growth worldwide since the end of World War II. Tourists are drawn to suburban areas during weekends and holidays to explore historical and cultural heritage sites. Since the 1970s, there has been a resurgence in population growth in metropolitan areas, which has fueled the demand for suburban tourism and facilitated its development. The construction of infrastructure, such as railway lines and arterial roads, has also supported the growth of tourism. Tourists engaging in various activities can have a significant impact on the destinations they visit. Tourism has not only affected the local economy but has also begun to alter the social structures, culture, and lifestyle of the destinations visited. In addition, the growing number of tourists has affected the local commercial structure and daily life of suburban residents. Therefore, there is a need to figure out how tourism activities influence the residential environment of the tourist destination and how this influence changes over time. This study aims to analyze the transformation of the relationship between tourism activities and the residential environment in the Naka-Uji area of Uji City, Kyoto Prefecture. Specifically, it investigates how the growth of the tourism industry has influenced the local residential environment and how this influence has changed over time. The findings of the study indicate that the growth of tourism in the Naka-Uji area has had both positive and negative effects on the local residential environment. On the one hand, the tourism industry has created job opportunities and improved local economic conditions. On the other hand, it has also caused environmental degradation, particularly in terms of increased traffic and the construction of parking lots. The study also found that the development of the tourism industry has influenced the social structures, culture, and lifestyle of residents. 
For instance, the increase in the number of tourists has led to changes in the commercial structure and daily life of suburban residents. The study highlights the importance of collaboration and shared benefits among stakeholders in tourism development, particularly in terms of preserving the cultural and natural heritage of tourist destinations while promoting sustainable development. Overall, this study contributes to the growing body of research on the impact of tourism on suburban areas. It provides insights into the complex relationships between tourism, the natural environment, the local economy, and residential life and emphasizes the need for sustainable tourism development in suburban areas. The findings of this study have important implications for policymakers, urban planners, and other stakeholders involved in promoting regional revitalization and sustainable tourism development.

Keywords: tourism, residential environment, suburban area, metropolis

Procedia PDF Downloads 56