Search results for: evidence based practice
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 32173

793 The Incidental Linguistic Information Processing and Its Relation to General Intellectual Abilities

Authors: Evgeniya V. Gavrilova, Sofya S. Belova

Abstract:

The present study was aimed at clarifying the relationship between general intellectual abilities and efficiency in a free recall and a rhymed word generation task after incidental exposure to linguistic stimuli. Theoretical frameworks stress that general intellectual abilities are based on intentional mental strategies. In this context, it seems crucial to examine the efficiency of processing incidentally presented information in a cognitive task and its relation to general intellectual abilities. The sample consisted of 32 Russian students. Participants were exposed to pairs of words. Each pair consisted of two common nouns or two city names. Participants had to decide whether a city name was presented in each pair; thus the words’ semantics was processed intentionally. The city names were considered focal stimuli, whereas the common nouns were considered peripheral stimuli. In addition, each pair of words could be rhymed or non-rhymed, but this phonemic characteristic of the stimuli was processed incidentally. Participants were then asked to produce as many rhymes as they could to new words; the stimuli presented earlier could be used as well. After that, participants had to retrieve all the words presented earlier. Finally, verbal and non-verbal abilities were measured with a number of psychometric tests. In the free recall task, intentionally processed focal stimuli had an advantage in recall compared to peripheral stimuli. In addition, all rhymed stimuli were recalled more effectively than non-rhymed ones. The inverse effect was found in the word generation task, where participants tended to use mainly peripheral stimuli rather than focal ones. Furthermore, peripheral rhymed stimuli were the most frequently used category of stimuli in this task. 
Thus the incidentally processed information had a supplemental influence on the efficiency of stimulus processing in both the free recall and the word generation task. Different patterns of correlations between intellectual abilities and processing efficiency for the different stimulus categories were revealed in both tasks. Non-verbal reasoning ability correlated positively with free recall of peripheral rhymed stimuli, but it was not related to performance on the rhymed word generation task. Verbal reasoning ability correlated positively with free recall of focal stimuli. In the rhymed word generation task, verbal intelligence correlated negatively with generation of focal stimuli and positively with generation of all peripheral stimuli. The present findings lead to two key conclusions. First, incidentally processed stimuli had an advantage in the free recall and word generation tasks; thus incidental information processing appears to be crucial for subsequent cognitive performance. Second, incidentally processed stimuli were recalled more frequently by participants with high non-verbal reasoning ability and were used more effectively by participants with high verbal reasoning ability in subsequent cognitive tasks. This implies that general intellectual abilities may benefit from operating on different levels of information processing during cognitive problem solving. This research was supported by the “Grant of the President of RF for young PhD scientists” (contract № 14.Z56.17.2980-MK) and Grant № 15-36-01348a2 of the Russian Foundation for Humanities.
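The correlational findings above rest on simple bivariate statistics; a minimal Pearson correlation helper can be sketched as follows (the scores are made up for illustration, not the study's data):

```python
import math

def pearson_r(xs, ys):
    # Pearson product-moment correlation between two equal-length samples.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical example: verbal ability scores vs. focal stimuli recalled.
verbal = [95, 110, 102, 120, 88, 105]
recalled = [4, 6, 5, 8, 3, 6]
r = pearson_r(verbal, recalled)  # positive, as reported for focal recall
```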

Keywords: focal and peripheral stimuli, general intellectual abilities, incidental information processing

Procedia PDF Downloads 214
792 Test Rig Development for Up-to-Date Experimental Study of Multi-Stage Flash Distillation Process

Authors: Marek Vondra, Petr Bobák

Abstract:

Vacuum evaporation is a reliable and well-proven technology with a wide application range, frequently used in the food, chemical and pharmaceutical industries. Recently, numerous remarkable studies have been carried out to investigate the utilization of this technology in the area of wastewater treatment. One of the most successful applications of the vacuum evaporation principle is seawater desalination. Since the 1950s, multi-stage flash distillation (MSF) has been the leading technology in this field, and it remains irreplaceable in many respects despite a rapid increase in cheaper reverse-osmosis-based installations in recent decades. MSF plants are conveniently operated in countries with fluctuating seawater quality and at locations where a sufficient amount of waste heat is available. Nowadays, most MSF research is connected with the utilization of alternative heat sources and with hybridization, i.e. the merging of different types of desalination technologies. Some studies are concerned with the basic principles of the static flash phenomenon, but only a few scientists have recently focused on the fundamentals of continuous multi-stage evaporation. Limited measurement possibilities at operating plants and insufficiently equipped experimental facilities may be the reasons. The aim of the presented study was to design, construct and test an up-to-date test rig with an advanced measurement system which will provide real-time monitoring of all the important operational parameters under various conditions. The whole system consists of a conventionally designed MSF unit with 8 evaporation chambers, a versatile heating circuit for different kinds of feed water (e.g. 
seawater, waste water), a sophisticated system for the acquisition and real-time visualization of all the related quantities (temperature, pressure, flow rate, weight, conductivity, pH, water level, power input), access to a wide spectrum of operational media (salt, fresh and softened water, steam, natural gas, compressed air, electrical energy) and integrated transparent features which enable direct visual monitoring of selected physical mechanisms (water evaporation in the chambers, water level right before the brine and distillate pumps). Thanks to the adjustable process parameters, it is possible to operate the test unit at the desired operational conditions. This allows researchers to carry out statistical design and analysis of experiments. Valuable results obtained in this manner could be further employed in simulations and process modeling. The first experimental tests confirm the correctness of the presented approach and promise interesting outputs in the future. The presented experimental apparatus enables flexible and efficient research of the whole MSF process.
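As a rough orientation to the physics the rig is designed to probe, the distillate produced by flashing can be estimated from a per-stage energy balance: the sensible heat released as the brine temperature drops in each chamber vaporizes a corresponding mass of distillate. A back-of-the-envelope sketch with assumed values (not measurements from the presented test rig):

```python
# Assumed, illustrative values -- not data from the described apparatus.
cp = 4180.0         # brine specific heat, J/(kg*K)
h_fg = 2.33e6       # latent heat of vaporization, J/kg (around 70 deg C)
m_brine = 10.0      # recirculating brine flow, kg/s (hypothetical)
n_stages = 8        # as in the described MSF unit
flash_range = 40.0  # total brine temperature drop across all stages, K

dT = flash_range / n_stages  # temperature drop per stage
# Each stage flashes off roughly m_brine * cp * dT / h_fg of distillate.
distillate = sum(m_brine * cp * dT / h_fg for _ in range(n_stages))
print(round(distillate, 3), "kg/s of distillate")
```

With these numbers the eight stages together yield roughly 0.7 kg/s of distillate; the real rig would replace every assumed constant with measured values from its sensor system.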

Keywords: design of experiment, multi-stage flash distillation, test rig, vacuum evaporation

Procedia PDF Downloads 370
791 Valorization of Surveillance Data and Assessment of the Sensitivity of a Surveillance System for an Infectious Disease Using a Capture-Recapture Model

Authors: Jean-Philippe Amat, Timothée Vergne, Aymeric Hans, Bénédicte Ferry, Pascal Hendrikx, Jackie Tapprest, Barbara Dufour, Agnès Leblond

Abstract:

The surveillance of infectious diseases is necessary to describe their occurrence and help the planning, implementation and evaluation of risk mitigation activities. However, the exact number of detected cases may remain unknown when surveillance is based on serological tests, because identifying seroconversion may be difficult. Moreover, incomplete detection of cases or outbreaks is a recurrent issue in the field of disease surveillance. This study addresses these two issues. Using a viral animal disease as an example (equine viral arteritis), the goals were to establish suitable rules for identifying seroconversion in order to estimate the number of cases and outbreaks detected by a surveillance system in France between 2006 and 2013, and to assess the sensitivity of this system by estimating the total number of outbreaks that occurred during this period (including unreported outbreaks) using a capture-recapture model. Data from horses that exhibited at least one positive serological result with the viral neutralization test between 2006 and 2013 were used for the analysis (n=1,645). The data consisted of the annual antibody titers and the location of the subjects (towns). A consensus among multidisciplinary experts (specialists in the disease and its laboratory diagnosis, epidemiologists) was reached to define seroconversion as a change in antibody titer from negative to at least 32, or as a three-fold or greater increase. The number of seroconversions was counted for each town and modeled using a unilist zero-truncated binomial (ZTB) capture-recapture model with the R software. The binomial denominator was the number of horses tested in each infected town. Using the defined rules, 239 cases located in 177 towns (outbreaks) were identified from 2006 to 2013. 
Subsequently, the sensitivity of the surveillance system was estimated as the ratio of the number of detected outbreaks to the estimated total number of outbreaks that occurred (including unreported outbreaks) obtained from the ZTB model. The total number of outbreaks was estimated at 215 (95% credible interval, CrI95%: 195-249) and the surveillance sensitivity at 82% (CrI95%: 71-91%). The rules proposed for identifying seroconversion may serve future research. Such rules, adjusted to the local environment, could conceivably be applied in other countries with surveillance programs dedicated to this disease. More generally, defining ad hoc algorithms for interpreting antibody titers could be useful for other human and animal diseases and zoonoses when accurate information about the serological response in naturally infected subjects is lacking in the literature. This study shows how capture-recapture methods may help to estimate the sensitivity of an imperfect surveillance system and to valorize surveillance data. The sensitivity of the surveillance system for equine viral arteritis is relatively high, supporting its relevance for preventing disease spread.
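The capture-recapture logic can be illustrated with a small zero-truncated binomial sketch. The counts below are hypothetical, and a plain grid-search maximum likelihood stands in for the Bayesian fit used in the study:

```python
import math

# Hypothetical detected towns: (seroconversions observed, horses tested).
towns = [(2, 10), (1, 5), (3, 12), (1, 8), (2, 6)]

def neg_log_lik(p):
    # Zero-truncated binomial: condition on at least one detection per town.
    nll = 0.0
    for k, n in towns:
        log_binom = (math.lgamma(n + 1) - math.lgamma(k + 1)
                     - math.lgamma(n - k + 1)
                     + k * math.log(p) + (n - k) * math.log(1 - p))
        nll -= log_binom - math.log(1 - (1 - p) ** n)
    return nll

# Grid-search MLE for the per-horse detection probability.
p_hat = min((i / 1000 for i in range(1, 1000)), key=neg_log_lik)

# Each detected town was observed with probability 1 - (1 - p)^n, so a
# Horvitz-Thompson-style correction estimates the total number of infected
# towns, detected or not.
total_towns = sum(1 / (1 - (1 - p_hat) ** n) for _, n in towns)
sensitivity = len(towns) / total_towns  # analogous to the reported 82%
```

The study's model additionally produces credible intervals, which this point-estimate sketch omits.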

Keywords: Bayesian inference, capture-recapture, epidemiology, equine viral arteritis, infectious disease, seroconversion, surveillance

Procedia PDF Downloads 279
790 A Variational Reformulation for the Thermomechanically Coupled Behavior of Shape Memory Alloys

Authors: Elisa Boatti, Ulisse Stefanelli, Alessandro Reali, Ferdinando Auricchio

Abstract:

Thanks to their unusual properties, shape memory alloys (SMAs) are good candidates for advanced applications in a wide range of engineering fields, such as automotive, robotics, civil, biomedical and aerospace engineering. In recent decades, the ever-growing interest in such materials has motivated several research studies aimed at modeling their complex nonlinear behavior in an effective and robust way. Since the constitutive response of SMAs is strongly thermomechanically coupled, the non-isothermal evolution of the material must be taken into consideration. The present study considers an existing three-dimensional phenomenological model for SMAs, able to reproduce the main SMA properties while maintaining a simple, user-friendly structure, and proposes a variational reformulation of the full non-isothermal version of the model. While the considered model has been thoroughly assessed in an isothermal setting, the proposed formulation takes the full non-isothermal problem into account. In particular, the reformulation is inspired by the GENERIC (General Equations for Non-Equilibrium Reversible-Irreversible Coupling) formalism and is based on a generalized gradient flow of the total entropy with respect to the thermal and mechanical variables. This formulation of the model is new and allows the model to be discussed from both a theoretical and a numerical point of view. Moreover, it directly implies the dissipativity of the flow. A semi-implicit time-discrete scheme is also presented for the fully coupled thermomechanical system and is proven unconditionally stable and convergent. The corresponding algorithm is then implemented, under a space-homogeneous temperature field assumption, and tested under different conditions. The core of the algorithm is composed of a mechanical subproblem and a thermal subproblem. The iterative scheme is solved by a generalized Newton method. 
Numerous uniaxial and biaxial tests are reported to assess the performance of the model and algorithm, including variable imposed strain, strain rate, heat exchange properties and external temperature. In particular, heat exchange with the environment is the only source of rate dependency in the model. The reported curves clearly display the interdependence between phase transformation strain and material temperature. The full thermomechanical coupling makes it possible to reproduce the exothermic and endothermic effects during forward and backward phase transformation, respectively. The numerical tests have thus demonstrated that the model can appropriately reproduce the coupled SMA behavior under different loading conditions and rates. Moreover, the algorithm has proved effective and robust. Further developments are being considered, such as the extension of the formulation to the finite-strain setting and the study of the boundary value problem.
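The structure of such a reformulation can be sketched schematically (the notation below is illustrative, not the authors' own): a generalized gradient flow drives the state toward increasing total entropy, and the semi-implicit scheme splits old and new iterates.

```latex
% State z collects the internal mechanical variables and the temperature;
% S is the total entropy and K(z) a positive semidefinite operator.
\[
  \dot{z}(t) = K\bigl(z(t)\bigr)\,\mathrm{D}S\bigl(z(t)\bigr),
  \qquad
  \frac{\mathrm{d}}{\mathrm{d}t}\, S\bigl(z(t)\bigr)
    = \mathrm{D}S \cdot K\,\mathrm{D}S \;\ge\; 0 .
\]
% A semi-implicit step of size tau evaluates the operator at the previous
% iterate and the entropy gradient at the new one:
\[
  \frac{z_{k+1} - z_k}{\tau} = K(z_k)\,\mathrm{D}S(z_{k+1}).
\]
```

The sign condition in the first display is what the abstract refers to as the dissipativity implied directly by the gradient-flow structure.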

Keywords: generalized gradient flow, GENERIC formalism, shape memory alloys, thermomechanical coupling

Procedia PDF Downloads 203
789 An Adaptable Semi-Numerical Anisotropic Hyperelastic Model for the Simulation of High Pressure Forming

Authors: Daniel Tscharnuter, Eliza Truszkiewicz, Gerald Pinter

Abstract:

High-quality surfaces of plastic parts can be achieved in a very cost-effective manner using in-mold processes, where, e.g., scratch-resistant or high-gloss polymer films are pre-formed and subsequently receive their support structure by injection molding. The pre-forming may be done by high-pressure forming. In this process, a polymer sheet is heated and subsequently formed into the mold by pressurized air. Due to the heat transfer to the cooled mold, the polymer temperature drops below its glass transition temperature. This ensures that the deformed microstructure is retained after depressurizing, giving the sheet its final formed shape. The development of a forming process relies heavily on the experience of engineers and on trial-and-error procedures. Repeated mold design and testing cycles are, however, both time- and cost-intensive. It is, therefore, desirable to study the process using reliable computer simulations. Through simulations, the construction of the mold and the effect of various process parameters, e.g. temperature levels, non-uniform heating, or the timing and magnitude of pressure, on the deformation of the polymer sheet can be analyzed. Detailed knowledge of the deformation is particularly important in the forming of polymer films with integrated electro-optical functions. Care must be taken in the placement of devices, sensors and electrical and optical paths, which are far more sensitive to deformation than the polymers. Reliable numerical prediction of the deformation of the polymer sheets requires sophisticated material models. Polymer films are often either transversely isotropic or orthotropic due to molecular orientations induced during manufacturing. The anisotropic behavior affects the resulting strain field in the deformed film. For example, parts of the same shape but with different strain fields may be created by varying the orientation of the film with respect to the mold. 
The numerical simulation of the high-pressure forming of such films thus requires material models that can capture the nonlinear anisotropic mechanical behavior. There are numerous commercial polymer grades for engineers to choose from when developing a new part. The effort required for comprehensive material characterization may be prohibitive, especially when several materials are candidates for a specific application. We, therefore, propose a class of models for compressible hyperelasticity, which may be determined from basic experimental data and which can capture key features of the mechanical response. Invariant-based hyperelastic models with a reduced number of invariants are formulated in a semi-numerical way, such that the models are determined from a single uniaxial tensile test for isotropic materials, or from two tensile tests in the principal directions for transversely isotropic or orthotropic materials. The simulation of the high-pressure forming of an orthotropic polymer film is finally carried out using an orthotropic formulation of the hyperelastic model.
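To illustrate how a hyperelastic model can be calibrated from a single uniaxial test, consider the simplest case: an incompressible neo-Hookean solid, whose uniaxial nominal stress is P(lam) = mu * (lam - lam^-2). The authors' semi-numerical invariant-based models are more general; the data and the one-parameter fit below are purely illustrative:

```python
# Hypothetical uniaxial data: (stretch, nominal stress in MPa).
data = [(1.1, 0.9), (1.2, 1.6), (1.4, 2.7), (1.6, 3.5)]

# P(lam) = mu * (lam - lam**-2) is linear in the single parameter mu,
# so the least-squares fit has a closed form.
basis = [lam - lam ** -2 for lam, _ in data]
mu = (sum(b * P for b, (_, P) in zip(basis, data))
      / sum(b * b for b in basis))

def nominal_stress(lam):
    # Fitted neo-Hookean uniaxial response.
    return mu * (lam - lam ** -2)
```

For a transversely isotropic or orthotropic film, the analogous fit would use two tensile tests in the principal directions, as the abstract describes.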

Keywords: hyperelastic, anisotropic, polymer film, thermoforming

Procedia PDF Downloads 603
788 Neutrophil-to-Lymphocyte Ratio: A Predictor of Cardiometabolic Complications in Morbid Obese Girls

Authors: Mustafa M. Donma, Orkide Donma

Abstract:

Obesity is a low-grade inflammatory state. Childhood obesity is a multisystem disease associated with a number of complications as well as potentially negative consequences. Gender is an important universal risk factor for many diseases, and hematological indices differ significantly by gender. This should be considered during the evaluation of obese children. The aim of this study is to detect hematologic indices that differ by gender in morbid obese (MO) children. A total of 134 MO children took part in this study. The parents filled in an informed consent form, and approval from the Ethics Committee of Namik Kemal University was obtained. Subjects were divided into two groups based on gender (64 females aged 10.2±3.1 years and 70 males aged 9.8±2.2 years; p ≥ 0.05). Waist-to-hip and head-to-neck ratios and body mass index (BMI) values were calculated. Children whose WHO BMI-for-age-and-sex percentile values were > 99th percentile were defined as MO. Hematological parameters [haemoglobin, hematocrit, erythrocyte count, mean corpuscular volume, mean corpuscular haemoglobin, mean corpuscular haemoglobin concentration, red blood cell distribution width, leukocyte count, neutrophil %, lymphocyte %, monocyte %, eosinophil %, basophil %, platelet count, platelet distribution width, mean platelet volume] were determined with an automatic hematology analyzer. SPSS was used for the statistical analyses, and p ≤ 0.05 was accepted as statistically significant. The groups had mean±SD BMI values of 26.9±3.4 kg/m2 for males and 27.7±4.4 kg/m2 for females (p ≥ 0.05). There was no significant difference between the ages of females and males (p ≥ 0.05). Males had significantly higher waist-to-hip ratios (0.95±0.08 vs 0.91±0.08; p=0.005) and mean corpuscular hemoglobin concentration values (33.6±0.92 vs 33.1±0.83; p=0.001) than females. 
Significantly elevated neutrophil counts (4.69±1.59 vs 4.02±1.42; p=0.011) and neutrophil-to-lymphocyte ratios (1.70±0.71 vs 1.39±0.48; p=0.004) were detected in females. There was no statistically significant difference between the groups in terms of C-reactive protein values (p ≥ 0.05). Adipose tissue plays important roles in the development of obesity and associated diseases such as metabolic syndrome and cardiovascular diseases (CVDs). These diseases may cause changes in complete blood cell count parameters, and such alterations are even more important during childhood. Significant gender effects on the changes in neutrophils, one of the white blood cell subsets, were observed. The findings of the study demonstrate the importance of considering gender in clinical studies. Males and females may have distinct leukocyte-trafficking profiles in inflammation. Female children had more circulating neutrophils than male children within this age range during the late stage of obesity, which may be an indicator of an increased risk of CVDs. In recent years, females have represented about half of deaths from CVDs; therefore, our findings may indicate an increasing tendency toward this risk in females starting from childhood.
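The neutrophil-to-lymphocyte ratio reported above is simply the quotient of the two absolute counts from a complete blood count. A trivial sketch (the lymphocyte value below is chosen for illustration and is not taken from the study):

```python
def nlr(neutrophils, lymphocytes):
    # Neutrophil-to-lymphocyte ratio from absolute counts (10^3 cells/uL).
    if lymphocytes <= 0:
        raise ValueError("lymphocyte count must be positive")
    return neutrophils / lymphocytes

# With the girls' mean neutrophil count of 4.69 and an assumed lymphocyte
# count of 2.76, the ratio lands near the reported group mean of 1.70.
example = nlr(4.69, 2.76)
```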

Keywords: children, gender, morbid obesity, neutrophil-to-lymphocyte ratio

Procedia PDF Downloads 257
787 The Effect of Manure Loaded Biochar on Soil Microbial Communities

Authors: T. Weber, D. MacKenzie

Abstract:

This paper describes an advanced simulation environment based on electronic systems (microcontroller, operational amplifiers, and FPGA). The simulation was used for the behaviour of non-linear dynamic systems with the required observer structure, working with parallel real-time simulation based on a state-space representation. The proposed model was also used for electrodynamic effects, including ionizing effects and eddy current distribution. With the proposed method, it is possible to calculate the spatial distribution of the electromagnetic fields in such systems in real time. The spatial temperature distribution may also be used for further purposes. With this system, uncertainties and disturbances may be determined. This provides a more precise estimation of the system states and, additionally, an estimation of the ionizing disturbances that arise due to radiation effects in space systems. The results have also shown that a system can be developed specifically for the real-time calculation (estimation) of the radiation effects alone. Electronic systems can be damaged by impacts with charged particle flux in a space or radiation environment. A Total Ionizing Dose (TID) of 1 Gy and Single-Event Transient (SET)-free operation up to 50 MeVcm²/mg may assure certain functions. Single-Event Latch-up (SEL) results from the placement of several transistors in the shared substrate of an integrated circuit: ionizing radiation can activate an additional parasitic thyristor, and this short circuit between semiconductor elements can destroy an unprotected device. Single-Event Burnout (SEB), on the other hand, increases the current between the drain and source of a MOSFET and destroys the component in a short time. A Single-Event Gate Rupture (SEGR) can also destroy a semiconductor dielectric. 
In order to be able to react to these processes, the presence of ionizing radiation and the dose must be calculated within a short time. For this purpose, sensors may be used for the realistic evaluation of the diffusion and ionizing effects in the test system. A Peltier element is used for the evaluation of the dynamic temperature increase (dT/dt), from which a measure of the ionization processes, and thus of the radiation, is detected. In addition, a piezo element may be used to record the highly dynamic vibrations and oscillations caused by impacts of charged particle flux. All available sensors shall also be used to calibrate the spatial distributions. From the measured values and the known locations of the sensors, the entire distribution in space can be calculated retroactively and more accurately. From this information, the type of ionization and its direct effect on the system can be determined, and preventive measures, up to and including shutdown, can be activated. The results show possibilities for performing more qualitative and faster simulations, independent of the space system and radiation environment. The paper additionally gives an overview of the diffusion effects and their mechanisms.

Keywords: cattle, biochar, manure, microbial activity

Procedia PDF Downloads 83
786 Inputs and Outputs of Innovation Processes in the Colombian Services Sector

Authors: Álvaro Turriago-Hoyos

Abstract:

Most research tends to see innovation as an explanatory factor in achieving high levels of competitiveness and productivity. More recent studies have begun to analyze the determinants of innovation in the services sector, as opposed to the much-discussed industrial sector of a country’s economy. This research paper focuses on the services sector in Colombia, one of Latin America’s fastest-growing and biggest economies. Over the past decade, much of Colombia’s economic expansion has relied on commodity exports (mainly oil and coffee), whilst the industrial sector has performed relatively poorly. Such developments highlight the potentially innovative role played by the services sector of the Colombian economy and its future growth prospects. This research paper analyzes the relationship between innovation inputs, which comprise internal sources of innovation (such as R&D activities) and external sources such as technology acquisition, and innovation outputs, basically the four kinds of innovation that the OECD Oslo Manual recognizes: product, process, marketing and organizational innovations. The instrument used to measure this input-output relationship is based on Knowledge Production Function approaches. We run Probit models in order to identify the existing relationships between the above inputs and outputs, but also to identify spill-overs derived from interactions among the components of the value chain of the services firms analyzed: customers, suppliers, competitors, and complementary firms. Data are obtained from the Colombian National Administrative Department of Statistics for the period 2008 to 2013, published in the II and III Colombian National Innovation Surveys. A short summary of the results leads to the conclusion that firm size and a firm’s level of technological development turn out to be important discriminating factors in the description of the innovative process at the firm level. 
The model’s outcomes show that both R&D and technology acquisition investment have a positive impact on the probability of introducing any kind of innovation. Cooperation agreements with customers, research institutes, competitors and suppliers are also significant. Belonging to a particular industrial group is an important determinant, but only for product and organizational innovation. Health services, education, computer services, wholesale trade and financial intermediation are the ISIC sectors that report the highest frequencies in the considered set of firms. These five sectors, of the sixteen considered, in all cases explained more than half of the total of all kinds of innovations. Product innovation gets the highest results, followed by marketing innovation. Analyzing the same set of firms by size, and by membership in the high- and low-tech services sectors, shows that the larger the firm, the larger the number of innovations, and that high-tech firms consistently show better innovation performance.
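A probit specification like the one described links a binary innovation outcome to inputs through the standard normal CDF. A self-contained sketch on synthetic data follows; a real analysis would use a statistics package and the actual survey variables:

```python
import math
import random

def norm_cdf(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Synthetic firm data: x = R&D investment intensity, y = innovated (0/1).
random.seed(0)
data = []
for _ in range(500):
    x = random.random()
    p = norm_cdf(-0.5 + 2.0 * x)  # assumed true latent relationship
    data.append((x, 1 if random.random() < p else 0))

# Fit Pr(y=1) = Phi(b0 + b1*x) by gradient ascent on the log-likelihood.
b0 = b1 = 0.0
for _ in range(2000):
    g0 = g1 = 0.0
    for x, y in data:
        z = b0 + b1 * x
        p = min(max(norm_cdf(z), 1e-9), 1.0 - 1e-9)
        pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
        score = (y - p) * pdf / (p * (1.0 - p))
        g0 += score
        g1 += score * x
    b0 += 0.05 * g0 / len(data)
    b1 += 0.05 * g1 / len(data)
# A positive b1 mirrors the reported positive effect of R&D investment.
```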

Keywords: Colombia, determinants of innovation, innovation, services sector

Procedia PDF Downloads 251
785 Antioxidant Potential of Sunflower Seed Cake Extract in Stabilization of Soybean Oil

Authors: Ivanor Zardo, Fernanda Walper Da Cunha, Júlia Sarkis, Ligia Damasceno Ferreira Marczak

Abstract:

Lipid oxidation is one of the most important deteriorating processes in the oil industry, resulting in losses of the nutritional value of oils as well as changes in color, flavor and other physiological properties. Autoxidation of lipids occurs naturally between molecular oxygen and the unsaturated bonds of fatty acids, forming lipid free radicals, peroxyl radicals and hydroperoxides. In order to avoid lipid oxidation in vegetable oils, synthetic antioxidants such as butylated hydroxyanisole (BHA), butylated hydroxytoluene (BHT) and tertiary butyl hydroquinone (TBHQ) are commonly used. However, the use of synthetic antioxidants has been associated with several health side effects and toxicity. The use of natural antioxidants as stabilizers of vegetable oils is being suggested as a sustainable alternative to synthetic antioxidants. The alternative that has been studied is the use of natural extracts obtained mainly from fruits, vegetables and seeds, whose well-known antioxidant activity is related mainly to the presence of phenolic compounds. Sunflower seed cake is rich in phenolic compounds (1-4% of the total mass), with chlorogenic acid as the major constituent. The aim of this study was to evaluate the in vitro application of the phenolic extract obtained from sunflower seed cake as a retarder of the lipid oxidation reaction in soybean oil and to compare the results with a synthetic antioxidant. For this, soybean oil, provided by industry without any added antioxidants, was subjected to an accelerated storage test for 17 days at 65 °C. Six samples with different treatments were submitted to the test: a control sample without any added antioxidant; 100 ppm of the synthetic antioxidant BHT; a mixture of 50 ppm of BHT and 50 ppm of phenolic compounds; and 100, 500 and 1200 ppm of phenolic compounds. The phenolic compound concentration in the extract was expressed in gallic acid equivalents. 
To evaluate the oxidative changes in the samples, aliquots were collected after 0, 3, 6, 10 and 17 days and analyzed for peroxide value and conjugated diene and triene values. The soybean oil sample initially had a peroxide content of 2.01 ± 0.27 meq of oxygen/kg of oil. On the third day of treatment, only the samples treated with 100, 500 and 1200 ppm of phenolic compounds showed considerable oxidation retardation compared to the control sample. On the sixth day, the samples presented a considerable increase in peroxide value (higher than 13.57 meq/kg), and the higher the concentration of phenolic compounds, the lower the observed peroxide value. From the tenth day on, the samples had very high peroxide values (higher than 55.39 meq/kg), and only the sample containing 1200 ppm of phenolic compounds presented significant oxidation retardation. The samples containing the phenolic extract were more efficient at preventing the formation of primary oxidation products, indicating effectiveness in retarding the reaction. Similar results were observed for the conjugated dienes and trienes. Based on these results, phenolic compounds, especially chlorogenic acid (the major phenolic compound of sunflower seed cake), can be considered a potential partial or even total substitute for synthetic antioxidants.

Keywords: chlorogenic acid, natural antioxidant, vegetables oil deterioration, waste valorization

Procedia PDF Downloads 240
784 In-Flight Aircraft Performance Model Enhancement Using Adaptive Lookup Tables

Authors: Georges Ghazi, Magali Gelhaye, Ruxandra Botez

Abstract:

Over the years, the Flight Management System (FMS) has experienced continuous improvement of its many features, to the point of becoming the pilot’s primary interface for flight planning operations on the airplane. With the assistance of the FMS, the concepts of distance and time have been completely revolutionized, providing the crew members with the optimized route (or flight plan) from the departure airport to the arrival airport. To accomplish this function, the FMS needs an accurate Aircraft Performance Model (APM) of the aircraft. In general, the APMs that equip most modern FMSs are established before the entry into service of an individual aircraft and result from the combination of a set of ordinary differential equations and a set of performance databases. Unfortunately, an aircraft in service is constantly exposed to dynamic loads that degrade its flight characteristics. These degradations have two main origins: airframe deterioration (control surface rigging, seals missing or damaged, etc.) and engine performance degradation (fuel consumption increase for a given thrust). Thus, after several years of service, the performance databases and the APM associated with a specific aircraft are no longer representative enough of the actual aircraft performance. It is important to monitor the trend of the performance deterioration and correct the uncertainties of the aircraft model in order to improve the accuracy of the flight management system predictions. The basis of this research lies in the new ability to continuously update an Aircraft Performance Model (APM) during flight using an adaptive lookup table technique. This methodology was developed and applied to the well-known Cessna Citation X business aircraft. For the purpose of this study, a level D Research Aircraft Flight Simulator (RAFS) was used as a test aircraft. 
According to the Federal Aviation Administration, level D is the highest certification level for flight dynamics modeling. Basically, using data available in the Flight Crew Operating Manual (FCOM), a first APM describing the variation of the engine fan speed and aircraft fuel flow with respect to flight conditions was derived. This model was then improved using the proposed methodology. To do so, several cruise flights were performed using the RAFS. An algorithm was developed to frequently sample the aircraft sensor measurements during flight and compare the model predictions with the actual measurements. Based on these comparisons, a correction was applied to the APM in order to minimize the error between the predicted and measured data. In this way, as the aircraft flies, the APM is continuously enhanced, making the FMS more and more precise and the prediction of trajectories more realistic and reliable. The results obtained are very encouraging. Indeed, using the tables initialized with the FCOM data, only a few iterations were needed to reduce the fuel flow prediction error from an average relative error of 12% to 0.3%. Similarly, the maximum error in the FCOM-based engine fan speed prediction was reduced from 5.0% to 0.2% after only ten flights.
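The in-flight correction loop described above can be sketched as a simple adaptive lookup-table update. This is a minimal illustration, not the authors' actual implementation: the table layout (altitude bin × Mach bin), the smoothing gain `alpha`, and all numeric values are assumptions chosen only to mirror the reported error reduction.

```python
import numpy as np

def update_lookup_table(table, i, j, measured, alpha=0.5):
    """Blend an in-flight measurement into cell (i, j) of the table.

    An exponential-smoothing correction: the stored prediction is nudged
    toward the measurement, so repeated flights shrink the model error.
    `alpha` is a hypothetical correction gain.
    """
    error = measured - table[i, j]
    table[i, j] += alpha * error
    return table

# Toy fuel-flow table indexed by (altitude bin, Mach bin), seeded from FCOM data.
fcom_table = np.array([[1200.0, 1100.0],
                       [1000.0,  950.0]])

# Simulate repeated cruise samples at one flight condition where the true
# fuel flow is 12% above the FCOM value, as in the reported average error.
truth = 1200.0 * 1.12
for _ in range(10):
    update_lookup_table(fcom_table, 0, 0, truth)

relative_error = abs(fcom_table[0, 0] - truth) / truth
print(f"relative error after 10 samples: {relative_error:.4%}")
```

With this gain, each sample halves the remaining error in the visited cell, while unvisited cells keep their FCOM values, which is the essential behaviour of a lookup table that is refined only where the aircraft actually flies.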

Keywords: aircraft performance, cruise, trajectory optimization, adaptive lookup tables, Cessna Citation X

Procedia PDF Downloads 246
783 Examining the Usefulness of an ESP Textbook for Information Technology: Learner Perspectives

Authors: Yun-Husan Huang

Abstract:

Many English for Specific Purposes (ESP) textbooks are distributed globally, and their content development often involves compromises between commercial and pedagogical demands. The regional applicability and usefulness of globally published ESP textbooks has therefore been much debated. For ESP instructors, textbook selection is a priority consideration in curriculum design. An appropriate ESP textbook can facilitate teaching and learning, while an inappropriate one can be a disaster for both teachers and students. This study investigates the regional applicability and usefulness of an ESP textbook for information technology (IT). Participants were 51 sophomores majoring in Applied Informatics and Multimedia at a university in Taiwan. As they were non-English majors, their English proficiency was mostly at elementary and elementary-to-intermediate levels. The course was offered for two semesters, and the textbook selected was Oxford English for Information Technology. At the end of the course, the students completed a survey in which each item offered five choices: Very Easy, Easy, Neutral, Difficult, and Very Difficult. Based on the content design of the textbook, the survey investigated how difficult the students found the textbook's grammar, listening, speaking, reading, and writing materials. Results reveal that only 22% of them found the grammar section difficult or very difficult. For listening, 71% responded difficult or very difficult; for general reading, 55%; for speaking, 56%; for writing, 78%; and for advanced reading, 90%. These results indicate that, except for the grammar section, more than half of the students found the textbook contents difficult in terms of listening, speaking, reading, and writing materials.
This contradiction between the easy grammar section and the difficult four language-skills sections implies that the textbook designers do not fully understand the English learning background of regional ESP learners. For the participants, the grammar section was at the level of general junior high school grammar, while the four language-skills sections were closer to the level of college English majors. The findings carry implications for instructors and textbook designers. First, existing ESP textbooks for IT are few, so instructors have limited options. Second, existing globally published textbooks for IT cannot serve learners of all English proficiency levels, especially the lowest. Third, given the limited selection, instructors should modify the selected textbook contents or supplement extra ESP materials to match the proficiency level of their learners. Fourth, local ESP publishers should collaborate with local ESP instructors, who best understand their students' learning backgrounds, to develop appropriate ESP textbooks for local learners. In conclusion, even though the instructor reduced the learning contents and simplified the tests in the curriculum design, the students still found the textbook difficult. This implies that, in addition to the instructor's professional experience, there is a need to understand the usefulness of a textbook from learner perspectives.

Keywords: ESP textbooks, ESP materials, ESP textbook design, learner perspectives on ESP textbooks

Procedia PDF Downloads 320
782 The Effects of Goal Setting and Feedback on Inhibitory Performance

Authors: Mami Miyasaka, Kaichi Yanaoka

Abstract:

Attention Deficit/Hyperactivity Disorder (ADHD) is a neurodevelopmental disorder characterized by inattention, hyperactivity, and impulsivity; symptoms often manifest during childhood. In children with ADHD, the development of inhibitory processes is impaired. Inhibitory control allows people to avoid processing unnecessary stimuli and to behave appropriately in various situations; thus, people with ADHD require interventions to improve inhibitory control. Positive or negative reinforcements (i.e., reward or punishment) help improve the performance of children with such difficulties. However, in order to optimize impact, reward and punishment must be presented immediately following the relevant behavior. In regular elementary school classrooms, such supports are uncommon; hence, an alternative practical intervention method is required. One potential intervention involves setting goals to keep children motivated to perform tasks. This study examined whether goal setting improved inhibitory performances, especially for children with severe ADHD-related symptoms. We also focused on giving feedback on children's task performances. We expected that giving children feedback would help them set reasonable goals and monitor their performance. Feedback can be especially effective for children with severe ADHD-related symptoms because they have difficulty monitoring their own performance, perceiving their errors, and correcting their behavior. Our prediction was that goal setting by itself would be effective for children with mild ADHD-related symptoms, and goal setting based on feedback would be effective for children with severe ADHD-related symptoms. Japanese elementary school children and their parents were the sample for this study. Children performed two kinds of go/no-go tasks, and parents completed a checklist about their children's ADHD symptoms, the ADHD Rating Scale-IV, and the Conners 3rd edition. 
The go/no-go task is a cognitive task that measures inhibitory performance. Children were asked to press a key on the keyboard when a particular symbol appeared on the screen (go stimulus) and to refrain from doing so when another symbol was displayed (no-go stimulus). Errors in response to a no-go stimulus indicate inhibitory impairment. To examine the effect of goal setting on inhibitory control, 37 children (Mage = 9.49 ± 0.51) were required to set a performance goal, and 34 children (Mage = 9.44 ± 0.50) were not. Further, to manipulate the presence of feedback, no information about children’s scores was provided in one go/no-go task, whereas scores were revealed in the other. The results revealed a significant interaction between goal setting and feedback. However, the three-way interaction between ADHD-related inattention, feedback, and goal setting was not significant. These results indicate that goal setting improved go/no-go performance only when combined with feedback, regardless of ADHD severity. Furthermore, we found an interaction between ADHD-related inattention and feedback, indicating that informing inattentive children of their scores unexpectedly made them more impulsive. Taken together, feedback alone was, unexpectedly, too demanding for children with severe ADHD-related symptoms, but the combination of goal setting with feedback was effective for improving their inhibitory control. We discuss effective interventions for children with ADHD from the perspective of goal setting and feedback. This work was supported by the 14th Hakuho Research Grant for Child Education of the Hakuho Foundation.
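As a concrete illustration of how such a task is scored, the sketch below computes commission-error and omission-error rates from trial labels and key presses. The trial sequence and the function name are invented for illustration; this is a toy scorer, not the study's actual analysis code.

```python
def score_go_nogo(trials, responses):
    """Score one go/no-go run.

    `trials` is a list of 'go' / 'nogo' labels; `responses` is a parallel
    list of booleans (True = key pressed). A press on a no-go trial is a
    commission error, the conventional index of inhibitory impairment;
    a missed go trial is an omission error, usually linked to inattention.
    """
    commissions = sum(1 for t, r in zip(trials, responses) if t == "nogo" and r)
    omissions = sum(1 for t, r in zip(trials, responses) if t == "go" and not r)
    return {"commission_rate": commissions / trials.count("nogo"),
            "omission_rate": omissions / trials.count("go")}

# Hypothetical eight-trial run: one commission (press on trial 3) and one
# omission (missed press on trial 4).
trials = ["go", "go", "nogo", "go", "nogo", "go", "nogo", "go"]
responses = [True, True, True, False, False, True, False, True]
print(score_go_nogo(trials, responses))
```

Separating the two error types matters here because the abstract's key result concerns inhibition (commissions) specifically, while omissions index the inattention dimension.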

Keywords: attention deficit disorder with hyperactivity, feedback, goal-setting, go/no-go task, inhibitory control

Procedia PDF Downloads 84
781 Preparing Young Adults with Disabilities for Lifelong Inclusivity through a College Level Mentor Program Using Technology: An Exploratory Study

Authors: Jenn Gallup, Onur Kocaoz, Onder Islek

Abstract:

In their pursuit of postsecondary transitions, individuals with disabilities tend to experience academic, behavioral, and emotional challenges to a greater extent than their typically developing peers. These challenges result in lower rates of graduation, employment, independent living, and college participation. The lack of friendships and support systems has a negative impact on individuals with disabilities transitioning to postsecondary settings, including employment, independent living, and university settings. For typically developing college students, establishing friendships and support systems early on is an indicator of likely success and persistence in postsecondary education, employment, and independent living; a deficit in friendships and supports is thus a key deficit for individuals with disabilities as well. To address the specific needs of this group, a mentor program was developed for a transition program held at the university for youth aged 18-21. Pre-service teachers enrolled in the special education program engaged with youth in the transition program in a variety of activities on campus. The mentorship program had two purposes: to help young adults with disabilities who were transitioning to a workforce setting increase their social skills, self-advocacy, supports and friendships, and confidence; and to give their peers without disabilities, enrolled in a secondary special education course as pre-service teachers, the experience of interacting with and forming friendships with peers who have a disability for the purposes of career development. Additionally, according to researchers, mobile technology has created a virtual world of equality and opportunity for a large segment of the population that was once marginalized due to physical and cognitive impairments.
All of the participants had access to smartphones; therefore, the study explored whether technology could serve as a compensatory tool, allowing the young adults with disabilities to do things that would otherwise have been difficult because of their disabilities. All participants were also asked to use technology such as smartphones to communicate beyond the activities and to collaborate in virtual platform games, which would support and promote social skills, soft skills, socialization, and relationships. The findings of this study confirmed that a peer mentorship program that harnessed the power of technology supported outcomes for young adults both with and without disabilities. Mobile technology and virtual game-based platforms were identified as significant contributors to personal, academic, and career growth for both groups. The technology encouraged friendships, provided an avenue for rich social interactions, and increased soft skills. Results will be shared along with the development of the program and potential implications for the field.

Keywords: career outcomes, mentorship, soft-skills, technology, transition

Procedia PDF Downloads 141
780 Identification of Hub Genes in the Development of Atherosclerosis

Authors: Jie Lin, Yiwen Pan, Li Zhang, Zhangyong Xia

Abstract:

Atherosclerosis is a chronic inflammatory disease characterized by the accumulation of lipids, immune cells, and extracellular matrix in the arterial walls. This pathological process can lead to the formation of plaques that obstruct blood flow and trigger cardiovascular diseases such as heart attack and stroke. The underlying molecular mechanisms remain unclear, although many studies have revealed the dysfunction of endothelial cells, the recruitment and activation of monocytes and macrophages, and the production of pro-inflammatory cytokines and chemokines in atherosclerosis. This study aimed to identify hub genes involved in the progression of atherosclerosis and to analyze their biological function in silico, thereby enhancing our understanding of the disease’s molecular mechanisms. Through the analysis of microarray data, we examined gene expression in media and neo-intima from plaques, as well as distant macroscopically intact tissue, across a cohort of 32 hypertensive patients. Initially, 112 differentially expressed genes (DEGs) were identified. Subsequent immune infiltration analysis indicated a predominant presence of 27 immune cell types in the atherosclerosis group, particularly noting an increase in monocytes and macrophages. In the weighted gene co-expression network analysis (WGCNA), 10 modules with a minimum of 30 genes were defined as key modules, with the blue, dark olive green, and sky-blue modules being the most significant. These modules corresponded respectively to monocyte, activated B cell, and activated CD4 T cell gene patterns, revealing a strong morphological-genetic correlation. From these three gene patterns (module morphologies), a total of 2509 key genes (gene significance > 0.2, module membership > 0.8) were extracted. Six hub genes (CD36, DPP4, HMOX1, PLA2G7, PLN2, and ACADL) were then identified by intersecting the 2509 key genes and 102 DEGs with lipid-related genes from the GeneCards database.
The discriminative value of the six hub genes was estimated with a classifier that achieved an area under the curve (AUC) of 0.873 in the ROC plot, indicating excellent efficacy in differentiating the disease group from the control group. Moreover, PCA visualization demonstrated clear separation between the groups based on these six hub genes, suggesting their potential utility as classification features in predictive models. Protein-protein interaction (PPI) analysis highlighted DPP4 as the most interconnected gene. Within the constructed key gene-drug network, 462 drugs were predicted, with ursodeoxycholic acid (UDCA) identified as a potential therapeutic agent for modulating DPP4 expression. In summary, our study identified critical hub genes implicated in the progression of atherosclerosis through comprehensive bioinformatic analyses. These findings not only advance our understanding of the disease but also pave the way for applying similar analytical frameworks and predictive models to other diseases, thereby broadening the potential for clinical applications and therapeutic discoveries.
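To illustrate what an AUC of 0.873 summarizes, here is a minimal rank-based AUC computation: the probability that a randomly chosen disease sample scores higher than a randomly chosen control sample. The "composite expression scores" and group labels below are fabricated toy values, not the study's data.

```python
def roc_auc(scores, labels):
    """AUC via the Mann-Whitney identity: the fraction of (positive,
    negative) pairs where the positive sample scores higher, counting
    ties as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical composite hub-gene scores for 3 disease (1) and 3 control (0)
# samples; one control overlaps the disease range, so the AUC is below 1.
scores = [0.9, 0.7, 0.8, 0.75, 0.3, 0.2]
labels = [1, 1, 1, 0, 0, 0]
print(roc_auc(scores, labels))
```

An AUC of 0.5 would mean the six-gene score is no better than chance at separating the groups, while 1.0 would mean perfect separation; 0.873 sits close to the latter, consistent with the clear PCA separation the abstract reports.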

Keywords: atherosclerosis, hub genes, drug prediction, bioinformatics

Procedia PDF Downloads 39
779 Single Cell RNA Sequencing Operating from Benchside to Bedside: An Interesting Entry into Translational Genomics

Authors: Leo Nnamdi Ozurumba-Dwight

Abstract:

Single-cell genomic analytical systems provide a platform for isolating bulk cells into selected single cells for genomic, proteomic, and related metabolomic studies, enabling systematic investigation of the heterogeneity in diverse and wide pools of cell populations. Single-cell technologies, embracing techniques such as high-parameter flow cytometry, single-cell sequencing, and high-resolution imaging, are playing vital roles in investigations of messenger ribonucleic acid (mRNA) molecules and related gene expression in tracking the nature and course of disease conditions. This entails targeted molecular investigations on individual cells that help us understand cell behaviour and expression, which can be examined for their implications for the health state of patients. One of the vital strengths of single-cell RNA sequencing (scRNA-seq) is its capacity to detect deranged or abnormal cell populations within seemingly homogeneous pooled cells, populations that would have evaded cursory screening of the pooled cell populations of biological samples obtained during diagnostic procedures. Although it analyzes only the single-cell transcriptome, scRNA-seq permits comparison of the transcriptomes of individual cells, which can be evaluated for gene expression patterns that reveal areas of heterogeneity, with applications in pharmaceutical drug discovery and clinical treatment. It is vital to work rigorously through the tools of investigation, from the wet lab to bioinformatic and computational analyses. In the precise steps of scRNA-seq, it is critical to perform thorough and effective isolation of viable single cells from the tissues of interest using dependable techniques (such as FACS) before proceeding to lysis, as this enhances the selection of quality mRNA molecules for subsequent sequencing (such as with a polymerase chain reaction machine).
Interestingly, scRNA-seq can be deployed to analyze various types of biological samples, such as embryos, nervous system tissue, tumour cells, stem cells, lymphocytes, and haematopoietic cells. In haematopoietic cells, it can be used to stratify acute myeloid leukemia patterns in patients, sorting them into cohorts that enable re-modeling of treatment regimens based on stratified presentations. In immunotherapy, it can furnish specialist clinician-immunologists with tools to re-model treatment for each patient, an attribute of precision medicine. Finally, the good predictive attributes of scRNA-seq can help reduce the cost of treatment for patients, thus attracting patients who would otherwise have been discouraged from seeking quality clinical consultation due to perceived high cost. This is a positive paradigm shift in patients’ attitudes toward seeking treatment.
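The first computational steps after sequencing, filtering out low-quality cells and depth-normalizing the remaining transcriptomes so they can be compared, can be sketched as follows. The counts matrix and the QC threshold are invented toy values; real pipelines use dedicated tools, so this is only a sketch of the underlying arithmetic.

```python
import numpy as np

# Toy counts matrix: rows = cells, columns = genes (hypothetical values).
counts = np.array([[100, 50, 0, 10],
                   [  2,  1, 0,  0],   # low-quality cell: too few transcripts
                   [ 80, 40, 5, 20]])

min_counts = 50                        # assumed QC threshold per cell
keep = counts.sum(axis=1) >= min_counts
filtered = counts[keep]

# Scale each retained cell to 10,000 total transcripts, then log-transform:
# a common normalization before comparing transcriptomes across cells.
norm = np.log1p(filtered / filtered.sum(axis=1, keepdims=True) * 1e4)
print(norm.shape)  # one row per retained cell
```

It is exactly this per-cell view that lets an aberrant subpopulation stand out: after normalization, a cell whose expression pattern differs from its neighbours remains visible rather than being averaged into a pooled profile.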

Keywords: immunotherapy, transcriptome, re-modeling, mRNA, scRNA-seq

Procedia PDF Downloads 157
778 Issues and Influences in Academic Choices among Communication Students in Oman

Authors: Bernard Nnamdi Emenyeonu

Abstract:

The study of communication as a fully-fledged discipline in institutions of higher education in the Sultanate of Oman is relatively young. Its evolution is associated with Oman's Renaissance, beginning in 1970, which ushered in an era of modernization in which education, industrialization, the expansion and liberalization of the mass media, the provision of infrastructure, and the promotion of multilateral commercial ventures were considered among the top priorities of national development plans. Communication studies were pioneered by the sole government university, Sultan Qaboos University, in the 1990s, but so far its program is taught in Arabic only. In recognition of the need to produce professionals equipped to fit into the expanding media establishments in the Sultanate as well as the widening global market, the government decided to establish programs in which communication would be taught in English. Under the supervision of the Ministry of Higher Education, six Colleges of Applied Sciences were established in Oman in 2007. These colleges offer a four-year bachelor's degree program in communication studies that comprises six areas of specialization: Advertising, Digital Media, International Communication, Journalism, Media Management, and Public Relations. Over the years, a trend has emerged in which students flock to particular specializations such as Public Relations and Digital Media, while others, such as Advertising and Journalism, continue to draw the fewest students. In some instances, specializations have had to be frozen due to a dire lack of interest among new students. It has also been observed that female students tend to be more biased in their choice of specializations. It was therefore the task of this paper to establish, through a survey and focus group interviews, the factors that influence the choice of communication studies as well as of particular specializations among Omani communication studies undergraduates.
Results of the study show that prior to entering the communication studies program, the majority of students had no idea what the field entailed. Whatever information they had about communication studies was sourced from friends and relatives rather than from more reliable sources such as career fairs or guidance counselors. For the most part, the choice of communication studies as a major was also influenced by factors such as family, friends, and job prospects. Another significant finding is the strong association between gender and choice of specialization within the program, with females flocking to digital media while males tended to prefer public relations. Reasons for specialization preferences dwelt strongly on expectations of a good GPA and the promise of a good salary after graduation. Regardless of gender, most students identified careers in news reporting, public relations, and advertising as unsuitable for females, while teaching and program presentation were identified as the most suitable. Based on these and other results, the paper not only examines the social and cultural factors that are likely to have influenced the respondents' attitudes to communication studies but also discusses the implications for curriculum development and career development in a developing society such as Oman.

Keywords: career choice, communication specialization, media education, Oman

Procedia PDF Downloads 216
777 A Socio-Spatial Analysis of Financialization and the Formation of Oligopolies in Brazilian Basic Education

Authors: Gleyce Assis Da Silva Barbosa

Abstract:

In recent years, we have witnessed vertiginous growth among large education companies. Backed by national and global capital, these companies expand both through consolidated physical networks, in the form of branches spread across the territory, and through institutional networks, such as business networks formed by mergers, acquisitions, the creation of new companies, and influence. They do this by incorporating small, medium, and large schools and universities, teaching systems, and other products and services. They are also able to weave their webs, directly or indirectly, into philanthropic circles, limited partnerships, family businesses, and even public education through various mechanisms of outsourcing, privatization, and the commercialization of products for the sector. Although the growth of these groups in basic education seems a recent phenomenon in peripheral countries such as Brazil, their diffusion is closely linked to higher education conglomerates and other sectors of the economy forming oligopolies, which began to expand in the 1990s with strong state support and through political reforms that redefined the state's role, transforming it into a fundamental agent in the formation of guidelines that boosted the incorporation of neoliberal logic. This expansion occurred through the objectification of education, commodifying it and transforming students into consumer clients. Financial power combined with the neo-liberalization of state public policies allowed the profusion of social exclusion, the increase of individuals without access to basic services, deindustrialization, automation, capital volatility, and the indetermination of the economy; in addition, this process causes capital to be valued and devalued at rates never seen before, which together generates various impacts, such as the precariousness of work. Understanding the connection between these processes, which engender the economy, allows us to see their consequences in labor relations and in the territory.
In this sense, it is necessary to analyze the geographic-economic context and the role of the agents facilitating this process, which can give us clues about the ongoing transformations and the direction of education on the national and even international scene, since this process is linked to the multiple scales of financial globalization. The present research therefore has the general objective of analyzing the socio-spatial impacts of financialization and the formation of oligopolies in Brazilian basic education. The methodology comprised a survey of laws, data, and public policies on the subject; a survey of information on the global and national companies operating in Brazilian basic education, drawn in part from the data these companies make available on investor-relations websites; and a mapping of the expansion of educational oligopolies using public data on the location of schools. With this, the research intends to provide information about the ongoing commodification process in the country and to discuss the consequences of the oligopolization of education, considering the impacts that financialization can have on teaching work.

Keywords: financialization, oligopolies, education, Brazil

Procedia PDF Downloads 43
776 Isolation and Structural Elucidation of 20-Hydroxyecdysone from Vitex doniana Sweet Stem Bark

Authors: Mustapha A. Tijjani, Fanna I. Abdulrahman, Irfan Z. Khan, Umar K. Sandabe, Cong Li

Abstract:

After collection and identification, the air-dried sample of V. doniana was extracted with ethanol and further partitioned with chloroform, ethyl acetate, and n-butanol. The ethanolic extract (11.9 g) was fractionated on silica gel by accelerated column chromatography using solvents such as n-hexane, ethyl acetate, and methanol. Eluent fractions (150 ml aliquots) were collected and monitored with thin layer chromatography, and fractions with similar Rf values from the same solvent system were pooled together. Phytochemical tests of all the fractions were performed using standard procedures. Complete elution yielded 48 fractions (150 ml/fraction), which were pooled into 24 fractions based on Rf values; these were further recombined into 12 fractions, again on the basis of Rf values, and coded Vd1 to Vd12. Vd8 was further eluted with ethyl acetate and methanol and gave fourteen sub-fractions, Vd8-a to Vd8-m. Fraction Vd8-a (56 mg) gave a white crystalline compound coded V1. It was checked on TLC, observed under an ultraviolet lamp, and found to give a single spot with an Rf value of 0.433. The melting point, determined with a Gallenkamp capillary melting point apparatus, was 241-243°C (uncorrected). Characterization of the isolated compound V1 was done using FT-infrared spectroscopy, 1H NMR, 13C NMR (1D and 2D), and HRESI-MS. The IR spectrum of compound V1 shows prominent peaks corresponding to O-H stretching (3365 cm-1) and C=O (1652 cm-1), suggesting that the functional moieties in compound V1 include carbonyl and hydroxyl groups. The 1H NMR (400 MHz) spectrum of compound V1 in DMSO-d6 displayed five singlet signals at δ 0.72 (3H, s, H-18), 0.79 (3H, s, H-19), 1.03 (3H, s, H-21), 1.04 (3H, s, H-26), and 1.06 (3H, s, H-27), each integrating for three protons, indicating the five methyl groups present in the compound.
It further showed a broad singlet at δ 5.58 integrating for one proton, due to an olefinic H-atom adjacent to the carbonyl carbon atom. Three signals at δ 3.10 (d, J = 9.0 Hz, H-22), 3.59 (m, 1H, 2H-a), and 3.72 (m, 1H, 3H-e), each integrating for one proton, are due to oxymethine protons, indicating that three oxymethine H-atoms are present in the compound. All these signals are characteristic of ecdysteroid skeletons. The 13C-NMR spectrum showed the presence of 27 carbon atoms, suggesting a steroid skeleton. The DEPT-135 experiment showed the presence of five CH3, eight CH2, and seven CH groups, and seven quaternary C-atoms. The molecular formula was established as C27H44O7 by high-resolution electrospray ionization mass spectroscopy (HRESI-MS) in positive ion mode, m/z 481.3179. The mass spectrum also showed peaks at 463, 445, and 427, corresponding to losses of one, two, and three water molecules, characteristic of the ecdysterone skeleton reported in the literature. Based on the spectral analysis (1H NMR, 13C NMR, DEPT, HMQC, IR, HRESI-MS), compound V1 is concluded to have an ecdysteroid skeleton and conforms to 2β,3β,14α,20R,22R,25-hexahydroxy-5β-cholest-7-ene-6-one, or 2,3,14,20,22,25-hexahydroxycholest-7-ene-6-one, commonly known as 20-hydroxyecdysone.
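The reported m/z and water-loss fragments can be checked arithmetically from the molecular formula. The sketch below uses standard monoisotopic isotope masses; the computed [M+H]+ of about 481.316 agrees with the observed 481.3179 to within a few mDa, and subtracting one, two, and three waters reproduces the 463/445/427 fragment series.

```python
# Monoisotopic masses (u) of the most abundant isotopes.
MASS = {"C": 12.0, "H": 1.007825, "O": 15.994915}
PROTON = 1.007276  # mass added in positive-ion ESI, [M+H]+

def monoisotopic(formula):
    """Sum exact isotope masses for a composition dict, e.g. C27H44O7."""
    return sum(MASS[el] * n for el, n in formula.items())

m = monoisotopic({"C": 27, "H": 44, "O": 7})   # neutral 20-hydroxyecdysone
mh = m + PROTON                                # protonated molecular ion
water = monoisotopic({"H": 2, "O": 1})
losses = [round(mh - k * water, 1) for k in (1, 2, 3)]
print(round(mh, 4), losses)  # 481.316 [463.3, 445.3, 427.3]
```

This kind of back-calculation is how a high-resolution m/z value is tied to a unique elemental composition: only C27H44O7 (plus a proton) lands this close to the observed mass.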

Keywords: vitex, phytochemical, purification, isolation, chromatography, spectroscopy

Procedia PDF Downloads 333
775 Empowering Indigenous Epistemologies in Geothermal Development

Authors: Te Kīpa Kēpa B. Morgan, Oliver W. Mcmillan, Dylan N. Taute, Tumanako N. Fa'aui

Abstract:

Epistemologies are ways of knowing. Indigenous Peoples are aware that they do not perceive and experience the world in the same way as others. It is therefore important, when empowering Indigenous epistemologies such as that of the New Zealand Māori, to also be able to represent a scientific understanding within the same analysis. A geothermal development assessment tool has been developed by adapting the Mauri Model Decision Making Framework. Mauri is a metric capable of representing the change in the life-supporting capacity of things and collections of things. The Mauri Model is a method of grouping mauri indicators as dimension averages in order to allow holistic assessment and to conduct sensitivity analyses of the effect of worldview bias. R Shiny is the coding platform used for this Vision Mātauranga research, which has created an expert decision support tool (DST) that combines a stakeholder assessment of worldview bias with an impact assessment of mauri-based indicators to determine the sustainability of proposed geothermal development. The initial intention was to develop guidelines for quantifying mātauranga Māori impacts related to geothermal resources. To do this, three typical scenarios were considered: a resource owner wishing to assess the potential for new geothermal development; another party wishing to assess the environmental and cultural impacts of the proposed development; and an assessment focusing on the holistic sustainability of the resource, including its surface features. Indicator sets and measurement thresholds were developed for the considerations necessary in each assessment context, and these have been grouped to represent four mauri dimensions that mirror the four well-being criteria used for resource management in Aotearoa, New Zealand. Two case studies have been conducted to test the DST's suitability for quantifying mātauranga Māori and other biophysical factors related to a geothermal system.
This involved estimating mauri0meter values for physical features such as temperature, flow rate, frequency, and colour, and developing indicators to quantify qualitative observations about the geothermal system made by Māori. A retrospective analysis was then conducted to verify different understandings of the geothermal system. The case studies found that the expert DST is useful for geothermal development assessment, especially where hapū (indigenous sub-tribal groupings) are conflicted regarding the benefits and disadvantages of their own and others’ geothermal developments. These results have been supplemented with evaluations of the cumulative impacts of geothermal developments experienced by different parties, obtained by integrating the time history of the expert DST's worldview-bias-weighted mauri0meter score. Cumulative impacts represent the change in resilience or potential of geothermal systems, which directly assists with the holistic interpretation of change from an Indigenous Peoples’ perspective.
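The dimension-average-plus-bias-weighting arithmetic of the Mauri Model can be sketched as follows. The indicator values and both weighting profiles are invented for illustration (the -2 to +2 scale mirrors the mauri0meter, but none of these numbers come from the case studies), and the point of the sensitivity analysis is visible in how the same indicators yield different overall scores under different worldview weightings.

```python
def mauri_score(indicators, bias_weights):
    """Holistic assessment in the Mauri Model style: indicators scored on
    the -2 (mauri destroyed) to +2 (mauri enhanced) scale are averaged
    within each dimension, then combined with a stakeholder's
    worldview-bias weights (assumed to sum to 1)."""
    dimension_avgs = {dim: sum(vals) / len(vals)
                      for dim, vals in indicators.items()}
    return sum(dimension_avgs[dim] * bias_weights[dim]
               for dim in dimension_avgs)

# Hypothetical indicator scores grouped into the four well-being dimensions.
indicators = {
    "environment": [1, -1, 0],   # e.g. surface features, discharge, reinjection
    "culture":     [2, 1],
    "society":     [0, 1],
    "economy":     [2, 2, 1],
}
# Two stakeholders weight the four dimensions differently (worldview bias).
hapu_weight = {"environment": 0.4, "culture": 0.3, "society": 0.2, "economy": 0.1}
investor_weight = {"environment": 0.1, "culture": 0.1, "society": 0.2, "economy": 0.6}

print(mauri_score(indicators, hapu_weight), mauri_score(indicators, investor_weight))
```

Running the same indicator set through several bias profiles is exactly the sensitivity analysis the framework describes: it shows how much of a disagreement between parties stems from worldview weighting rather than from the indicator evidence itself.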

Keywords: decision support tool, holistic geothermal assessment, indigenous knowledge, mauri model decision-making framework

Procedia PDF Downloads 168
774 Inflation and Deflation of Aircraft's Tire with Intelligent Tire Pressure Regulation System

Authors: Masoud Mirzaee, Ghobad Behzadi Pour

Abstract:

An aircraft tire is designed to tolerate extremely heavy loads for a short duration. The number of tires increases with the weight of the aircraft, as the load needs to be distributed more evenly. Generally, aircraft tires work at high pressure, up to 200 psi (14 bar; 1,400 kPa) for airliners and higher for business jets. Tire assemblies for most aircraft categories carry a recommended charge of compressed nitrogen that supports the aircraft’s weight on the ground, provides a mechanism for controlling the aircraft during taxi, takeoff, and landing, and supplies traction for braking. Accurate tire pressure is a key factor that enables tire assemblies to perform reliably under high static and dynamic loads. Concerning ambient temperature change, in conditions where the temperature differs between the origin and destination airports, tire pressure should be adjusted and inflated to the specified operating pressure at the colder airport. This adjustment, which supersedes the normal tire over-inflation limit of 5 percent at constant ambient temperature, is required so that the inflation pressure remains sufficient to support the load of a specified aircraft configuration. Without this adjustment, a tire assembly would be significantly under- or over-inflated at the destination. Due to the increase of human errors in the aviation industry, exorbitant costs are imposed on airlines for providing consumable parts such as aircraft tires. An intelligent system that adjusts aircraft tire pressure based on the weight, load, temperature, and weather conditions of the origin and destination airports could significantly reduce aircraft maintenance costs and fuel consumption and further mitigate the environmental issues related to air pollution. An intelligent tire pressure regulation system (ITPRS) contains a processing computer, a nitrogen bottle pressurized to 1,800 psi, and distribution lines.
The nitrogen bottle’s inlet and outlet valves are installed in the main landing gear area and are connected through nitrogen lines to the main-wheel and nose-wheel assemblies. Nitrogen is controlled and monitored by a computer, which adjusts pressure according to calculations based on the received parameters, including the temperatures of the origin and destination airports, the weight of cargo and passengers, fuel quantity, and wind direction. Correct tire inflation and deflation are essential in assuring that tires can withstand the centrifugal forces and heat of normal operations, with an adequate margin of safety for unusual operating conditions such as rejected takeoffs and hard landings. ITPRS will increase the performance of the aircraft in all phases of takeoff, landing, and taxi. Moreover, this system will reduce human errors, material consumption, and the stresses imposed on the aircraft body.
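The temperature correction at the heart of such a system is essentially Gay-Lussac's law for a fixed-volume tire: at constant volume, pressure scales with absolute temperature. The sketch below is an illustrative calculation only, not the actual ITPRS algorithm; the function names and the 200 psi example are assumptions, while the 5 percent tolerance follows the over-inflation limit mentioned above.

```python
# Illustrative sketch of the temperature correction behind tire-pressure
# regulation: for a fixed tire volume, P2/P1 = T2/T1 in absolute units
# (Gay-Lussac's law). Not the actual ITPRS algorithm.

def pressure_at_destination(p_origin_psi, t_origin_c, t_dest_c):
    """Predict tire pressure after an ambient temperature change (Celsius in)."""
    t1 = t_origin_c + 273.15   # convert to kelvin
    t2 = t_dest_c + 273.15
    return p_origin_psi * t2 / t1

def adjustment_needed(p_rated_psi, p_predicted_psi, tolerance=0.05):
    """Flag pressures outside the 5% band around the rated operating pressure."""
    return abs(p_predicted_psi - p_rated_psi) / p_rated_psi > tolerance

# A 200 psi airliner tire serviced at 30 degC, arriving where it is -10 degC:
p2 = pressure_at_destination(200.0, 30.0, -10.0)
print(round(p2, 1), adjustment_needed(200.0, p2))   # roughly 173.6 psi -> adjust
```

The roughly 13 percent pressure drop in this example far exceeds the 5 percent limit, which is why servicing to the specified pressure at the colder airport is required.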

Keywords: avionic system, improve efficiency, ITPRS, human error, reduced cost, tire pressure

Procedia PDF Downloads 220
773 Optimization of the Jatropha curcas Supply Chain as a Criteria for the Implementation of Future Collection Points in Rural Areas of Manabi-Ecuador

Authors: Boris G. German, Edward Jiménez, Sebastián Espinoza, Andrés G. Chico, Ricardo A. Narváez

Abstract:

The unique flora and fauna of the Galapagos Islands have leveraged tourism-driven growth in the islands. Nonetheless, such development is energy-intensive and requires thousands of gallons of diesel each year for thermoelectric electricity generation. The necessary transport of fossil fuels from the continent has generated oil spills and damage to the fragile ecosystem of the islands. The Zero Fossil Fuels initiative for the Galapagos, proposed by the Ecuadorian government as an alternative to reduce the use of fossil fuels in the islands, considers the replacement of diesel in thermoelectric generators by Jatropha curcas vegetable oil. However, the Jatropha oil supply cannot yet entirely cover the demand for electricity generation in Galapagos. Within this context, the present work aims to provide an optimization model that can be used as a selection criterion for approving new Jatropha curcas collection points in rural areas of Manabi, Ecuador. For this purpose, existing Jatropha collection points in Manabi were grouped under three regions: north (7 collection points), center (4 collection points), and south (9 collection points). Field work was carried out in every region in order to characterize the collection points, establish the local Jatropha supply, and determine transportation costs. Data collection was complemented using GIS software, and an objective function was defined in order to determine the profit associated with Jatropha oil production. The market prices of both Jatropha oil and residual cake were considered for the total revenue, whereas the Jatropha price, transportation, and oil-extraction costs were considered for the total cost. The tonnes of Jatropha fruit and seed transported from collection points to the extraction plant were considered as variables. The maximum and minimum amounts of Jatropha collected from each region constrained the optimization problem.
The supply chain was optimized using linear programming in order to maximize profit. Finally, a sensitivity analysis was performed in order to find a profit-based criterion for the acceptance of future collection points in Manabi. The maximum profit reached a value of $4,616.93 per year, which represented a total collection of 62.3 tonnes of Jatropha per year. The northern region of Manabi had the biggest collection share (69%), followed by the southern region (17%). The criteria for accepting new Jatropha collection points in the rural areas of Manabi can be defined by the current maximum profit of the zone and by the variation in profit when collection points are removed one at a time. The definition of new feasible collection points plays a key role in the supply chain associated with Jatropha oil production. Therefore, a mathematical model that assists decision makers in establishing new collection points while assuring profitability contributes to guaranteeing a continued Jatropha oil supply for Galapagos and sustained economic growth in the rural areas of Ecuador.
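A profit-maximization model of this shape can be sketched with `scipy.optimize.linprog`. This is a minimal illustration in the spirit of the model described, not the authors' actual formulation: the per-tonne profits, regional supply bounds, and capacity value below are invented for demonstration (only the three-region structure and the tonnes-per-year framing come from the abstract).

```python
# Minimal sketch of a regional profit-maximization LP like the one described
# above, using scipy.optimize.linprog. All coefficients are hypothetical.
from scipy.optimize import linprog

# Decision variables: tonnes collected per year from north, center, south.
# profit per tonne = (oil + residual-cake revenue) minus (fruit price +
# transport + extraction cost); invented values below.
profit_per_tonne = [80.0, 60.0, 70.0]

# linprog minimizes, so negate the profit coefficients.
c = [-p for p in profit_per_tonne]

# Per-region (min, max) supply bounds in tonnes/year, also hypothetical.
bounds = [(5, 43), (2, 8), (3, 11)]

# Plant capacity constraint: total collection <= 62.3 tonnes/year.
A_ub = [[1, 1, 1]]
b_ub = [62.3]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x, -res.fun)   # optimal tonnes per region and the maximum profit
```

A sensitivity analysis like the one in the abstract amounts to re-solving this LP with one collection point (or region) removed and comparing the optimal profits.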

Keywords: collection points, Jatropha curcas, linear programming, supply chain

Procedia PDF Downloads 411
772 Augmented and Virtual Reality Experiences in Plant and Agriculture Science Education

Authors: Sandra Arango-Caro, Kristine Callis-Duehl

Abstract:

The Education Research and Outreach Lab at the Donald Danforth Plant Science Center established the Plant and Agriculture Augmented and Virtual Reality Learning Laboratory (PAVRLL) to promote science education through professional development, school programs, internships, and outreach events. Professional development is offered to high school and college science and agriculture educators on the use and applications of zSpace and Oculus platforms. Educators learn to use, edit, or create lesson plans in the zSpace platform that are aligned with the Next Generation Science Standards. They also learn to use virtual reality experiences created by the PAVRLL available in Oculus (e.g. The Soybean Saga). Using a cost-free loan rotation system, educators can bring the AVR units to the classroom and offer AVR activities to their students. Each activity has user guides and activity protocols for both teachers and students. The PAVRLL also offers activities for 3D plant modeling. High school students work in teams of art-, science-, and technology-oriented students to design and create 3D models of plant species that are under research at the Danforth Center and present their projects at scientific events. Those 3D models are open access through the zSpace platform and are used by PAVRLL for professional development and the creation of VR activities. Both teachers and students acquire knowledge of plant and agriculture content and real-world problems, gain skills in AVR technology, 3D modeling, and science communication, and become more aware and interested in plant science. Students that participate in the PAVRLL activities complete pre- and post-surveys and reflection questions that evaluate interests in STEM and STEM careers, students’ perceptions of three design features of biology lab courses (collaboration, discovery/relevance, and iteration/productive failure), plant awareness, and engagement and learning in AVR environments. 
The PAVRLL was established in the fall of 2019, and since then, it has trained 15 educators, three of whom will implement the AVR programs in the fall of 2021. Seven students have worked on the 3D plant modeling activity through a virtual internship. Due to the COVID-19 pandemic, the number of teachers trained and classroom implementations have been very limited. It is expected that in the fall of 2021, students will come back to the schools in person, and by the spring of 2022, the PAVRLL activities will be fully implemented. This will allow the collection of enough data on student assessments to provide insights into the benefits and best practices for the use of AVR technologies in the classroom. The PAVRLL uses cutting-edge educational technologies to promote science education, assesses their benefits, and will continue its expansion. Currently, the PAVRLL is applying for grants to create its own virtual labs where students can experience authentic research using real Danforth research data, based on programs the Education Lab has already used in classrooms.

Keywords: assessment, augmented reality, education, plant science, virtual reality

Procedia PDF Downloads 152
771 External Validation of Established Pre-Operative Scoring Systems in Predicting Response to Microvascular Decompression for Trigeminal Neuralgia

Authors: Kantha Siddhanth Gujjari, Shaani Singhal, Robert Andrew Danks, Adrian Praeger

Abstract:

Background: Trigeminal neuralgia (TN) is a heterogeneous pain syndrome characterised by short paroxysms of lancinating facial pain in the distribution of the trigeminal nerve, often triggered by usually innocuous stimuli. TN has a low prevalence of less than 0.1%, of which 80% to 90% is caused by compression of the trigeminal nerve by an adjacent artery or vein. The root entry zone of the trigeminal nerve is most sensitive to neurovascular conflict (NVC), causing dysmyelination. Whilst microvascular decompression (MVD) is an effective treatment for TN with NVC, not all patients achieve long-term pain relief. Pre-operative scoring systems by Panczykowski and Hardaway have been proposed but have not been externally validated. These pre-operative scoring systems are composite scores calculated from the subtype of TN, the presence and degree of neurovascular conflict, and the response to medical treatments. There is discordance between neurosurgeons and radiologists in the assessment of NVC identified on pre-operative magnetic resonance imaging (MRI). To the best of our knowledge, the prognostic impact for MVD of this difference of interpretation has not previously been investigated in the form of a composite scoring system such as those suggested by Panczykowski and Hardaway. Aims: This study aims to identify prognostic factors and externally validate the scoring systems proposed by Panczykowski and Hardaway for TN. A secondary aim is to investigate the prognostic difference between a neurosurgeon's interpretation of NVC on MRI and a radiologist’s. Methods: This retrospective cohort study included 95 patients who underwent de novo MVD in a single neurosurgical unit in Melbourne. Data were recorded from patients’ hospital records and neurosurgeons’ correspondence from perioperative clinic reviews.
Patient demographics, type of TN, distribution of TN, response to carbamazepine, and the neurosurgeon's and radiologist's interpretations of NVC on MRI were clearly described prospectively and preoperatively in the correspondence. The scoring systems published by Panczykowski et al. and Hardaway et al. were used to determine composite scores, which were compared with the recurrence of TN recorded during follow-up over one year. Categorical data were analysed using Pearson chi-square testing. Independent numerical and nominal data were analysed with logistic regression. Results: Logistic regression showed that a Panczykowski composite score of greater than 3 points was associated with a higher likelihood of pain-free outcome 1 year post-MVD, with an OR of 1.81 (95% CI 1.41-2.61, p=0.032). The composite score using the neurosurgeon’s impression of NVC had an OR of 2.96 (95% CI 2.28-3.31, p=0.048). A Hardaway composite score of greater than 2 points was associated with a higher likelihood of pain-free outcome 1 year post-MVD, with an OR of 3.41 (95% CI 2.58-4.37, p=0.028). The composite score using the neurosurgeon’s impression of NVC had an OR of 3.96 (95% CI 3.01-4.65, p=0.042). Conclusion: The composite scores developed by Panczykowski and Hardaway were validated for the prediction of response to MVD in TN. A composite score based on the neurosurgeon’s interpretation of NVC on MRI, when compared with the radiologist’s, had a greater correlation with pain-free outcomes 1 year post-MVD.
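For a single dichotomized composite score (above vs. below threshold) against a binary outcome (pain-free vs. not), an odds ratio like those reported can equivalently be read off a 2x2 table, with a Wald 95% confidence interval on the log scale. The sketch below uses invented cell counts; the study's own ORs came from logistic regression on the full data.

```python
# Sketch of an odds ratio with a Wald 95% CI from a 2x2 table of composite
# score (> threshold vs. <=) against 1-year pain-free outcome.
# Cell counts are invented for illustration only.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a, b = pain-free / not pain-free with score above threshold;
    c, d = pain-free / not pain-free with score at or below threshold."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of ln(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(40, 10, 25, 20)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

An OR above 1 whose CI excludes 1 indicates that crossing the score threshold is associated with better odds of a pain-free outcome, which is the pattern the abstract reports for both scoring systems.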

Keywords: de novo microvascular decompression, neurovascular conflict, prognosis, trigeminal neuralgia

Procedia PDF Downloads 56
770 Rehabilitation of Orthotropic Steel Deck Bridges Using a Modified Ortho-Composite Deck System

Authors: Mozhdeh Shirinzadeh, Richard Stroetmann

Abstract:

An orthotropic steel deck bridge consists of a deck plate, longitudinal stiffeners under the deck plate, cross beams, and main longitudinal girders. Due to their several advantages, orthotropic steel deck (OSD) systems have been utilized in many bridges worldwide. The significant feature of this structural system is its high load-bearing capacity combined with relatively low dead weight. In addition, cost efficiency and the possibility of rapid field erection have made the orthotropic steel deck a popular type of bridge worldwide. However, OSD bridges are highly susceptible to fatigue damage. Their large number of welded joints can be regarded as the main weakness of this system. This problem is particularly evident in bridges built before 1994, when fatigue design criteria had not yet been introduced in the bridge design codes. Recently, an orthotropic-composite slab (OCS) for road bridges has been experimentally and numerically evaluated and developed at Technische Universität Dresden as part of AIF-FOSTA research project P1265. The results of the project have provided a solid foundation for the design and analysis of orthotropic-composite decks with dowel strips as a durable alternative to conventional steel or reinforced concrete decks. In continuation, building on the achievements of that project, the application of a modified ortho-composite deck to an existing typical OSD bridge is investigated. Composite action is obtained by using rows of dowel strips in a clothoid (CL) shape. With regard to the Eurocode criteria for the different fatigue detail categories of an OSD bridge, the effect of the proposed modification approach is assessed. Moreover, a numerical parametric study is carried out utilizing finite element software to determine the impact of different variables, such as the size and arrangement of dowel strips, the application of transverse or longitudinal rows of dowel strips, and local wheel loads.
For verification of the simulation technique, experimental results from a segment of an OCS deck tested in project P1265 are used. Fatigue assessment is performed based on the latest draft of Eurocode 1993-2 (2024) for the most probable detail categories (hot spots) reported in previous statistical studies. Then, an analytical comparison is provided between the typical orthotropic steel deck and the modified ortho-composite deck bridge in terms of fatigue issues and durability. The load-bearing capacity of the bridge, the critical deflections, and the composite behavior are also evaluated and compared. The results give a comprehensive overview of the efficiency of the rehabilitation method considering the required design service life of the bridge. Moreover, the proposed approach is assessed with regard to the construction method, details, and practical aspects, as well as from an economic point of view.

Keywords: composite action, fatigue, finite element method, steel deck, bridge

Procedia PDF Downloads 57
769 Symptom Burden and Quality of Life in Advanced Lung Cancer Patients

Authors: Ammar Asma, Bouafia Nabiha, Dhahri Meriem, Ben Cheikh Asma, Ezzi Olfa, Chafai Rim, Njah Mansour

Abstract:

Despite recent advances in the treatment of lung cancer patients, the prognosis remains poor. Information is limited regarding the health-related quality of life (QOL) status of advanced lung cancer patients. The purposes of this study were to assess patient-reported symptom burden, to measure QOL, and to identify determinant factors associated with QOL. Materials/Methods: A cross-sectional study of 60 patients was carried out over a period of three months, from 1 February to 30 April 2016. Patients were recruited in two health care departments: a pneumology department in a university hospital in Sousse and an oncology unit in a university hospital in Kairouan. Patients with advanced-stage (III and IV) lung cancer who were hospitalized or admitted to the day hospital were recruited by convenience sampling. We used a questionnaire administered and completed by a trained interviewer. This questionnaire is composed of three parts: demographic, clinical, and therapeutic information; QOL measurements based on the SF-36 questionnaire; and symptom burden measurement using the Lung Cancer Symptom Scale (LCSS). To assess the correlation between symptom burden and QOL, we compared the scores of the two scales pairwise using the Pearson correlation. To identify factors influencing QOL in lung cancer, a univariate statistical analysis and then a stepwise backward approach, retaining the variables with p < 0.2, were carried out to determine the association between SF-36 scores and the different variables. Results: During the study period, 60 patients consented to complete the symptom and quality of life questionnaires at a single time point (72% were recruited from the day hospital). The majority of patients were male (88%); age ranged from 21 to 79 years, with a mean of 60.5 years. Among the patients, 48 (80%) were diagnosed as having non-small cell lung carcinoma (NSCLC). Approximately 60% (n=36) of patients were in stage IV, 25% in stage IIIa, and 15% in stage IIIb.
For symptom burden, the symptom burden index was 43.07 (standard deviation [SD], 21.45). Loss of appetite and fatigue were rated as the most severe symptoms, with mean scores (SD) of 49.6 (25.7) and 58.2 (15.5), respectively. The average overall score of the SF-36 was 39.3 (SD, 15.4). The physical and emotional limitations had the lowest scores. Univariate analysis showed that the factors which negatively influence QOL were: married status (p<0.03), smoking cessation after diagnosis (p<0.024), LCSS total score (p<0.001), LCSS symptom burden index (p<0.001), fatigue (p<0.001), loss of appetite (p<0.001), dyspnea (p<0.001), pain (p<0.002), and metastatic stage (p<0.01). In multivariate analysis, unemployment (p<0.014), smoking cessation after diagnosis (p<0.013), consumption of analgesics (p<0.002), and the indication of analgesic radiotherapy (p<0.001) were revealed as independent determinants of QOL. The correlation between total LCSS scores and the total and individual domain SF-36 scores was significant (p<0.001); the higher the total LCSS score, the poorer the QOL. Conclusion: Integrated supportive care for lung cancer patients would allow better control of symptoms and promote the QOL of these patients.
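The correlation analysis reported here (higher symptom burden, poorer QOL) can be illustrated with `scipy.stats.pearsonr`. The paired scores below are fabricated for illustration and are not the study's data; only the direction of the relationship mirrors the abstract.

```python
# Illustrative Pearson correlation between LCSS total scores (higher =
# heavier symptom burden) and SF-36 totals (higher = better QOL).
# The paired scores are invented, not study data.
from scipy.stats import pearsonr

lcss_total = [20, 35, 40, 55, 60, 72, 80]   # symptom burden
sf36_total = [62, 55, 50, 41, 35, 30, 22]   # quality of life

r, p = pearsonr(lcss_total, sf36_total)
print(round(r, 3), p)   # strongly negative r: burden up, QOL down
```

A significant negative r, as in the study, means the two scales move in opposite directions across patients.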

Keywords: quality of life, lung cancer, metastasis, symptoms burden

Procedia PDF Downloads 368
768 Viscoelastic Behavior of Human Bone Tissue under Nanoindentation Tests

Authors: Anna Makuch, Grzegorz Kokot, Konstanty Skalski, Jakub Banczorowski

Abstract:

Cancellous bone is a porous composite with a hierarchical structure and anisotropic properties. Biological tissue is considered to be a viscoelastic material, but many studies based on the nanoindentation method have focused on its elasticity and microhardness. However, the response of many organic materials depends not only on the load magnitude but also on its duration and time course. The depth-sensing indentation (DSI) technique has been used for the examination of creep in polymers, metals, and composites. In indentation tests on biological samples, the mechanical properties are most frequently determined for animal tissues (of an ox, a monkey, a pig, a rat, a mouse, a bovine). However, there are rare reports of studies of bone viscoelastic properties at the microstructural level. Various rheological models have been used to describe the viscoelastic behaviour of bone identified in the indentation process (e.g., the Burgers model, a linear model, the two-dashpot Kelvin model, and the Maxwell-Voigt model). The goal of the study was to determine the influence of the creep effect on the mechanical properties of human cancellous bone in indentation tests. The aim of this research was also the assessment of the material properties of bone structures, considering the energy aspects of the indenter load-depth curve obtained in the loading/unloading cycle. It was considered how different holding times affected the results within trabecular bone. As a result, indentation creep (CIT), hardness (HM, HIT, HV), and elasticity were obtained. Human trabecular bone samples (n=21; mean age 63±15 yrs) from femoral heads replaced during hip alloplasty were removed and drained of alcohol 1 h before the experiment. The indentation process was conducted using a CSM Microhardness Tester equipped with a Vickers indenter. Each sample was indented 35 times (7 times for each of 5 different hold times: t1=0.1 s, t2=1 s, t3=10 s, t4=100 s, and t5=1000 s). The indenter was advanced at a rate of 10 mN/s to 500 mN.
The Oliver-Pharr method was used in the calculation process. The increase of hold time is associated with a decrease of the hardness parameters (HIT(t1)=418±34 MPa, HIT(t2)=390±50 MPa, HIT(t3)=313±54 MPa, HIT(t4)=305±54 MPa, HIT(t5)=276±90 MPa) and elasticity (EIT(t1)=7.7±1.2 GPa, EIT(t2)=8.0±1.5 GPa, EIT(t3)=7.0±0.9 GPa, EIT(t4)=7.2±0.9 GPa, EIT(t5)=6.2±1.8 GPa), as well as with an increase of the elastic (Welastic(t1)=4.11·10⁻⁷±4.2·10⁻⁸ Nm, Welastic(t2)=4.12·10⁻⁷±6.4·10⁻⁸ Nm, Welastic(t3)=4.71·10⁻⁷±6.0·10⁻⁹ Nm, Welastic(t4)=4.33·10⁻⁷±5.5·10⁻⁹ Nm, Welastic(t5)=5.11·10⁻⁷±7.4·10⁻⁸ Nm) and inelastic (Winelastic(t1)=1.05·10⁻⁶±1.2·10⁻⁷ Nm, Winelastic(t2)=1.07·10⁻⁶±7.6·10⁻⁸ Nm, Winelastic(t3)=1.26·10⁻⁶±1.9·10⁻⁷ Nm, Winelastic(t4)=1.56·10⁻⁶±1.9·10⁻⁷ Nm, Winelastic(t5)=1.67·10⁻⁶±2.6·10⁻⁷ Nm) work of the material. The indentation creep increased logarithmically (R²=0.901) with increasing hold time: CIT(t1)=0.08±0.01%, CIT(t2)=0.7±0.1%, CIT(t3)=3.7±0.3%, CIT(t4)=12.2±1.5%, CIT(t5)=13.5±3.8%. A pronounced impact of the creep effect on the mechanical properties of human cancellous bone was observed in the experimental studies. While the elastic-inelastic description, and thus the Oliver-Pharr method of data analysis, may apply in a few limited cases, most biological tissues do not exhibit elastic-inelastic indentation responses. The viscoelastic properties of tissues may play a significant role in remodelling. This aspect is still under analysis and numerical simulation. Acknowledgements: The presented results are part of a research project funded by the National Science Centre (NCN), Poland, no. 2014/15/B/ST7/03244.
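The logarithmic creep trend can be reproduced directly from the mean CIT values listed above: fitting CIT = a + b·ln(t_hold) over the five hold times by ordinary least squares recovers R² ≈ 0.901, matching the value reported. The fit below uses only the means (ignoring the stated uncertainties), so it is a sketch of the trend rather than the authors' exact regression.

```python
# Reproducing the logarithmic creep trend from the mean CIT values reported
# above: least-squares fit of CIT = a + b*ln(t_hold) over five hold times.
import math

t_hold = [0.1, 1, 10, 100, 1000]        # hold times, s
cit    = [0.08, 0.7, 3.7, 12.2, 13.5]   # mean indentation creep, %

x = [math.log(t) for t in t_hold]
n = len(x)
mx, my = sum(x) / n, sum(cit) / n
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, cit))
sxx = sum((xi - mx) ** 2 for xi in x)
b = sxy / sxx              # slope, % per unit ln(s)
a = my - b * mx            # intercept

ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, cit))
ss_tot = sum((yi - my) ** 2 for yi in cit)
r2 = 1 - ss_res / ss_tot
print(round(b, 2), round(r2, 3))   # r2 comes out at 0.901, as reported
```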

Keywords: bone, creep, indentation, mechanical properties

Procedia PDF Downloads 149
767 Social Factors That Contribute to Promoting and Supporting Resilience in Children and Youth following Environmental Disasters: A Mixed Methods Approach

Authors: Caroline McDonald-Harker, Julie Drolet

Abstract:

In the last six years, Canada has experienced two major and catastrophic environmental disasters: the 2013 Southern Alberta flood and the 2016 Fort McMurray, Alberta wildfire. These two disasters resulted in damages exceeding 12 billion dollars, the costliest disasters in Canadian history. In the aftermath of these disasters, many families faced the loss of homes, places of employment, schools, and recreational facilities, and also experienced social, emotional, and psychological difficulties. Children and youth are among the most vulnerable to the devastating effects of disasters due to the physical, cognitive, and social factors related to their developmental life stage. Yet children and youth also have the capacity to be resilient and act as powerful catalysts for change in their own lives and wider communities following disaster. Little is known, particularly from a sociological perspective, about the specific factors that contribute to resilience in children and youth, and effective ways to support their overall health and well-being. This paper focuses on the voices and experiences of children and youth residing in these two disaster-affected communities in Alberta, Canada and specifically examines: 1) how children and youth’s lives are impacted by the tragedy, devastation, and upheaval of disaster; 2) ways that children and youth demonstrate resilience when directly faced with the adversarial circumstances of disaster; and 3) the cumulative internal and external factors that contribute to bolstering and supporting resilience among children and youth post-disaster. This paper discusses the characteristics associated with high levels of resilience in 183 children and youth ages 5 to 17 based on quantitative and qualitative data obtained through a mixed methods approach.
Child and youth participants were administered the Children and Youth Resilience Measure (CYRM-28) in order to examine factors that influence resilience processes, including individual, caregiver, and context factors. The CYRM-28 was then supplemented with qualitative interviews with children and youth to contextualize the CYRM-28 resiliency factors and provide further insight into their overall disaster experience. Findings reveal that high levels of resilience among child and youth participants are associated with both individual factors and caregiver factors, specifically positive outlook, effective communication, peer support, and physical and psychological caregiving. Individual and caregiver factors helped mitigate the negative effects of disaster, thus bolstering resilience in children and youth. This paper discusses the implications that these findings have for understanding the specific mechanisms that support the resiliency processes and overall recovery of children and youth following disaster; the importance of bridging the gap between children and youth’s needs and the services and supports provided to them post-disaster; and the need to develop resiliency processes and practices that empower children and youth as active agents of change in their own lives following disaster. These findings contribute to furthering knowledge about pragmatic and representative changes to resources, programs, and policies surrounding disaster response, recovery, and mitigation.

Keywords: children and youth, disaster, environment, resilience

Procedia PDF Downloads 105
766 Networked Media, Citizen Journalism and Political Participation in Post-Revolutionary Tunisia: Insight from a European Research Project

Authors: Andrea Miconi

Abstract:

The research will focus on the results of the Tempus European project eMEDia, dedicated to cross-media journalism. The project is funded by the European Commission and involves four European partners (IULM University, Tampere University, the University of Barcelona, and the Mediterranean network Unimed) and three Tunisian universities (IPSI La Manouba, Sfax, and Sousse), along with the Tunisian Ministry for Higher Education and the National Syndicate of Journalists. The focus on the Tunisian condition is basically due to the role played by digital activists in its recent history. The research is dedicated to the relationship between political participation, news-making practices, and the spread of social media as it is affecting Tunisian society. As we know, Tunisia during the Arab Spring was widely considered a laboratory for analyzing the use of new technologies for political participation. Nonetheless, the literature about the Arab Spring actually fell short of explaining the genesis of the phenomenon, on the one hand by isolating technologies as a causal factor in the spread of demonstrations, and on the other by analyzing the North African condition through a biased perspective. Nowadays, it is interesting to focus on the consolidation of the information environment three years after the uprisings. What is relevant, only a close, in-depth analysis of Tunisian society is able to provide an explanation of its history, and namely of the part played by digital media in the overall evolution of the political system. That is why the research is based on different methodologies: a desk stage, interviews, and an in-depth analysis of communication practices. Networked journalism is the condition determined by technological innovation in news-making activities: a condition in which the professional journalist can no longer be considered the only player in the information arena, and new skills must be developed.
Along with democratization, nonetheless, so-called citizen journalism is also likely to produce some ambiguous effects, such as the lack of professional standards and the spread of information cascades, which may prove particularly dangerous in an evolving media market such as the Tunisian one. This is why, according to the project, a new profile must be defined which is able to manage this new condition and which can hardly be reduced to the parameters of traditional journalistic work. Rather than simply using new devices for news visualization, communication professionals must also be able to dialogue with all the new players and to accept the decentralized nature of digital environments. This networked nature of news-making seemed to emerge during the Tunisian revolution, when bloggers, journalists, and activists used to retweet each other. Nonetheless, this intensification of communication exchange was inspired by the political climax of the uprising, while all media, by definition, are also supposed to have effects on people’s states of mind, culture, and daily routines. That is why it is worth analyzing the consolidation of these practices in a normal, post-revolutionary situation.

Keywords: cross-media, education, Mediterranean, networked journalism, social media, Tunisia

Procedia PDF Downloads 175
765 Prevalence and Associated Risk Factors of Age-Related Macular Degeneration in the Retina Clinic at a Tertiary Center in Makkah Province, Saudi Arabia: A Retrospective Record Review

Authors: Rahaf Mandura, Fatmah Abusharkh, Layan Kurdi, Rahaf Shigdar, Khadijah Alattas

Abstract:

Introduction: Age-related macular degeneration (AMD) in older individuals is a serious health issue that severely impacts the quality of life of millions globally. In 2020, AMD was the fourth leading cause of blindness worldwide. The global prevalence of AMD is estimated to be around 8.7%. AMD is a progressive disease involving the macular region of the retina, and it has a complex pathophysiology. Retinal pigment epithelium (RPE) cell dysfunction plays a crucial role in the pathway leading to irreversible degeneration of photoreceptors, with yellowish, lipid-rich, protein-containing drusen deposits accumulating between Bruch's membrane and the RPE. Furthermore, lipofuscinogenesis, drusogenesis, inflammation, and neovascularization are the four main processes responsible for the formation of the two types of AMD: the wet (exudative, neovascular) and dry (non-exudative, geographic atrophy) types. We retrospectively evaluated the prevalence of AMD among patients visiting the retina clinic at King Abdulaziz University Hospital (Jeddah, Makkah Province, Saudi Arabia) to identify the risk factors commonly associated with AMD. Methods: The records of 3,067 individuals from 2017 to 2021 were reviewed. Of these, 1,935 satisfied the inclusion criteria and were included in this study. We excluded all patients below 18 years of age and those who did not undergo fundus imaging or did not attend their booked appointments, follow-ups, treatments, and referrals. Results: The prevalence of AMD among the patients was 4%. The age of patients with AMD was significantly greater than that of those without AMD (72.4 ± 9.8 years vs. 57.2 ± 15.5 years; p < 0.001). Participants with a family history of AMD tended to have the disease more than those without such a history (85.7% vs. 45%; p = 0.043). Ex- and current smokers were more likely to have AMD than non-smokers (34% and 18.6% vs. 7.2%; p < 0.001).
Patients with hypertension were at a higher risk of developing AMD than those without hypertension (5.5% vs. 2.8%; p = 0.002), and patients without type 1 diabetes were at a higher risk than those with type 1 diabetes (4.2% vs. 0.8%; p = 0.040). In contrast, sex, nationality, type 2 diabetes, and an abnormal lipid profile were not significantly associated with AMD. Regarding the clinical characteristics of the AMD cases, most (70.4%) were of the dry type and affected both eyes (77.2%). The disease duration was ≥5 years in 43.1% of the patients. The chronic diseases most frequently associated with AMD were type 2 diabetes (69.1%), hypertension (61.7%), and dyslipidemia (18.5%). Conclusion: In summary, our single-tertiary-center study showed that AMD is prevalent in Jeddah, Saudi Arabia (4%) and linked to a wide range of risk factors, some of which are modifiable and can be addressed to help reduce the occurrence of AMD. Furthermore, this study has shown the importance of screening and follow-up of family members of patients with AMD to promote early detection and intervention. We recommend conducting further research on AMD in Saudi Arabia. Concerning the study design, a community-based cross-sectional study would be more suitable for assessing the disease's prevalence. Finally, a larger sample size is required for a more accurate estimation.

Keywords: age-related macular degeneration, prevalence, risk factor, dry AMD

Procedia PDF Downloads 8
764 Forced Migrants in Israel and Their Impact on the Urban Structure of Southern Neighborhoods of Tel Aviv

Authors: Arnon Medzini, Lilach Lev Ari

Abstract:

Migration, the driving force behind increased urbanization, has made cities much more diverse places to live in. Nearly one-fifth of all migrants live in the world’s 20 largest cities, and in many of these global cities, migrants constitute over a third of the population. Many contemporary migrants are in fact ‘forced migrants,’ pushed from their countries of origin by political or ethnic violence, persecution, or natural disasters. During the past decade, massive numbers of labor migrants and asylum seekers have migrated from African countries to Israel via Egypt. Their motives for leaving their countries of origin include ongoing and bloody wars on the African continent as well as corruption, severe poverty and hunger, and economic and political disintegration. Most of the African migrants came to Israel from Eritrea and Sudan, as they saw Israel as the geographically closest asylum to Africa; they soon found their way to the metropolitan Tel Aviv area. There they concentrated in poor neighborhoods in the southern part of the city, where they live under conditions of crowding, poverty, and poor sanitation. Today around 45,000 African migrants reside in these neighborhoods, yet there is no legal option for expelling them because of the dangers they might face upon returning to their native lands. Migration of such magnitude to the weakened neighborhoods of south Tel Aviv can lead to the destruction of physical, social, and human infrastructures. The character of the neighborhoods is changing, and the local population is the main victim. These local residents must bear the brunt of the failure of both the authorities and the government to deal with the illegal inhabitants. The extremely crowded living conditions place a heavy burden on the dilapidated infrastructure in the weakened areas where the refugees live and increase the distress of the veteran residents of the neighborhoods.
Some of the problems are economic; others stem from damage to the services the residents are entitled to, or from a drastic decline in their standard of living. Even the public parks no longer serve the purpose for which they were originally established, namely the well-being of the public and the neighborhood residents; they have become the main gathering place for the infiltrators and a center of crime and violence. Based on secondary data analysis (for example, data from Israel’s Population, Immigration and Border Authority and the Hotline for Refugees and Migrants), the objective of this presentation is to discuss the effects of forced migration to Tel Aviv on the following tensions: between the local population and the immigrants; between the local population and the state authorities; and between human rights groups and nationalist local organizations. We also describe the changes that have taken place in the urban infrastructure of the city of Tel Aviv and discuss the efficacy of the various Israeli strategic trajectories for handling the human problems arising in the marginal urban regions where the forced migrant population is concentrated.

Keywords: African asylum seekers, forced migrants, marginal urban regions, urban infrastructure

Procedia PDF Downloads 230