Search results for: estimating positions
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1364

74 Body of Dialectics: Exploring a Dynamic-Adaptational Model of Physical Self-Integrity and the Pursuit of Happiness in a Hostile World

Authors: Noam Markovitz

Abstract:

People with physical disabilities constitute a large and, at the same time, diverse group within the general population, as the term physical disabilities is broad and covers a wide range of conditions. Individuals with physical disabilities are therefore often faced with a new, threatening and stressful reality, possibly leading to multiple crises in their lives due to the great changes they experience at the somatic, socio-economic, occupational and psychological levels. The current study seeks to advance understanding of the complex adaptation to physical disabilities by expanding the dynamic-adaptational model of the pursuit of happiness in a hostile world with a new conception of physical self-integrity. Physical self-integrity incorporates an objective dimension, namely physical self-functioning (PSF), and a subjective dimension, namely physical self-concept (PSC). Both dimensions constitute an experience of wholeness in the individual’s identification with her or his physical body. The model guiding this work is dialectical in nature and depicts two systems in the individual’s sense of happiness: subjective well-being (SWB) and meaning in life (MIL). Both systems serve as self-adaptive agents that moderate the complementary system of the hostile-world scenario (HWS), which integrates one’s perceived threats to one’s integrity. Thus, in situations of increased HWS, the moderation may take the form of joint activity, in which SWB and MIL are amplified, or a form of compensation, in which one system produces a stronger effect while the other produces a weaker effect. The current study investigated PSC in relation to SWB and MIL through pleasantness and meanings that are physically or metaphorically grounded in one’s body. In parallel, PSC also relates to HWS by activating representations of inappropriateness, deformation and vulnerability. 
In view of the possibly dialectical positions of opposing and complementary forces within the model, the current field study aimed to explore PSC in an independent, cross-sectional design addressing the model’s variables in a focal group of people with physical disabilities. The study delineated the participation of PSC in the adaptational functions of SWB and MIL vis-à-vis HWS-related life adversities. The findings showed that PSC could fully complement the main variables of the pursuit of happiness in a hostile world model. The assumed dialectics, in the form of a stronger relationship between SWB and MIL in the face of physical disabilities, was not supported. However, it was found that when HWS increased, PSC and MIL were strongly linked, whereas PSC and SWB were weakly linked. This highlights the compensatory role of MIL. From a conceptual viewpoint, the current investigation may clarify the role of PSC as an adaptational agent of the individual’s positive health in complementary senses of bodily wholeness. Methodologically, its advantage is the application of an integrative, model-based approach within a specially focused design with particular relevance to PSC. Moreover, from an applicative viewpoint, the investigation may suggest how an innovative model can be translated into therapeutic interventions used by clinicians, counselors and practitioners to improve wellness and psychological well-being, particularly among people with physical disabilities.

Keywords: older adults, physical disabilities, physical self-concept, pursuit of happiness in a hostile world

Procedia PDF Downloads 123
73 Provider Perceptions of the Effects of Current U.S. Immigration Enforcement Policies on Service Utilization in a Border Community

Authors: Isabel Latz, Mark Lusk, Josiah Heyman

Abstract:

The rise of restrictive U.S. immigration policies and their strengthened enforcement has reportedly caused concerns among providers about their inadvertent effects on service utilization among Latinx and immigrant communities. This study presents perceptions on this issue from twenty service providers in health care, mental health, nutrition assistance, legal assistance, and immigrant advocacy in El Paso, Texas. All participants were experienced professionals, with fifteen in CEO, COO, executive director, or equivalent positions, and based at organizations that provide services for immigrant and/or low-income populations in a bi-national border community. Quantitative and qualitative data were collected by two primary investigators via semi-structured telephone interviews with an average length of 20 minutes. A survey script with closed and open-ended questions inquired about participants’ demographic information and perceptions of impacts of immigration enforcement policies under the current federal administration on their work and patient or client populations. Quantitative and qualitative data were analyzed to produce descriptive statistics and identify salient themes, respectively. Nearly all respondents stated that their work has been negatively (N=13) or both positively and negatively (N=5) affected by current immigration enforcement policies. Negative effects were most commonly related to immigration enforcement-related fear and uncertainty among patient or client populations. Positive effects most frequently referred to a sense of increased community organizing and greater cooperation among organizations. Similarly, the majority of service providers either reported an increase (N=8) or decrease (N=6) in service utilization due to changes in immigration enforcement policies. 
Increased service needs were primarily related to a need for public education about immigration enforcement policy changes, information about how new policies impact individuals’ service eligibility, legal status, and civil rights, as well as a need to correct misinformation. Decreased service utilization was primarily related to fear-related service avoidance. While providers observed changes in service utilization among undocumented immigrants and mixed-immigration status families, in particular, participants also noted ‘spillover’ effects on the larger Latinx community, including legal permanent and temporary residents, refugees or asylum seekers, and U.S. citizens. This study reveals preliminary insights into providers’ widespread concerns about the effects of current immigration enforcement policies on health, social, and legal service utilization among Latinx individuals. Further research is necessary to comprehensively assess impacts of immigration enforcement policies on service utilization in Latinx and immigrant communities. This information is critical to address gaps in service utilization and prevent an exacerbation of health disparities among Latinx, immigrant, and border populations. In a global climate of rising nationalism and xenophobia, it is critical for policymakers to be aware of the consequences of immigration enforcement policies on the utilization of essential services to protect the well-being of minority and immigrant communities.

Keywords: immigration enforcement, immigration policy, provider perceptions, service utilization

Procedia PDF Downloads 117
72 Key Findings on Rapid Syntax Screening Test for Children

Authors: Shyamani Hettiarachchi, Thilini Lokubalasuriya, Shakeela Saleem, Dinusha Nonis, Isuru Dharmaratne, Lakshika Udugama

Abstract:

Introduction: Late identification of language difficulties in children can result in long-term negative consequences for communication, literacy and self-esteem. This highlights the need for early identification of, and intervention for, speech, language and communication difficulties. Speech and language therapy is a relatively new profession in Sri Lanka, and at present there are no formal standardized screening tools to assess language skills in Sinhala-speaking children. The development and validation of a short, accurate screening tool to enable the identification of children with syntactic difficulties in Sinhala is a current need. Aims: 1) To develop test items for a Sinhala Syntactic Structures (S3 Short Form) test for children aged between 3;0 and 5;0 years 2) To validate the Sinhala Syntactic Structures (S3 Short Form) test on children aged between 3;0 and 5;0 years. Methods: The Sinhala Syntactic Structures (S3 Short Form) was devised based on the Renfrew Action Picture Test. As Sinhala contains post-positions, in contrast to English, the principles of the Renfrew Action Picture Test were followed to obtain an information score and a grammar score, but the test devised reflected the linguistic specificity and complexity of Sinhala, and the pictures were in keeping with the culture of the country. This included the dative case marker ‘to give something to her’ (/ejɑ:ʈə/ meaning ‘to her’), the instrumental case marker ‘to get something from’ (/ejɑ:gən/ meaning ‘from him’ or /gɑhən/ meaning ‘from the tree’), the possessive noun (/ɑmmɑge:/ meaning ‘mother’s’, /gɑhe:/ meaning ‘of the tree’ or /male:/ meaning ‘of the flower’) and plural markers (/bɑllɑ:/ bɑllo:/ meaning ‘dog/dogs’, /mɑlə/mɑl/ meaning ‘flower/flowers’, /gɑsə/gɑs/ meaning ‘tree/trees’ and /wɑlɑ:kulə/wɑlɑ:kulu/ meaning ‘cloud/clouds’). The picture targets included socio-culturally appropriate scenes of the Sri Lankan New Year celebration, an elephant procession and the Buddhist ‘Wesak’ ceremony. 
The test was piloted with a group of 60 participants and the necessary changes were made. In phase 1, the test was administered to 100 Sinhala-speaking children aged between 3;0 and 5;0 years in one district. In phase 2, reported in this presentation, the test was administered to another 100 Sinhala-speaking children aged between 3;0 and 5;0 years in three districts, and the selection of the test items was assessed via measures of content validity, test-retest reliability and inter-rater reliability. The age of acquisition of each syntactic structure was determined using content and grammar scores, which were statistically analysed using t-tests and one-way ANOVAs. Results: High percentage agreement was found for content validity, test-retest reliability (Pearson correlation measures) and inter-rater reliability. As predicted, there was a statistically significant influence of age on the production of syntactic structures at p<0.05. Conclusions: As the test items generated the expected information and syntactic structures, the test could be used as a quick syntax screening tool with preschool children.

Keywords: Sinhala, screening, syntax, language

Procedia PDF Downloads 320
71 Innocent Victims and Immoral Women: Sex Workers in the Philippines through the Lens of Mainstream Media

Authors: Sharmila Parmanand

Abstract:

This paper examines dominant media representations of prostitution in the Philippines and interrogates sex workers’ interactions with the media establishment. This analysis of how sex workers are constituted in media, often as both innocent victims and immoral actors, contributes to an understanding of public discourse on sex work in the Philippines, where decriminalisation has recently been proposed and sex workers are currently classified as potential victims under anti-trafficking laws but also as criminals under the penal code. The first part is an analysis of media coverage of two prominent themes on prostitution: first, raid and rescue operations conducted by law enforcement; and second, prostitution on military bases and tourism hotspots. As a result of pressure from activists and international donors, these two themes often define the policy conversations on sex work in the Philippines. The discourses in written and televised news reports and documentaries from established local and international media sources that address these themes are explored through content analysis. Conclusions are drawn based on specific terms commonly used to refer to sex workers, how sex workers are seen as performing their cultural roles as mothers and wives, how sex work is depicted, associations made between sex work and public health, representations of clients and managers and ‘rescuers’ such as the police, anti-trafficking organisations, and faith-based groups, and which actors are presumed to be issue experts. Images of how prostitution is used as a metaphor for relations between the Philippines and foreign nations are also deconstructed, along with common tropes about developing world female subjects. In general, sex workers are simultaneously portrayed as bad mothers who endanger their family’s morality but also as long-suffering victims who endure exploitation for the sake of their children. They are also depicted as unclean, drug-addicted threats to public health. 
Their managers and clients are portrayed as cold, abusive, and sometimes violent, and their rescuers as moral and altruistic agents who are essential for sex workers’ rehabilitation and restoration as virtuous citizens. The second part explores sex workers’ own perceptions of their interactions with media, through interviews with members of the Philippine Sex Workers Collective, a loose organisation of sex workers around the Philippines. They reveal that they are often excluded by media practitioners and that they do not feel that they have space for meaningful self-revelation about their work when they do engage with journalists, who seem to have an overt agenda of depicting them as either victims or women of loose morals. In their assessment, media narratives do not necessarily reflect their lived experiences, and in some cases, coverage of rescues and raid operations endangers their privacy and instrumentalises their suffering. Media representations of sex workers may produce subject positions such as ‘victims’ or ‘criminals’ and legitimize specific interventions while foreclosing other ways of thinking. Further, in light of media’s power to reflect and shape public consciousness, it is a valuable academic and political project to examine whether sex workers are able to assert agency in determining how they are represented.

Keywords: discourse analysis, news media, sex work, trafficking

Procedia PDF Downloads 359
70 Calculation of Pressure-Varying Langmuir and Brunauer-Emmett-Teller Isotherm Adsorption Parameters

Authors: Trevor C. Brown, David J. Miron

Abstract:

Gas-solid physical adsorption methods are central to the characterization and optimization of the effective surface area, pore size and porosity for applications such as heterogeneous catalysis, and gas separation and storage. Properties such as adsorption uptake, capacity, equilibrium constants and Gibbs free energy depend on the composition and structure of both the gas and the adsorbent. However, challenges remain in accurately calculating these properties from experimental data. Gas adsorption experiments involve measuring the amounts of gas adsorbed over a range of pressures under isothermal conditions. Various constant-parameter models, such as the Langmuir and Brunauer-Emmett-Teller (BET) theories, are used to extract adsorbate and adsorbent properties from the isotherm data, but these models typically do not provide accurate interpretations across the full range of pressures and temperatures. The Langmuir adsorption isotherm is a simple approximation for modelling equilibrium adsorption data and has been effective in estimating surface areas and catalytic rate laws, particularly for high surface area solids. The Langmuir isotherm assumes the systematic filling of identical adsorption sites up to monolayer coverage. The BET model is based on the Langmuir isotherm and allows for the formation of multiple layers; these additional layers do not interact with the first layer, and their energetics are equal to those of the adsorbate as a bulk liquid. The BET method is widely used to measure the specific surface area of materials. Both the Langmuir and BET models assume that the affinity of the gas for all adsorption sites is identical, so the calculated monolayer uptake and equilibrium constant are independent of coverage and pressure. Accurate representations of adsorption data have been achieved by extending the Langmuir and BET models to include pressure-varying uptake capacities and equilibrium constants. 
These parameters are determined using a novel regression technique called flexible least squares for time-varying linear regression. For isothermal adsorption, the adsorption parameters are assumed to vary slowly and smoothly with increasing pressure. The flexible least squares for pressure-varying linear regression (FLS-PVLR) approach assumes two distinct types of discrepancy terms, dynamic and measurement, for all parameters in the linear equation used to simulate the data. Dynamic terms account for pressure variation in successive parameter vectors, and measurement terms account for differences between observed and theoretically predicted outcomes via linear regression. The resultant pressure-varying parameters are optimized by minimizing both dynamic and measurement residual squared errors. Validation of this methodology has been achieved by simulating adsorption data for n-butane and isobutane on activated carbon at 298 K, 323 K and 348 K, and for nitrogen on mesoporous alumina at 77 K, with pressure-varying Langmuir and BET adsorption parameters (equilibrium constants and uptake capacities). This modeling provides information on variations across adsorption sites in the adsorbent (accessible surface area and micropore volume), the adsorbate (molecular areas and volumes) and the thermodynamics (Gibbs free energies).
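To make the starting point of the abstract concrete, the constant-parameter Langmuir fit that the pressure-varying FLS-PVLR method generalises can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code; the function name and the use of the linearised Langmuir form are assumptions for the example:

```python
import numpy as np

def fit_langmuir(p, q):
    """Estimate constant Langmuir parameters (qm, K) from isotherm data.

    Uses the linearised Langmuir form p/q = 1/(K*qm) + p/qm, so a
    straight-line fit of p/q against p gives slope = 1/qm and
    intercept = 1/(K*qm).

    p: pressures, q: measured uptakes at those pressures.
    Returns (qm, K): monolayer uptake capacity and equilibrium constant.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    slope, intercept = np.polyfit(p, p / q, 1)
    qm = 1.0 / slope
    K = 1.0 / (intercept * qm)
    return qm, K
```

The pressure-varying extension described above replaces the single (qm, K) pair with a smoothly varying sequence of parameter vectors, penalising both the misfit to the data and the change between successive parameters.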

Keywords: Langmuir adsorption isotherm, BET adsorption isotherm, pressure-varying adsorption parameters, adsorbate and adsorbent properties and energetics

Procedia PDF Downloads 198
69 High Speed Motion Tracking with Magnetometer in Nonuniform Magnetic Field

Authors: Jeronimo Cox, Tomonari Furukawa

Abstract:

Magnetometers have become more popular in inertial measurement units (IMUs) for their ability to correct estimations using the earth's magnetic field. Accelerometer- and gyroscope-based packages fail with dead-reckoning errors accumulated over time. Localization in robotic applications with magnetometer-inclusive IMUs has become popular as a way to track the odometry of slower-speed robots. With high-speed motions, the accumulated error increases over smaller periods of time, making them difficult to track with an IMU. Tracking a high-speed motion is especially difficult with limited observability: visual obstruction of the motion leaves motion-tracking cameras unusable, and when motions are too dynamic for estimation techniques reliant on the observability of the gravity vector, the use of magnetometers is further justified. As available magnetometer calibration methods are limited by the assumption that the background magnetic field is uniform, estimation in nonuniform magnetic fields is problematic. Hard iron distortion is a distortion of the magnetic field caused by nearby objects that produce magnetic fields of their own. This kind of distortion is often observed as an offset of the center of the data points from the origin when a magnetometer is rotated. The magnitude of hard iron distortion depends on the proximity to distortion sources. Soft iron distortion relates more to the scaling of the axes of the magnetometer sensors; hard iron distortion is the greater contributor to attitude estimation error with magnetometers. Indoor environments or spaces inside ferrite-based structures, such as building reinforcements or a vehicle, often cause distortions at close proximity. As positions correlate to areas of distortion, methods of magnetometer localization include producing spatial maps of the magnetic field and collecting distortion signatures to better aid location tracking. 
The goal of this paper is to compare magnetometer methods that do not need pre-produced magnetic field maps, since mapping the magnetic field in some spaces can be costly and inefficient. Dynamic measurement fusion is used to track the motion of a multi-link system with IMUs. Conventional calibration by data collection of rotation at a static point, real-time estimation of calibration parameters at each time step, and the use of two magnetometers to determine local hard iron distortion are compared to confirm the robustness and accuracy of each technique. With opposite-facing magnetometers, hard iron distortion can be accounted for regardless of position, rather than being assumed constant under positional change. The motion measured is a repeatable planar motion of a two-link system connected by revolute joints. The links are translated on a moving base to impulse rotation of the links. The joints are equipped with absolute encoders, and the motion is recorded with cameras, to enable ground-truth comparison with each of the magnetometer methods. While the two-magnetometer method accounts for local hard iron distortion, it fails where the magnetic field direction in space is inconsistent.
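The conventional static-rotation calibration mentioned above estimates the hard iron offset as the displacement of the centre of the rotated magnetometer data from the origin, which can be recovered by a least-squares sphere fit. The sketch below is illustrative only, not the paper's calibration code; the function name and the noise-free sphere assumption are ours:

```python
import numpy as np

def hard_iron_offset(samples):
    """Estimate the hard iron offset from magnetometer samples collected
    while rotating the sensor at a fixed position.

    Fits a sphere ||m - c||^2 = r^2 by linear least squares: expanding
    gives ||m||^2 = 2 c.m + (r^2 - ||c||^2), which is linear in the
    unknowns (c, r^2 - ||c||^2). Returns the centre c, i.e. the hard
    iron offset to subtract from raw readings.
    """
    m = np.asarray(samples, dtype=float)
    A = np.hstack([2.0 * m, np.ones((len(m), 1))])
    b = np.sum(m * m, axis=1)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3]
```

The two-magnetometer method in the paper avoids the static-rotation step entirely, but this fit shows what "offset from the origin" means in practice.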

Keywords: motion tracking, sensor fusion, magnetometer, state estimation

Procedia PDF Downloads 56
68 Augmented Reality to Support the Design of Innovative Agroforestry Systems

Authors: Laetitia Lemiere, Marie Gosme, Gerard Subsol, Marc Jaeger

Abstract:

Agroforestry is recognized as a way of developing sustainable and resilient agriculture that can fight against climate change. However, the number of species combinations, spatial configurations, and management options for trees and crops is vast. These choices must be adapted to the pedoclimatic and socio-economic contexts and to the objectives of the farmer, who therefore needs support in designing the system. Participative design workshops are a good way to integrate the knowledge of several experts in order to design such complex systems. The design of agroforestry systems should take into account both spatial aspects (e.g., spacing of trees within the lines and between lines, tree line orientation, tree-crop distance, species spatial patterns) and temporal aspects (e.g., crop rotations, tree thinning and pruning, tree planting in the case of successional agroforestry). Furthermore, the interactions between trees and crops evolve as the trees grow. However, agroforestry design workshops generally address the spatial aspect only, through the use of static tokens to represent the different species when designing the spatial configuration of the system. Augmented reality (AR) may overcome this limitation, allowing users to visualize dynamic representations of trees and crops, and also their interactions, while retaining the possibility to physically interact with the system being designed (i.e., move trees, add or remove species, etc.). We propose an ergonomic digital solution capable of assisting a group of agroforestry experts in designing an agroforestry system and representing it. We investigated the use of web-based, marker-based AR, which requires neither specific hardware nor installation, so that all users can use their own smartphones straight out of their pockets. We developed a prototype mobilizing the AR.js, ArToolKit.js, and Three.js open source libraries. 
In our implementation, we gradually build a virtual agroforestry system pattern scene from the users' interactions. A specific set of markers initializes the scene properties, and the various plant species are added and located during the workshop design session. The full virtual scene, including the tree positions and their neighborhoods, is saved for further use, such as virtual or augmented instantiation in the farmers' fields. The number of tree species available in the application is gradually increasing; we use 3D digital models for walnut, poplar, wild cherry, and other species popular in agroforestry systems. The prototype allows shadow computation and the representation of trees at various growth stages, as well as different tree generations, and is thus able to visualize the dynamics of the system over time. Future work will focus on i) the design of complex patterns involving several tree/shrub organizations, not restricted to lines; ii) the design of interfaces related to cultural practices, such as clearing or pruning; iii) the representation of tree-crop interactions. Besides tree shade (light competition), our objective is also to represent below-ground competition (water, nitrogen) and other variables of interest for the design of agroforestry systems (e.g., predicted crop yield).
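As a minimal illustration of the kind of shadow computation a prototype like this performs, the shadow cast by a tree on flat ground follows directly from its height and the sun's elevation angle. This sketch is ours, not the authors' implementation, and ignores terrain, canopy shape, and sun azimuth:

```python
import math

def shadow_length(tree_height_m, sun_elevation_deg):
    """Length (m) of the shadow cast on flat ground by a tree of the
    given height, for a sun elevation angle above the horizon (deg).
    Basic trigonometry: shadow = height / tan(elevation)."""
    return tree_height_m / math.tan(math.radians(sun_elevation_deg))
```

At 45 degrees of elevation the shadow equals the tree height; lower sun angles stretch the shadow, which is why the dynamics of tree growth matter for light competition with the crop rows.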

Keywords: agroforestry system design, augmented reality, marker-based AR, participative design, web-based AR

Procedia PDF Downloads 143
67 Social Media Usage in 'No Man's Land': A Populist Paradise

Authors: Nilufer Turksoy

Abstract:

Social media tools successfully connect people from different milieus to each other. This easy access allows politicians with populist attitudes to circulate any kind of political opinion or message that would hardly appear in conventional media. Populism is a relevant concept, especially in political communication research, and populism in social media has been researched extensively over the last decade. The present study focuses on how social media is used as a playground by Turkish Cypriot politicians to perform populism in Northern Cyprus. It aims to determine and understand the relationship between politicians and social media, and how they employ social media in their political lives. Northern Cyprus's multi-faceted character provides politicians with many possible frames and topics for demagogy about the ongoing political deadlock, international isolation, economic instability, or social and cultural life in the northern part of the island. Thus, Northern Cyprus, bizarrely branded as a 'no man's land', is a case par excellence for showing how politically and economically unstable geographies are inclined toward populism. Northern Cyprus is a territory recognized by no member of the international community other than Turkey, a phantom state like Abkhazia, South Ossetia, or Nagorno-Karabakh. Five ideological key elements of populism are employed in the theoretical framework of this study: (1) highlighting the sovereignty of the people, (2) attacking the elites, (3) advocacy for the people, (4) excluding others, and (5) invoking the heartland. A qualitative text analysis of typical Facebook posts was conducted. Profiles were selected of popular political leaders who occupy top positions (e.g. member of parliament, minister, chairman, party secretary), who have different political views, and who use their profiles for political expression on a daily basis. 
All official Facebook pages of the selected politicians were examined over a period of five months (1 September 2017-31 January 2018). This period was selected because it preceded the parliamentary elections. Findings revealed that the majority of Turkish Cypriot politicians use their social media profiles to propagate their political ideology in a populist fashion. Populist statements were found across parties. Facebook gives especially the left-wing political actors the freedom to spread their messages in manipulative ways, mostly by using a satiric, ironic and slandering jargon that refers to the pseudo-state, the phantom state, the unrecognized Turkish Republic of Northern Cyprus. While most political leaders advocated for the people, invoking the heartland was preferred by right-wing politicians, and a broad range of left-wing politicians predominantly attacked the economic elites and engaged in the exclusion of others. The findings lead to the conclusion that different politicians use social media differently according to their political standpoint. Overall, the study offers a thorough analysis of populism on social media. Considering the large role social media plays in daily life today, the findings shed light on the political influence of social media and on social media usage among politicians.

Keywords: Northern Cyprus, populism, politics, qualitative text analysis, social media

Procedia PDF Downloads 111
66 Longitudinal Impact on Empowerment for Ugandan Women with Post-Primary Education

Authors: Shelley Jones

Abstract:

Assumptions abound that education for girls will, as a matter of course, lead to their economic empowerment as women; yet little is known about the ways in which schooling for girls who traditionally/historically would not have had opportunities for post-primary, or perhaps even primary, education (such as the participants in this study based in rural Uganda) actually affects their economic situations. There is a need for longitudinal studies in which women share experiences, understandings, and reflections on their lives that can inform our knowledge of this. In response, this paper reports on stage four of a longitudinal case study (2004-2018) focused on education and empowerment for girls and women in rural Uganda, in which 13 of the 15 participants from the original study took part. This paper understands empowerment as not simply increased opportunities (e.g., employment) but also real gains in power, freedoms that enable agentive action, and authentic and viable choices/alternatives that offer 'exit options' from unsatisfactory situations. As with the other stages, this study used a critical, postmodernist, global feminist ethnographic methodology with multimodal and qualitative data collection. Participants took part in interviews, focus group discussions, and a two-day workshop, which explored whether and how they understood post-primary education to have contributed to their economic empowerment. A constructivist grounded theory approach was used for data analysis to capture major themes. Findings indicate that although all participants believe that post-primary education provided them with economic opportunities they would not have had otherwise, the parameters of their economic empowerment were severely constrained by historic and extant sociocultural, economic, political, and institutional structures that continue to disempower girls and women, as well as by additional financial responsibilities that they assumed to support others. 
Even though the participants had post-primary education and were able to obtain employment or operate their own businesses, which they would likely not have managed otherwise, the majority of the participants' incomes were not sufficient to lift them above the extreme poverty level, especially as many were single mothers and the sole income earners in their households. Furthermore, most deemed their working conditions unsatisfactory and their positions precarious; they also experienced sexual harassment and abuse in the labour force. Additionally, employment for the participants resulted in a double work burden: long days at work, followed by many hours of domestic work at home (which, even if they had spousal partners, still fell almost exclusively to women). In conclusion, although the participants seem to have experienced some increase in economic empowerment, largely due to skills, knowledge, and qualifications gained at the post-primary level, numerous barriers prevented them from maximizing their capabilities and making significant gains in empowerment. There is a need, in addition to providing education (primary, secondary, and tertiary) to girls, to address the systemic gender inequalities that militate against women's empowerment, and to create opportunities and freedom for women to come together and demand fair pay, reasonable working conditions and benefits, and freedom from gender-based harassment and assault in the workplace, as well as to advocate for equal distribution of domestic work as a cultural change.

Keywords: girls' post-primary education, women's empowerment, Uganda, employment

Procedia PDF Downloads 121
65 Low Cost LiDAR-GNSS-UAV Technology Development for PT Garam’s Three Dimensional Stockpile Modeling Needs

Authors: Mohkammad Nur Cahyadi, Imam Wahyu Farid, Ronny Mardianto, Agung Budi Cahyono, Eko Yuli Handoko, Daud Wahyu Imani, Arizal Bawazir, Luki Adi Triawan

Abstract:

Unmanned aerial vehicle (UAV) technology offers cost efficiency and data retrieval time advantages. Technologies such as UAV, GNSS, and LiDAR can be combined into a single integrated system in which each covers the others' deficiencies. This integration aims to increase the accuracy of calculating the volume of the land stockpiles of PT Garam (a salt company). UAVs are used to obtain geometric data and capture textures that characterize the structure of objects. This study uses the Tarot 650 Iron Man drone with four propellers, which can fly for 15 minutes. The LiDAR-based classification depends on the number of image acquisitions processed in the software, utilizing photogrammetry and Structure-from-Motion point cloud principles. LiDAR data acquisition enables the creation of point clouds, three-dimensional models, digital surface models, contours, and orthomosaics with high accuracy. A drawback of LiDAR is that its coordinate data are expressed in a local reference frame. Therefore, the researchers use multi-sensor GNSS, LiDAR, and drone technology to map the stockpiles of salt on open land and in warehouses, a task carried out by PT Garam twice a year, where the previous process used terrestrial methods and manual calculations with sacks. LiDAR needs to be combined with a UAV to overcome data acquisition limitations, because a ground-based scan only passes along the right and left sides of the object, particularly when applied to a salt stockpile. The UAV is flown to provide wide-coverage data acquisition, with the 200-gram LiDAR system integrated so that the scanning angle can be kept optimal during the flight. Using LiDAR for low-cost mapping surveys will make it easier for surveyors and academics to obtain reasonably accurate data at a more economical price. As a survey tool, LiDAR is available at a low price, around 999 USD, and this device can produce detailed data.
Therefore, to minimize operational costs, surveyors can use low-cost LiDAR, GNSS, and UAV equipment at a price of around 638 USD. The data generated by this sensor take the form of a three-dimensional visualization of an object's shape. This study aims to combine low-cost GPS measurements with low-cost LiDAR, processed using free software. The low-cost GPS produces position data as latitude and longitude coordinates, which yield X, Y, and Z values used to georeference the detected objects. The LiDAR detects objects, including the heights of the entire environment at the survey location. The data obtained are calibrated with pitch, roll, and yaw to recover the vertical heights of the existing contours. The study conducted an experimental process on the roof of a building with a radius of approximately 30 meters.
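The pitch/roll/yaw calibration described above can be sketched as a rigid-body transformation: each LiDAR point is rotated from the sensor frame into a level frame and then translated by the GNSS-derived position. A minimal illustration (the angles, offsets, and coordinate values below are hypothetical, not the study's data):

```python
import math

def rotation_matrix(roll, pitch, yaw):
    """Z-Y-X (yaw-pitch-roll) rotation matrix from Euler angles in radians."""
    cr, sr = math.cos(roll), math.sin(roll)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cy, sy = math.cos(yaw), math.sin(yaw)
    # R = Rz(yaw) @ Ry(pitch) @ Rx(roll)
    return [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]

def georeference(point, roll, pitch, yaw, gnss_xyz):
    """Rotate a sensor-frame LiDAR point into a level frame, then
    translate by the GNSS antenna position (local projected X, Y, Z)."""
    R = rotation_matrix(roll, pitch, yaw)
    rotated = [sum(R[i][j] * point[j] for j in range(3)) for i in range(3)]
    return [rotated[i] + gnss_xyz[i] for i in range(3)]

# Example: a point 10 m ahead of the sensor, platform yawed 90 degrees,
# GNSS position at hypothetical local coordinates (500.0, 200.0, 30.0).
p = georeference([10.0, 0.0, 0.0], 0.0, 0.0, math.pi / 2, [500.0, 200.0, 30.0])
```

In practice the boresight angles come from the IMU calibration, and the translation from the GNSS solution projected into a local grid.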

Keywords: LiDAR, unmanned aerial vehicle, low-cost GNSS, contour

Procedia PDF Downloads 61
64 Academic Mobility within EU as a Voluntary or a Necessary Move: The Case of German Academics in the UK

Authors: Elena Samarsky

Abstract:

According to German national records and willingness-to-migrate surveys, emigration is much more attractive for better-educated citizens employed in white-collar positions, with academics displaying the highest migration rate. The case of academic migration from Germany is all the more intriguing given the country's financial power, competitive labour market, relatively good living standards and working conditions, and high wage rates. Investigating such mobility challenges the traditional economic view of migration, as it raises the question of why people choose to leave a highly industrialized country known for its high living standards, stable political scene and prosperous economy. Within the regional domain, examining the mobility of Germans contributes to the ongoing debate over the extent to which the EU mobility principle influences migration decisions. The latter is of particular interest, as it may shed light on the extent to which it frames individual migration paths, defines motivations and colours the experience of migration itself. The paper is based on the analysis of migration decisions obtained through in-depth interviews with German academics employed in the UK. These retrospective interviews were conducted with German academics across selected universities in the UK, employed in a variety of academic fields and at different career stages. The interviews provide a detailed description of what motivated people to search for a post in another country and which attributes of such a job needed to be satisfied in order to facilitate migration, as well as general information on the particularities of an academic career and the institutions involved. In the course of the project, it became evident that although securing financial stability was a non-negotiable factor in migration (e.g., a work contract signed before relocation), non-pecuniary motivations played a significant role as well.
The migration narratives of this group - the highly skilled, whose human capital is transferable and whose expertise is positively evaluated by receiving countries - are mainly characterised by a search for personal development and career advancement, rather than a direct increase in income. The records are also consistent in showing that, for academics, scientific freedom and independence are the main attributes of a perfect job and a substantial motivator. On the micro level, migration is depicted as an opportunistic action framed as a voluntary rather than an imposed decision. On the macro level, however, the findings suggest that such opportunities are an outcome embedded in the peculiarities of academia and its historical and structural developments, which in turn contributes significantly to the emergence of the scene in which the migration action takes place. The paper suggests further comparative research on the intersection of the macro and micro levels, and in particular on how both national academic institutions and the EU mobility principle shape the migration of academics. In light of continuous attempts to make the European labour market more mobile and attractive, such findings ought to have direct implications for policy.

Keywords: migration, EU, academics, highly skilled labour

Procedia PDF Downloads 228
63 Evaluation of Airborne Particulate Matter Early Biological Effects in Children with Micronucleus Cytome Assay: The MAPEC_LIFE Project

Authors: E. Carraro, Sa. Bonetta, Si. Bonetta, E. Ceretti, G. C. V. Viola, C. Pignata, S. Levorato, T. Salvatori, S. Vannini, V. Romanazzi, A. Carducci, G. Donzelli, T. Schilirò, A. De Donno, T. Grassi, S. Bonizzoni, A. Bonetti, G. Gilli, U. Gelatti

Abstract:

In 2013, air pollution and particulate matter (PM) were classified as carcinogenic to humans by the IARC. At present, PM is Europe's most problematic pollutant in terms of harm to health, as reported by the European Environment Agency (EEA) in its Technical Report on Air Quality in Europe, 2015: between 17% and 30% of the EU urban population lives in areas where the EU 24-hour air quality limit value for PM10 is exceeded. Many studies have found a consistent association between exposure to PM and the incidence of, and mortality from, some chronic diseases (e.g., lung cancer and cardiovascular diseases). Among the mechanisms responsible for these adverse effects, genotoxic damage is of particular concern. Children are a high-risk group in terms of the health effects of air pollution, and early exposure during childhood can increase the risk of developing chronic diseases in adulthood. MAPEC_LIFE (Monitoring Air Pollution Effects on Children for supporting public health policy) is a project funded by the EU Life+ Programme (LIFE12 ENV/IT/000614) which intends to evaluate the associations between air pollution and early biological effects in children and to propose a model for estimating the global risk of early biological effects due to air pollutants and other factors in children. This work focuses on micronucleus frequency in children's buccal cells in association with airborne PM levels, taking into account the influence of other factors associated with the children's lifestyles. The micronucleus test was performed on exfoliated buccal cells of 6- to 8-year-old children from 5 Italian towns with different air pollution levels. Data on air quality during the study period were obtained from the Regional Agency for Environmental Protection. A questionnaire administered to the children's parents was used to obtain details on family socio-economic status, children's health conditions, exposure to other indoor and outdoor pollutants (e.g.,
passive smoke) and lifestyle, with particular reference to eating habits. During the first sampling campaign (winter 2014-15), 1,315 children were recruited and sampled for the micronucleus test in buccal cells. In the sampling period, the levels of the main pollutants and of PM10 were, as expected, higher in the north of Italy (PM10 mean values of 62 μg/m3 in Torino and 40 μg/m3 in Brescia) than in the other towns (Pisa, Perugia, Lecce). A higher micronucleus frequency in children's buccal cells was found in Brescia (0.6/1000 cells) than in the other towns (range 0.3-0.5/1000 cells). The statistical analysis shows a relationship between micronucleus frequency and PM concentrations, traffic level near the child's residence, and the parents' level of education. The results suggest that, in addition to air pollution exposure, some other factors, related to lifestyle or further exposures, may influence micronucleus frequency and the cellular response to air pollutants.
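The micronucleus frequencies reported here are simple rates per 1,000 scored cells. A minimal illustration of the calculation, with hypothetical counts (not the study's data):

```python
def mn_frequency_per_1000(micronucleated_cells: int, cells_scored: int) -> float:
    """Micronucleus frequency expressed per 1,000 scored cells."""
    return 1000.0 * micronucleated_cells / cells_scored

# Hypothetical scoring result: 3 micronucleated cells in 5,000 scored cells.
freq = mn_frequency_per_1000(3, 5000)  # 0.6 per 1,000 cells
```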

Keywords: air pollution, buccal cells, children, micronucleus cytome assay

Procedia PDF Downloads 229
62 Seismic History and Liquefaction Resistance: A Comparative Study of Sites in California

Authors: Tarek Abdoun, Waleed Elsekelly

Abstract:

Introduction: Liquefaction of soils during earthquakes can have significant consequences for the stability of structures and infrastructure. This study compares two liquefaction case histories in California: the response of the Wildlife site in the Imperial Valley to the 2010 El Mayor-Cucapah earthquake (Mw = 7.2, amax = 0.15g) and the response of the Treasure Island Fire Station (F.S.) site in the San Francisco Bay area to the 1989 Loma Prieta earthquake (Mw = 6.9, amax = 0.16g). Both case histories involve liquefiable layers of silty sand with non-plastic fines, similar shear wave velocities, low CPT cone penetration resistances, and groundwater tables at similar depths. Shear-wave-velocity-based liquefaction charts predict liquefaction at both sites. However, a significant difference arises in their pore pressure responses during the earthquakes: the Wildlife site did not experience liquefaction, as evidenced by piezometer data, while the Treasure Island F.S. site did liquefy during the shaking. Objective: The primary objective of this study is to investigate and understand why the pore pressure responses observed at the Wildlife and Treasure Island F.S. sites contrast despite their similar geological characteristics and predicted liquefaction potential. By conducting a detailed analysis of the similarities and differences between the two case histories, the aim is to identify the factors that contributed to the higher liquefaction resistance exhibited by the Wildlife site. Methodology: To this end, the available geological and seismic data for both sites were gathered, and the soil profiles, seismic characteristics, and liquefaction potential predicted by shear-wave-velocity-based charts were analyzed. Furthermore, the seismic histories of both regions were examined.
The number of previous earthquakes capable of generating significant excess pore pressures for each critical layer was assessed. This analysis involved estimating the total seismic activity that the Wildlife and Treasure Island F.S. critical layers experienced over time. In addition to historical data, centrifuge and large-scale experiments were conducted to explore the impact of prior seismic activity on liquefaction resistance. These findings served as supporting evidence for the investigation. Conclusions: The higher liquefaction resistance observed at the Wildlife site and other sites in the Imperial Valley can be attributed to preshaking by previous earthquakes. The Wildlife critical layer was subjected to a substantially greater number of seismic events capable of generating significant excess pore pressures over time compared to the Treasure Island F.S. layer. This crucial disparity arises from the difference in seismic activity between the two regions in the past century. In conclusion, this research sheds light on the complex interplay between geological characteristics, seismic history, and liquefaction behavior. It emphasizes the significant impact of past seismic activity on liquefaction resistance and can provide valuable insights for evaluating the stability of sandy sites in other seismic regions.
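The screening step implied by this kind of analysis typically relies on the Seed-Idriss simplified procedure, which estimates the cyclic stress ratio (CSR) imposed by the shaking. A minimal sketch under textbook assumptions (the unit weight, water table depth, and layer depth below are illustrative values, not data from the Wildlife or Treasure Island sites):

```python
def stress_reduction_rd(depth_m: float) -> float:
    """Seed-Idriss linear approximation of the stress reduction
    coefficient rd, valid for depths up to about 9.15 m."""
    return 1.0 - 0.00765 * depth_m

def cyclic_stress_ratio(amax_g: float, sigma_v: float,
                        sigma_v_eff: float, depth_m: float) -> float:
    """Simplified-procedure CSR = 0.65 * (amax/g) * (sv/sv') * rd,
    where sv and sv' are total and effective vertical stresses."""
    return 0.65 * amax_g * (sigma_v / sigma_v_eff) * stress_reduction_rd(depth_m)

# Illustrative soil profile: critical layer at 5 m depth,
# bulk unit weight 18 kN/m3, water table at 2 m.
depth = 5.0
sigma_v = 18.0 * depth            # total vertical stress, kPa
u = 9.81 * (depth - 2.0)          # hydrostatic pore pressure, kPa
csr = cyclic_stress_ratio(0.15, sigma_v, sigma_v - u, depth)
```

In chart-based screening, this demand-side CSR would be compared against a cyclic resistance ratio inferred from shear wave velocity or penetration resistance.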

Keywords: liquefaction, case histories, centrifuge, preshaking

Procedia PDF Downloads 50
61 A Framework for Automated Nuclear Waste Classification

Authors: Seonaid Hume, Gordon Dobie, Graeme West

Abstract:

Detecting and localizing radioactive sources is a necessity for the safe and secure decommissioning of nuclear facilities. An important aspect of managing the sort-and-segregation process is establishing the spatial distributions and quantities of the waste radionuclides, their type, their corresponding activity, and ultimately their classification for disposal. The data received from surveys directly informs decommissioning plans, on-site incident management strategies and the approach needed for a new cell, as well as protecting the workforce and the public. Manual classification of nuclear waste from a nuclear cell is time-consuming, expensive, and requires significant expertise to make the classification judgment call. In addition, in-cell decommissioning is still in its relative infancy, and few techniques are well developed. As with any repetitive and routine task, there is an opportunity to improve the classification of nuclear waste using autonomous systems. Hence, this paper proposes a new framework for the automatic classification of nuclear waste. The framework consists of five main stages: 3D spatial mapping and object detection, object classification, radiological mapping, source localisation based on the gathered evidence and, finally, waste classification. The first stage, 3D visual mapping, involves object detection from point cloud data. A review of related applications in other industries is provided, and recommendations for waste classification approaches are made. Object detection focuses initially on cylindrical objects, since pipework is significant in nuclear cells and indeed on any industrial site. The approach can be extended to other commonly occurring primitives such as spheres and cubes. This is in preparation for stage two: characterizing the point cloud data and estimating the dimensions, material, degradation, and mass of the detected objects in order to feature-match them to an inventory of possible items found in that nuclear cell.
Many items in nuclear cells are one-offs, have limited or poor drawings available, or have been modified since installation, and cells have complex interiors, which often and inadvertently pose difficulties when accessing certain zones and identifying waste remotely. Hence, feature matching of objects may require expert input. The third stage, radiological mapping, characterizes the nuclear cell in terms of its radiation fields, including the type of radiation, its activity, and its location within the cell. The fourth stage takes the visual map from stage 1, the object characterization from stage 2 and the radiation map from stage 3 and fuses them together, providing a more detailed scene of the nuclear cell by identifying the location of radioactive materials in three dimensions. The last stage combines the evidence from the fused data sets to produce the classification of the waste in Bq/kg, thus enabling better decision-making and monitoring for in-cell decommissioning. The presentation of the framework is supported by representative case study data drawn from a decommissioning application at a UK nuclear facility. The framework utilises recent advancements in the detection and mapping of complex radiation fields in three dimensions to make the process of classifying nuclear waste faster, more reliable, more cost-effective and safer.
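The final stage described above reduces to thresholding the estimated specific activity (Bq/kg) of each fused object. A minimal sketch, assuming UK-style waste categories: the thresholds follow the standard UK definition of low-level waste (not exceeding 4 GBq/tonne alpha or 12 GBq/tonne beta/gamma), while the object masses, activities, and the `classify_waste` helper itself are hypothetical illustrations, not the paper's algorithm:

```python
# UK LLW thresholds expressed per kilogram:
# 4 GBq/t alpha -> 4.0e6 Bq/kg; 12 GBq/t beta/gamma -> 1.2e7 Bq/kg.
ALPHA_LLW_BQ_PER_KG = 4.0e6
BETA_GAMMA_LLW_BQ_PER_KG = 1.2e7

def classify_waste(alpha_bq_per_kg: float, beta_gamma_bq_per_kg: float) -> str:
    """Classify an object from its estimated specific activity (Bq/kg).

    In the framework, the inputs would come from the fused stages:
    mass from the point-cloud characterization (stage 2), activity
    from the localized radiation map (stages 3-4).
    """
    if (alpha_bq_per_kg > ALPHA_LLW_BQ_PER_KG
            or beta_gamma_bq_per_kg > BETA_GAMMA_LLW_BQ_PER_KG):
        return "ILW"  # intermediate-level (or higher) waste
    if alpha_bq_per_kg > 0 or beta_gamma_bq_per_kg > 0:
        return "LLW"  # low-level waste
    return "out-of-scope"

# Hypothetical pipe section: 25 kg with 5e7 Bq total beta/gamma -> 2e6 Bq/kg.
category = classify_waste(alpha_bq_per_kg=1.0e3,
                          beta_gamma_bq_per_kg=5.0e7 / 25.0)
```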

Keywords: nuclear decommissioning, radiation detection, object detection, waste classification

Procedia PDF Downloads 174
60 The Relevance of (Re)Designing Professional Paths with Unemployed Working-Age Adults

Authors: Ana Rodrigues, Maria Cadilhe, Filipa Ferreira, Claudia Pereira, Marta Santos

Abstract:

Professional paths must be understood in the multiplicity of their possible configurations. While some actors tend to represent their path as a harmonious succession of positions in the life cycle, most recognize the existence of unforeseen and uncontrollable bifurcations caused, for example, by a work accident or by going through a period of unemployment. Considering the intensified challenges posed by ongoing societal changes (e.g., technological and demographic) and looking at the Portuguese context, where the unemployment rate remains most evident in certain age groups, such as individuals aged 45 years or over, it is essential to support those adults by providing strategies capable of sustaining them through professional transitions, this being a joint responsibility of governments, employers, workers and educational institutions, among others. Concerned with these issues, Porto City Council launched the challenge of designing and implementing a lifelong career guidance program, which was answered with the presentation of a customized conceptual and operational model: groWing|Lifelong Career Guidance. A pilot project targeting working-age adults (35 or older) who were unemployed was carried out, aiming to support them in reconstructing their professional paths through the recovery of their past experiences and through reflection on dimensions such as skills, interests, constraints, and the labor market. A research action approach was used to assess the proposed model, namely the perceived relevance of the theme and of the project, by the adults themselves (N=44), employment professionals (N=15) and local companies (N=15), in an integrated manner. A set of activities was carried out: a train-the-trainer course and a monitoring session with employment professionals; collective and individual sessions with the adults, including a monitoring session; and a workshop with local companies.
Support materials for individual and collective reflection on professional paths were created and adjusted for each agent involved. An evaluation model was co-built by the different stakeholders. Assessment was carried out through a form created for the purpose, completed at the end of the different activities, which allowed the collection of quantitative and qualitative data. Statistical analysis was carried out using SPSS software. Results showed that the participants, as well as the employment professionals and the companies involved, considered both the topic and the project extremely relevant. The adults saw the project as an opportunity to reflect on their paths and become aware of the opportunities and the conditions necessary to achieve their goals; the professionals highlighted the support given by an integrated methodology and the existence of tools to assist the process; the companies valued the opportunity to think about the topic and the possible initiatives they could implement to diversify their recruitment pools. The results allow us to conclude that, in the local context under study, there is an alignment between the different agents regarding the pertinence of supporting adults with work experience through professional transitions, with the project seen as a relevant strategy to address this issue, which justifies extending it in time and to other working-age adults in the future.

Keywords: professional paths, research action, turning points, lifelong career guidance, relevance

Procedia PDF Downloads 67
59 Phenolic Acids of Plant Origin as Promising Compounds for Elaboration of Antiviral Drugs against Influenza

Authors: Vladimir Berezin, Aizhan Turmagambetova, Andrey Bogoyavlenskiy, Pavel Alexyuk, Madina Alexyuk, Irina Zaitceva, Nadezhda Sokolova

Abstract:

Introduction: Influenza viruses infect approximately 5% to 10% of the global human population annually, resulting in serious social and economic damage. Vaccination and etiotropic antiviral drugs are used for the prevention and treatment of influenza. Vaccination is important; however, antiviral drugs represent the second line of defense against new emerging influenza virus strains for which vaccines may be unsuccessful. A significant drawback of commercial synthetic anti-flu drugs is the appearance of drug-resistant influenza virus strains. Therefore, the search for and development of new anti-flu drugs efficient against drug-resistant strains is an important medical problem today. The aim of this work was to study four phenolic acids of plant origin (gallic, syringic, vanillic, and protocatechuic acids) as possible tools for treatment against influenza virus. Methods: The phenolic acids gallic, syringic, vanillic, and protocatechuic were prepared by extraction from plant tissues and purified using high-performance liquid chromatography fractionation. An avian influenza virus, strain A/Tern/South Africa/1/1961 (H5N3), and a human epidemic influenza virus, strain A/Almaty/8/98 (H3N2), resistant to the commercial anti-flu drugs rimantadine and oseltamivir, were used for testing antiviral activity. Viruses were grown in the allantoic cavity of 10-day-old chicken embryos. The chemotherapeutic index (CTI), determined as the ratio of the 50% toxic concentration of the tested compound (TC₅₀) to the 50% effective virus-inhibiting concentration (EC₅₀), was used as the criterion of specific antiviral action. Results: The results of the study showed that the structure of the phenolic acids significantly affected their ability to suppress the reproduction of the tested influenza virus strains. The highest antiviral activity among the tested phenolic acids was detected for gallic acid, which contains three hydroxyl groups in the molecule, at the C3, C4, and C5 positions.
The antiviral activity of gallic acid against the A/H5N3 and A/H3N2 influenza virus strains was higher than that of oseltamivir and rimantadine: gallic acid inhibited almost 100% of the infectious activity of both tested viruses. Protocatechuic acid, which possesses two hydroxyl groups (C3 and C4), showed weaker antiviral activity than gallic acid and inhibited less than 10% of virus infectious activity. Syringic acid, in which the C3 and C5 positions carry methoxy groups, was able to suppress up to 12% of infectious activity; substitution of two hydroxyl groups by methoxy groups thus resulted in a near-complete loss of antiviral activity. Vanillic acid, which differs from protocatechuic acid in the replacement of the C3 hydroxyl group by a methoxy group, was able to suppress about 30% of the infectious activity of the tested influenza viruses. Conclusion: For pronounced antiviral activity, the phenolic acid molecule must have at least two hydroxyl groups. Replacement of hydroxyl groups with methoxy groups leads to a reduction of antiviral properties. Gallic acid demonstrated high antiviral activity against influenza viruses, including rimantadine- and oseltamivir-resistant strains, and could be used as a potential candidate for the development of an antiviral drug against influenza virus.
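The chemotherapeutic index used as the activity criterion is a simple ratio of the toxic to the effective concentration. A minimal sketch with hypothetical TC₅₀/EC₅₀ values (not the study's measurements):

```python
def chemotherapeutic_index(tc50: float, ec50: float) -> float:
    """CTI = TC50 / EC50: the 50% toxic concentration divided by the
    50% effective (virus-inhibiting) concentration, in the same units.
    A larger CTI means the compound inhibits the virus at doses well
    below its toxic level."""
    if ec50 <= 0:
        raise ValueError("EC50 must be positive")
    return tc50 / ec50

# Hypothetical compound: toxic at 800 ug/mL, effective at 10 ug/mL.
cti = chemotherapeutic_index(tc50=800.0, ec50=10.0)  # 80.0
```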

Keywords: antiviral activity, influenza virus, drug resistance, phenolic acids

Procedia PDF Downloads 110
58 Cross-Cultural Conflict Management in Transnational Business Relationships: A Qualitative Study with Top Executives in Chinese, German and Middle Eastern Cases

Authors: Sandra Hartl, Meena Chavan

Abstract:

This paper presents the outcome of four-year Ph.D. research on cross-cultural conflict management in transnational business relationships. An important and complex problem, managing conflicts that arise across cultures in business relationships, is investigated, and conflict resolution strategies are identified. The paper focuses in particular on transnational relationships within a Chinese, German and Middle Eastern framework. Unlike many papers on this issue, which have been built on experiments with international MBA students, this research provides real-life cases of cross-cultural conflicts, which are not easy to capture. Its uniqueness lies in the fact that the real case data were gathered by interviewing top executives in management positions in large multinational corporations, using a qualitative case study approach. This paper makes a valuable contribution to the theory of cross-cultural conflict and, despite the sensitivity of the subject, presents real business data about breaches of contract between counterparties engaged in transnational operating organizations. The overarching aim of the research is to identify the degree of significance of the cultural factors and the communication factors embedded in cross-cultural business conflicts. It asks, from a cultural perspective, which factors lead to the conflicts in each of the cases, what the causes are, and what the role of culture is in identifying effective strategies for resolving international disputes in an increasingly globalized business world. The results of 20 face-to-face interviews are outlined, which were conducted, recorded, transcribed and then analyzed using the NVivo qualitative data analysis system. The outcomes make evident that the factors leading to conflicts fall broadly under seven themes: communication, cultural difference, environmental issues, work structures, knowledge and skills, cultural anxiety and personal characteristics.
When evaluating the causes of the conflicts, it becomes apparent that these are multidimensional. Irrespective of the conflict type (relationship-based, task-based or due to individual personal differences), relationships are almost always an element of the conflict. Cultural differences, a critical factor in conflicts, result from different cultures placing different levels of importance on relationships. Communication issues, another cause of conflict, also reflect the different relationship styles favored by different cultures. In identifying effective strategies for solving cross-cultural business conflicts, this research finds that solutions need to consider the national cultures (country-specific characteristics), organizational cultures and individual culture of the persons engaged in the conflict, and how these are interlinked. The outcomes identify practical dispute resolution strategies for cross-cultural business conflicts with reference to communication, empathy and training to improve cultural understanding and cultural competence, through the use of mediation. To conclude, the findings of this research will not only add to academic knowledge of cross-cultural conflict management in transnational businesses but will also add value to numerous cross-border business relationships worldwide. Above all, the research identifies the influence of culture, communication and cross-cultural competence in reducing cross-cultural business conflicts in transnational business.

Keywords: business conflict, conflict management, cross-cultural communication, dispute resolution

Procedia PDF Downloads 125
57 Chemical and Biomolecular Detection at a Polarizable Electrical Interface

Authors: Nicholas Mavrogiannis, Francesca Crivellari, Zachary Gagnon

Abstract:

The development of low-cost, rapid, sensitive and portable biosensing systems is important for the detection and prevention of disease in developing countries, biowarfare/antiterrorism applications, environmental monitoring, point-of-care diagnostic testing and basic biological research. Currently, the most established, commercially available and widespread assays for portable point-of-care detection and disease testing are paper-based dipstick and lateral flow test strips. These paper-based devices are often small, cheap and simple to operate. The last three decades in particular have seen an emergence of these assays in diagnostic settings for the detection of pregnancy, HIV/AIDS, blood glucose, influenza, urinary protein, cardiovascular disease, respiratory infections and blood chemistries. Such assays are widely available largely because they are inexpensive, lightweight, portable and simple to operate, and a few platforms are capable of multiplexed detection of a small number of sample targets. However, there is a critical need for sensitive, quantitative and multiplexed detection capabilities for point-of-care diagnostics and for the detection and prevention of disease in the developing world that cannot be satisfied by current state-of-the-art paper-based assays. For example, applications including the detection of cardiac and cancer biomarkers and biothreat applications require sensitive multiplexed detection of analytes in the nM and pM range and cannot currently be served by inexpensive portable platforms, owing to their lack of sensitivity, limited quantitative capabilities and often unreliable performance. In this talk, inexpensive label-free biomolecular detection at liquid interfaces using a newly discovered electrokinetic phenomenon known as fluidic dielectrophoresis (fDEP) is demonstrated.
The electrokinetic approach involves exploiting the electrical mismatches between two aqueous liquid streams forced to flow side by side in a microfluidic T-channel. In this system, one fluid stream is engineered to have a higher conductivity relative to its neighbor, which has a higher permittivity. When a “low”-frequency (< 1 MHz) alternating current (AC) electric field is applied normal to this fluidic electrical interface, the high-conductivity fluid stream displaces into the low-conductivity stream. Conversely, when a “high”-frequency (20 MHz) AC electric field is applied, the high-permittivity stream deflects across the microfluidic channel. There is, however, a critical frequency sensitive to the electrical differences between the two fluid phases – the fDEP crossover frequency – between these two regimes, at which no fluid deflection is observed and the interface remains fixed when exposed to an external field. To perform biomolecular detection, two streams flow side by side in a microfluidic T-channel: one fluid stream carries an analyte of choice, and the adjacent stream carries a specific receptor for the chosen target. The two fluid streams merge, and the fDEP crossover frequency is measured at different axial positions down the resulting liquid interface.
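The crossover behavior can be illustrated with the classical Maxwell-Wagner picture: each phase is described by a complex permittivity ε* = ε − iσ/ω, and the real part of a Clausius-Mossotti-type factor changes sign at the crossover frequency. A rough numerical sketch (the spherical-particle CM factor is used as a stand-in for the exact fDEP interfacial model, and all property values are hypothetical):

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def re_cm_factor(freq_hz, eps1, sigma1, eps2, sigma2):
    """Real part of the Clausius-Mossotti factor (e1*-e2*)/(e1*+2*e2*)
    built from complex permittivities e* = eps - i*sigma/omega."""
    w = 2.0 * math.pi * freq_hz
    e1 = complex(eps1, -sigma1 / w)
    e2 = complex(eps2, -sigma2 / w)
    return ((e1 - e2) / (e1 + 2.0 * e2)).real

# Stream 1: higher conductivity, lower permittivity (hypothetical values).
# Stream 2: lower conductivity, higher permittivity.
eps1, sig1 = 60.0 * EPS0, 0.10
eps2, sig2 = 80.0 * EPS0, 0.01

# Scan frequency for the sign change of Re[K]: that is the crossover,
# below which conductivity dominates and above which permittivity does.
freqs = [10 ** (5 + 0.01 * k) for k in range(400)]  # 100 kHz .. ~1 GHz
crossover = next(f for f, g in zip(freqs, freqs[1:])
                 if re_cm_factor(f, eps1, sig1, eps2, sig2)
                 * re_cm_factor(g, eps1, sig1, eps2, sig2) < 0)
```

The sign convention matches the qualitative description in the abstract: the conductivity mismatch dominates at low frequency, the permittivity mismatch at high frequency, with a fixed interface at the crossover.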

Keywords: biodetection, fluidic dielectrophoresis, interfacial polarization, liquid interface

Procedia PDF Downloads 424
56 Bacterial Exposure and Microbial Activity in Dental Clinics during Cleaning Procedures

Authors: Atin Adhikari, Sushma Kurella, Pratik Banerjee, Nabanita Mukherjee, Yamini M. Chandana Gollapudi, Bushra Shah

Abstract:

Different sharp instruments, drilling machines, and high-speed rotary instruments are routinely used in dental clinics during dental cleaning. These cleaning procedures therefore release large numbers of oral microorganisms, including bacteria, into clinic air and may cause significant occupational bioaerosol exposure risks for dentists, dental hygienists, patients, and dental clinic employees. The two major goals of this study were to quantify volumetric airborne concentrations of bacteria and to assess overall microbial activity in this type of occupational environment. The study was conducted in several dental clinics of southern Georgia, and 15 dental cleaning procedures were targeted for sampling of airborne bacteria and testing of overall microbial activity in dust settled on clinic floors. For air sampling, a Biostage viable cascade impactor was utilized, which comprises an inlet cone, a precision-drilled 400-hole impactor stage, and a base that holds an agar plate (Tryptic soy agar). A high-flow Quick-Take-30 pump connected to this impactor pulls airborne microorganisms at a 28.3 L/min flow rate through the holes (jets), where they are collected on the agar surface for approx. five minutes. After sampling, agar plates containing the samples were placed in an ice chest with blue ice, and the plates were incubated at 30±2°C for 24 to 72 h. Colonies were counted, corrected using the positive-hole method, and converted to airborne concentrations (CFU/m3). The most abundant bacterial colonies (selected by visual screening) were identified by PCR amplicon sequencing of 16S rRNA genes. To assess overall microbial activity on clinic floors and estimate the general cleanliness of the clinic surfaces during or after dental cleaning procedures, ATP levels were determined in swabbed dust samples collected from 10 cm2 floor surfaces. The concentration of ATP may indicate both the cell viability and the metabolic status of the settled microorganisms in this situation. 
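The plate-count conversion described above can be sketched as follows. The 400-hole stage, 28.3 L/min flow rate and approx. five-minute sampling time are taken from the abstract; the standard Andersen-type positive-hole correction formula and the function names are assumptions for illustration, since the abstract does not spell them out.

```python
def positive_hole_correction(colonies, holes=400):
    """Expected number of viable particles given `colonies` occupied
    holes on a multi-hole impactor stage (standard Andersen-type
    correction, assumed here; the abstract does not give the formula)."""
    return holes * sum(1.0 / (holes - k) for k in range(colonies))

def cfu_per_m3(colonies, flow_lpm=28.3, minutes=5.0, holes=400):
    """Airborne concentration (CFU/m3) from a plate count, using the
    flow rate and sampling time stated in the abstract."""
    corrected = positive_hole_correction(colonies, holes)
    sampled_m3 = flow_lpm * minutes / 1000.0   # litres -> cubic metres
    return corrected / sampled_m3
```

With these settings a single colony corresponds to about 7 CFU/m3, and the correction grows as the plate fills, since late-arriving particles increasingly land in already-occupied holes.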
An ATP measuring kit was used, which utilized the standard luciferin-luciferase fluorescence reaction and a luminometer that quantified ATP levels as relative light units (RLU). Three air and dust samples were collected during each cleaning procedure (at the beginning, during cleaning, and immediately after the procedure was completed; n = 45). Concentrations at the beginning, during, and after dental cleaning procedures were 671±525, 917±1203, and 899±823 CFU/m3, respectively, for airborne bacteria and 91±101, 243±129, and 139±77 RLU/sample, respectively, for ATP levels. The concentrations of bacteria were significantly higher than in typical indoor residential environments. Although an increasing trend for airborne bacteria was observed during cleaning, the data collected at the three time points were not significantly different (ANOVA: p = 0.38), probably due to the high standard deviations of the data. The ATP levels, however, demonstrated a significant difference (ANOVA: p < 0.05), indicating a significant change in microbial activity on floor surfaces during dental cleaning. The most common bacterial genera identified were, in order of frequency of occurrence: Neisseria sp., Streptococcus sp., Chryseobacterium sp., Paenisporosarcina sp., and Vibrio sp. The study concluded that bacterial exposure in dental clinics could be a notable occupational biohazard, and appropriate respiratory protection for the employees is urgently needed.

Keywords: bioaerosols, hospital hygiene, indoor air quality, occupational biohazards

Procedia PDF Downloads 291
55 Higher Education in India: Strengths, Weaknesses, Opportunities and Threats

Authors: Renu Satish Nair

Abstract:

The Indian higher education system is the third largest in the world, after the United States and China. India is experiencing rapid growth in higher education in terms of student enrollment as well as the establishment of new universities, colleges and institutes of national importance. Presently about 22 million students are enrolled in higher education, and more than 46 thousand institutions are functioning as centers of higher education. The Indian government plays a 'command and control' role in higher education. The main governing body is the University Grants Commission, which enforces its standards, advises the government, and helps coordinate between the centre and the states. Accreditation of higher learning is overseen by 12 autonomous institutions established by the University Grants Commission. The present paper is an effort to analyze the strengths, weaknesses, opportunities and threats (SWOT analysis) of the Indian higher education system. Higher education in India is progressing by virtue of strengths that are being recognized at the global level. Several institutions of India, such as the Indian Institutes of Technology (IITs), Indian Institutes of Management (IIMs) and National Institutes of Technology (NITs), have been globally acclaimed for their standard of education. Three Indian universities, the Indian Institutes of Technology, the Indian Institutes of Management and Jawaharlal Nehru University, were listed in the Times Higher Education list of the world’s top 200 universities in 2005 and 2006. Six Indian Institutes of Technology and the Birla Institute of Technology and Science, Pilani were listed among the top 20 science and technology schools in Asia by Asiaweek. The Indian School of Business in Hyderabad was ranked number 12 in the Global MBA Ranking by the Financial Times of London in 2010, while the All India Institute of Medical Sciences has been recognized as a global leader in medical research and treatment. 
But at the same time, because of this vast expansion, the system bears several weaknesses. The Indian higher education system in many parts of the country is in a state of disrepair. In almost half the districts in the country, higher education enrollment is very low. Almost two-thirds of all universities and 90% of colleges are rated below average on quality parameters. This can be attributed to underprepared faculty, unwieldy governance and other obstacles to innovation and improvement that could prevent India from meeting its national education goals. The opportunities in the Indian higher education system are wide-ranging. The national institutions train their students to compete at the global level and make them capable of grabbing opportunities worldwide. The state universities and colleges, with their limited resources, produce graduates who are capable enough to secure career opportunities and hold responsible positions in various government and private sectors within the country. This further creates opportunities for weaker sections of society to join the mainstream. There are several factors that can be defined as threats to the Indian higher education system; they are a matter of great concern and need proper attention. Some important factors are: a conservative society, particularly regarding women's education; lack of transparency; and the treatment of higher education as a business.

Keywords: Indian higher education system, SWOT analysis, university grants commission, Indian institutes of technology

Procedia PDF Downloads 860
54 The Gender Criteria of Film Criticism: Creating the ‘Big’, Avoiding the Important

Authors: Eleni Karasavvidou

Abstract:

Social and anthropological research, parallel to Gender Studies, has highlighted the relationship between social structures and symbolic forms as an important field of interaction and recording of 'social trends,' since the study of representations can contribute to the understanding of the social functions and power relations they encompass. This ‘mirage,’ however, has not only to do with the representations themselves but also with the ways they are received and with the film or critical narratives that are established as dominant or alternative. Cinema and the criticism of its cultural products are no exception. Even in the rapidly changing media landscape of the 21st century, movies remain an integral and widespread part of popular culture, making films an extremely powerful means of 'legitimizing' or 'delegitimizing' visions of domination and commonsensical gender stereotypes throughout society. And yet it is film criticism, the 'language per se,' that legitimizes, reinforces, rewards and reproduces (or at least ignores) the stereotypical depictions of female roles that remain common in the realm of film images. Hence the need for this issue to be examined in academic research questioning the gender criteria of film reviews, as part of the effort towards an inclusive art and society. Qualitative content analysis is used to examine female roles in selected Oscar-nominated films against their reviews from leading websites and newspapers. This method was chosen because of the complex nature of the depictions in the films and the narratives they evoke. The films were divided into basic scenes depicting social functions, such as love and work relationships and positions of power and their function, which were analyzed by content analysis, with borrowings from structuralism (Genette) and the local/universal images of intercultural philology (Wierlacher). 
In addition to measuring the general ‘representation time’ by gender, other qualitative characteristics were analyzed, such as speaking time, sayings or key actions, and the overall quality of the character's action in relation to the development of the scenario and social representations in general, as well as quantitative ones (the insufficient number of female lead roles, fewer key supporting roles, and the relatively few female directors and people in the production chain, and how these might affect screen representations). The quantitative analysis in this study was used to complement the qualitative content analysis. The focus then shifted to the criteria of film criticism and to the rhetorical narratives that exclude or highlight in relation to gender identities and functions. In the criteria and language of film criticism, stereotypes are often reproduced or allegedly overturned within the framework of apolitical "identity politics," which mainly addresses the surface of a self-referential cultural-consumer product without connecting it more deeply with material and cultural life. One of the prime examples of this failure is the Bechdel Test, which tracks whether female characters speak in a film regardless of whether women's stories are actually represented in the films analyzed. If supposedly unbiased male filmmakers still fail to tell truly feminist stories, the same is the case with the criteria of criticism and the related interventions.

Keywords: representations, context analysis, reviews, sexist stereotypes

Procedia PDF Downloads 56
53 Momentum Profits and Investor Behavior

Authors: Aditya Sharma

Abstract:

Profits earned from the relative strength strategy of a zero-cost portfolio, i.e. taking a long position in winner stocks and a short position in loser stocks from the recent past, are termed momentum profits. In recent times, there has been a lot of controversy and concern about the sources of momentum profits, since the existence of these profits acts as evidence of earning non-normal returns from publicly available information, directly contradicting the Efficient Market Hypothesis. A literature review reveals conflicting theories and differing evidence on the sources of momentum profits. This paper aims at re-examining the sources of momentum profits in Indian capital markets. The study focuses on assessing the effect of fundamental as well as behavioral sources in order to understand the role of investor behavior in stock returns and to suggest (if any) improvements to existing behavioral asset pricing models. This paper adopts the calendar-time methodology to calculate momentum profits for six different J/K strategies, with and without skipping a month between the ranking and holding periods. For each J/K strategy, under this methodology, at the beginning of each month t stocks are ranked on the past J months' average returns and sorted in descending order. Stocks in the upper decile are termed winners and those in the bottom decile losers. After ranking, long and short positions are taken in winner and loser stocks respectively, and both portfolios are held for the next K months, in such a manner that at any given point of time there are K overlapping long and K overlapping short portfolios, formed from month t-1 back to month t-K. At the end of the period, the returns of both the long and short portfolios are calculated by taking an equally weighted average across all months. The long-minus-short (LMS) returns are the momentum profits for each strategy. After testing for momentum profits, to study the role market risk plays in them, CAPM and Fama-French three-factor model adjusted LMS returns are calculated. 
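The calendar-time J/K construction above can be sketched in a few lines. Equal-weighted decile portfolios are assumed, and the function and variable names are illustrative rather than taken from the paper.

```python
import numpy as np

def momentum_lms(returns, J=6, K=6, skip=0):
    """Calendar-time J/K momentum profits: each month t averages the
    long-minus-short (LMS) return over the K overlapping cohorts
    formed in months t-1 ... t-K. A cohort ranks stocks on the prior
    J months' mean return (optionally skipping `skip` months) and
    goes long the top decile, short the bottom decile.
    `returns` is an (n_months, n_stocks) array of simple returns."""
    n_months, n_stocks = returns.shape
    cut = max(1, n_stocks // 10)                   # decile size
    lms = []
    for t in range(J + K + skip, n_months):
        cohort = []
        for k in range(1, K + 1):
            f = t - k                              # cohort formation month
            rank = returns[f - J - skip:f - skip].mean(axis=0)
            order = np.argsort(rank)
            losers, winners = order[:cut], order[-cut:]
            # month-t equal-weighted LMS return of this cohort
            cohort.append(returns[t, winners].mean() - returns[t, losers].mean())
        lms.append(np.mean(cohort))                # average over K cohorts
    return np.array(lms)

rng = np.random.default_rng(0)
rets = rng.normal(0.01, 0.05, size=(60, 50))       # 60 months, 50 stocks
profits = momentum_lms(rets)                       # one LMS value per month
```

On i.i.d. random returns, as here, the LMS series should hover around zero; a persistent positive mean in real data is what the paper calls momentum profits.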
In the final phase of studying the sources, a decomposition methodology has been used for breaking the profits up into unconditional means, serial correlations, and cross-serial correlations. This methodology is unbiased, can be used with the decile-based approach, and makes it possible to test the effects of the behavioral and fundamental sources together. The analysis found that momentum profits do exist in Indian capital markets, with market risk playing little role in defining them. It was also observed that although momentum profits have multiple sources (risk, serial correlations, and cross-serial correlations), cross-serial correlations, i.e. the effect of the returns of other stocks, play the major role. This means that in addition to studying investors' reactions to information about the same firm, it is also important to study how they react to information about other firms. The analysis confirms that investor behavior does play an important role in stock returns and that incorporating both aspects of investors' reactions in behavioral asset pricing models helps make them better.

Keywords: investor behavior, momentum effect, sources of momentum, stock returns

Procedia PDF Downloads 282
52 The Effects of the GAA15 (Gaelic Athletic Association 15) on Lower Extremity Injury Incidence and Neuromuscular Functional Outcomes in Collegiate Gaelic Games: A 2 Year Prospective Study

Authors: Brenagh E. Schlingermann, Clare Lodge, Paula Rankin

Abstract:

Background: Gaelic football, hurling and camogie are highly popular field games in Ireland. Research into the epidemiology of injury in Gaelic games has revealed that approximately three quarters of the injuries in these games occur in the lower extremity. These injuries can have player, team and institutional impacts due to multiple factors, including financial burden and time lost from competition. Research has shown it is possible to record injury data consistently within the GAA through a closed online recording system known as the GAA injury surveillance database. It has been established that determining the incidence of injury is the first step of injury prevention. The goal of this study was to create a dynamic GAA15 injury prevention programme addressing five key components: avoiding positions associated with a high risk of injury, enhancing flexibility, enhancing strength, optimizing plyometrics and addressing sport-specific agilities. These key components are internationally recognized through the Prevent Injury, Enhance Performance (PEP) programme, which has been shown to reduce ACL injuries by 74%. In national Gaelic games the programme, devised from the principles of the PEP, is known as the GAA15. No such injury prevention strategies have been published for this cohort in Gaelic games to date. This study investigates the effects of the GAA15 on injury incidence and neuromuscular function in Gaelic games. Methods: A total of 154 players (mean age 20.32 ± 2.84) were recruited from the GAA teams within the Institute of Technology Carlow (ITC). Preseason and post-season testing involved two objective screening tests: the Y Balance Test and the Three Hop Test. Practical workshops, with ongoing liaison, were provided to the coaches on the implementation of the GAA15. 
The programme was performed before every training session and game, and the existing GAA injury surveillance database was accessed by the college sports rehabilitation athletic therapist to monitor players' injuries. Retrospective analysis of the ITC clinic records was performed in conjunction with the database analysis as a means of tracking injuries that may have been missed. The effects of the programme were analysed by comparing the intervention group's Y Balance and Three Hop Test scores to those of an age- and gender-matched control group. Results: Year 1 results revealed significant increases in neuromuscular function as a result of the GAA15. Y Balance Test scores for the intervention group increased in both the posterolateral (p=.005 and p=.001) and posteromedial reach directions (p=.001 and p=.001). A decrease in performance was found for the Three Hop Test (p=.039). Overall, twenty-five injuries were reported during the season, resulting in an injury rate of 3.00 injuries/1000 hrs of participation: 1.25 injuries/1000 hrs of training and 4.25 injuries/1000 hrs of match play. Non-contact injuries accounted for 40% of the injuries sustained. Year 2 results are pending and expected in April 2016. Conclusion: It is envisaged that implementation of the GAA15 will continue to reduce the risk of injury and improve neuromuscular function in collegiate Gaelic games athletes.
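For context, Y Balance Test reach distances are commonly normalized to limb length. A minimal sketch of one common composite formula follows; the abstract does not state its scoring method, so the normalization and names here are assumptions for illustration.

```python
def y_balance_composite(anterior, posteromedial, posterolateral, limb_length):
    """Composite Y Balance Test score as a percentage of limb length
    (a common convention, assumed here; the abstract does not give
    its formula). All distances share one unit, e.g. centimetres."""
    return (anterior + posteromedial + posterolateral) / (3.0 * limb_length) * 100.0

# e.g. reaches of 60, 100 and 95 cm on an 85 cm limb
score = y_balance_composite(60.0, 100.0, 95.0, 85.0)
```

Normalizing to limb length is what allows per-direction and composite scores to be compared across players of different statures, as in the group comparisons above.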

Keywords: GAA15, Gaelic games, injury prevention, neuromuscular training

Procedia PDF Downloads 318
51 Enhancing Seismic Resilience in Urban Environments

Authors: Beatriz González-Rodrigo, Diego Hidalgo-Leiva, Omar Flores, Claudia Germoso, Maribel Jiménez-Martínez, Laura Navas-Sánchez, Belén Orta, Nicola Tarque, Orlando Hernández-Rubio, Miguel Marchamalo, Juan Gregorio Rejas, Belén Benito-Oterino

Abstract:

Cities facing seismic hazard necessitate detailed risk assessments for effective urban planning and vulnerability identification, ensuring the safety and sustainability of urban infrastructure. Comprehensive studies involving seismic hazard, vulnerability, and exposure evaluations are pivotal for estimating potential losses and guiding proactive measures against seismic events. However, broad-scale traditional risk studies limit the consideration of specific local threats and the identification of vulnerable housing within a structural typology. Achieving precise results at neighbourhood levels demands higher-resolution seismic hazard, exposure, and vulnerability studies. This research aims to bolster sustainability and safety against seismic disasters in three Central American and Caribbean capitals. It integrates geospatial techniques and artificial intelligence into seismic risk studies, proposing cost-effective methods for exposure data collection and damage prediction. The methodology relies on prior seismic threat studies in pilot zones, utilizing existing exposure and vulnerability data in the region. Emphasizing detailed building attributes enables the consideration of behaviour modifiers affecting seismic response. The approach aims to generate detailed risk scenarios, facilitating the prioritization of preventive actions before, during, and after seismic events, and enhancing decision-making certainty. Detailed risk scenarios necessitate substantial investment in fieldwork, training, research, and methodology development. Regional cooperation becomes crucial given the similar seismic threats, urban planning, and construction systems among the involved countries. The outcomes hold significance for emergency planning and for national and regional construction regulations. The success of this methodology depends on cooperation, investment, and innovative approaches, offering insights and lessons applicable to regions facing moderate seismic threats with vulnerable constructions. 
Thus, this framework aims to fortify resilience in seismic-prone areas and serves as a reference for global urban planning and disaster management strategies. In conclusion, this research proposes a comprehensive framework for seismic risk assessment in high-risk urban areas, emphasizing detailed studies at finer resolutions for precise vulnerability evaluations. The approach integrates regional cooperation, geospatial technologies, and adaptive fragility curve adjustments to enhance risk assessment accuracy, guiding effective mitigation strategies and emergency management plans.

Keywords: assessment, behaviour modifiers, emergency management, mitigation strategies, resilience, vulnerability

Procedia PDF Downloads 43
50 Innovation in PhD Training in the Interdisciplinary Research Institute

Authors: B. Shaw, K. Doherty

Abstract:

The Cultural Communication and Computing Research Institute (C3RI) is a diverse multidisciplinary research institute including art, design, media production, communication studies, computing and engineering. Across these disciplines there can seem to be enormous differences in research practice and convention, including differing positions on objectivity and subjectivity, certainty and evidence, and different political and ethical parameters. These differences sit within the often unacknowledged histories, codes, and communication styles of specific disciplines, and it is all these aspects that can make understanding research practice across disciplines difficult. To explore this, a one-day event was orchestrated, testing how a PhD community might communicate and share research in progress in a multidisciplinary context. Instead of presenting results at a conference, research students were tasked with articulating their method of inquiry. A working party of students from across disciplines had to design a conference call, a visual identity and an event framework that would work for students across all disciplines. The process of establishing the shape and identity of the conference was revealing. Even finding a linguistic frame that would meet the expectations of different disciplines for the conference call was challenging. The first abstracts submitted either resorted to reporting findings or described method only briefly. It took several weeks of supported intervention for research students to get ‘inside’ their method and to understand their research practice as a process rich with philosophical and practical decisions and implications. In response to the abstracts, the conference committee generated key methodological categories for conference sessions, including sampling, capturing ‘experience’, ‘making models’, researcher identities, and ‘constructing data’. 
Each session involved presentations by visual artists, communications students and computing researchers, with interdisciplinary dialogue facilitated by alumni chairs. The apparently simple focus on method illuminated the research process as a site of creativity, innovation and discovery, and also built epistemological awareness, drawing attention to what is being researched and how it can be known. It was surprisingly difficult to limit students to discussing method, and it was apparent that the vocabulary available for method is sometimes limited. However, by focusing on method rather than results, the genuine process of research, rather than one constructed for approval, could be captured. In unlocking the twists and turns of planning and implementing research, and the impact of circumstance and contingency, students had to reflect frankly on successes and failures. This level of self- and public critique emphasised the degree of critical thinking and rigour required in executing research and demonstrated that honest reportage of research, faults and all, is good, valid research. The process also revealed the degree to which disciplines can learn from each other: the computing students gained insights from the sensitive social contextualizing generated by the communications and art and design students, and the art and design students gained understanding from the greater ‘distance’ and emphasis on application that the computing students applied to their subjects. Finding the means to develop dialogue across disciplines makes researchers better equipped to devise and tackle research problems across disciplines, potentially laying the ground for more effective collaboration.

Keywords: interdisciplinary, method, research student, training

Procedia PDF Downloads 184
49 Rethinking the Languages for Specific Purposes Syllabus in the 21st Century: Topic-Centered or Skills-Centered

Authors: A. Knezović

Abstract:

The 21st century has transformed the labor market landscape, posing new and different demands on university graduates as well as on university lecturers: the knowledge and academic skills students acquire in the course of their studies should be applicable and transferable from the higher education context to their future professional careers. In the context of the Languages for Specific Purposes (LSP) classroom, the teacher's objective is not only to teach the language itself, but also to prepare students to use that language as a medium to develop generic skills and competences. These include media and information literacy, critical and creative thinking, problem-solving and analytical skills, effective written and oral communication, as well as collaborative work and social skills, all of which are necessary to make university graduates more competitive in everyday professional environments. On the other hand, due to limitations of time and large numbers of students in classes, the frequently topic-centered syllabus of LSP courses places considerable focus on acquiring the subject matter and specialist vocabulary instead of sufficiently developing the skills and competences required by students' prospective employers. This paper intends to explore some of those issues as viewed both by LSP lecturers and by business professionals in their respective surveys. The surveys were conducted among more than 50 LSP lecturers at higher education institutions in Croatia, more than 40 HR professionals and more than 60 university graduates with degrees in economics and/or business working in management positions in mainly large and medium-sized companies in Croatia. Various elements of LSP course content have been taken into consideration in this research, including reading and listening comprehension of specialist texts, acquisition of specialist vocabulary and grammatical structures, as well as presentation and negotiation skills. 
The ability to hold meetings, conduct business correspondence, write reports, academic texts, case studies and take part in debates were also taken into consideration, as well as informal business communication, business etiquette and core courses delivered in a foreign language. The results of the surveys conducted among LSP lecturers will be analyzed with reference to what extent those elements are included in their courses and how consistently and thoroughly they are evaluated according to their course requirements. Their opinions will be compared to the results of the surveys conducted among professionals from a range of industries in Croatia so as to examine how useful and important they perceive the same elements of the LSP course content in their working environments. Such comparative analysis will thus show to what extent the syllabi of LSP courses meet the demands of the employment market when it comes to the students’ language skills and competences, as well as transferable skills. Finally, the findings will also be compared to the observations based on practical teaching experience and the relevant sources that have been used in this research. In conclusion, the ideas and observations in this paper are merely open-ended questions that do not have conclusive answers, but might prompt LSP lecturers to re-evaluate the content and objectives of their course syllabi.

Keywords: languages for specific purposes (LSP), language skills, topic-centred syllabus, transferable skills

Procedia PDF Downloads 286
48 Subway Ridership Estimation at a Station-Level: Focus on the Impact of Bus Demand, Commercial Business Characteristics and Network Topology

Authors: Jungyeol Hong, Dongjoo Park

Abstract:

The primary purpose of this study is to develop a methodological framework to predict daily subway ridership at a station level and to examine the association between subway ridership and bus demand, incorporating the commercial business facilities in the vicinity of each subway station. Socio-economic characteristics, land use, and the built environment may all have an impact on subway ridership. However, the analysis should consider not only the endogenous relationship between bus and subway demand but also the characteristics of commercial business within a subway station's sphere of influence and the topology of the integrated transit network. A statistical approach to estimating subway ridership at a station level should therefore account for the endogeneity and heteroscedasticity issues that may arise in the prediction model. This study focused both on discovering the impacts of bus demand, commercial business characteristics, and network topology on subway ridership, and on developing a more precise subway ridership estimate that accounts for this statistical bias. The spatial scope of the study covers the entire city of Seoul in South Korea and includes 243 stations, with the temporal scope set at twenty-four hours in one-hour interval time panels. The subway and bus ridership data were collected from Seoul Smart Card records for 2015 and 2016. A Three-Stage Least Squares (3SLS) approach was applied to develop the daily subway ridership model, capturing the endogeneity and heteroscedasticity between bus and subway demand. The independent variables incorporated in the modeling process were commercial business characteristics, socio-economic characteristics, a safety index, transit facility attributes, and dummies for seasons and time zones. 
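As a hedged illustration of the estimation idea: 3SLS stacks equation-by-equation two-stage least squares (2SLS) and adds a GLS step across equations, so the sketch below shows only the single-equation 2SLS building block. The variable roles are hypothetical stand-ins, not the paper's actual specification.

```python
import numpy as np

def two_sls(y, endog, exog, instruments):
    """Single-equation two-stage least squares, the building block
    that 3SLS extends with a cross-equation GLS step. All inputs are
    (n, k) column arrays. Hypothetical roles: `endog` could be bus
    ridership, `exog` station-area controls, and `instruments`
    excluded exogenous variables that shift bus demand only."""
    n = len(y)
    ones = np.ones((n, 1))
    Z = np.hstack([ones, exog, instruments])   # full exogenous set
    X = np.hstack([ones, exog, endog])         # structural regressors
    # stage 1: project the structural regressors on the instrument set
    X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
    # stage 2: OLS of y on the fitted regressors
    return np.linalg.lstsq(X_hat, y, rcond=None)[0]

# simulated check: the common shock u makes naive OLS biased,
# while 2SLS recovers the true endogenous coefficient of 2.0
rng = np.random.default_rng(42)
n = 5000
z = rng.normal(size=(n, 1))                    # instrument
w = rng.normal(size=(n, 1))                    # exogenous control
u = rng.normal(size=(n, 1))                    # common shock (endogeneity)
endog = z + u + 0.1 * rng.normal(size=(n, 1))
y = 0.5 + 2.0 * endog + 1.0 * w - u + 0.1 * rng.normal(size=(n, 1))
beta = two_sls(y, endog, w, z)                 # [const, w, endog] coefficients
```

The simulation makes the endogeneity concrete: the shock u raises both the regressor and lowers y, so OLS would understate the coefficient, which is exactly the bias the paper's 3SLS design is meant to remove.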
As a result, it was found that bus ridership and subway ridership are endogenous to each other, with significantly positive coefficients, meaning that one transit mode can increase the other's ridership. In other words, subway and bus have a mutual rather than a competitive relationship. The commercial business characteristics are the most critical dimension among the independent variables. The commercial business facility rate variables comprise six types: medical, educational, recreational, financial, food service, and shopping. From the model results, higher rates of medical, financial, shopping, and food service facilities lead to an increment in subway ridership at a station, while recreational and educational facilities are associated with lower subway ridership. Complex network theory was applied to estimate integrated network topology measures covering the entire Seoul transit network system and to provide a framework for assessing their impact on subway ridership. The centrality measures were found to be significant, with a positive sign indicating that higher centrality leads to more subway ridership at a station level. The out-of-sample model accuracy tests showed that the 3SLS model has a lower mean square error than OLS, indicating that the 3SLS approach is plausible for estimating subway ridership more accurately. Acknowledgement: This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Science and ICT (2017R1C1B2010175).

Keywords: subway ridership, bus ridership, commercial business characteristic, endogeneity, network topology

Procedia PDF Downloads 123
47 Reduced General Dispersion Model in Cylindrical Coordinates and Isotope Transient Kinetic Analysis in Laminar Flow

Authors: Masood Otarod, Ronald M. Supkowski

Abstract:

This abstract discusses a method that reduces the general dispersion model in cylindrical coordinates to a second-order linear ordinary differential equation with constant coefficients, so that it can be used to conduct kinetic studies in packed-bed tubular catalytic reactors over a broad range of Reynolds numbers. The model was tested by 13CO isotope transient tracing of CO adsorption in the Boudouard reaction in a differential reactor, at an average Reynolds number of 0.2, over a Pd-Al2O3 catalyst. Detailed experimental results have provided evidence for the validity of the theoretical framing of the model, and the estimated parameters are consistent with the literature. The solution of the general dispersion model requires knowledge of the radial distribution of the axial velocity, which is not always known. Hence, up until now, the implementation of the dispersion model has been largely restricted to the plug-flow regime. However, ideal plug flow is impossible to achieve, and flow regimes that only approximate it leave much room for debate as to the validity of the results. The reduction of the general dispersion model transpires as a result of the application of a factorization theorem. The factorization theorem is derived from the observation that a cross section of a catalytic bed consists of a solid phase, across which the reaction takes place, and a void or porous phase, across which no significant measure of reaction occurs. The disparity in flow and the heterogeneity of the catalytic bed cause the concentrations of reacting compounds to fluctuate radially. These variabilities signify the existence of radial positions at which the radial gradient of concentration is zero. Succinctly, the factorization theorem states that a concentration function of the axial and radial coordinates in a catalytic bed is factorable as the product of the mean radial cup-mixing function and a contingent dimensionless function.
The concentrations of adsorbed compounds are also factorable, since they are piecewise continuous functions and suffer the same variability, but in the reverse order of the concentrations of the mobile-phase compounds. Factorability is a property of packed beds that transforms the general dispersion model into an equation in terms of the measurable mean radial cup-mixing concentration of the mobile-phase compounds and the mean cross-sectional concentration of adsorbed species. The reduced model does not require knowledge of the radial distribution of the axial velocity. Instead, it is characterized by new transport parameters, denoted Ωc, Ωa, and Ωr, which are respectively denominated the convection coefficient cofactor, the axial dispersion coefficient cofactor, and the radial dispersion coefficient cofactor. These cofactors adjust the dispersion equation as compensation for the unavailability of the radial distribution of the axial velocity. Together with the rest of the kinetic parameters, they can be determined from experimental data via an optimization procedure. Our data showed that the estimated parameters Ωc, Ωa, and Ωr are monotonically correlated with the Reynolds number, which is expected based on the theoretical construct of the model. Computer-generated simulations of the methanation reaction on nickel provide additional support for the utility of the newly conceptualized dispersion model.
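The abstract does not give the reduced equation explicitly, so the following is a generic sketch of what a second-order linear ODE with constant coefficients of the stated kind looks like once the cofactors fold into effective transport parameters: a steady-state dispersion-convection-reaction balance D C'' - u C' - k C = 0. All values (D, u, k, the bed length L, and the Dirichlet boundary conditions) are assumed for illustration, not the authors' estimates.

```python
import numpy as np

# Assumed effective parameters: dispersion D, velocity u, first-order rate k,
# bed length L. The ODE D*C'' - u*C' - k*C = 0 has constant coefficients,
# so it is solved exactly by exponentials of the characteristic roots.
D, u, k, L = 1e-4, 0.01, 0.5, 0.1

# Characteristic roots of D*m**2 - u*m - k = 0.
disc = np.sqrt(u**2 + 4.0 * D * k)
m1 = (u + disc) / (2.0 * D)
m2 = (u - disc) / (2.0 * D)

# Assumed Dirichlet boundary conditions: C(0) = 1 at the inlet, C(L) = 0.
# Solve for the two exponential amplitudes A1, A2.
A1, A2 = np.linalg.solve([[1.0, 1.0],
                          [np.exp(m1 * L), np.exp(m2 * L)]],
                         [1.0, 0.0])

def C(x):
    """Closed-form concentration profile C(x) = A1*exp(m1*x) + A2*exp(m2*x)."""
    return A1 * np.exp(m1 * x) + A2 * np.exp(m2 * x)
```

This is exactly why the reduction matters in practice: once the model collapses to constant coefficients, the profile is available in closed form and the cofactors can be fitted to transient tracer data by ordinary least-squares optimization rather than by solving the full radially resolved PDE.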

Keywords: factorization, general dispersion model, isotope transient kinetic, partial differential equations

Procedia PDF Downloads 244
46 Dysphagia Tele Assessment Challenges Faced by Speech and Swallow Pathologists in India: Questionnaire Study

Authors: B. S. Premalatha, Mereen Rose Babu, Vaishali Prabhu

Abstract:

Background: Dysphagia must be assessed, either subjectively or objectively, in order to properly address the swallowing difficulty. Providing therapeutic care to patients with dysphagia via tele mode was one approach to delivering clinical services during the COVID-19 pandemic. As a result, the teleassessment of dysphagia has increased in India. Aim: This study aimed to identify the challenges faced by Indian SLPs while providing teleassessment to individuals with dysphagia during the outbreak of COVID-19 from 2020 to 2021. Method: The current study was carried out after receiving approval from the institute's institutional review board and ethics committee. The study was cross-sectional in nature and lasted from 2020 to 2021. Participants who met the inclusion and exclusion criteria were enrolled. Based on the sample size calculations, roughly 246 people were to be recruited. The research was done in three stages: questionnaire development, content validation, and questionnaire administration. Five speech and hearing professionals verified the questionnaire's content for faults and clarity. Participants received the questionnaire, written in Microsoft Word and then converted to Google Forms, via social media platforms such as e-mail and WhatsApp. SPSS software was used to examine the data. Results: The study's findings were examined in light of the obstacles that Indian SLPs encounter. Only 135 people responded. During the COVID-19 lockdowns, 38% of participants said they did not deal with dysphagia patients. After the lockdown, 70.4% of SLPs kept working with dysphagia patients, while 29.6% did not. The main problems in completing the tele-evaluation of dysphagia were highlighted from the oromotor examination onward.
Around 37.5% of SLPs said they do not undertake the OPME online because of difficulties in conducting the evaluation, such as the need for repeated instructions to patients and family members and trouble visualizing structures in various positions. For the majority of SLPs, online assessment was inefficient and time-consuming. A larger percentage of SLPs stated that they would not recommend tele-evaluation of dysphagia to their colleagues. SLPs' use of dysphagia assessment has decreased as a result of the pandemic. Regarding the amount of food, the majority proposed a small quantity. Apart from positioning the patient for assessment and limited cooperation from the family, most SLPs found Internet speed to be a source of concern and a barrier. Hearing impairment and the presence of a tracheostomy in patients with dysphagia proved to be the most difficult conditions to manage online. For patients on NPO status, the majority of SLPs did not advise tele-evaluation. Oral food residue was more visible in the anterior region of the oral cavity, and the majority of SLPs reported more anterior than posterior leakage. While the majority of SLPs could detect aspiration through coughing, many found it difficult to discern a gurgly voice quality after swallowing. Conclusion: The current study sheds light on the difficulties that Indian SLPs experience when assessing dysphagia via tele mode, indicating that tele-assessment of dysphagia has yet to gain importance in India.

Keywords: dysphagia, teleassessment, challenges, Indian SLP

Procedia PDF Downloads 104
45 A Model to Assess Sustainability Using Multi-Criteria Analysis and Geographic Information Systems: A Case Study

Authors: Antonio Boggia, Luisa Paolotti, Gianluca Massei, Lucia Rocchi, Elaine Pace, Maria Attard

Abstract:

The aim of this paper is to present a methodology and a computer model for sustainability assessment based on the integration of Multi-Criteria Decision Analysis (MCDA) with a Geographic Information System (GIS). It presents the results of a study implementing a model for measuring sustainability, intended to inform policy actions for improving sustainability at the territorial level. The aim is to rank areas in order to understand the specific technical and/or financial support required to develop sustainable growth. Assessing sustainable development is a multidimensional problem: economic, social, and environmental aspects have to be taken into account at the same time. The tool for a multidimensional representation is a proper set of indicators. The set of indicators must be integrated into a model, that is, an assessment methodology, to be used for measuring sustainability. The model, developed by the Environmental Laboratory of the University of Perugia, is called GeoUmbriaSUIT. It is a calculation procedure developed as a plugin for the open-source GIS software QuantumGIS. The multi-criteria method used within GeoUmbriaSUIT is the TOPSIS algorithm (Technique for Order Preference by Similarity to Ideal Solution), which defines a ranking based on the distance from a worst point and the closeness to an ideal point, computed over each of the criteria used. For the sustainability assessment procedure, GeoUmbriaSUIT uses a geographic vector file in which the graphic data represent the study area and the single evaluation units within it (the alternatives, e.g. the regions of a country or the municipalities of a region), while the alphanumeric data (attribute table) describe the environmental, economic, and social aspects related to the evaluation units by means of a set of indicators (criteria).
The algorithm available in the plugin allows the indicators representing the three dimensions of sustainability to be treated individually, computing three different indices: an environmental index, an economic index, and a social index. The graphic output of the model allows for an integrated assessment of the three dimensions while avoiding aggregation. The separate indices and graphic output make GeoUmbriaSUIT a readable and transparent tool, since it does not produce an aggregate sustainability index as the final result of the calculations, which is often cryptic and difficult to interpret. In addition, it is possible to develop a “back analysis” able to explain the positions obtained by the alternatives in the ranking, based on the criteria used. The case study presented is an assessment of the level of sustainability in the six regions of Malta, an island state in the middle of the Mediterranean Sea and the southernmost member of the European Union. The results show that the integration of MCDA and GIS is an adequate approach for sustainability assessment. In particular, the implemented model is able to provide easy-to-understand results. This is a very important condition for a sound decision support tool, since decision makers are most often not experts and need understandable output. In addition, the evaluation path is traceable and transparent.
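The TOPSIS step described above (ranking alternatives by closeness to an ideal point and distance from a worst point) can be sketched in a few lines. The small decision matrix, the weights, and the benefit/cost labels below are hypothetical illustrations, not GeoUmbriaSUIT's actual indicators or data.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) against criteria (columns) by TOPSIS.

    benefit[j] is True for criteria where larger is better, False for costs.
    Returns a closeness score in [0, 1] per row; higher is better.
    """
    X = np.asarray(matrix, dtype=float)
    # Vector-normalize each criterion column, then apply the weights.
    V = weights * X / np.linalg.norm(X, axis=0)
    # Ideal point: best value per criterion; worst point: worst value.
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_ideal = np.linalg.norm(V - ideal, axis=1)
    d_worst = np.linalg.norm(V - worst, axis=1)
    return d_worst / (d_ideal + d_worst)

# Three hypothetical evaluation units scored on two benefit criteria
# and one cost criterion (e.g. an emissions indicator).
scores = topsis([[7, 9, 9],
                 [8, 7, 8],
                 [9, 6, 7]],
                weights=np.array([0.4, 0.3, 0.3]),
                benefit=np.array([True, True, False]))
ranking = np.argsort(scores)[::-1]   # best-ranked unit first
```

Run once per dimension (environmental, economic, social) on the corresponding subset of indicator columns, this reproduces the plugin's idea of three separate, non-aggregated indices.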

Keywords: GIS, multi-criteria analysis, sustainability assessment, sustainable development

Procedia PDF Downloads 249