1031 A Triad Pedagogy for Increased Digital Competence of Human Resource Management Students: Reflecting on Human Resource Information Systems at a South African University
Authors: Esther Pearl Palmer
Abstract:
Driven by the increased pressure on Higher Education Institutions (HEIs) to produce work-ready graduates for the modern world of work, this study reflects on triad teaching and learning practices to increase student engagement and employability. In the South African higher education context, the employability of graduates is imperative in strengthening the country’s economy and in increasing competitiveness. Within this context, the field of Human Resource Management (HRM) calls for innovative methods and approaches to teaching, learning and assessing the skills and competencies of graduates to render them employable. Digital competency in Human Resource Information Systems (HRIS) is an important component and prerequisite for employment in HRM. The purpose of this research is to reflect on the subject HRIS developed by lecturers at the Central University of Technology, Free State (CUT), with the intention of actively engaging students in real-world learning activities and increasing their employability. The Enrichment Triad Model (ETM) was used as the theoretical framework to develop the subject, as it supports a triad teaching and learning approach to education. It is, furthermore, an inter-structured model that supports collaboration between industry, academics and students. The study follows a mixed-method approach to reflect on the learning experiences of industry, academics and students in the subject field over the past three years. This paper is a work in progress and seeks to broaden the scope of extant studies on student engagement in work-related learning to increase employability. Based on the ETM as theoretical framework and pedagogical practice, this paper proposes that following a triad teaching and learning approach will increase the work-related skills of students. Findings from the study show that students, academics and industry alike regard educational opportunities that incorporate active learning experiences with the world of work as enhancing student engagement in learning and rendering graduates more employable.
Keywords: digital competence, enriched triad model, human resource information systems, student engagement, triad pedagogy
Procedia PDF Downloads 92
1030 The Applicability of General Catholic Canon Law during the Ongoing Migration Crisis in Hungary
Authors: Lorand Ujhazi
Abstract:
The vast majority of existing canonical studies about migration focus on examining the general pastoral and legal regulations of the Catholic Church. The weakness of this approach is that it ignores a number of important factors, such as the financial, legal and personal circumstances of a particular church or the canonical position of certain organizations which actually look after the immigrants. This paper is a case study, which analyses the current and historical migration-related policies and activities of the Catholic Church in Hungary. To achieve this goal, the study uses canon law, historical publications, various instructions and communications issued by church superiors, Hungarian and foreign media reports and the relevant Hungarian legislation. The paper first examines how the Hungarian Catholic Church assisted migrants such as Armenians fleeing from the Ottoman Empire, Poles escaping during the Second World War, East German and Romanian citizens in the 1980s and refugees from the former Yugoslavia in the 1990s. These events underline the importance of past historical experience in the development of the contemporary pastoral and humanitarian policy of the Catholic Church in Hungary. The paper then turns to the events of the ongoing crisis by describing the unique challenges faced by churches in transit countries like Hungary, and contrasts these findings with the typical responsibilities of churches in countries which are popular destinations for immigrants. The next part of the case study focuses on the changes to the pre-crisis legal and canonical framework which influenced the actions of hierarchical and charity organizations in Hungary. Afterwards, the paper illustrates the dangers of operating in an unclear legal environment, where some charitable activities of the church, such as a fundraising campaign, may be interpreted as a national security risk by state authorities. The paper then presents the reactions of Hungarian academics to the current migration crisis and finally offers some proposals on how to improve the parts of canon law which govern immigration. The conclusion of the paper is that, during the formulation of the central refugee policy of the Catholic Church, decision makers must take into consideration the peculiar circumstances of its particular churches. This approach may prevent disharmony between the existing central regulations, the policy of the Vatican and the operations of the local church organizations.
Keywords: canon law, Catholic Church, civil law, Hungary, immigration, national security
Procedia PDF Downloads 308
1029 Urban Accessibility of Historical Cities: The Venetian Case Study
Authors: Valeria Tatano, Francesca Guidolin, Francesca Peltrera
Abstract:
The preservation of historical Italian heritage, at the urban and architectural scale, has to consider restrictions and requirements connected with conservation issues, as well as usability needs, which are often at odds with historical heritage preservation. Recent decades have been marked by the search for increased accessibility not only of public and private buildings, but of the whole historical city, also for people with disability. Moreover, in recent years the concepts of the Smart City and the Healthy City have sought to improve accessibility both in terms of mobility (independent or assisted) and fruition of goods and services, also for historical cities. The principles of Inclusive Design have introduced new criteria for the improvement of public urban space, between current regulations and best practices. Moreover, they have contributed to transforming “special needs” into an opportunity for social innovation. These considerations find a field of research and analysis in the historical city of Venice, which is at the same time a UNESCO world heritage site, a mass tourism destination bringing in visitors from all over the world and a city inhabited by an aging population. Due to its conformation, the Venetian urban fabric is only partially accessible: hundreds of bridges connect the many islands that make up the city, making it almost impossible to move around independently. These urban characteristics and difficulties were the basis, over the last 20 years, for several research projects, experimentations and solutions aimed at eliminating architectural barriers, in particular for the usability of bridges. The Venetian Municipality, with the EBA Office and some external consultants, realized several devices (e.g. the “stepped ramp” and the new accessible ramps for the Venice Marathon) that should represent an innovation for the city, passing from the use of mechanical replicable devices to specific architectural projects in order to guarantee autonomy in use. This paper intends to present the state of the art in bridge accessibility, through an analysis based on Inclusive Design principles and on the current national and regional regulation. The purpose is to evaluate some possible strategies that could improve performance, between the limits and possibilities of interventions. The aim of the research is to lay the foundations for the development of a strategic program for the City of Venice that could successfully bring together both conservation and improvement requirements.
Keywords: accessibility of historical cities, historical heritage preservation, inclusive design, technological and social innovation
Procedia PDF Downloads 281
1028 The Automatisation of Dictionary-Based Annotation in a Parallel Corpus of Old English
Authors: Ana Elvira Ojanguren Lopez, Javier Martin Arista
Abstract:
The aims of this paper are to present the automatisation procedure adopted in the implementation of a parallel corpus of Old English, as well as to assess the progress of automatisation with respect to tagging, annotation, and lemmatisation. The corpus consists of an aligned parallel text with a word-for-word Old English–English comparison that provides the Old English segment with inflectional form tagging (gloss, lemma, category, and inflection) and lemma annotation (spelling, meaning, inflectional class, paradigm, word-formation and secondary sources). This parallel corpus is intended to fill a gap in the field of Old English, in which no parallel and/or lemmatised corpora are available, while the average amount of corpus annotation is low. With this background, this presentation has two main parts. The first part, which focuses on tagging and annotation, selects the layouts and fields of lexical databases that are relevant for these tasks. Most information used for the annotation of the corpus can be retrieved from the lexical and morphological database Nerthus and the database of secondary sources Freya. These are the sources of linguistic and metalinguistic information that will be used for the annotation of the lemmas of the corpus, including morphological and semantic aspects as well as the references to the secondary sources that deal with the lemmas in question. Although substantially adapted and re-interpreted, the lemmatised part of these databases draws on the standard dictionaries of Old English, including The Student's Dictionary of Anglo-Saxon, An Anglo-Saxon Dictionary, and A Concise Anglo-Saxon Dictionary. The second part of this paper deals with lemmatisation. It presents the lemmatiser Norna, which has been implemented in FileMaker software. It is based on a concordance and an index to the Dictionary of Old English Corpus, which comprises around three thousand texts and three million words. In its present state, the lemmatiser Norna can assign a lemma to around 80% of textual forms on an automatic basis, by searching the index and the concordance for prefixes, stems and inflectional endings. The conclusions of this presentation insist on the limits of the automatisation of dictionary-based annotation in a parallel corpus. While tagging and annotation are largely automatic even at the present stage, the automatisation of alignment is pending for future research. Lemmatisation and morphological tagging are expected to be fully automatic in the near future, once the database of secondary sources Freya and the lemmatiser Norna have been completed.
Keywords: corpus linguistics, historical linguistics, Old English, parallel corpus
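The strip-the-ending-and-match-the-stem strategy behind this kind of dictionary-based lemma assignment can be illustrated with a minimal sketch in Python. The lexicon entries, endings and word forms below are hypothetical placeholders rather than data from Nerthus, Freya or Norna, and the sketch is not the Norna implementation itself.

```python
# Minimal sketch of dictionary-based lemmatisation: strip candidate inflectional
# endings and look the remaining stem up in a lexicon index.
# The lexicon and endings below are hypothetical illustrations only.

LEXICON = {
    "cyning": "cyning",   # 'king'   (stem -> lemma)
    "stan": "stan",       # 'stone'
    "luf": "lufian",      # 'to love' (verb stem -> infinitive lemma)
}

# Candidate inflectional endings, tried longest first.
ENDINGS = ["ode", "as", "es", "um", "e", "a", ""]

def lemmatise(form):
    """Return the lemma for a textual form, or None if no stem matches."""
    for ending in sorted(ENDINGS, key=len, reverse=True):
        if ending and not form.endswith(ending):
            continue
        stem = form[: len(form) - len(ending)] if ending else form
        if stem in LEXICON:
            return LEXICON[stem]
    return None

if __name__ == "__main__":
    for token in ["cyningas", "stanum", "lufode", "unknownword"]:
        print(token, "->", lemmatise(token))
```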
Procedia PDF Downloads 212
1027 Copyright Infringement for Academic Authorship in Uganda: Implications on Exemptions of Fair Use for Educational Purposes in Universities
Authors: Elisam Magara
Abstract:
Like any other property, Intellectual Property (IP) must be regarded, respected, and remunerated to address the historical, ethical, economic and informational needs of society. Article 26 of the Constitution of the Republic of Uganda 1995, the Copyright and Neighbouring Rights (CNR) Act 2006 and the CNR Regulations 2010 guide copyright protection in Uganda. However, an unpredictable environment has had a negative impact on certain author/intellectual freedoms, and infringements on academic works affect the economic rights of authors and limit them from fully enjoying the benefits of authorship. Notwithstanding the different licensing systems and copyright protection avenues, educational institutions and custodians of copyright works (libraries, archives) have continued to advocate for open access to information resources under the legal exceptions of fair use for educational purposes. Thus, a study was conducted in educational institutions, libraries and archives in Uganda to assess the state of copyright infringement in Uganda amid the increased use of academically authored works. The study attempted to establish the nature and forms of copyright infringement and the circumstances for copyright infringement, and assessed the opinions of the custodians on strategies for balancing copyright protection for economic and moral gains by authors with increased access to information for educational purposes and fair use. Through a survey, using a self-administered questionnaire, interviews and physical visits, the study was conducted in higher education institutions, libraries and archives among the officers that manage and keep copyright works. It established that the uncontrolled reproduction of copyright works in educational institutions and information institutions has contributed to copyright infringement, robbing authors of their potential economic earnings and limiting their academic innovativeness and creativity. The study also established that the lack of consciousness and awareness of copyright issues among lecturers, universities and libraries has made copyright works in universities highly susceptible to copyright infringement. Thus, the increased access to materials without restrictions has resulted in copyright infringement among the educational institutions, libraries and archives. A strategic alliance by the collecting society (Uganda Reproduction Rights Organisation, URRO), government, universities and rights holders' organisations (UTANA) to work together and institute a programme addressing copyright protection and access to information is pertinently required.
Keywords: access to information, academic writing, copyright, copyright infringement, copyright protection, exemptions of fair use, intellectual property rights
Procedia PDF Downloads 452
1026 Population Pharmacokinetics of Levofloxacin and Moxifloxacin, and the Probability of Target Attainment in Ethiopian Patients with Multi-Drug Resistant Tuberculosis
Authors: Temesgen Sidamo, Prakruti S. Rao, Eleni Akllilu, Workineh Shibeshi, Yumi Park, Yong-Soon Cho, Jae-Gook Shin, Scott K. Heysell, Stellah G. Mpagama, Ephrem Engidawork
Abstract:
The fluoroquinolones (FQs) are used off-label for the treatment of multidrug-resistant tuberculosis (MDR-TB) and are under evaluation for shortening the duration of drug-susceptible TB in recently prioritized regimens. Within the class, levofloxacin (LFX) and moxifloxacin (MXF) play a substantial role in ensuring successful treatment outcomes. However, sub-therapeutic plasma concentrations of either LFX or MXF may drive unfavorable treatment outcomes. To the best of our knowledge, the pharmacokinetics of LFX and MXF in Ethiopian patients with MDR-TB have not yet been investigated. Therefore, the aim of this study was to develop a population pharmacokinetic (PopPK) model of levofloxacin (LFX) and moxifloxacin (MXF) and assess the percent probability of target attainment (PTA), defined by the ratio of the area under the plasma concentration-time curve over 24 h (AUC0-24) to the in vitro minimum inhibitory concentration (MIC) (AUC0-24/MIC), in Ethiopian MDR-TB patients. Steady-state plasma was collected from 39 MDR-TB patients enrolled in the programmatic treatment course, and drug concentrations were determined using optimized liquid chromatography-tandem mass spectrometry. In addition, the in vitro MIC of the patients' pretreatment clinical isolates was determined. PopPK modelling and simulations were run at various doses, and PK parameters were estimated. The effect of covariates on the PK parameters and the PTA for maximum mycobacterial kill and resistance prevention was also investigated. LFX and MXF both fit a one-compartment model with adjustments. The apparent volume of distribution (V) and clearance (CL) of LFX were influenced by serum creatinine (Scr), whereas the absorption constant (Ka) and V of MXF were influenced by Scr and BMI, respectively. The PTA for LFX maximal mycobacterial kill at the critical MIC of 0.5 mg/L was 29%, 62%, and 95% with the simulated 750 mg, 1000 mg, and 1500 mg doses, respectively, whereas the PTA for resistance prevention at 1500 mg was only 4.8%, with none of the lower doses achieving this target. At the critical MIC of 0.25 mg/L, there was no difference in the PTA (94.4%) for maximum bacterial kill among the simulated doses of MXF (600 mg, 800 mg, and 1000 mg), but the PTA for resistance prevention improved proportionately with dose. Standard LFX and MXF doses may not provide adequate drug exposure. The LFX PopPK model is more predictable for maximum mycobacterial kill, whereas MXF's resistance prevention target attainment increases with dose. Scr and BMI are likely to be important covariates in dose optimization or therapeutic drug monitoring (TDM) studies in Ethiopian patients.
Keywords: population PK, PTA, moxifloxacin, levofloxacin, MDR-TB patients, Ethiopia
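The general logic of a PTA calculation of this kind can be sketched with a simple Monte Carlo simulation over a simulated population. The sketch below is a minimal illustration only: the typical clearance, its variability, the bioavailability and the AUC0-24/MIC target of 100 are hypothetical placeholders, not the estimates obtained in this study.

```python
# Hedged sketch: Monte Carlo estimate of the probability of target attainment
# (PTA) for AUC0-24/MIC with once-daily dosing. At steady state the AUC over a
# dosing interval equals F * dose / CL. All parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(42)

def simulate_pta(dose_mg, mic_mg_per_l, target_ratio=100.0, n_patients=5000):
    # Log-normal between-subject variability around a hypothetical typical clearance.
    cl = rng.lognormal(mean=np.log(8.0), sigma=0.3, size=n_patients)  # L/h (assumed)
    f = 1.0                                                           # bioavailability (assumed)
    auc_0_24 = f * dose_mg / cl                                       # mg*h/L per 24 h
    return float(np.mean(auc_0_24 / mic_mg_per_l >= target_ratio))

if __name__ == "__main__":
    for dose in (750, 1000, 1500):
        print(f"dose {dose} mg, MIC 0.5 mg/L -> PTA = {simulate_pta(dose, 0.5):.2f}")
```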
Procedia PDF Downloads 120
1025 Intracranial Hypotension: A Brief Review of the Pathophysiology and Diagnostic Algorithm
Authors: Ana Bermudez de Castro Muela, Xiomara Santos Salas, Silvia Cayon Somacarrera
Abstract:
The aim of this review is to explain what intracranial hypotension is and what its main causes are, and to describe the diagnostic management in different clinical situations, understanding the radiological findings and the physiopathological substrate. An approach to the diagnostic management is presented: which guidelines to follow, the different tests available, and the typical findings. We review the myelo-CT and myelo-MRI studies in patients with suspected CSF fistula or hypotension of unknown cause during the last 10 years in three centers. Signs of intracranial hypotension (subdural hygromas/hematomas, pachymeningeal enhancement, venous sinus engorgement, pituitary hyperemia, and lowering of the brain) that are evident in baseline CT and MRI are also sought. Intracranial hypotension is defined as an opening pressure lower than 6 cmH₂O. It is a relatively rare disorder with an annual incidence of 5 per 100,000 and a female to male ratio of 2:1. The clinical feature is an orthostatic headache, which is defined as the development or aggravation of headache when patients move from a supine to an upright position, typically relieved or disappearing after lying down. The etiology is a decrease in the amount of cerebrospinal fluid (CSF), usually through loss of it, either spontaneous or secondary (post-traumatic, post-surgical, systemic disease, post-lumbar puncture, etc.), and rhinorrhea and/or otorrhea may exist. The pathophysiological mechanisms of CSF hypotension and hypertension are interrelated, as a situation of hypertension may lead to hypotension secondary to spontaneous CSF leakage. The diagnostic management of intracranial hypotension in our center includes, in the case of spontaneous hypotension without rhinorrhea and/or otorrhea and according to necessity, a range of available tests, which are performed from less to more complex: cerebral CT, cerebral and spinal MRI without contrast, and CT/MRI with intrathecal contrast. If we are in a situation of intracranial hypotension with the presence of rhinorrhea/otorrhea, a sample can be obtained for the detection of β2-transferrin, which is found physiologically in the CSF, as well as sinus CT and cerebral MRI including constructive interference in steady state (CISS) sequences. If necessary, cisternography studies are performed to locate the exact point of leakage. It is important to emphasize the significance of myelo-CT/MRI in establishing the diagnosis and location of the CSF leak, which is indispensable for therapeutic planning (whether surgical or not) in patients with more than one lesion or doubts in the baseline tests.
Keywords: cerebrospinal fluid, neuroradiology brain, magnetic resonance imaging, fistula
Procedia PDF Downloads 127
1024 Characterization and Modelling of Groundwater Flow towards a Public Drinking Water Well Field: A Case Study of Ter Kamerenbos Well Field
Authors: Buruk Kitachew Wossenyeleh
Abstract:
Groundwater is the largest freshwater reservoir in the world. Like the other reservoirs of the hydrologic cycle, it is a finite resource. This study focused on groundwater modeling of the Ter Kamerenbos well field to understand the groundwater flow system and the impact of different scenarios. The study area covers 68.9 km² in the Brussels Capital Region and is situated in two river catchments, i.e., the Zenne River and the Woluwe Stream. The aquifer system has three layers, but in the modeling they are considered as one layer due to their hydrogeological properties. The catchment aquifer system is replenished by direct recharge from rainfall. The groundwater recharge of the catchment was determined using the spatially distributed water balance model WetSpass, and it varies annually from zero to 340 mm. This groundwater recharge was used as the top boundary condition for the groundwater modeling of the study area. In the groundwater modeling, performed using Processing MODFLOW, constant-head boundary conditions were used on the north and south boundaries of the study area, and head-dependent flow boundary conditions on the east and west boundaries. The groundwater model was calibrated manually and automatically using observed hydraulic heads in 12 observation wells. The model performance evaluation showed that the root mean square error is 1.89 m and the NSE is 0.98. The head contour map of the simulated hydraulic heads indicates the flow direction in the catchment, mainly from the Woluwe catchment to the Zenne catchment. The simulated head in the study area varies from 13 m to 78 m. The higher hydraulic heads are found in the southwest of the study area, which has forest as its land-use type. This calibrated model was run for a climate change scenario and a well operation scenario. Climate change may cause the groundwater recharge to increase by 43% or decrease by 30% in 2100 relative to current conditions for the high and low climate change scenarios, respectively. The groundwater head varies from 13 m to 82 m for the high climate change scenario, whereas for the low climate change scenario it varies from 13 m to 76 m. If a doubling of the pumping discharge is assumed, the groundwater head varies from 13 m to 76.5 m. However, if a shutdown of the pumps is assumed, the head varies in the range of 13 m to 79 m. It is concluded that the groundwater model performs satisfactorily, with some limitations, and the model output can be used to understand the aquifer system under steady-state conditions. Finally, some recommendations are made for the future use and improvement of the model.
Keywords: Ter Kamerenbos, groundwater modelling, WetSpass, climate change, well operation
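The two performance measures reported above, the root mean square error (RMSE) and the Nash–Sutcliffe efficiency (NSE), can be computed from observed and simulated heads as in the following sketch; the head values shown are hypothetical examples, not the data from the 12 observation wells.

```python
# Hedged sketch: RMSE and Nash-Sutcliffe efficiency (NSE) between observed and
# simulated hydraulic heads. The head values below are illustrative only.
import numpy as np

def rmse(observed, simulated):
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    return float(np.sqrt(np.mean((simulated - observed) ** 2)))

def nse(observed, simulated):
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    residual_ss = np.sum((simulated - observed) ** 2)     # model error
    variance_ss = np.sum((observed - observed.mean()) ** 2)  # observed variance
    return float(1.0 - residual_ss / variance_ss)

if __name__ == "__main__":
    obs = [14.2, 22.5, 35.1, 47.8, 60.3, 71.9]   # hypothetical observed heads (m)
    sim = [13.8, 23.0, 34.5, 48.6, 59.7, 72.4]   # hypothetical simulated heads (m)
    print(f"RMSE = {rmse(obs, sim):.2f} m, NSE = {nse(obs, sim):.3f}")
```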
Procedia PDF Downloads 152
1023 Development of Vertically Integrated 2D Lake Victoria Flow Models in COMSOL Multiphysics
Authors: Seema Paul, Jesper Oppelstrup, Roger Thunvik, Vladimir Cvetkovic
Abstract:
Lake Victoria is the second largest freshwater body in the world, located in East Africa with a catchment area of 250,000 km², of which 68,800 km² is the actual lake surface. The hydrodynamic processes of the shallow (40–80 m deep) water system are unique due to its location at the equator, which makes Coriolis effects weak. The paper describes a Saint-Venant shallow water model of Lake Victoria developed in the COMSOL Multiphysics software, a general-purpose finite element tool for solving partial differential equations. Depth soundings taken in smaller parts of the lake were combined with recent, more extensive data to resolve the discrepancies in the lake shore coordinates. The topography model must have continuous gradients, and Delaunay triangulation with Gaussian smoothing was used to produce the lake depth model. The model shows large-scale flow patterns, passive tracer concentration and water level variations in response to river and tracer inflow, rain and evaporation, and wind stress. Actual data on precipitation, evaporation, and in- and outflows were applied in a fifty-year simulation, and the model simulations were validated with MATLAB and COMSOL. It should be noted that the water balance is dominated by rain and evaporation. The model conserves water volume, the celerity gradients are very small, and the volume flow is very slow and irrotational except at river mouths. Numerical experiments show that the single outflow can be modelled by a simple linear control law responding only to the mean water level, except for a few instances. Experiments with tracer input in rivers show very slow dispersion of the tracer, a result of the slow mean velocities, in turn caused by the near-balance of rain with evaporation. The numerical and hydrodynamical model can evaluate the effects of wind stress exerted by the wind on the lake surface, which will impact the lake water level. The model can also evaluate the effects of the expected climate change, as manifested in changes to rainfall over the catchment area of Lake Victoria in the future.
Keywords: bathymetry, lake flow and steady state analysis, water level validation and concentration, wind stress
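The lumped water balance and the simple linear outflow control law mentioned above can be illustrated with the sketch below. The lake surface area is taken from the abstract (68,800 km²), while the rainfall, evaporation, inflow and outflow coefficient are hypothetical placeholders, not calibrated model inputs.

```python
# Hedged sketch: lumped lake water balance with the outflow modelled as a
# linear control law responding only to the mean water level:
#   dV/dt = (P - E) * A + Q_in - k * (h - h_ref)
# Rainfall, evaporation, inflow and k are illustrative values only.

AREA = 68_800e6          # lake surface area, m^2 (68,800 km^2, from the abstract)
P = 1.8 / (365 * 86400)  # hypothetical rainfall rate, m/s (~1.8 m/yr)
E = 1.7 / (365 * 86400)  # hypothetical evaporation rate, m/s (~1.7 m/yr)
Q_IN = 800.0             # hypothetical total river inflow, m^3/s
K = 1.0e4                # hypothetical outflow coefficient, m^3/s per m of level
H_REF = 0.0              # reference level, m

def simulate(years=50, dt=86400.0):
    """March the mean water level forward in daily steps; return its history."""
    h, levels = 0.0, []
    for _ in range(int(years * 365)):
        q_out = K * (h - H_REF)                      # linear outflow control law
        dv = ((P - E) * AREA + Q_IN - q_out) * dt    # volume change over one step
        h += dv / AREA                               # level change for a flat lake
        levels.append(h)
    return levels

if __name__ == "__main__":
    levels = simulate()
    print(f"final level after 50 years: {levels[-1]:.2f} m above reference")
```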
Procedia PDF Downloads 227
1022 A Systematic Map of the Research Trends in Wildfire Management in Mediterranean-Climate Regions
Authors: Renata Martins Pacheco, João Claro
Abstract:
Wildfires are becoming an increasing concern worldwide, causing substantial social, economic, and environmental disruptions. This situation is especially relevant in Mediterranean-climate regions, present on five continents of the world, in which fire is not only a natural component of the environment but also perhaps one of the most important evolutionary forces. The rise in wildfire occurrences and their associated impacts suggests the need to identify knowledge gaps and to enhance the basis of scientific evidence on how managers and policymakers may act effectively to address them. Considering that the main goal of a systematic map is to collate and catalog a body of evidence to describe the state of knowledge for a specific topic, it is a suitable approach for this purpose. In this context, the aim of this study is to systematically map the research trends in wildfire management practices in Mediterranean-climate regions. A total of 201 wildfire management studies were analyzed and systematically mapped in terms of their: year of publication; place of study; scientific outlet; research area (Web of Science) or research field (Scopus); wildfire phase; central research topic; main objective of the study; research methods; and main conclusions or contributions. The results indicate that there is an increasing number of studies being developed on the topic (most from the last 10 years), but more than half of them are conducted in a few Mediterranean countries (60% of the analyzed studies were conducted in Spain, Portugal, Greece, Italy or France), and more than 50% are focused on pre-fire issues, such as prevention and fuel management. In contrast, only 12% of the studies focused on “Economic modeling” or “Human factors and issues,” which suggests that the triple bottom line of the sustainability argument (social, environmental, and economic) is not being fully addressed by fire management research. More than one-fourth of the studies had objectives related to testing new approaches in fire or forest management, suggesting that new knowledge is being produced in the field. Nevertheless, the results indicate that most studies (about 84%) employed quantitative research methods, and only 3% of the studies used research methods that tackled social issues or addressed expert and practitioner knowledge. Perhaps this lack of multidisciplinary studies is one of the factors hindering more progress from being made in terms of reducing wildfire occurrences and their impacts.
Keywords: wildfire, Mediterranean-climate regions, management, policy
Procedia PDF Downloads 124
1021 Data Collection in Protected Agriculture for Subsequent Big Data Analysis: Methodological Evaluation in Venezuela
Authors: Maria Antonieta Erna Castillo Holly
Abstract:
During the last decade, data analysis, strategic decision making, and the use of artificial intelligence (AI) tools in Latin American agriculture have been a challenge. In some countries, the availability, quality, and reliability of historical data, in addition to the current data recording methodology in the field, make it difficult to use information systems, to complete data analysis, and to rely on their support for making the right strategic decisions. This is essential in Agriculture 4.0, where the increase in the global demand for fresh agricultural products of tropical origin during all seasons of the year requires a change in the production model and greater agility in responding to consumer market demands for quality, quantity, traceability, and sustainability, which means extensive data. Having quality information available and updated in real time on what, how much, how, when, where, at what cost, and on compliance with production quality standards represents the greatest challenge for sustainable and profitable agriculture in the region. The objective of this work is to present a methodological proposal for the collection of georeferenced data from the protected agriculture sector, specifically in production units (UP) with tall structures (greenhouses), initially for Venezuela, taking the state of Mérida as the geographical framework and horticultural products as target crops. The document presents some background information and explains the methodology and tools used in the three phases of the work: diagnosis, data collection, and analysis. As a result, an evaluation of the process is carried out, relevant data and dashboards are displayed, and the first satellite maps integrated with layers of information in a geographic information system are presented. Finally, some improvement proposals and tentatively recommended applications are added to the process, understanding that their objective is to provide better qualified and traceable georeferenced data for subsequent analysis of the information and more agile and accurate strategic decision making. One of the main points of this study is the lack of quality data treatment in the Latin American area, and especially in the Caribbean basin, with one of the most important issues being how to manage the lack of complete official data. The methodology has been tested with horticultural products, but it can be extended to other tropical crops.
Keywords: greenhouses, protected agriculture, data analysis, geographic information systems, Venezuela
Procedia PDF Downloads 131
1020 Computational Simulations and Assessment of the Application of Non-Circular TAVI Devices
Authors: Jonathon Bailey, Neil Bressloff, Nick Curzen
Abstract:
Transcatheter Aortic Valve Implantation (TAVI) devices are stent-like frames with prosthetic leaflets on the inside, which are percutaneously implanted. The device, in a crimped state, is fed through the arteries to the aortic root, where the device frame is opened through either self-expansion or balloon expansion, revealing the prosthetic valve within. The frequency with which TAVI is used to treat aortic stenosis is rapidly increasing. In time, TAVI is likely to become the favoured treatment over Surgical Valve Replacement (SVR). Mortality after TAVI has been associated with severe Paravalvular Aortic Regurgitation (PAR). PAR occurs when the frame of the TAVI device does not make an effective seal against the internal surface of the aortic root, allowing blood to flow backwards around the valve. PAR is common in patients and has been reported to some degree in as many as 76% of cases. Severe PAR (grade 3 or 4) has been reported in approximately 17% of TAVI patients, resulting in post-procedural mortality increases from 6.7% to 16.5%. TAVI devices, like SVR devices, are circular in cross-section, as the aortic root is often considered to be approximately circular in shape. In reality, however, the aortic root is often non-circular. The ascending aorta, aortic sinotubular junction, aortic annulus and left ventricular outflow tract have average ellipticity ratios of 1.07, 1.09, 1.29, and 1.49, respectively. An elliptical aortic root does not severely affect SVR, as the leaflets are completely removed during the surgical procedure. However, an elliptical aortic root can inhibit the ability of circular Balloon-Expandable (BE) TAVI devices to conform to the interior of the aortic root wall, which increases the risk of PAR. Self-Expanding (SE) TAVI devices are considered better at conforming to elliptical aortic roots; however, the valve leaflets were not designed for elliptical function, and furthermore the incidence of PAR is greater in SE devices than in BE devices (19.8% vs. 12.2%, respectively). If a patient’s aortic root is too severely elliptical, they will not be suitable for TAVI, narrowing the treatment options to SVR. It therefore follows that, in order to increase the population who can undergo TAVI and reduce the risk associated with TAVI, non-circular devices should be developed. Computational simulations were employed to further advance our understanding of non-circular TAVI devices. Radial stiffness of the TAVI devices in multiple directions, frame bending stiffness and resistance to balloon-induced expansion are all computationally simulated. Finally, a simulation has been developed that demonstrates the expansion of TAVI devices into a non-circular, patient-specific aortic root model in order to assess the alterations in deployment dynamics, PAR and the stresses induced in the aortic root.
Keywords: TAVI, TAVR, FEA, PAR, FEM
Procedia PDF Downloads 438
1019 Farmers’ Perception, Willingness and Capacity in Utilization of Household Sewage Sludge as Organic Resources for Peri-Urban Agriculture around Jos Nigeria
Authors: C. C. Alamanjo, A. O. Adepoju, H. Martin, R. N. Baines
Abstract:
Peri-urban agriculture in Jos, Nigeria, serves as a major means of livelihood for both the urban and peri-urban poor and constitutes a substantial commercial activity with a target market that extends beyond Plateau State. Yet, the sustainability of this sector is threatened by the intensive application of urban refuse ash contaminated with heavy metals, a result of the highly heterogeneous materials used in ash production. Hence, this research aimed to understand the fertilizers currently employed by farmers, their perception and acceptance of utilizing household sewage sludge for agricultural purposes, and their capacity to mitigate the risks associated with such practice. A mixed-methods approach was adopted, and the data collection tools used included a survey questionnaire, focus group discussions with farmers, and participant and field observation. The study identified that farmers maintain a complex mixture of organic and chemical fertilizers, with a mixture composition that is dependent on fertilizer availability and affordability. Also, farmers have decreased the rate of utilization of urban refuse ash due to labor and increased logistics costs, and are keen to utilize household sewage sludge for soil fertility improvement but are mainly constrained by the accessibility of this waste product. Nevertheless, farmers near sewage disposal points have commenced utilization of household sewage sludge for improving soil fertility. Farmers were knowledgeable about composting but find their strategic method of dewatering and sun drying more convenient. Irrigation farmers were not enthusiastic about treatment, as they desired both water and sludge. Secondly, the household sewage sludge observed in the field is heterogeneous due to the nearness between its disposal point and that of urban refuse, which raises concern about possible cross-contamination of pollutants and also reflects a lack of extension guidance as regards the treatment and management of household sewage sludge for agricultural purposes. Hence, farmers' concerns need to be addressed, particularly by providing extension advice and establishing decentralized household sewage sludge collection centers for the continuous availability of liquid and concentrated sludge. There is also an urgent need for the Federal Government of Nigeria to increase its commitment towards empowering its subsidiaries for the efficient discharge of corporate responsibilities.
Keywords: ash, farmers, household, peri-urban, refuse, sewage, sludge, urban
Procedia PDF Downloads 135
1018 Hui as Religious over Ethnic Identity: A Case Study of Muslim Ethnic Interaction in Central Northwest China
Authors: Hugh Battye
Abstract:
In recent years, Muslim identity in China has strengthened against the backdrop of a worldwide Islamic revival. One discussion arising from this has focused on the Hui, an ethnicity created by the Communist government in the 1950s covering the Chinese-speaking 'Sino-Muslims' as opposed to those with their own language. While the term Hui in Chinese has traditionally meant 'Muslim', the strengthening of Hui identity in recent decades has led to a debate among scholars as to whether this identity is primarily ethnically or religiously driven. This article looks at the case of a mixed ethnic community in rural Gansu Province, Central Northwest China, which contains not only members of the official Hui ethnicity but also members of the smaller Muslim Salar and Bonan minority groups. In analyzing the close interaction between these groups, the paper will argue that, despite government attempts to promote the Hui as an ethnicity within its modern ethnic paradigm, in rural Gansu and the wider region, Hui is still essentially seen as a religious identity. Having provided an overview of the historical evolution of the Hui ethnonym in China and presented the views of some of the important scholars involved in the discussion, the paper will then offer its findings based on participant observation and survey work in Gansu. The results show that, firstly, for the local Muslims, religious identity clearly dominates ethnic identity. On the ground, the term Hui continues to be used as a catch-all term for Muslims, whether they belong to the official 'Hui' nationality or not, and against this backdrop, the ethnic importance of being 'Hui', 'Bonan' or 'Salar' within the Muslim community itself is by contrast minimal. Secondly, however, this local Muslim solidarity is not at present pointing towards some kind of national pan-ethnic Islamic movement that could potentially set itself up in opposition to the Chinese government; rather, it is better seen as part of an ongoing negotiation by local Muslims with the state in the context of its ascribed ethnic categories. The findings of this study, conducted in a region where many of the Muslims are more conservative in their beliefs, are not necessarily replicated in other contexts, such as in urban areas and in eastern and southern China; hence, reification of the term Hui as one idea extending all across China should be avoided, whether in terms of a united religious 'ummah' or of a real or imagined 'ethnic group.' Rather, this localized case study seeks to demonstrate ways in which Muslims of rural Central Northwest China are 'being Hui,' as a contribution to the broader discussion on what it means to be Muslim and Chinese in the reform era.
Keywords: China, ethnicity, Hui, identity, Muslims
Procedia PDF Downloads 126
1017 Towards End-To-End Disease Prediction from Raw Metagenomic Data
Authors: Maxence Queyrel, Edi Prifti, Alexandre Templier, Jean-Daniel Zucker
Abstract:
Analysis of the human microbiome using metagenomic sequencing data has demonstrated high ability in discriminating various human diseases. Raw metagenomic sequencing data require multiple complex and computationally heavy bioinformatics steps prior to data analysis. Such data contain millions of short sequences read from the fragmented DNA sequences and stored as FASTQ files. Conventional processing pipelines consist of multiple steps, including quality control, filtering, and alignment of sequences against genomic catalogs (genes, species, taxonomic levels, functional pathways, etc.). These pipelines are complex to use and time consuming, and they rely on a large number of parameters that often introduce variability and impact the estimation of the microbiome elements. Training Deep Neural Networks directly from raw sequencing data is a promising approach to bypass some of the challenges associated with mainstream bioinformatics pipelines. Most of these methods use the concept of word and sentence embeddings to create a meaningful and numerical representation of DNA sequences, while extracting features and reducing the dimensionality of the data. In this paper we present an end-to-end approach that classifies patients into disease groups directly from raw metagenomic reads: metagenome2vec. This approach is composed of four steps: (i) generating a vocabulary of k-mers and learning their numerical embeddings; (ii) learning DNA sequence (read) embeddings; (iii) identifying the genome from which the sequence is most likely to come; and (iv) training a multiple instance learning classifier which predicts the phenotype based on the vector representation of the raw data. An attention mechanism is applied in the network so that the model can be interpreted, assigning a weight to the influence on the prediction for each genome. Using two public real-life data sets as well as a simulated one, we demonstrated that this original approach reaches high performance, comparable with the state-of-the-art methods applied directly on data processed through mainstream bioinformatics workflows. These results are encouraging for this proof-of-concept work. We believe that with further dedication, the DNN models have the potential to surpass mainstream bioinformatics workflows in disease classification tasks.
Keywords: deep learning, disease prediction, end-to-end machine learning, metagenomics, multiple instance learning, precision medicine
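Step (iv), a multiple instance learning classifier with an attention mechanism pooling over per-genome (or per-read) embedding vectors, can be sketched as follows. The embedding dimension, bag size and random data below are hypothetical, and this sketch is not the metagenome2vec implementation itself.

```python
# Hedged sketch of an attention-based multiple instance learning (MIL)
# classifier: a bag of embedding vectors is pooled with learned attention
# weights, then classified. Shapes and inputs are illustrative only.
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    def __init__(self, emb_dim=128, attn_dim=64, n_classes=2):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(emb_dim, attn_dim),
            nn.Tanh(),
            nn.Linear(attn_dim, 1),
        )
        self.classifier = nn.Linear(emb_dim, n_classes)

    def forward(self, bag):                      # bag: (n_instances, emb_dim)
        scores = self.attention(bag)             # (n_instances, 1)
        weights = torch.softmax(scores, dim=0)   # attention over the instances
        pooled = (weights * bag).sum(dim=0)      # (emb_dim,) bag representation
        return self.classifier(pooled), weights.squeeze(-1)

if __name__ == "__main__":
    torch.manual_seed(0)
    model = AttentionMIL()
    bag = torch.randn(500, 128)        # hypothetical embeddings for 500 instances
    logits, attn = model(bag)
    print(logits.shape, attn.shape)    # torch.Size([2]) torch.Size([500])
```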
Procedia PDF Downloads 125
1016 Flood Mapping Using Height above the Nearest Drainage Model: A Case Study in Fredericton, NB, Canada
Authors: Morteza Esfandiari, Shabnam Jabari, Heather MacGrath, David Coleman
Abstract:
Flooding is a severe issue in different places in the world, including the city of Fredericton, New Brunswick, Canada. The downtown area of Fredericton is close to the Saint John River, which is susceptible to flooding around May every year. Recently, the frequency of flooding seems to have increased, especially given that the downtown area and surrounding urban/agricultural lands were flooded in two consecutive years, 2018 and 2019. In order to have an explicit picture of the flood extent and the damage to affected areas, it is necessary to use either flood inundation modelling or satellite data. Due to the contingent availability and weather dependency of optical satellites, and the limited existing data and high cost of hydrodynamic models, it is not always feasible to rely on these sources of data to generate quality flood maps after or during a catastrophe. Height Above the Nearest Drainage (HAND), a state-of-the-art topo-hydrological index, normalizes the height of a basin based on the relative elevation along the stream network and specifies the gravitational or relative drainage potential of an area. HAND is the relative height difference between the stream network and each cell on a Digital Terrain Model (DTM). The stream layer is provided through a multi-step, time-consuming process which does not always result in an optimal representation of the river centerline, depending on the topographic complexity of the region. HAND has been used in numerous case studies with quite acceptable, and sometimes unexpected, results because of natural and human-made features on the surface of the earth. Some of these features might cause a disturbance in the generated model, and consequently, the model might not be able to simulate the flow accurately. We propose to include a previously existing stream layer generated by the province of New Brunswick and to benefit from culvert maps to improve the water flow simulation and, accordingly, the accuracy of the HAND model. By considering these parameters in our processing, we were able to increase the accuracy of the model from nearly 74% to almost 92%. The improved model can be used for generating highly accurate flood maps, which are necessary for future urban planning and flood damage estimation, without any need for satellite imagery or hydrodynamic computations.
Keywords: HAND, DTM, rapid floodplain, simplified conceptual models
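A heavily simplified version of the HAND computation described above (each DTM cell's elevation minus the elevation of its nearest stream cell) can be sketched as follows. For brevity, "nearest drainage" is approximated here by the Euclidean nearest stream cell rather than the stream cell reached by following flow directions, and the small DTM and stream mask are hypothetical.

```python
# Hedged sketch of a simplified HAND (Height Above the Nearest Drainage) raster:
# each cell's HAND value is its elevation minus the elevation of its nearest
# stream cell. True HAND follows flow directions to the draining stream cell;
# the Euclidean nearest stream cell is used here purely for illustration.
import numpy as np
from scipy import ndimage

def simplified_hand(dtm, stream_mask):
    """dtm: 2D array of elevations; stream_mask: 2D boolean array (True = stream)."""
    # The distance transform of the non-stream cells returns, for every cell,
    # the indices of its nearest stream cell.
    _, (rows, cols) = ndimage.distance_transform_edt(
        ~stream_mask, return_indices=True
    )
    nearest_stream_elev = dtm[rows, cols]
    return dtm - nearest_stream_elev

if __name__ == "__main__":
    dtm = np.array([[12.0, 11.0, 10.0, 11.5],
                    [11.0,  9.5,  9.0, 10.5],
                    [10.5,  9.0,  8.5, 10.0],
                    [11.0, 10.0,  9.5, 10.5]])
    streams = np.zeros_like(dtm, dtype=bool)
    streams[:, 2] = True                 # hypothetical stream along one column
    print(simplified_hand(dtm, streams))
```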
Procedia PDF Downloads 151
1015 A Method to Evaluate and Compare Web Information Extractors
Authors: Patricia Jiménez, Rafael Corchuelo, Hassan A. Sleiman
Abstract:
Web mining is gaining importance at an increasing pace. Currently, there are many complementary research topics under this umbrella. Their common theme is that they all focus on applying knowledge discovery techniques to data that is gathered from the Web. Sometimes, these data are relatively easy to gather, chiefly when they come from server logs. Unfortunately, there are cases in which the data to be mined are the data that are displayed on a web document. In such cases, it is necessary to apply a pre-processing step to first extract the information of interest from the web documents. Such pre-processing steps are performed using so-called information extractors, which are software components that are typically configured by means of rules that are tailored to extracting the information of interest from a web page and structuring it according to a pre-defined schema. Paramount to getting good mining results is that the technique used to extract the source information is exact, which requires evaluating and comparing the different proposals in the literature from an empirical point of view. According to Google Scholar, about 4,200 papers on information extraction have been published during the last decade. Unfortunately, they were not evaluated within a homogeneous framework, which makes it difficult to compare them empirically. In this paper, we report on an original information extraction evaluation method. Our contribution is three-fold: a) this is the first attempt to provide an evaluation method for proposals that work on semi-structured documents; the little existing work on this topic focuses on proposals that work on free text, which has little to do with extracting information from semi-structured documents. b) It provides a method that relies on statistically sound tests to support the conclusions drawn; the previous work does not provide clear guidelines or recommend statistically sound tests, but rather a survey that collects many features to take into account as well as related work. c) We provide a novel method to compute the performance measures for unsupervised proposals; otherwise, they would require the intervention of a user to compute them by using the annotations on the evaluation sets and the information extracted. Our contributions will definitely help researchers in this area make sure that they have advanced the state of the art not only conceptually, but also from an empirical point of view; they will also help practitioners make informed decisions on which proposal is the most adequate for a particular problem. This conference is a good forum to discuss our ideas so that we can spread them to help improve the evaluation of information extraction proposals and gather valuable feedback from other researchers.
Keywords: web information extractors, information extraction evaluation method, Google Scholar, web
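At the core of any empirical comparison of information extractors is a per-document comparison of the extracted records against gold annotations; a minimal precision/recall/F1 sketch is shown below. The slot names and values are hypothetical, and the paper's actual method adds statistically sound tests on top of such measures.

```python
# Hedged sketch: precision, recall and F1 of an information extractor, computed
# by comparing extracted (slot, value) pairs against gold annotations.
# The slots and values below are hypothetical illustrations.

def prf1(gold, extracted):
    gold, extracted = set(gold), set(extracted)
    tp = len(gold & extracted)                       # correctly extracted pairs
    precision = tp / len(extracted) if extracted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

if __name__ == "__main__":
    gold = [("title", "Galaxy S5"), ("price", "499"), ("brand", "Samsung")]
    extracted = [("title", "Galaxy S5"), ("price", "599"), ("brand", "Samsung")]
    p, r, f = prf1(gold, extracted)
    print(f"precision={p:.2f} recall={r:.2f} f1={f:.2f}")
```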
Procedia PDF Downloads 248
1014 Study of Oxidative Processes in Blood Serum in Patients with Arterial Hypertension
Authors: Laura M. Hovsepyan, Gayane S. Ghazaryan, Hasmik V. Zanginyan
Abstract:
Hypertension (HD) is the most common cardiovascular pathology that causes disability and mortality in the working population. Most often, heart failure (HF), which is based on myocardial remodeling, leads to death in hypertension. Recently, endothelial dysfunction (EDF), or an impairment of the functional state of the vascular endothelium, has been assigned a significant role in the structural changes in the myocardium and the occurrence of heart failure in patients with hypertension. It has now been established that tissues affected by inflammation form increased amounts of superoxide radical and NO, which play a significant role in the development and pathogenesis of various pathologies. They mediate inflammation, modify proteins and damage nucleic acids. The aim of this work was to study the processes of oxidative modification of proteins (OMP) and the production of nitric oxide in hypertension. In the experimental work, the blood of 30 donors and 33 patients with hypertension was used. The quantitative determination of OMP products was based on the reaction of oxidized amino acid residues of proteins with 2,4-dinitrophenylhydrazine (DNPH), with the formation of 2,4-dinitrophenylhydrazones, the amount of which was determined spectrophotometrically. The optical density of the formed carbonyl derivatives of dinitrophenylhydrazones was recorded at different wavelengths: 356 nm - neutral aliphatic ketone dinitrophenylhydrazones (KDNPH); 370 nm - neutral aliphatic aldehyde dinitrophenylhydrazones (ADNPH); 430 nm - basic aliphatic KDNPH; 530 nm - basic aliphatic ADNPH. Nitric oxide was determined photometrically using the Griess reagent. Absorbance was measured on a Thermo Scientific Evolution 201 spectrophotometer at a wavelength of 546 nm. The results of the studies showed that in patients with arterial hypertension an increased level of nitric oxide in the blood serum is observed, and there is also a tendency towards an increase in the intensity of oxidative modification of proteins at wavelengths of 270 nm and 363 nm, which indicates a statistically significant increase in aliphatic aldehyde and ketone dinitrophenylhydrazones. The increase in the intensity of oxidative modification of blood plasma proteins in the studied patients reflects the general direction of free radical processes and, in particular, the oxidation of proteins throughout the body. A decrease in the activity of the antioxidant system also leads to a violation of protein metabolism. The most important consequence of the oxidative modification of proteins is the inactivation of enzymes.
Keywords: hypertension (HD), oxidative modification of proteins (OMP), nitric oxide (NO), oxidative stress
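In DNPH-based assays of this kind, carbonyl content is usually derived from the measured absorbance via the Beer–Lambert law. The sketch below assumes the commonly cited molar extinction coefficient of 22,000 M⁻¹·cm⁻¹ for the hydrazone adducts and a 1 cm path length; the absorbance and protein concentration values are hypothetical examples, not results of this study.

```python
# Hedged sketch: converting DNPH-hydrazone absorbance into carbonyl content
# with the Beer-Lambert law (A = epsilon * c * l). The extinction coefficient
# of 22,000 M^-1 cm^-1 is an assumed, commonly cited value; the absorbance and
# protein concentration below are hypothetical examples.

EPSILON = 22_000.0   # M^-1 cm^-1, assumed molar extinction coefficient
PATH_CM = 1.0        # cuvette path length, cm

def carbonyl_nmol_per_mg(absorbance, protein_mg_per_ml):
    carbonyl_mol_per_l = absorbance / (EPSILON * PATH_CM)   # Beer-Lambert law
    carbonyl_nmol_per_ml = carbonyl_mol_per_l * 1e9 / 1000  # mol/L -> nmol/mL
    return carbonyl_nmol_per_ml / protein_mg_per_ml

if __name__ == "__main__":
    print(f"{carbonyl_nmol_per_mg(absorbance=0.35, protein_mg_per_ml=4.0):.2f} "
          "nmol carbonyl per mg protein")
```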
Procedia PDF Downloads 108
1013 Wind Energy Harvester Based on Triboelectricity: Large-Scale Energy Nanogenerator
Authors: Aravind Ravichandran, Marc Ramuz, Sylvain Blayac
Abstract:
With the rapid development of wearable electronics and sensor networks, batteries cannot meet the sustainable energy requirement due to their limited lifetime, size and degradation. Ambient energy sources such as wind have been considered attractive due to their abundance, ubiquity, and feasibility in nature. With miniaturization leading to high power and robustness, the triboelectric nanogenerator (TENG) has been conceived as a promising technology for harvesting mechanical energy to power small electronics. TENG integration in large-scale applications is still unexplored considering its attractive properties. In this work, a state-of-the-art TENG design based on a wind venturi system is demonstrated for use in any complex environment. When wind enters the air gap of the homemade TENG venturi system, a thin flexible polymer repeatedly contacts with and separates from the electrodes. This device structure makes the TENG suitable for large-scale harvesting without massive volume. Multiple stacking not only amplifies the output power but also enables multi-directional wind utilization. The system converts ambient mechanical energy to electricity with a 400 V peak voltage, charging a 1000 mF supercapacitor very rapidly. Its future implementation in an array of applications would aid environmentally friendly clean energy production at large scale, and the proposed design is backed by exhaustive material testing. The relation between the interfacial micro- and nanostructures and the electrical performance enhancement is comparatively studied. Nanostructures are more beneficial for the effective contact area, but they are not suitable for the anti-adhesion property due to the smaller restoring force. Considering these issues, nano-patterning is proposed for further enhancement of the effective contact area. Considering the merits of simple fabrication, outstanding performance, robust characteristics and low-cost technology, we believe that the TENG can open up great opportunities not only for powering small electronics, but also for contributing to large-scale energy harvesting as a complement to solar energy in remote areas.
Keywords: triboelectric nanogenerator, wind energy, vortex design, large scale energy
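The energy held by such a storage capacitor follows from the standard relation E = ½CV². The worked example below uses the 1000 mF capacitance stated above together with an assumed charge level of 5 V, chosen purely for illustration: the abstract reports a 400 V peak output voltage but does not state the voltage actually reached by the capacitor.

```latex
E \;=\; \tfrac{1}{2}\, C\, V^{2}
  \;=\; \tfrac{1}{2} \times 1\,\mathrm{F} \times (5\,\mathrm{V})^{2}
  \;=\; 12.5\,\mathrm{J}
```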
Procedia PDF Downloads 213
1012 Engineering Topology of Construction Ecology in Urban Environments: Suez Canal Economic Zone
Authors: Moustafa Osman Mohammed
Abstract:
Integrating sustainability outcomes gives attention to construction ecology in the design review of urban environments, to comply with the Earth's system, which is composed of integral parts (i.e., physical, chemical and biological components). Naturally, the exchange patterns of industrial ecology have consistent and periodic cycles that preserve energy and material flows in the Earth's system. When engineering topology affects internal and external processes in system networks, it postulates the valence of the first-level spatial outcome (i.e., project compatibility success). These instrumentalities are dependent on relating the second-level outcome (i.e., participant security satisfaction). The construction ecology approach feeds back energy from resource flows between the biotic and abiotic components of the Earth's ecosystems. These spatial outcomes provide an innovation, as they entail a wide range of interactions to state, regulate and feed back "topology" to flow as an "interdisciplinary equilibrium" of ecosystems. The interrelation dynamics of ecosystems perform a process in a certain location, within an appropriate time, for characterizing their unique structure in "equilibrium patterns", such as the biosphere, and collecting a composite structure of many distributed feedback flows. These interdisciplinary systems regulate their dynamics within complex structures. These dynamic mechanisms of the ecosystem regulate physical and chemical properties to enable a gradual and prolonged incremental pattern that develops a stable structure. The engineering topology of construction ecology for integrated sustainability outcomes offers an interesting tool for ecologists and engineers in the simulation paradigm, as an initial form of development structure within compatible computer software. This approach argues from ecology, resource savings, static load design, financial and other pragmatic reasons, while from an artistic/architectural perspective these are not decisive. The paper describes an attempt to unify analytic and analogical spatial modeling in developing urban environments as a relational setting, using optimization software, applied as an example of integrated industrial ecology where the construction process is based on a topology optimization approach.
Keywords: construction ecology, industrial ecology, urban topology, environmental planning
Procedia PDF Downloads 130
1011 Case Report on Sepsis by Alpha-Hemolytic Streptococcus and Mannheimia haemolytica in Neonate Dogs
Authors: Maria L. G. Lourenco, Keylla H. N. P. Pereira, Viviane Y. Hibaru, Fabiana F. Souza, Joao C. P. Ferreira, Simone B. Chiacchio, Luiz H. A. Machado
Abstract:
Neonatal sepsis is a systemic response to acute infection by bacteria that may lead to high mortality in a litter. This study aims to report a case of sepsis by alpha-hemolytic Streptococcus and Mannheimia haemolytica in neonate dogs. A pregnant, mixed-breed bitch at approximately the 60th day of pregnancy was admitted to the Sao Paulo State University (UNESP) Veterinary Hospital, Botucatu, Sao Paulo, Brazil, and subjected to a c-section due to uterine atony and the absence of fetal heartbeats on the ultrasound examination. The mother presented leukopenia of 1.6 thousand leukocytes, and there was no other information regarding previous clinical history. Among the offspring, four were stillborn, and five were born alive. On clinical examination, the neonates weighed between 312 and 384 grams. Reflexes were present, and the newborns' body temperatures were between 89.9 ºF and 96.4 ºF. The neonates also presented clinical signs of neonatal infection: omphalitis, abdomen and extremities with cyanotic color, hematuria, and diarrhea (meconium). Complementary tests revealed leukopenia. The presence of alpha-hemolytic Streptococcus and Mannheimia haemolytica was revealed in the bacterial culture. The bacteria were sensitive to cephalosporins and penicillin on the antibiogram. Treatment for sepsis was instituted with ceftriaxone, at a dose of 50 mg per kilogram, administered intravenously (jugular vein) and subsequently subcutaneously, every 12 hours, for seven days. Heated fluid therapy was performed with lactated Ringer's solution, at a dose of 4 mL per 100 grams of weight, intravenously. Heating measures were instituted. Blood plasma was also administered, at a dose of 2 mL per 100 grams of weight, subcutaneously, as a source of passive immunity. A maternal milk substitute was instituted, and lactation was discontinued since the mother was unable to nurse due to the infection. The mother was neutered during the c-section and treated with ceftriaxone (50 mg/kg). After seven days, the newborns presented normal clinical signs and no alterations in the hemogram. Early diagnosis and intervention were essential for the survival of these patients.
Keywords: neonatal infection, puppies, bacteria, newborn
Procedia PDF Downloads 1211010 Reconceptualizing “Best Practices” in Public Sector
Authors: Eftychia Kessopoulou, Styliani Xanthopoulou, Ypatia Theodorakioglou, George Tsiotras, Katerina Gotzamani
Abstract:
Public sector managers frequently herald that implementing best practices as a set of standards may lead to superior organizational performance. However, recent research questions the objectification of best practices, highlighting: a) the inability of public sector organizations to develop innovative administrative practices, as well as b) the adoption of stereotypical renowned practices inculcated in the public sector by international governance bodies. The process through which organizations construe what a best practice is still remains a black box that is yet to be investigated, given the trend of continuous changes in public sector performance, as well as the burgeoning interest in sharing popular administrative practices put forward by international bodies. This study aims to describe and understand how organizational best practices are constructed by public sector performance management teams, such as benchmarkers, during the benchmarking-mediated performance improvement process, and what mechanisms enable this construction. A critical realist action research methodology is employed, starting from a description of various approaches to the nature of best practices when a benchmarking-mediated performance improvement initiative, such as the Common Assessment Framework, is applied. Firstly, we observed the benchmarkers’ management process of best practices in a public organization, so as to map their theories-in-use. As a second step, we contextualized best administrative practices by reflecting the different perspectives that emerged from the previous stage in the design and implementation of an interview protocol. We used this protocol to conduct 30 semi-structured interviews with “best practice” process owners, in order to examine their experiences and performance needs. Previous research on best practices has shown that the needs and intentions of benchmarkers cannot be detached from the causal mechanisms of the various contexts in which they work. Such causal mechanisms can be found in: a) process owner capabilities, b) the structural context of the organization, and c) state regulations. Therefore, we developed an interview protocol whose first part was theoretically informed to spot causal mechanisms suggested by previous research studies, and supplemented it with questions regarding the provision of best practice support from the government. Findings of this work include: a) a causal account of the nature of best administrative practices in the Greek public sector that sheds light on their management, b) a description of the various contexts affecting best practice conceptualization, and c) a description of how their interplay changed the organization’s best practice management.Keywords: benchmarking, action research, critical realism, best practices, public sector
Procedia PDF Downloads 1271009 An Investigation into the Influence of Compression on 3D Woven Preform Thickness and Architecture
Authors: Calvin Ralph, Edward Archer, Alistair McIlhagger
Abstract:
3D woven textile composites continue to emerge as an advanced material for structural applications and composite manufacture due to their bespoke nature, through-thickness reinforcement and near-net-shape capabilities. When 3D woven preforms are produced, they are in their optimal physical state. As 3D weaving is a dry preforming technology, it relies on compression of the preform to achieve the desired composite thickness, fibre volume fraction (Vf) and consolidation. This compression of the preform during manufacture results in changes to its thickness and architecture, which can often lead to under-performance of, or changes in, the 3D woven composite. Unlike traditional 2D fabrics, the bespoke nature and variability of 3D woven architectures make it difficult to know exactly how each 3D preform will behave during processing. Therefore, the focus of this study is to investigate the effect of compression on differing 3D woven architectures in terms of structure, crimp or fibre waviness, and thickness, as well as to analyse the accuracy of available software in predicting how 3D woven preforms behave under compression. To achieve this, 3D preforms were modelled and their compression simulated in Wisetex for varying architectures of binder style, pick density, thickness and tow size. These architectures were then woven, and samples were dry compression tested to determine the compressibility of the preforms under various pressures. Additional preform samples were manufactured using Resin Transfer Moulding (RTM) with varying compressive force. Composite samples were cross-sectioned, polished and analysed using microscopy to investigate changes in architecture and crimp. Data from the dry fabric compression and composite samples were then compared with the Wisetex models to determine the accuracy of the predictions and to identify architecture parameters that affect preform compressibility and stability. Results indicate that binder style/pick density, tow size and thickness have a significant effect on the compressibility of 3D woven preforms, with lower pick density allowing for greater compression and distortion of the architecture. It was further highlighted that binder style combined with pressure had a significant effect on changes to preform architecture: orthogonal binders experienced the highest level of deformation, but the highest overall stability, under compression, while layer-to-layer binders showed a reduction in binder fibre crimp. In general, the simulations compared reasonably with the experimental results; however, deviations are evident due to assumptions present within the models.Keywords: 3D woven composites, compression, preforms, textile composites
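The link the abstract draws between compression of a dry preform and the final fibre volume fraction follows a standard composites relation: Vf equals the total fibre areal mass divided by the fibre density and the compressed thickness. The sketch below is a minimal illustration of that relation; the layer count, areal density, fibre density and thickness values are assumed for demonstration and are not taken from the study.

```python
def fibre_volume_fraction(n_layers: int, areal_density_g_m2: float,
                          fibre_density_g_cm3: float, thickness_mm: float) -> float:
    """Vf = (n_layers * areal density) / (fibre density * compressed thickness).

    Units are converted internally so the result is a dimensionless fraction.
    """
    areal_mass_g_cm2 = n_layers * areal_density_g_m2 / 1e4   # g/m^2 -> g/cm^2
    thickness_cm = thickness_mm / 10.0
    return areal_mass_g_cm2 / (fibre_density_g_cm3 * thickness_cm)

if __name__ == "__main__":
    # Assumed, illustrative preform: 6 layers of 600 g/m^2 glass fabric (2.55 g/cm^3)
    for t_mm in (4.0, 3.5, 3.0):  # progressively higher compression
        vf = fibre_volume_fraction(6, 600.0, 2.55, t_mm)
        print(f"compressed thickness {t_mm:.1f} mm -> Vf = {vf:.2f}")
```

With these assumed numbers, squeezing the preform from 4.0 mm to 3.0 mm raises Vf from about 0.35 to about 0.47, which is why the compression step controls consolidation but also distorts the architecture.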
Procedia PDF Downloads 1351008 Facial Behavior Modifications Following the Diffusion of the Use of Protective Masks Due to COVID-19
Authors: Andreas Aceranti, Simonetta Vernocchi, Marco Colorato, Daniel Zaccariello
Abstract:
Our study explores the usefulness of implementing facial expression recognition capabilities and of using the Facial Action Coding System (FACS) in contexts where the other person is wearing a mask. In the communication process, subjects use a plurality of distinct and autonomous signalling systems; among them, the system of facial mimicry is worthy of attention. Basic emotion theorists have identified the existence of specific and universal patterns of facial expressions related to seven basic emotions (anger, disgust, contempt, fear, sadness, surprise, and happiness) that would distinguish one emotion from another. However, due to the COVID-19 pandemic, we have come up against the problem of the lower half of the face being covered by masks and therefore not observable. Facial-emotional behavior is a good starting point for understanding: (1) the affective state (such as emotions), (2) cognitive activity (perplexity, concentration, boredom), (3) temperament and personality traits (hostility, sociability, shyness), (4) psychopathology (such as diagnostic information relevant to depression, mania, schizophrenia, and less severe disorders), and (5) psychopathological processes that occur during social interactions between patient and analyst. There are numerous methods to measure facial movements resulting from the action of muscles: for example, the measurement of visible facial actions using coding systems (non-intrusive systems that require the presence of an observer who encodes and categorizes behaviors) and the measurement of the electrical “discharges” of contracting muscles (facial electromyography, EMG). However, the measuring system developed by Ekman and Friesen (2002), the Facial Action Coding System (FACS), is the most comprehensive, complete, and versatile. Our study, carried out on about 1,500 subjects over three years of work, allowed us to highlight how the movements of the hands and the upper part of the face change depending on whether the subject wears a mask or not. We were able to identify specific alterations to the subjects’ hand movement patterns and upper-face expressions while wearing masks compared to when not wearing them. We believe that identifying how body language changes when facial expressions are partially hidden can provide a better understanding of the link between facial and bodily non-verbal language.Keywords: facial action coding system, COVID-19, masks, facial analysis
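Since a mask hides the lower half of the face, one practical adaptation of FACS coding is to restrict scoring to the upper-face action units that remain visible. The sketch below is a hedged illustration of that filtering step only: the AU intensity values are invented, and the upper-face list follows the conventional FACS grouping (AU1, 2, 4, 5, 6, 7, 43) rather than any coding scheme specific to this study.

```python
# Conventional FACS upper-face action units that remain (at least partly)
# observable when a mask covers the lower face.
UPPER_FACE_AUS = {
    1: "inner brow raiser",
    2: "outer brow raiser",
    4: "brow lowerer",
    5: "upper lid raiser",
    6: "cheek raiser",
    7: "lid tightener",
    43: "eyes closed",
}

def observable_aus(coded_aus: dict[int, float]) -> dict[int, float]:
    """Keep only the action units a coder can still score on a masked face."""
    return {au: intensity for au, intensity in coded_aus.items()
            if au in UPPER_FACE_AUS}

if __name__ == "__main__":
    # Invented intensities (0-5 scale) for one frame of a masked subject;
    # AU12 (lip corner puller) and AU15 (lip corner depressor) are lower-face.
    frame = {1: 2.0, 4: 3.5, 6: 1.0, 12: 4.0, 15: 2.5}
    for au, intensity in observable_aus(frame).items():
        print(f"AU{au} ({UPPER_FACE_AUS[au]}): intensity {intensity}")
```

The design choice is simply to treat the mask as a missing-data problem: lower-face units are dropped from the analysis rather than guessed, which is consistent with the study's shift of attention to upper-face and hand movements.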
Procedia PDF Downloads 781007 Evolution of Rock-Cut Caves of Dhamnar at Dhamnar, MP
Authors: Abhishek Ranka
Abstract:
Rock-cut architecture is a manifestation of human endurance in constructing magnificent structures by sculpting and cutting entire hills. Cave architecture in India forms an important part of rock-cut development and is among the most prolific examples of rock-cut architecture in the world. There are more than 1,500 rock-cut caves in various regions of India. Most of them are located in western India, particularly in the state of Maharashtra. Some rock-cut caves are located in the central region of India, presently known as Malwa (Madhya Pradesh). The region is dominated by the Vindhyachal hill ranges toward the west and is dotted with coarse laterite rock. The Dhamnar caves were excavated in the central region of Mandsaur District, with a combination of shared sacred faiths. The earliest rock-cut activity began in the north, in Bihar, where caves were excavated in the Barabar and Nagarjuni hills during the Mauryan period (3rd century BCE). The rock-cut activity then shifted to the central part of India, in Madhya Pradesh, where the caves at Dhamnar, Bagh, Udayagiri, Poldungar, etc., were excavated between the 3rd and 9th centuries CE. Rock-cut excavation continued to flourish in Madhya Pradesh until the 10th century CE, simultaneously with monolithic Hindu temples. The Dhamnar caves fall into four architectural typologies: Lena caves, Chaitya caves, Viharas, and Lena-Chaityagriha caves. The Buddhist rock-cutting activity in central India is divisible into two phases. In the first phase (2nd century BCE to 3rd century CE), the Buddha image is conspicuously absent. After a lapse of about three centuries, activity began again, and this time Buddha images were carved. The former group belongs to the Hinayana (Lesser Vehicle) phase and the latter to the Mahayana (Greater Vehicle). The Dhamnar caves have elaborate facades, pillar capitals, and many creative sculptures in various postures. These caves were excavated against the background of invigorating trade activities and varied socio-religious or socio-cultural contexts. They also highlight the wealthy and varied patronage provided by the dynasties of the past. This paper offers an appraisal of the rock-cut mechanisms, design strategies, and approaches, while opening a scope for further research in conservation practices. Rock-cut sites, with their physical setting and various functional spaces serving as a sustainable habitat for centuries, have a heritage footprint with a research quotient.Keywords: rock-cut architecture, Buddhism, Hinduism, iconography, architectural typologies, Jainism
Procedia PDF Downloads 1531006 The Thoughts and Feelings Associated with Goal Achievement
Authors: Lindsay Foreman
Abstract:
Introduction: Goals have become synonymous with the quest for the good life and the pursuit of happiness, with coaching and positive psychology gaining popularity as approaches in recent decades. Yet mental health problems are on the rise and are the leading cause of disability, wellbeing is on the decline, stress accounts for 50-60% of workday absences, and the need for action is indisputable and urgent. Purpose: The purpose of this study is to better understand two things we cannot see but that play the most significant role in these outcomes: what we think and how we feel. With many working on the assumption that positive thinking and an optimistic outlook are necessary or valuable components of goal pursuit, this study uncovers the reality of the ‘inner game’ from the coachee's perspective. Method: A mixed-methods design using a Q-methodology study of subjectivity is employed to ‘make the unseen seen’. First, a wide-ranging universe of subjective thoughts and feelings experienced during goal pursuit is explored. These are generated from the literature and a Qualtrics survey to create a Q-set of 40 statements. Then 19 participants in professional and organisational settings offer their perspectives on these 40 Q-set statements. Each ranks them in a semi-forced distribution from ‘most like me’ to ‘least like me’ using Q-sort software. From these individual perspectives, clusters of perspectives are identified using factor analysis, and four distinct viewpoints have emerged. Findings: These Goal Pursuit Viewpoints offer insight into the states and self-talk experienced by coachees and may not reflect the assumption of positive thinking associated with achieving goals. The four viewpoints are 1) the Optimistic View, 2) the Realistic View, 3) the Dreamer View, and 4) the Conflicted View. With only a quarter of those holding the Dreamer View, and a third of those holding the Optimistic View, going on to achieve their goals, these assumptions need review. And with all of those holding the Realistic View going on to achieve their goals, the role of self-doubt, overwhelm and anxiousness in goal achievement cannot be overlooked. Contribution: This study offers greater insight into and understanding of people's inner experiences as they pursue goals and highlights the necessary and normal negative states associated with goal achievement. It also offers the Q-set statements as a practical tool to help coaches and coachees explore their current state and navigate the journey towards goal achievement. It calls into question whether goals should always be part of coaching and whether values, identity, and purpose may play a greater role than goals.Keywords: coaching, goals, positive psychology, mindset, leadership, mental health, beliefs, cognition, emotional intelligence
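In Q methodology, the factor analysis is run on correlations between participants' sorts rather than between statements, so each retained factor represents a shared viewpoint. The sketch below is a minimal, assumption-laden illustration of that by-person analysis: the 19-by-40 ranking matrix is randomly generated, and a simple eigenvalue-greater-than-one retention rule stands in for the rotation and interpretive judgement an actual Q study would involve.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder data: 19 participants, each ranking 40 statements in a
# semi-forced distribution from 'least like me' (-4) to 'most like me' (+4).
n_participants, n_statements = 19, 40
sorts = rng.integers(-4, 5, size=(n_participants, n_statements)).astype(float)

# Q methodology correlates people with each other (by-person correlation):
# np.corrcoef treats rows as variables, giving a 19 x 19 matrix.
person_corr = np.corrcoef(sorts)

# Factor the by-person correlation matrix; each retained factor is a
# candidate shared viewpoint (e.g., an 'Optimistic' or 'Realistic' view).
eigenvalues, eigenvectors = np.linalg.eigh(person_corr)
order = np.argsort(eigenvalues)[::-1]
eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]

retained = eigenvalues > 1.0  # Kaiser criterion as a stand-in for expert judgement
print(f"retained factors: {retained.sum()}")
for i in np.flatnonzero(retained):
    loadings = eigenvectors[:, i] * np.sqrt(eigenvalues[i])
    print(f"factor {i + 1}: {np.sum(np.abs(loadings) > 0.5)} participants load above |0.5|")
```

In a real study, the retained factors would be rotated and the participants loading on each factor used to compute the idealized sort that characterizes that viewpoint.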
Procedia PDF Downloads 1131005 Exploring Barriers to Social Innovation: Swedish Experiences from Nine Research Circles
Authors: Claes Gunnarsson, Karin Fröding, Nina Hasche
Abstract:
Innovation is a necessity for the evolution of societies, and it is also a driving force in human life that leverages value creation among cross-sector participants in various network arrangements. Social innovations can be characterized as the creation and implementation of a new solution to a social problem, one that is more effective and sustainable than existing solutions in terms of improving society’s conditions and, in particular, social inclusion processes. However, barriers exist which may restrict the potential of social innovations to live up to their promise as a driving force for societal welfare. The literature points to difficulties in tackling social problems primarily related to problem complexity, access to networks, and lack of financial muscle. Further research is warranted to clarify these barriers in detail, also in connection with the interplay between institutional logics, the development of cross-sector collaborations in networks, and the organizing processes needed to break through innovation barriers. There is also a need to further elaborate how obstacles that create a gap between the actual and desired state of innovative value-creating service systems can be overcome. The purpose of this paper is to illustrate barriers to social innovations, based on qualitative content analysis of 36 dialogue-based seminars (i.e., research circles) with nine Swedish focus groups including more than 90 individuals representing civil society organizations, private business, municipal offices, and politicians, and to analyze patterns that reveal the constituents of barriers to social innovations. The paper draws on central aspects of innovation barriers as discussed in the literature and analyzes barriers related to internal/external and tangible/intangible characteristics. The findings of this study are that existing institutional structures highly influence the transformative potential of social innovations, as do networking conditions in terms of building a competence-propelled strategy, which serves as a starting point for overcoming barriers to competence extension. Both theoretical and practical knowledge will contribute to how policy-makers and SI practitioners can facilitate and support social innovation processes so that they are contextually adapted and implemented across areas and sectors.Keywords: barriers, research circles, social innovation, service systems
Procedia PDF Downloads 2571004 Rapid Fetal MRI Using SSFSE, FIESTA and FSPGR Techniques
Authors: Chen-Chang Lee, Po-Chou Chen, Jo-Chi Jao, Chun-Chung Lui, Leung-Chit Tsang, Lain-Chyr Hwang
Abstract:
Fetal Magnetic Resonance Imaging (MRI) is a challenging task because fetal movements can cause motion artifacts in MR images. The remedy is to use fast scanning pulse sequences. The Single-Shot Fast Spin-Echo (SSFSE) T2-weighted imaging technique is routinely performed and often used as a gold standard in clinical examinations. Fast spoiled gradient-echo (FSPGR) T1-Weighted Imaging (T1WI) is often used to identify fat, calcification and hemorrhage. Fast Imaging Employing Steady-State Acquisition (FIESTA) is commonly used to identify fetal structures as well as the heart and vessels. The contrast of FIESTA images is related to the T1/T2 ratio and is different from that of SSFSE. The advantages and disadvantages of these two scanning sequences for fetal imaging have not been clearly demonstrated yet. This study aimed to compare these three rapid MRI techniques (SSFSE, FIESTA, and FSPGR) for fetal MRI examinations, exploring their image qualities and influencing factors. A 1.5T GE Discovery 450 clinical MR scanner with an eight-channel high-resolution abdominal coil was used in this study. Twenty-five pregnant women were recruited and underwent fetal MRI examination with SSFSE, FIESTA and FSPGR scanning. Multi-oriented and multi-slice images were acquired. Afterwards, the MR images were interpreted and scored by two senior radiologists. The results showed that both SSFSE and T2W-FIESTA provide good image quality among these three rapid imaging techniques. Vessel signals on FIESTA images are higher than those on SSFSE images. The Specific Absorption Rate (SAR) of FIESTA is lower than that of the other two techniques, but FIESTA is prone to banding artifacts. FSPGR-T1WI yields a lower Signal-to-Noise Ratio (SNR) because it suffers severely from the impact of maternal and fetal movements. The scan times for the three sequences were 25 sec (T2W-SSFSE), 20 sec (FIESTA) and 18 sec (FSPGR). In conclusion, all three rapid MR scanning sequences can produce images with high contrast and high spatial resolution. The scan time can be shortened by incorporating parallel imaging techniques so that the motion artifacts caused by fetal movements can be reduced. A good understanding of the characteristics of these three rapid MRI techniques helps technologists obtain reproducible fetal anatomy images of high quality for prenatal diagnosis.Keywords: fetal MRI, FIESTA, FSPGR, motion artifact, SSFSE
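The observation that vessels are brighter on FIESTA than on SSFSE follows from the steady-state signal of balanced SSFP sequences, which at the optimal flip angle is approximately proportional to the square root of T2/T1. The sketch below illustrates that relationship with rough, assumed 1.5 T relaxation times for blood and muscle; both the values and the simplified formula are illustrative and do not reproduce the study's analysis.

```python
import math

def bssfp_relative_signal(t1_ms: float, t2_ms: float, m0: float = 1.0) -> float:
    """Approximate balanced-SSFP (FIESTA) steady-state signal at the optimal
    flip angle: S ~ 0.5 * M0 * sqrt(T2 / T1), valid for TR much shorter than T2.
    """
    return 0.5 * m0 * math.sqrt(t2_ms / t1_ms)

if __name__ == "__main__":
    # Rough, assumed relaxation times at 1.5 T (illustrative only).
    tissues = {"blood": (1550.0, 250.0), "muscle": (900.0, 50.0)}
    for name, (t1, t2) in tissues.items():
        print(f"{name}: relative FIESTA signal ~ {bssfp_relative_signal(t1, t2):.2f}")
    # Blood's comparatively high T2/T1 ratio is why vessels appear bright on FIESTA.
```

With these assumed numbers, blood gives roughly 1.7 times the relative signal of muscle, which is consistent with the higher vessel signal the authors report for FIESTA compared with SSFSE.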
Procedia PDF Downloads 5301003 Library Support for the Intellectually Disabled: Book Clubs and Universal Design
Authors: Matthew Conner, Leah Plocharczyk
Abstract:
This study examines the role of academic libraries in supporting the intellectually disabled (ID) in post-secondary education. With growing public awareness of the ID, there has been recognition of their need for post-secondary educational opportunities. This was an unforeseen result for a population long associated with elementary levels of education, yet the reasons are compelling. After aging out of the school system, the ID need and deserve educational and social support as much as anyone. Moreover, the commitment to diversity in higher education rings hollow if this group is excluded. Yet challenges remain to integrating the ID into a college curriculum. This presentation focuses on the role of academic libraries. Neglecting this vital resource for the support of the ID is not to be thought of, yet the library’s contribution is not clear. Library collections presume reading ability, and libraries already struggle to meet their traditional goals with the resources available. This presentation examines how academic libraries can support the post-secondary ID. For context, the presentation first examines the state of post-secondary education for the ID with an analysis of data on the United States compiled by the ThinkCollege! Project. Geographic Information Systems (GIS) and statistical analysis show regional and methodological trends in post-secondary support of the ID, which currently lacks any significant involvement by college libraries. The presentation then analyzes a case study of a book club at the Florida Atlantic University (FAU) libraries which has run for several years. Issues such as the selection of books, effective pedagogies, and evaluation procedures are examined. The study has found that the instruction pedagogies used by libraries can be extended through concepts of Universal Learning Design (ULD) to effectively engage the ID. In particular, student-centered, participatory methodologies that accommodate different learning styles have proven especially useful. The choice of text is complex, determined not only by reading ability but also by familiarity with the subject and features of the ID’s developmental trajectory. The selection of text is not only a necessity but also promises to give insight into the ID. Assessment remains a complex and unresolved subject, but the voluntary, sustained, and enthusiastic attendance of the ID is an undeniable indicator. The study finds that, through the traditional library vehicle of the book club, academic libraries can support ID students through training in both reading and socialization, two major goals of their post-secondary education.Keywords: academic libraries, intellectual disability, literacy, post-secondary education
Procedia PDF Downloads 1631002 The Analysis of the Influence of Islamic Religiosity on Tax Morale among Self-Employed Taxpayers in Indonesia
Authors: Nurul Hidayat
Abstract:
Based on data from the Indonesian Tax Authority, the contribution of self-employed taxpayers in Indonesia was just approximately 1-2 percent of total tax revenues during 2013-2015. This phenomenon requires greater attention to understand what factors may affect it. The fact that Indonesia has the largest Muslim population in the world makes it important to analyze whether there is potentially a correlation between Islamic religiosity and this low tax contribution. The low level of tax contribution may provide an initial indication of low tax morale and tax compliance. This study extends the existing literature by investigating the influence of Islamic religiosity as a moderating effect on the relationship between perceptions of government legitimacy and tax morale among self-employed taxpayers. There are some factors to consider when taking into account the issue of Islamic religiosity and its relationship with tax morale in this study. Firstly, in Islam there is a debate surrounding the lawfulness of tax. Some argue that Muslims should not have to pay tax, while others argue that the imposition of tax is legitimate in certain circumstances. These views may have an impact on government legitimacy and tax morale. Secondly, according to Islamic sharia, Islam recognizes another compulsory payment, i.e., zakat, which to some extent has similar characteristics to tax. According to the Indonesian Income Tax Law, zakat payment is accommodated only as a deduction from taxable income. As a comparison, Malaysia treats zakat as a tax rebate. The treatment of zakat only as a taxable-income deduction may lead to a conflict regarding the perception of tax fairness that could erode perceptions of government legitimacy and tax morale. Based on the considerations above, perceptions of government legitimacy become important in influencing people's willingness to pay tax, while the level of Islamic religiosity is a potential moderator of that relationship. To measure the relationships among the variables, this study utilizes mixed quantitative and qualitative methods. The quantitative methods use surveys of approximately 400 targeted taxpayers, while the qualitative methods employ in-depth interviews with 12 people, consisting of experts, Islamic leaders and selected taxpayers. In particular, the research is conducted in Indonesia, the country with the largest Muslim population in the world, which has not fully implemented Islamic law as state law. The results indicate that Islamic religiosity moderates the way taxpayers perceive government legitimacy, which ultimately influences tax morale. The findings of this study support the improvement of tax regulations by specifically considering tax deductions for zakat.Keywords: Islamic religiosity, tax morale, government legitimacy, zakat
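The fairness concern around treating zakat as a deduction rather than a rebate can be made concrete with a small worked comparison. The figures below are purely illustrative assumptions (a hypothetical income, a 2.5% zakat amount computed on that income, and a flat 15% tax rate); they do not reproduce the actual Indonesian or Malaysian tariff schedules.

```python
def tax_with_zakat_deduction(income: float, zakat: float, rate: float) -> float:
    """Indonesian-style treatment: zakat reduces taxable income."""
    return rate * (income - zakat)

def tax_with_zakat_rebate(income: float, zakat: float, rate: float) -> float:
    """Malaysian-style treatment: zakat is credited against the tax itself."""
    return max(rate * income - zakat, 0.0)

if __name__ == "__main__":
    income = 100_000_000      # hypothetical annual income (IDR)
    zakat = 0.025 * income    # 2.5% zakat, with the full income assumed as the base
    rate = 0.15               # hypothetical flat tax rate for illustration

    deduction_tax = tax_with_zakat_deduction(income, zakat, rate)
    rebate_tax = tax_with_zakat_rebate(income, zakat, rate)
    print(f"tax if zakat is a deduction: {deduction_tax:,.0f} IDR")
    print(f"tax if zakat is a rebate:    {rebate_tax:,.0f} IDR")
    print(f"extra burden under the deduction treatment: "
          f"{deduction_tax - rebate_tax:,.0f} IDR")
```

Under these assumptions the deduction treatment leaves the taxpayer with 14,625,000 IDR of tax against 12,500,000 IDR under a rebate, so a zakat-paying taxpayer bears a measurably higher combined burden, which is the fairness perception the study links to government legitimacy and tax morale.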
Procedia PDF Downloads 240