Search results for: engagement prediction
583 The Role of Demographics and Service Quality in the Adoption and Diffusion of E-Government Services: A Study in India
Authors: Sayantan Khanra, Rojers P. Joseph
Abstract:
Background and Significance: This study aims to analyze the role of demographic and service quality variables in the adoption and diffusion of e-government services among users in India. The study proposes to examine users' perceptions of e-government services and investigate the key variables that are most salient to the Indian populace. Description of the Basic Methodologies: The methodology adopted in this study is hierarchical regression analysis, which helps explore the impact of the demographic variables and the quality dimensions on the willingness to use e-government services in two steps. First, the impact of demographic variables on the willingness to use e-government services is examined. In the second step, quality dimensions are added as inputs to the model to explain variance in excess of the prior contribution of the demographic variables. Present Status: Our study is in the data collection stage, in collaboration with a highly reliable, authentic, and adequate source of user data. Assuming that the population of the study comprises all Internet users in India, a large sample of more than 10,000 random respondents is being approached. Data are being collected using an online survey questionnaire. A pilot survey has already been carried out to refine the questionnaire, with inputs from an expert in management information systems and a small group of users of e-government services in India. The first three questions in the survey pertain to the Internet usage pattern of a respondent and probe whether the person has used e-government services. If the respondent confirms that he/she has used e-government services, then an aggregate of 15 indicators is used to measure the quality dimensions under consideration and the willingness of the respondent to use e-government services, on a five-point Likert scale. If the respondent reports that he/she has not used e-government services, then a few optional questions are asked to understand the reason(s) behind this. The last four questions in the survey are dedicated to collecting data on the demographic variables. An Indication of the Major Findings: Based on the extensive literature review carried out to develop several propositions, a research model is proposed as a starting point. A major outcome expected at the completion of the study is a research model that helps explain the relationship between the demographic variables and service quality dimensions, and the willingness to adopt e-government services, particularly in an emerging economy like India. Concluding Statement: Governments of emerging economies and other relevant agencies can use the findings of the study in designing, updating, and promoting e-government services to enhance public participation, which in turn would help improve efficiency, convenience, engagement, and transparency in implementing these services.
Keywords: adoption and diffusion of e-government services, demographic variables, hierarchical regression analysis, service quality dimensions
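The two-step procedure described in this abstract can be illustrated with a short sketch. Below is a minimal hierarchical regression in Python using statsmodels; all column names are hypothetical stand-ins for the survey variables, and the incremental contribution of the quality dimensions is tested with the standard F-test for nested models.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey data: demographics, quality dimensions, willingness to use
df = pd.read_csv("egov_survey.csv")  # assumed file layout

# Step 1: demographic variables only
m1 = smf.ols("willingness ~ age + gender + education + income", data=df).fit()

# Step 2: add the service quality dimensions on top of the demographics
m2 = smf.ols(
    "willingness ~ age + gender + education + income"
    " + reliability + responsiveness + ease_of_use",
    data=df,
).fit()

# Incremental variance explained by the quality dimensions,
# with an F-test comparing the nested models
print(f"Step 1 R^2 = {m1.rsquared:.3f}, Step 2 R^2 = {m2.rsquared:.3f}")
print(f"Delta R^2  = {m2.rsquared - m1.rsquared:.3f}")
f_stat, p_value, df_diff = m2.compare_f_test(m1)
print(f"F = {f_stat:.2f}, p = {p_value:.4f} on {int(df_diff)} df")
```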
Procedia PDF Downloads 267
582 Aerodynamic Optimization of Oblique Biplane by Using Supercritical Airfoil
Authors: Asma Abdullah, Awais Khan, Reem Al-Ghumlasi, Pritam Kumari, Yasir Nawaz
Abstract:
Introduction: This study examines the potential of two oblique wing configurations that were initiated by German aerodynamicists during World War II. Because of the end of the war, the project was not completed, and this research targets the revival of the German oblique biplane configuration. The research draws upon the use of two oblique wings mounted on the top and bottom of the fuselage through a single pivot. The wings are capable of sweeping at different angles, ranging from 0° at takeoff to 60° at cruising altitude. The right half of the top wing behaves like a forward-swept wing and the left half like a backward-swept wing; the converse applies to the lower wing. This opposite deflection of the top and lower wings cancels out the rotary moment created by each wing, and the aircraft remains stable. Problem to better understand or solve: The purpose of this research is to investigate the potential of achieving improved aerodynamic performance and efficiency of flight over a wide range of sweep angles. This will help identify the most accurate value of the sweep angle at which the aircraft possesses both stability and better aerodynamics. Explaining the methods used: The aircraft configuration is designed in SolidWorks, after which a series of aerodynamic predictions is conducted in both the subsonic and the supersonic flow regimes. Computations are carried out in ANSYS Fluent. The results are then compared to theoretical and flight data of different supersonic aircraft of the same category (the AD-1) and with a wind tunnel test model at subsonic speed. Results: At zero sweep angle, the aircraft has an excellent lift coefficient, almost double that found for fighter jets. To reach supersonic speed, the sweep angle is increased to a maximum of 60 degrees, depending on the mission profile. General findings: The oblique biplane can be a future fighter jet aircraft because of its strong performance in terms of aerodynamics, cost, structural design, and weight.
Keywords: biplane, oblique wing, sweep angle, supercritical airfoil
Procedia PDF Downloads 278
581 Redefining Success Beyond Borders: A Deep Dive into Effective Methods to Boost Morale Among Virtual Workers for Exponential Project Performance
Authors: Florence Ibeh, David Oyewmi Oyekunle, David Boohene
Abstract:
The continuous advancement of information technology has completely transformed how businesses and organizations operate on a global scale. The widespread availability of virtual communication tools enables individuals to opt for remote work. While remote employment offers various benefits, such as facilitating corporate growth and enhancing customer support, it also presents distinct challenges. Therefore, investigating the intricacies of virtual team morale is crucial for ensuring the achievement of project objectives. For this study, content analysis of pre-existing secondary data was employed to examine the phenomenon. Essential elements vital for improving the success of projects within virtual teams were identified. These factors include technology adoption, a distraction-free work environment, effective leadership, trust-building, clear communication channels, well-defined task allocation, active team participation, and motivation. Furthermore, the study established a substantial correlation between morale levels and the participation and productivity of virtual team members: higher levels of morale were associated with optimal performance among virtual teams. The study determined that the key factors for enhancing project performance in virtual teams are the adoption of technology, a focused environment, effective leadership, trust, communication, well-defined tasks, collaborative teamwork, and motivation. Additionally, the study found that adapting the best practices of in-office teams can counteract the diminished morale prevalent in remote teams and sustain a high level of team morale. The findings of this study are highly significant in the dynamic field of project management. Currently, there is limited information regarding strategies that address challenges arising from external factors in virtual teams, such as ambient noise and disruptions caused by family members. The findings underscore the significance of selecting appropriate communication technologies, delineating distinct roles and responsibilities for virtual team members, and nurturing a culture of accountability and trust. Promoting seamless collaboration and instilling motivation among virtual team members are deemed highly effective in augmenting employee engagement and performance within virtual team settings.
Keywords: virtual teams, morale, project performance, distraction-free environment, technology adaptation
Procedia PDF Downloads 95
580 A Systematic Review on Development of a Cost Estimation Framework: A Case Study of Nigeria
Authors: Babatunde Dosumu, Obuks Ejohwomu, Akilu Yunusa-Kaltungo
Abstract:
Cost estimation in construction is often difficult, particularly when dealing with risks and uncertainties, which are inevitable and peculiar to developing countries like Nigeria. The direct consequences are major deviations in cost, duration, and quality. The fundamental aim of this study is to develop a framework for assessing the impacts of risk on cost estimation, which in turn causes variability between the contract sum and the final account. This is very important, as initial estimates given to clients should reflect a certain magnitude of consistency and accuracy, upon which the client builds other planning-related activities, and should also enhance the capabilities of construction industry professionals by enabling better prediction of the final account from the contract sum. To achieve this, a systematic literature review was conducted with cost variability and construction projects as the search string within three databases: Scopus, Web of Science, and EBSCO (Business Source Premier); the results were further analyzed and gaps in knowledge or research identified. From the extensive review, it was found that the number of factors causing deviation between final accounts and contract sums ranged between 1 and 45. Besides, it was discovered that a cost estimation framework similar to the Building Cost Information Service (BCIS) is unavailable in Nigeria, which is a major reason why initial estimates are very often inconsistent, leading to project delay, abandonment, or determination at the expense of the huge sums of money invested. It was concluded that the development of a cost estimation framework, adjudged an important tool in risk shedding rather than risk sharing in project risk management, would be a panacea for the cost estimation problems that lead to cost variability in the Nigerian construction industry by the time this ongoing Ph.D. research is completed. It was recommended that practitioners in the construction industry should always take risk into account in order to facilitate the rapid development of the construction industry in Nigeria, which should give stakeholders in both the private and public sectors a more in-depth understanding of the estimation effectiveness and efficiency to be adopted.
Keywords: cost variability, construction projects, future studies, Nigeria
Procedia PDF Downloads 209
579 Exploring the Applications of Neural Networks in the Adaptive Learning Environment
Authors: Baladitya Swaika, Rahul Khatry
Abstract:
Computer Adaptive Tests (CATs) are one of the most efficient ways of testing the cognitive abilities of students. CATs are based on Item Response Theory (IRT), which rests on statistical methods for item selection and ability estimation: maximum-information selection (or selection from the posterior) for items, and maximum-likelihood (ML) or maximum a posteriori (MAP) estimators for ability, respectively. This study aims at combining both classical and Bayesian approaches to IRT to create a dataset which is then fed to a neural network that automates the process of ability estimation, and at comparing it to traditional CAT models designed using IRT. This study uses Python as the base coding language, PyMC for statistical modelling of the IRT, and scikit-learn for the neural network implementations. On creation and comparison of the models, it is found that the neural network based model performs 7-10% worse than the IRT model for score estimation. Although it performs poorly compared to the IRT model, the neural network model can be used beneficially in back-ends to reduce time complexity: the IRT model has to re-calculate the ability every time it receives a request, whereas the prediction from a neural network can be done in a single step with an existing trained regressor. This study also proposes a new kind of framework whereby the neural network model could incorporate feature sets beyond the normal IRT feature set, using a neural network's capacity to learn unknown functions to give rise to better CAT models. Categorical features, like test type, could be learnt and incorporated in IRT functions with the help of techniques like logistic regression, and can be used to learn functions expressed as models which may not be trivial to express via equations. Such a framework, when implemented, would be highly advantageous in psychometrics and cognitive assessments. This study gives a brief overview of how neural networks can be used in adaptive testing, not only by reducing time complexity but also by being able to incorporate newer and better datasets, which would eventually lead to higher quality testing.
Keywords: computer adaptive tests, item response theory, machine learning, neural networks
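A minimal sketch of the back-end idea described in this abstract, using synthetic data and a simple 2PL IRT model (all parameter choices are illustrative, not the authors'): maximum-likelihood ability estimates are computed once on a grid, and a scikit-learn regressor is trained to map raw response patterns directly to those estimates, so later requests need only a single forward pass.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_items, n_persons = 20, 5000

# Synthetic 2PL item bank: discriminations a_j and difficulties b_j
a = rng.uniform(0.5, 2.0, n_items)
b = rng.normal(0.0, 1.0, n_items)
theta = rng.normal(0.0, 1.0, n_persons)              # true abilities

prob = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))  # response probabilities
X = (rng.uniform(size=prob.shape) < prob).astype(float)  # 0/1 response matrix

# One-off ML ability estimates on a theta grid (the "slow" IRT step)
grid = np.linspace(-4, 4, 161)
P = 1.0 / (1.0 + np.exp(-a * (grid[:, None] - b)))   # grid x items
loglik = X @ np.log(P).T + (1 - X) @ np.log(1 - P).T
theta_ml = grid[np.argmax(loglik, axis=1)]

# Train a neural network to reproduce the ML estimates in a single step
X_tr, X_te, y_tr, y_te = train_test_split(X, theta_ml, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0)
net.fit(X_tr, y_tr)
print("hold-out R^2 vs ML estimates:", round(net.score(X_te, y_te), 3))
```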
Procedia PDF Downloads 175
578 Electrochemical Bioassay for Haptoglobin Quantification: Application in Bovine Mastitis Diagnosis
Authors: Soledad Carinelli, Iñigo Fernández, José Luis González-Mora, Pedro A. Salazar-Carballo
Abstract:
Mastitis is the most relevant inflammatory disease in cattle, affecting animal health and causing important economic losses on dairy farms. This disease takes place in the mammary gland or udder when opportunistic microorganisms, such as Staphylococcus aureus, Streptococcus agalactiae, Corynebacterium bovis, etc., invade the teat canal. According to the severity of the inflammation, mastitis can be classified as sub-clinical, clinical, or chronic. Standard methods for mastitis detection include somatic cell counts, cell culture, electrical conductivity of the milk, and the California test (evaluation of the "gel-like" matrix consistency after cells are lysed with detergents). However, these assays present some limitations for the accurate detection of subclinical mastitis. Recently, haptoglobin, an acute phase protein, has been proposed as a novel and effective biomarker for mastitis detection. In this work, an electrochemical biosensor based on polydopamine-modified magnetic nanoparticles (MNPs@pDA) for haptoglobin detection is reported. The MNPs@pDA were synthesized by our group and functionalized with hemoglobin, owing to its high affinity for the haptoglobin protein. The captured protein was labeled with specific antibodies modified with the alkaline phosphatase enzyme for its electrochemical detection, using an electroactive substrate (1-naphthyl phosphate) and differential pulse voltammetry. After optimization of the assay parameters, haptoglobin determination was evaluated in milk. The strategy presented in this work shows a wide detection range, achieving a limit of detection of 43 ng/mL. The accuracy of the strategy was determined by recovery assays and was 84% and 94.5% for two Hp levels around the cut-off value. Real milk samples were tested, and the prediction capacity of the electrochemical biosensor was compared with a commercial haptoglobin ELISA kit. The performance of the assay demonstrated that this strategy is an excellent and realistic alternative as a screening method for sub-clinical bovine mastitis detection.
Keywords: bovine mastitis, haptoglobin, electrochemistry, magnetic nanoparticles, polydopamine
Procedia PDF Downloads 173
577 Modelling Dengue Disease With Climate Variables Using Geospatial Data For Mekong River Delta Region of Vietnam
Authors: Thi Thanh Nga Pham, Damien Philippon, Alexis Drogoul, Thi Thu Thuy Nguyen, Tien Cong Nguyen
Abstract:
The Mekong River Delta region of Vietnam is recognized as one of the regions most vulnerable to climate change, due to flooding and seawater rise, and therefore faces an increased burden of climate change-related diseases. Changes in temperature and precipitation are likely to alter the incidence and distribution of vector-borne diseases such as dengue fever. In this region, the dengue epidemic peaks around July to September, during the rainy season, and climate is believed to be an important factor in dengue transmission. This study aims to enhance the capacity of dengue prediction by modelling the relationship of dengue incidence with climate and environmental variables for the Mekong River Delta of Vietnam during 2005-2015. Mathematical models for vector-host infectious disease, including larvae, mosquitoes, and humans, were used to calculate the impacts of climate on dengue transmission, incorporating geospatial data as model input. Monthly dengue incidence data were collected at the provincial level. Precipitation data were extracted from satellite observations of GSMaP (Global Satellite Mapping of Precipitation), while land surface temperature and land cover data were taken from MODIS. The seasonal reproduction number was estimated to evaluate the potential, severity, and persistence of dengue infection, while the final number of infections was derived to check for dengue outbreaks. The results show that dengue infection depends on the seasonal variation of climate variables, with a peak during the rainy season, and predicted dengue incidence follows this dynamic well for the whole studied region. However, the severe 2007 outbreak was not captured by the model, reflecting nonlinear dependences of transmission on climate. Other possible effects will be discussed to address the limitations of the model. This suggests the need to consider both climate variables and other sources of variability across temporal and spatial scales.
Keywords: infectious disease, dengue, geospatial data, climate
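The vector-host structure described in this abstract can be sketched as a minimal compartmental model. The sketch below, with illustrative parameter values only, couples a mosquito SEI model to a human SIR model, drives mosquito recruitment with a rainy-season forcing term, and integrates the system with SciPy.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (per day); not the study's calibrated values
b, beta_h, beta_v = 0.5, 0.35, 0.30   # bite rate, transmission probabilities
mu_v, sigma_v = 1 / 14, 1 / 10        # mosquito death rate, incubation rate
gamma = 1 / 7                         # human recovery rate
N_h = 1e6

def seasonal_recruitment(t):
    # Rainy-season forcing: more larvae -> more adult mosquitoes mid-year
    return 2e6 * mu_v * (1 + 0.8 * np.sin(2 * np.pi * (t - 120) / 365))

def rhs(t, y):
    S_v, E_v, I_v, S_h, I_h, R_h = y
    inf_v = b * beta_v * S_v * I_h / N_h       # mosquitoes getting infected
    inf_h = b * beta_h * I_v * S_h / N_h       # humans getting infected
    return [
        seasonal_recruitment(t) - inf_v - mu_v * S_v,
        inf_v - (sigma_v + mu_v) * E_v,
        sigma_v * E_v - mu_v * I_v,
        -inf_h,
        inf_h - gamma * I_h,
        gamma * I_h,
    ]

y0 = [2e6, 0, 100, N_h - 10, 10, 0]
sol = solve_ivp(rhs, (0, 365), y0, dense_output=True, max_step=1.0)
t = np.linspace(0, 365, 366)
print("peak human infections ~ day", int(t[np.argmax(sol.sol(t)[4])]))
```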
Procedia PDF Downloads 383
576 Constraint-Based Computational Modelling of Bioenergetic Pathway Switching in Synaptic Mitochondria from Parkinson's Disease Patients
Authors: Diana C. El Assal, Fatima Monteiro, Caroline May, Peter Barbuti, Silvia Bolognin, Averina Nicolae, Hulda Haraldsdottir, Lemmer R. P. El Assal, Swagatika Sahoo, Longfei Mao, Jens Schwamborn, Rejko Kruger, Ines Thiele, Kathrin Marcus, Ronan M. T. Fleming
Abstract:
Degeneration of substantia nigra pars compacta dopaminergic neurons is one of the hallmarks of Parkinson's disease. These neurons have a highly complex axonal arborisation and a high energy demand, so any reduction in ATP synthesis could lead to an imbalance between supply and demand, thereby impeding normal neuronal bioenergetic requirements. Synaptic mitochondria exhibit increased vulnerability to dysfunction in Parkinson's disease. After biogenesis in, and transport from, the cell body, synaptic mitochondria become highly dependent upon oxidative phosphorylation. We applied a systems biochemistry approach to identify the metabolic pathways used by neuronal mitochondria for energy generation. The mitochondrial component of an existing manual reconstruction of human metabolism was extended with manual curation of the biochemical literature and specialised using omics data from Parkinson's disease patients and controls, to generate reconstructions of synaptic and somal mitochondrial metabolism. These reconstructions were converted into stoichiometrically- and flux-consistent constraint-based computational models. These models predict that Parkinson's disease is accompanied by an increase in the rate of glycolysis and a decrease in the rate of oxidative phosphorylation within synaptic mitochondria. This is consistent with independent experimental reports of a compensatory switching of bioenergetic pathways in the putamen of post-mortem Parkinson's disease patients. Ongoing work, in the context of the SysMedPD project, is aimed at the computational prediction of mitochondrial drug targets to slow the progression of neurodegeneration in the subset of Parkinson's disease patients with overt mitochondrial dysfunction.
Keywords: bioenergetics, mitochondria, Parkinson's disease, systems biochemistry
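Constraint-based models of the kind described here are typically analysed with flux balance analysis: maximise a flux of interest subject to steady-state mass balance S·v = 0 and flux bounds. A toy sketch with a made-up three-reaction network (not the authors' reconstruction), using SciPy's linear programming:

```python
import numpy as np
from scipy.optimize import linprog

# Toy stoichiometric matrix S (metabolites x reactions), not the real network:
# columns: v1 glucose uptake, v2 "glycolysis" (glc -> 2 atp), v3 ATP drain
S = np.array([
    [1, -1,  0],   # glucose
    [0,  2, -1],   # ATP
])

# Maximise the ATP drain v3 (linprog minimises, so negate the objective)
c = np.array([0.0, 0.0, -1.0])
bounds = [(0, 10), (0, 1000), (0, 1000)]   # uptake capped at 10 units

res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds)
print("optimal fluxes v =", res.x)          # expect v = [10, 10, 20]
print("max ATP production =", -res.fun)     # 2 ATP per glucose -> 20
```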
Procedia PDF Downloads 294
575 Investigation of Contact Pressure Distribution at Expanded Polystyrene Geofoam Interfaces Using Tactile Sensors
Authors: Chen Liu, Dawit Negussey
Abstract:
EPS (expanded polystyrene) geofoam, a lightweight material used in geotechnical applications, is made of pre-expanded resin beads that form fused cellular micro-structures. The strength and deformation properties of geofoam blocks are determined by unconfined compression of small test samples between rigid loading plates. Applied loads are presumed to be supported uniformly over the entire mating end areas. Predictions of field performance on the basis of such laboratory tests widely over-estimate actual post-construction settlements and exaggerate predictions of long-term creep deformations. This investigation examined the development of contact pressures at a large number of discrete points, at low and large strain levels, for different densities of geofoam. The development of pressure patterns for fine and coarse interface material textures, as well as for molding skin and hot-wire-cut geofoam surfaces, was examined. The lab testing showed that I-Scan tactile sensors are useful for the detailed observation of contact pressures at a large number of discrete points simultaneously. At a low strain level (1%), the lower density EPS block presents low variation in localized stress distribution compared to higher density EPS. At a high strain level (10%), the dense geofoam reached the sensor cut-off limit. The imprint and pressure patterns for different interface textures can be distinguished with tactile sensing. The pressure sensing system can be used in many fields with real-time pressure detection. The research findings provide a better understanding of EPS geofoam behavior for the improvement of design methods and performance prediction, and are anticipated to guide future improvements in the design and rapid construction of critical transportation infrastructures with geofoam in geotechnical applications.
Keywords: geofoam, pressure distribution, tactile pressure sensors, interface
Procedia PDF Downloads 173
574 The 2017 Summer Campaign for Night Sky Brightness Measurements on the Tuscan Coast
Authors: Andrea Giacomelli, Luciano Massetti, Elena Maggi, Antonio Raschi
Abstract:
The presentation will report on the activities carried out during the summer of 2017 by a team composed of staff from a university department, a National Research Council institute, and an outreach NGO, collecting measurements of night sky brightness and other information on artificial lighting in order to characterize light pollution issues on portions of the Tuscan coast, in Central Italy. These activities combine measurements collected by the principal scientists, citizen science observations led by students, and outreach events targeting a broad audience. This campaign aggregates the efforts of three actors: the BuioMetria Partecipativa project, which started collecting light pollution data on a national scale in 2008 with an environmental engineering and free/open source GIS core team; the Institute of Biometeorology of the National Research Council, with ongoing studies on light and urban vegetation and a consolidated track record in environmental education and citizen science; and the Department of Biology of the University of Pisa, which started experiments to assess the impact of light pollution in coastal environments in 2015. While the core of the activities concerns in situ data, the campaign will also account for remote sensing data, thus considering heterogeneous data sources. The aim of the campaign is twofold: (1) to test actions of citizen and student engagement in monitoring sky brightness, and (2) to collect night sky brightness data and test a protocol for application to studies on the ecological impact of light pollution, with a special focus on marine coastal ecosystems. The collaboration of an interdisciplinary team in the study of artificial lighting issues is not common in Italy, and the possibility of undertaking the campaign in Tuscany has the added value of operating in one of the territories where it is possible to observe both sites with extremely high lighting levels and areas with extremely low light pollution, especially in the southern part of the region. Combining environmental monitoring and communication actions in the context of the campaign, this effort will contribute to the promotion of good-quality night skies as an important asset for the sustainability of coastal ecosystems, as well as to increased citizen awareness through star gazing, night photography, and active participation in field campaign measurements.
Keywords: citizen science, light pollution, marine coastal biodiversity, environmental education
Procedia PDF Downloads 173
573 Evolutionary Prediction of the Viral RNA-Dependent RNA Polymerase of Chandipura vesiculovirus and Related Viral Species
Authors: Maneesh Kumar, Roshan Kamal Topno, Manas Ranjan Dikhit, Vahab Ali, Ganesh Chandra Sahoo, Bhawana, Major Madhukar, Rishikesh Kumar, Krishna Pandey, Pradeep Das
Abstract:
Chandipura vesiculovirus is an emerging (-)ssRNA viral entity belonging to the genus Vesiculovirus of the family Rhabdoviridae, associated with fatal encephalitis in tropical regions. The multi-functional viral RNA-dependent RNA polymerase (vRdRp) carries conserved amino acid residues and is assigned to synthesize the distinct viral polypeptides. The lack of proofreading ability of the vRdRp produces many mutated variants. Here, we have performed an evolutionary analysis of 20 viral protein sequences of the vRdRp of different strains of Chandipura vesiculovirus, along with other viral species from the genus Vesiculovirus, inferred in MEGA 6.06 employing the Neighbour-Joining method. The p-distance algorithm was used to calculate the optimal tree, which showed a sum of branch lengths of about 1.436. The percentage of replicate trees in which the associated taxa clustered together in the bootstrap test (1000 replicates) is shown next to the branches. No mutation was observed in the Indian strains of Chandipura vesiculovirus. In the vRdRp, residues 1230 (His) and 1231 (Arg) actively participate in catalysis and are found conserved in the different strains of Chandipura vesiculovirus. Both amino acid residues were also conserved in the other viral species from the genus Vesiculovirus. Many isolates exhibited the maximum number of mutations in the catalytic regions of strains of Chandipura vesiculovirus, at positions 26 (Ser→Ala), 47 (Ser→Ala), 90 (Ser→Tyr), 172 (Gly→Ile, Val), 172 (Ser→Tyr), 387 (Asn→Ser), 1301 (Thr→Ala), 1330 (Ala→Glu), 2015 (Phe→Ser), and 2065 (Thr→Val), which make them variants under the different tropical conditions from which they evolved. The results clarify the concept of RNA evolution and support developing the vRdRp as an evolutionary marker. Although only a limited number of vRdRp protein sequences are available for Chandipura vesiculovirus and other species, this might provide possibilities to identify the virulence level during viral multiplication in a host.
Keywords: Chandipura, (-)ssRNA, viral RNA-dependent RNA polymerase, neighbour-joining method, p-distance algorithm, evolutionary marker
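For readers unfamiliar with the tree-building step, the following sketch shows the general approach with toy aligned sequences rather than the study's vRdRp data: pairwise p-distances (the proportion of differing sites) are computed, and a Neighbour-Joining tree is built with Biopython.

```python
from Bio.Phylo import draw_ascii
from Bio.Phylo.TreeConstruction import DistanceMatrix, DistanceTreeConstructor

# Toy aligned sequences (the study used 20 vRdRp protein sequences)
seqs = {
    "strainA": "MSKHLRAGT",
    "strainB": "MSKHLRSGT",
    "strainC": "MAKHLRSGA",
    "strainD": "MAKQLRSGA",
}

def p_distance(s1, s2):
    # Proportion of sites that differ between two aligned sequences
    return sum(a != b for a, b in zip(s1, s2)) / len(s1)

names = list(seqs)
# Biopython expects a lower-triangular matrix, including the zero diagonal
matrix = [
    [p_distance(seqs[names[i]], seqs[names[j]]) for j in range(i)] + [0]
    for i in range(len(names))
]
dm = DistanceMatrix(names, matrix)
tree = DistanceTreeConstructor().nj(dm)  # Neighbour-Joining, as in MEGA
draw_ascii(tree)
```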
Procedia PDF Downloads 197
572 Reliability Analysis of Glass Epoxy Composite Plate under Low Velocity
Authors: Shivdayal Patel, Suhail Ahmad
Abstract:
Safety assurance and failure prediction for a composite material component of an offshore structure under low velocity impact are essential for the associated risk assessment. It is important to incorporate the uncertainties associated with material properties and the load due to an impact. The likelihood of this hazard causing a chain of failure events plays an important role in risk assessment. The material properties of composites mostly exhibit scatter due to their inhomogeneity and anisotropic characteristics, the brittleness of the matrix and fiber, and manufacturing defects. In fact, the probability of occurrence of such a scenario arises from the large uncertainties in the system. Probabilistic finite element analysis of composite plates under low-velocity impact is carried out considering uncertainties in material properties and initial impact velocity. Impact-induced damage of a composite plate is a probabilistic phenomenon due to the wide range of uncertainties in material and loading behavior. A typical failure crack initiates and propagates into the interface, causing delamination between dissimilar plies. Since individual cracks in a ply are difficult to track, a progressive damage model is implemented in the FE code via a user-defined material subroutine (VUMAT). The limit state function g(x) is established accordingly, such that the lamina stresses satisfy g(x) > 0 in the safe state. The Gaussian process response surface method is adopted to determine the probability of failure. A comparative study is also carried out for different combinations of impactor masses and velocities. A sensitivity-based probabilistic design optimization procedure is investigated to achieve better strength and lighter weight of composite structures. A chain of failure events due to different modes of failure is considered to estimate the consequences of a failure scenario. Frequencies of occurrence of specific impact hazards yield the expected risk due to economic loss.
Keywords: composites, damage propagation, low velocity impact, probability of failure, uncertainty modeling
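The Gaussian process response surface method mentioned in the abstract can be sketched in a few lines: a GP surrogate is fitted to a handful of limit-state evaluations g(x) (here a cheap analytical stand-in replaces the expensive FE/VUMAT run), and the probability of failure is then estimated by Monte Carlo sampling of the surrogate. All numbers are illustrative.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(1)

def limit_state(x):
    # Stand-in for the expensive FE/VUMAT evaluation: g > 0 means "safe".
    # x[:, 0] ~ material strength, x[:, 1] ~ impact velocity (illustrative)
    return x[:, 0] - 0.08 * x[:, 1] ** 2

# A small design of experiments where the FE model is actually run
X_doe = rng.normal([1.0, 3.0], [0.1, 0.5], size=(40, 2))
g_doe = limit_state(X_doe)

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([0.1, 0.5]),
                              normalize_y=True).fit(X_doe, g_doe)

# Monte Carlo on the cheap surrogate: Pf = P(g(X) < 0)
X_mc = rng.normal([1.0, 3.0], [0.1, 0.5], size=(100_000, 2))
pf = np.mean(gp.predict(X_mc) < 0)
print(f"estimated probability of failure: {pf:.4f}")
```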
Procedia PDF Downloads 279
571 Sustainability from Ecocity to Ecocampus: An Exploratory Study on Spanish Universities' Water Management
Authors: Leyla A. Sandoval Hamón, Fernando Casani
Abstract:
Sustainability has been integrated into cities' agendas due to the impact that cities generate. The most commonly referenced dimensions of sustainability are economic, social, and environmental. Thus, management decisions in sustainable cities seek a balance between these dimensions in order to provide environment-friendly alternatives. In this context, urban models that manage water consumption, energy consumption, waste production, and other flows in harmony with the environment are known as ecocities. A similar model, but on a smaller scale, is the 'ecocampus', developed at universities (considered 'small cities' due to their complex structure). Sustainable practices are thus being implemented in the management of university campus activities, following different relevant lines of work. Universities have a strategic role in society, and their activities can strengthen policies, strategies, and measures of sustainability, both internal and external to the organization. Because of their mission in knowledge creation and transfer, these institutions can promote and disseminate more advanced activities in sustainability. Replicating this model also implies challenges in the sustainable management of water, energy, waste, transportation, and other resources on campus. The challenge that this paper focuses on is water management, taking into account that universities consume large amounts of this resource. The purpose of this paper is to analyze the sustainability experience, with emphasis on water management, of two different campuses belonging to two different Spanish universities: one urban campus in a historic city and one suburban campus on the outskirts of a large city. Both universities are in the top hundred of international rankings of sustainable universities. The methodology adopts a qualitative approach based on in-depth interviews and focus-group discussions with the administrative and academic staff of the 'Ecocampus' offices, the organizational units for sustainability management, of the two Spanish universities. The hypotheses indicate that sustainable water management policies are best on campuses without big green spaces and where the buildings are built or rebuilt in a modern style. The sustainability efforts of a university are independent of the kind of campus (urban or suburban), but an important aspect to improve is the degree of awareness of the university community about water scarcity. In general, the paper suggests that higher education institutions adapt their sustainability policies depending on the location and features of the campus and their engagement with water conservation. Many Spanish universities have proposed policies, good practices, and measures of sustainability; in fact, several Ecocampus offices or centers have been founded. The originality of this study lies in learning from the different sustainability policy experiences of universities.
Keywords: ecocampus, ecocity, sustainability, water management
Procedia PDF Downloads 221
570 Through the Robot's Eyes: A Comparison of Robot-Piloted, Virtual Reality, and Computer Based Exposure for Fear of Injections
Authors: Bonnie Clough, Tamara Ownsworth, Vladimir Estivill-Castro, Matt Stainer, Rene Hexel, Andrew Bulmer, Wendy Moyle, Allison Waters, David Neumann, Jayke Bennett
Abstract:
The success of global vaccination programs is reliant on the uptake of vaccines to achieve herd immunity. Yet, many individuals do not obtain vaccines or venipuncture procedures when needed. Whilst health education may be effective for those individuals who are hesitant due to safety or efficacy concerns, for many individuals the primary concern relates to blood or injection fear or phobia (BII). BII is highly prevalent and associated with a range of negative health impacts, both at individual and population levels. Exposure therapy is an efficacious treatment for specific phobias, including BII, but has high patient dropout and low implementation by therapists. Whilst virtual reality approaches to exposure therapy may be more acceptable, they have similarly low rates of implementation by therapists and are often difficult to tailor to an individual client's needs. It was proposed that a piloted robot may be able to adequately facilitate fear induction and be an acceptable approach to exposure therapy. The current study examined fear induction responses, acceptability, and feasibility of a piloted robot for BII exposure. A Nao humanoid robot was programmed to connect with a virtual reality head-mounted display, enabling live streaming and exploration of real environments from a distance. Thirty adult participants with BII fear were randomly assigned to robot-pilot or virtual reality exposure conditions in a laboratory-based fear exposure task. All participants also completed a computer-based two-dimensional exposure task, with the order of conditions counterbalanced across participants. Measures included fear (heart rate variability, galvanic skin response, stress indices, and subjective units of distress), engagement with the feared stimulus (eye gaze: time to first fixation and total number of fixations), acceptability, and perceived treatment credibility. Preliminary results indicate that fear responses can be adequately induced via a robot-piloted platform. Further results will be discussed, as will implications for the treatment of BII phobia and other fears. It is anticipated that piloted robots may provide a useful platform for facilitating exposure therapy, being more acceptable than in-vivo exposure and more flexible than virtual reality exposure.
Keywords: anxiety, digital mental health, exposure therapy, phobia, robot, virtual reality
Procedia PDF Downloads 77
569 Computational Modelling of Epoxy-Graphene Composite Adhesive towards the Development of Cryosorption Pump
Authors: Ravi Verma
Abstract:
The cryosorption pump is the best solution for achieving a clean, vibration-free ultra-high vacuum. Furthermore, the operation of a cryosorption pump is free from the influence of electric and magnetic fields. Due to these attributes, this pump is used in space simulation chambers to create an ultra-high vacuum. The cryosorption pump comprises three parts: (a) a panel, which is cooled with the help of a cryogen or cryocooler; (b) an adsorbent, which is used to adsorb the gas molecules; and (c) an epoxy, which holds the adsorbent and the panel together, thereby aiding heat transfer from the adsorbent to the panel. The performance of a cryosorption pump depends on the temperature of the adsorbent and hence on the thermal conductivity of the epoxy. Therefore, we attempted to increase the thermal conductivity of the epoxy adhesive by mixing in nano-sized graphene filler particles. The thermal conductivity of the epoxy-graphene composite adhesive was measured with the help of an indigenously developed experimental setup in the temperature range from 4.5 K to 7 K, which is generally the operating temperature range of a cryosorption pump for efficient pumping of hydrogen and helium gas. In this article, we present the experimental results for the epoxy-graphene composite adhesive in the temperature range from 4.5 K to 7 K. We also propose an analytical heat conduction model to find the thermal conductivity of the composite. In this case, the filler particles, such as graphene, are randomly distributed in a base matrix of epoxy. The developed model considers the completely spatially random distribution of filler particles, described by a binomial distribution. The results obtained with the model have been compared with the experimental results as well as with other established models. The developed model is able to predict the thermal conductivity in both the isotropic and anisotropic regions over the required temperature range from 4.5 K to 7 K. Due to the non-empirical nature of the proposed model, it will be useful for the prediction of other properties of composite materials involving a filler in a base matrix. The present studies will aid in the understanding of low-temperature heat transfer, which in turn will be useful for the development of high performance cryosorption pumps.
Keywords: composite adhesive, computational modelling, cryosorption pump, thermal conductivity
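As a rough illustration of the binomial idea (a sketch under assumed mixing rules, not the authors' actual model), one can discretise the matrix into N cells, let each cell contain filler with probability equal to the filler volume fraction, and average a series-type mixing rule over the binomial distribution of filler counts:

```python
import numpy as np
from scipy.stats import binom

def k_series(k_m, k_f, phi):
    # Series (Reuss-type) mixing rule for a cell with filler fraction phi
    return 1.0 / ((1 - phi) / k_m + phi / k_f)

def k_effective(k_matrix, k_filler, vol_frac, n_cells=100):
    # Average the mixing rule over a binomial distribution of filler counts:
    # each of n_cells cells holds filler with probability vol_frac
    counts = np.arange(n_cells + 1)
    weights = binom.pmf(counts, n_cells, vol_frac)
    k_cells = k_series(k_matrix, k_filler, counts / n_cells)
    return np.sum(weights * k_cells)

# Illustrative low-temperature conductivities (W/m.K), not measured data
k_epoxy, k_graphene = 0.02, 5.0
for f in (0.0, 0.05, 0.10):
    print(f"filler fraction {f:.2f}: k_eff ~ "
          f"{k_effective(k_epoxy, k_graphene, f):.4f} W/m.K")
```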
Procedia PDF Downloads 89
568 Two-Level Graph Causality to Detect and Predict Random Cyber-Attacks
Authors: Van Trieu, Shouhuai Xu, Yusheng Feng
Abstract:
Tracking attack trajectories can be difficult with limited information about the nature of the attack. It is even more difficult when attack information is collected by Intrusion Detection Systems (IDSs), since current IDSs have limitations in identifying malicious and anomalous traffic. Moreover, IDSs only point out suspicious events; they do not show how the events relate to each other or which event possibly caused another event to happen. Because of this, it is important to investigate new methods capable of performing the tracking of attack trajectories quickly, with less attack information and less dependency on IDSs, in order to prioritize actions during incident response. This paper proposes a two-level graph causality framework for tracking attack trajectories in internet networks by leveraging observable malicious behaviors to detect the most probable attack events that can cause another event to occur in the system. Technically, given the time series of malicious events, the framework extracts events with useful features, such as attack time and port number, and applies conditional independence tests to detect relationships between attack events. Using academic datasets collected by IDSs, experimental results show that the framework can quickly detect causal pairs that offer meaningful insights into the nature of the internet network, given only reasonable restrictions on network size and structure. Without the framework's guidance, these insights could not be discovered by existing tools, such as IDSs, and would cost expert human analysts significant time, if it were possible at all. The computational results from the proposed two-level graph network model reveal obvious patterns and trends. In fact, for more than 85% of causal pairs, the average time difference between the causal and effect events, in both computed and observed data, is within 5 minutes. This result can be used as a preventive measure against future attacks. Although the forecast horizon may be short, from 0.24 seconds to 5 minutes, it is long enough to be used to design a prevention protocol to block those attacks.
Keywords: causality, multilevel graph, cyber-attacks, prediction
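A toy sketch of the dependence-testing step, with synthetic event streams instead of real IDS data: two binary event series are binned per minute, and a chi-square test on the lagged contingency table flags candidate "A causes B" pairs. This is a simplified stand-in for the conditional independence tests used in the framework.

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(42)
n_minutes, lag = 10_000, 2   # one bin per minute; test a 2-minute lag

# Synthetic IDS event streams: event B tends to follow event A by `lag` bins
a = rng.random(n_minutes) < 0.05
b = rng.random(n_minutes) < 0.02
b[lag:] |= a[:-lag] & (rng.random(n_minutes - lag) < 0.6)

# Contingency table of A at time t vs B at time t + lag
a_t, b_t = a[:-lag], b[lag:]
table = np.array([
    [np.sum(~a_t & ~b_t), np.sum(~a_t & b_t)],
    [np.sum(a_t & ~b_t),  np.sum(a_t & b_t)],
])

chi2, p, _, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.2e}  ->  "
      f"{'candidate causal pair' if p < 0.01 else 'independent'}")
```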
Procedia PDF Downloads 156
567 Structural Health Monitoring using Fibre Bragg Grating Sensors in Slab and Beams
Authors: Pierre van Tonder, Dinesh Muthoo, Kim Twiname
Abstract:
Many existing and newly built structures are constructed on the basis of the engineer's design and the workmanship of the construction company. However, for larger structures, where more people are exposed to the building, structural integrity is of great importance to the safety of the occupants (Raghu, 2013). But how can the structural integrity of a building be monitored efficiently and effectively? This is where the fourth industrial revolution steps in: with minimal human interaction, data can be collected, analysed, and stored, and can also give an indication of any inconsistencies found in the data collected. This is where the fibre Bragg grating (FBG) monitoring system is introduced. This paper illustrates how data can be collected and converted to develop stress-strain behaviour and to produce bending moment diagrams for the assessment and prediction of a structure's integrity. Embedded fibre optic sensors were used in this study, fibre Bragg grating sensors in particular. The procedure made use of the wavelength-shift demodulation technique, with gratings inscribed by the phase mask technique. The fibre optic sensors considered in this report were photosensitive and embedded in the slab and beams for data collection and analysis. Two sets of fibre cables were inserted: one purely to collect temperature recordings, and the other to collect both strain and temperature. The data were collected over a period of time, analysed, and used to produce bending moment diagrams to make predictions of the structure's integrity. The data indicated that the fibre Bragg grating sensing system is useful and can be used for structural health monitoring in any environment. From the experimental data for the slab and beams, the moments were found to be 64.33 kN.m, 64.35 kN.m, and 45.20 kN.m (from the experimental bending moment diagram), whereas the idealised Ultimate Limit State values were 133 kN.m and 226.2 kN.m. The difference in values gave room for an early warning system; in other words, a reserve capacity of approximately 50% to failure.
Keywords: fibre bragg grating, structural health monitoring, fibre optic sensors, beams
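The two-cable arrangement described above enables temperature compensation. A minimal sketch of the standard FBG conversion, using typical silica-fibre constants (illustrative values, not the paper's calibration): the relative Bragg wavelength shift is Δλ/λ₀ = (1 − pₑ)ε + (α + ξ)ΔT, so the strain-free fibre's shift yields ΔT, which is then removed from the bonded fibre's shift.

```python
# Typical constants for silica fibre (illustrative values)
P_E = 0.22       # effective photo-elastic coefficient
ALPHA = 0.55e-6  # thermal expansion coefficient (1/K)
XI = 8.6e-6      # thermo-optic coefficient (1/K)

def fbg_strain(lam0_nm, d_lam_strain_nm, d_lam_temp_nm):
    """Temperature-compensated strain from two co-located FBGs.

    d_lam_temp_nm comes from the strain-free (temperature-only) fibre,
    d_lam_strain_nm from the fibre bonded to the structure.
    """
    dT = (d_lam_temp_nm / lam0_nm) / (ALPHA + XI)          # kelvin
    thermal_part = (ALPHA + XI) * dT
    strain = ((d_lam_strain_nm / lam0_nm) - thermal_part) / (1 - P_E)
    return strain, dT

# Example: 1550 nm grating, 0.10 nm shift on the strain fibre,
# 0.02 nm shift on the temperature-only fibre
eps, dT = fbg_strain(1550.0, 0.10, 0.02)
print(f"strain ~ {eps * 1e6:.0f} microstrain, dT ~ {dT:.2f} K")
```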
Procedia PDF Downloads 139
566 Road Accident Blackspot Analysis: Development of Decision Criteria for Accident Blackspot Safety Strategies
Authors: Tania Viju, Bimal P., Naseer M. A.
Abstract:
This study aims to develop a conceptual framework for a decision support system (DSS) that helps decision-makers dynamically choose appropriate safety measures for each identified accident blackspot. An accident blackspot is a segment of road where the frequency of accident occurrence is disproportionately greater than on other sections of the roadway. According to a report by the World Bank, India accounts for the highest share, eleven percent, of global road accident deaths, with just one percent of the world's vehicles. Hence, in 2015, the Ministry of Road Transport and Highways of India gave prime importance to the rectification of accident blackspots. To enhance road traffic safety and reduce the traffic accident rate, effectively identifying and rectifying accident blackspots is of great importance. This study helps to understand and evaluate the existing methods of accident blackspot identification and prediction used around the world and their application to Indian roadways. The decision support system, with the help of IoT, ICT, and smart systems, acts as a management and planning tool for the government for employing efficient and cost-effective rectification strategies. In order to develop decision criteria, several factors, in terms of both quantitative and qualitative data, that influence the safety conditions of the road are analyzed. Factors include past accident severity data, occurrence time, light, weather, and road conditions, visibility, driver conditions, junction type, land use, road markings and signs, road geometry, etc. The framework conceptualizes decision-making by classifying blackspot stretches based on factors like accident occurrence time and different climatic and road conditions, and by suggesting mitigation measures based on these identified factors. The decision support system will help the public administration dynamically manage and plan the safety interventions required to enhance the safety of the road network.
Keywords: decision support system, dynamic management, road accident blackspots, road safety
Procedia PDF Downloads 144
565 An Experimental Investigation on Explosive Phase Change of Liquefied Propane During a BLEVE Event
Authors: Frederic Heymes, Michael Albrecht Birk, Roland Eyssette
Abstract:
Boiling Liquid Expanding Vapor Explosion (BLEVE) has been a well-known industrial accident for over six decades, and yet it is still poorly predicted and avoided. A BLEVE occurs when a vessel containing a pressure liquefied gas (PLG) is engulfed in a fire until the tank ruptures. At this moment, the pressure drops suddenly, leaving the liquid in a superheated state. The vapor expansion and the violent boiling of the liquid produce several shock waves. This work aimed at understanding the contribution of the vapor and liquid phases to the overpressure generation in the near field. An experimental study was undertaken at small scale to reproduce realistic BLEVE explosions. Key parameters were controlled through the experiments, such as failure pressure, fluid mass in the vessel, and weakened length of the vessel. Thirty-four propane BLEVEs were then performed to collect data on scenarios similar to common industrial cases. The aerial overpressure was recorded all around the vessel, as were the internal pressure change during the explosion and the ground loading under the vessel. Several high-speed cameras were used to observe the vessel explosion and the blast creation by shadowgraph. The results highlight how the pressure field is anisotropic around the cylindrical vessel and reveal a strong dependency between vapor content and the maximum overpressure from the lead shock. The time chronology of events reveals that the vapor phase is the main contributor to the aerial overpressure peak, and a prediction model is built upon this assumption. Secondary flow patterns are observed after the lead shock. A theory on how the second shock observed in the experiments forms is presented, thanks to an analogy with numerical simulation. The phase change dynamics are also discussed, thanks to a window in the vessel. Ground loading measurements are finally presented and discussed to give insight into the order of magnitude of the force.
Keywords: phase change, superheated state, explosion, vapor expansion, blast, shock wave, pressure liquefied gas
Procedia PDF Downloads 77
564 Testing and Validation of Stochastic Models in Epidemiology
Authors: Snigdha Sahai, Devaki Chikkavenkatappa Yellappa
Abstract:
This study outlines approaches for testing and validating stochastic models used in epidemiology, focusing on the integration and functional testing of simulation code. It details methods for combining simple functions into comprehensive simulations, distinguishing between deterministic and stochastic components, and applying tests to ensure robustness. Techniques include isolating stochastic elements, utilizing large sample sizes for validation, and handling special cases. Practical examples are provided using R code to demonstrate integration testing, the handling of incorrect inputs, and special cases. The study emphasizes the importance of both functional and defensive programming to enhance code reliability and user-friendliness.
Keywords: computational epidemiology, epidemiology, public health, infectious disease modeling, statistical analysis, health data analysis, disease transmission dynamics, predictive modeling in health, population health modeling, quantitative public health, random sampling simulations, randomized numerical analysis, simulation-based analysis, variance-based simulations, algorithmic disease simulation, computational public health strategies, epidemiological surveillance, disease pattern analysis, epidemic risk assessment, population-based health strategies, preventive healthcare models, infection dynamics in populations, contagion spread prediction models, survival analysis techniques, epidemiological data mining, host-pathogen interaction models, risk assessment algorithms for disease spread, decision-support systems in epidemiology, macro-level health impact simulations, socioeconomic determinants in disease spread, data-driven decision making in public health, quantitative impact assessment of health policies, biostatistical methods in population health, probability-driven health outcome predictions
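The abstract's practical examples are in R; the same ideas translate directly, and the sketch below (in Python, to match the other examples in this listing) shows three of the named techniques: pinning the random seed so a stochastic path is reproducible, validating a stochastic estimator statistically with a large sample and a tolerance, and defensive handling of incorrect inputs.

```python
import numpy as np

def simulate_outbreak(n_contacts, p_transmit, rng):
    """Toy stochastic component: number of secondary infections."""
    if not 0 <= p_transmit <= 1:
        raise ValueError("p_transmit must be a probability")  # defensive check
    return rng.binomial(n_contacts, p_transmit)

def test_reproducible_with_fixed_seed():
    # Isolate the stochastic element: a fixed seed pins the whole path
    r1 = simulate_outbreak(10, 0.3, np.random.default_rng(123))
    r2 = simulate_outbreak(10, 0.3, np.random.default_rng(123))
    assert r1 == r2

def test_mean_matches_theory_with_large_sample():
    # Statistical validation: compare the sample mean against the
    # theoretical mean n*p, with a tolerance set by the standard error
    rng = np.random.default_rng(0)
    n, p, reps = 10, 0.3, 50_000
    draws = np.array([simulate_outbreak(n, p, rng) for _ in range(reps)])
    se = np.sqrt(n * p * (1 - p) / reps)
    assert abs(draws.mean() - n * p) < 4 * se   # rare false failure (~4 sigma)

def test_rejects_bad_input():
    # Defensive programming: incorrect inputs must fail loudly
    try:
        simulate_outbreak(10, 1.5, np.random.default_rng(0))
    except ValueError:
        return
    raise AssertionError("bad probability was accepted")

for t in (test_reproducible_with_fixed_seed,
          test_mean_matches_theory_with_large_sample,
          test_rejects_bad_input):
    t()
print("all tests passed")
```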
Procedia PDF Downloads 7
563 The Role of Motivational Beliefs and Self-Regulated Learning Strategies in the Prediction of Mathematics Teacher Candidates' Technological Pedagogical and Content Knowledge (TPACK) Perceptions
Authors: Ahmet Erdoğan, Şahin Kesici, Mustafa Baloğlu
Abstract:
Information technologies have led to changes in the areas of communication, learning, and teaching. Besides offering many opportunities to learners, these technologies have changed the teaching methods and beliefs of teachers. What Technological Pedagogical Content Knowledge (TPACK) means to teachers is considerably important for integrating technology successfully into teaching processes. It is necessary to understand how to plan and apply teacher training programs in order to balance students' pedagogical and technological knowledge. Because of many inefficient teacher training programs, teachers have difficulties in relating technology, pedagogy, and content knowledge to each other. In providing efficient training supported by technology, understanding the three main components (technological, pedagogical, and content knowledge) and their relationships is crucial. The purpose of this study is to determine whether motivational beliefs and self-regulated learning strategies are significant predictors of mathematics teacher candidates' TPACK perceptions. One hundred seventy-five Turkish mathematics teacher candidates responded to the Motivated Strategies for Learning Questionnaire (MSLQ) and the Technological Pedagogical and Content Knowledge (TPACK) Scale. Of the group, 129 (73.7%) were women and 46 (26.3%) were men. Participants' ages ranged from 20 to 31 years, with a mean of 23.04 years (SD = 2.001). In this study, a multiple linear regression analysis was used to test the relationship between the predictor variables, mathematics teacher candidates' motivational beliefs and self-regulated learning strategies, and the dependent variable, TPACK perceptions. It was determined that self-efficacy for learning and performance and intrinsic goal orientation are significant predictors of mathematics teacher candidates' TPACK perceptions. Additionally, mathematics teacher candidates' critical thinking, metacognitive self-regulation, organisation, time and study environment management, and help-seeking were found to be significant predictors of their TPACK perceptions.
Keywords: candidate mathematics teachers, motivational beliefs, self-regulated learning strategies, technological and pedagogical knowledge, content knowledge
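A skeletal version of this analysis (with hypothetical column names for the MSLQ subscale scores and the TPACK outcome), in which predictor significance is read from the coefficient t-tests:

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical data frame of MSLQ subscale scores and a TPACK score
df = pd.read_csv("mslq_tpack.csv")  # assumed file layout
predictors = ["self_efficacy", "intrinsic_goal", "critical_thinking",
              "metacog_self_reg", "organisation", "time_env_mgmt",
              "help_seeking"]

X = sm.add_constant(df[predictors])   # add an intercept term
model = sm.OLS(df["tpack"], X).fit()

# Coefficient t-tests identify which beliefs/strategies predict TPACK
print(model.summary())
significant = model.pvalues[model.pvalues < 0.05].index.tolist()
print("significant predictors:", [p for p in significant if p != "const"])
```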
Procedia PDF Downloads 482
562 Geospatial Analysis of Hydrological Response to Forest Fires in Small Mediterranean Catchments
Authors: Bojana Horvat, Barbara Karleusa, Goran Volf, Nevenka Ozanic, Ivica Kisic
Abstract:
Forest fire is a major threat in many regions of Croatia, especially in coastal areas. Although fires are sometimes caused by natural processes, the most common cause is the human factor, intentional or unintentional. Forest fires drastically transform landscapes and influence natural processes. The main goal of the presented research is to analyse and quantify the impact of forest fire on hydrological processes and to propose the model that best describes changes in hydrological patterns in the analysed catchments. Keeping in mind the spatial component of the processes, geospatial analysis is performed to gain better insight into the spatial variability of the hydrological response to disastrous events. To that end, two catchments that experienced a severe forest fire were delineated, and various hydrological and meteorological data, both attribute and spatial, were collected. The major drawback is certainly the lack of hydrological data, common in small torrential karstic streams; hence, modelling results should be validated with data collected in a catchment that has similar characteristics and established hydrological monitoring. The event chosen for the modelling is the forest fire that occurred in July 2019 and burned nearly 10% of the analysed area. Surface (land use/land cover) conditions before and after the event were derived from two Sentinel-2 images. The mapping of the burnt area is based on a comparison of the Normalized Burn Ratio (NBR) computed from both images. To estimate and compare hydrological behaviour before and after the event, curve number (CN) values are assigned to the land use/land cover classes derived from the satellite images. Hydrological modelling resulted in surface runoff generation and hence the prediction of the hydrological response of the catchments to a forest fire event. The research was supported by the Croatian Science Foundation through the project 'Influence of Open Fires on Water and Soil Quality' (IP-2018-01-1645).
Keywords: Croatia, forest fire, geospatial analysis, hydrological response
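The burnt-area mapping and curve number steps can be sketched briefly. NBR = (NIR − SWIR) / (NIR + SWIR), and the pre/post difference dNBR highlights burnt pixels; for Sentinel-2 the NIR and SWIR bands are typically band 8 and band 12. The dNBR threshold and CN values below are illustrative and must be tuned per site; the band arrays are assumed to be already loaded as reflectance.

```python
import numpy as np

def nbr(nir, swir):
    # Normalized Burn Ratio: high for healthy vegetation, low for burnt areas
    return (nir - swir) / (nir + swir + 1e-9)

def burn_mask(nir_pre, swir_pre, nir_post, swir_post, threshold=0.27):
    # dNBR = NBR_pre - NBR_post; ~0.27 is a common moderate-severity
    # threshold, but it should be tuned per site
    dnbr = nbr(nir_pre, swir_pre) - nbr(nir_post, swir_post)
    return dnbr > threshold

def scs_runoff(p_mm, cn):
    """SCS curve number runoff (mm) for a rainfall depth p_mm."""
    s = 25400.0 / cn - 254.0            # potential retention (mm)
    ia = 0.2 * s                        # initial abstraction
    return np.where(p_mm > ia, (p_mm - ia) ** 2 / (p_mm + 0.8 * s), 0.0)

# Burnt pixels get a higher CN (less infiltration, more runoff)
cn_map = np.full((100, 100), 70.0)
burnt = np.zeros((100, 100), dtype=bool)
burnt[40:60, 40:60] = True
cn_map[burnt] = 90.0
print("mean runoff for a 50 mm storm (mm):", scs_runoff(50.0, cn_map).mean())
```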
Procedia PDF Downloads 136
561 Detecting Natural Fractures and Modeling Them to Optimize Field Development Plan in Libyan Deep Sandstone Reservoir (Case Study)
Authors: Tarek Duzan
Abstract:
Fractures are a fundamental property of most reservoirs. Despite their abundance, they remain difficult to detect and quantify. The most effective characterization of fractured reservoirs is accomplished by integrating geological, geophysical, and engineering data. Detecting fractures and defining their relative contribution is crucial in the early stages of exploration and, later, in the production of any field, because fractures can completely change our thinking, efforts, and planning for producing a specific field properly. From the structural point of view, all reservoirs are fractured to some extent. The North Gialo field is thought to be a naturally fractured reservoir to some extent. Historically, naturally fractured reservoirs are more complicated in terms of exploration and production efforts, and most geologists tend to deny the presence of fractures as an effective variable. Our aim in this paper is to determine the degree of fracturing so that, consequently, our evaluation and planning can be done properly and efficiently from day one. The challenging part in this field is that there are not enough data, nor straightforward well testing, to let us be completely comfortable with the idea of fracturing; however, we cannot ignore the fractures completely. Logging images, available well tests, and limited core studies are our tools at this stage for evaluating, modeling, and predicting possible fracture effects in this reservoir. The aims of this study are both fundamental and practical: to improve the prediction and diagnosis of natural-fracture attributes in the N. Gialo hydrocarbon reservoirs and to accurately simulate their influence on production. Moreover, the production of this field follows a two-phase plan: self-depletion of oil, and then a gas injection period for pressure maintenance and an increased ultimate recovery factor. Therefore, a good understanding of the fracture network is essential before proceeding with the targeted plan. New analytical methods will lead to a more realistic characterization of fractured and faulted reservoir rocks. These methods will produce data that can enhance well test and seismic interpretations, and that can readily be used in reservoir simulators.
Keywords: natural fracture, sandstone reservoir, geological, geophysical, and engineering data
Procedia PDF Downloads 93560 Evaluation of the Effect of Milk Recording Intervals on the Accuracy of an Empirical Model Fitted to Dairy Sheep Lactations
Authors: L. Guevara, Glória L. S., Corea E. E., A. Ramírez-Zamora M., Salinas-Martinez J. A., Angeles-Hernandez J. C.
Abstract:
Mathematical models are useful for identifying the characteristics of sheep lactation curves in order to develop and implement improved strategies. However, the accuracy of these models is influenced by factors such as the recording regime, mainly the interval between test-day records (TDR). The current study aimed to evaluate the effect of different TDR intervals on the goodness of fit of the Wood model (WM) applied to dairy sheep lactations. A total of 4,494 weekly TDRs from 156 lactations of dairy crossbred sheep were analyzed. Three new databases were generated from the original weekly TDR data (7D), comprising intervals of 14 (14D), 21 (21D), and 28 (28D) days. The parameters of WM were estimated using the “minpack.lm” package in the R software. The shape of the lactation curve (typical or atypical) was defined based on the WM parameters. The goodness of fit was evaluated using the mean square of prediction error (MSPE), the root of MSPE (RMSPE), Akaike's Information Criterion (AIC), the Bayesian Information Criterion (BIC), and the coefficient of correlation (r) between the actual and estimated total milk yield (TMY). WM showed an adequate estimate of TMY regardless of the TDR interval (P=0.21) and the shape of the lactation curve (P=0.42). However, we found higher values of r for typical curves compared to atypical curves (0.9 vs. 0.74), with the highest values for the 28D interval (r=0.95). Likewise, we observed an overestimated peak yield (0.92 vs. 6.6 L) and an underestimated time of peak yield (21.5 vs. 1.46) in atypical curves. The best values of RMSPE were observed for the 28D interval for both lactation curve shapes. The significantly lowest values of AIC (P=0.001) and BIC (P=0.001) were shown by the 7D interval for typical and atypical curves. These results represent a first approach to defining an adequate recording regime for dairy sheep in Latin America and showed a better fit of the Wood model using the 7D interval. However, it is possible to obtain good estimates of TMY using a 28D interval, which reduces the sampling frequency and would save additional costs for dairy sheep producers.Keywords: gamma incomplete, ewes, shape curves, modeling
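The abstract fits WM in R with minpack.lm; the following is a minimal Python equivalent using scipy, with invented weekly test-day records purely for illustration (none of these numbers come from the study):

```python
import numpy as np
from scipy.optimize import curve_fit

def wood(t, a, b, c):
    """Wood incomplete-gamma lactation curve: y(t) = a * t^b * exp(-c*t)."""
    return a * t**b * np.exp(-c * t)

# hypothetical weekly test-day records: week of lactation, daily yield (L)
t = np.array([1, 2, 3, 4, 6, 8, 10, 14, 18, 22, 26, 30], dtype=float)
y = np.array([0.9, 1.2, 1.4, 1.5, 1.5, 1.4, 1.3, 1.1, 0.9, 0.8, 0.6, 0.5])

(a, b, c), _ = curve_fit(wood, t, y, p0=(1.0, 0.2, 0.05))

peak_time = b / c                      # week of peak yield
peak_yield = wood(peak_time, a, b, c)  # daily yield at the peak (L)

# total milk yield: integrate the fitted daily-yield curve over a
# 30-week lactation; the factor 7 converts the weekly grid to days
weeks = np.linspace(1, 30, 300)
tmy = np.trapz(wood(weeks, a, b, c), weeks) * 7

# a sparser recording regime (e.g. 28D) can be simulated by thinning
# the records, e.g. fitting curve_fit(wood, t[::4], y[::4], ...)
```

Refitting the same lactation on progressively thinned records is one way to reproduce the 7D/14D/21D/28D comparison the study reports.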
Procedia PDF Downloads 78559 Identification of the Expression of Top Deregulated MiRNAs in Rheumatoid Arthritis and Osteoarthritis
Authors: Hala Raslan, Noha Eltaweel, Hanaa Rasmi, Solaf Kamel, May Magdy, Sherif Ismail, Khalda Amr
Abstract:
Introduction: Rheumatoid arthritis (RA) is an inflammatory, autoimmune disorder with progressive joint damage. Osteoarthritis (OA) is a degenerative disease of the articular cartilage that shows multiple clinical manifestations or symptoms resembling those of RA. Genetic predisposition is believed to be a principal etiological factor for RA and OA. In this study, we aimed to measure the expression of the top deregulated miRNAs that might contribute to pathogenesis in both diseases, according to our latest NGS analysis. Six of the deregulated miRNAs were selected because they had multiple target genes in the RA pathway and so are more likely to affect RA pathogenesis. Methods: Eighty cases were recruited in this study: 45 rheumatoid arthritis (RA) and 30 osteoarthritis (OA) patients, as well as 20 healthy controls. The selection of the miRNAs from our latest NGS study was done using miRwalk, according to the number of their target genes that are members of the KEGG RA pathway. Total RNA was isolated from the plasma of all recruited cases. The cDNA was generated with the miRcury RT Kit and then used as a template for real-time PCR with miRcury Primer Assays and the miRcury SYBR Green PCR Kit. Fold changes were calculated from Ct values using the ΔΔCt method of relative quantification. Results were compared between RA and controls and between OA and controls. Target gene prediction and functional annotation of the deregulated miRNAs were done using Mienturnet. Results: Six miRNAs were selected: miR-15b-3p, -128-3p, -194-3p, -328-3p, -542-3p, and -3180-5p. In RA samples, three of the measured miRNAs were upregulated (miR-194, -542, and -3180; mean Rq = 2.6, 3.8, and 8.05; P-value = 0.07, 0.05, and 0.01, respectively), while the remaining three were downregulated (miR-15b, -128, and -328; mean Rq = 0.21, 0.39, and 0.6; P-value < 0.0001, < 0.0001, and 0.02, respectively), all with high statistical significance except miR-194. In OA samples, two of the measured miRNAs were upregulated (miR-194 and -3180; mean Rq = 2.6 and 7.7; P-value = 0.1 and 0.03, respectively), while the remaining four were downregulated (miR-15b, -128, -328, and -542; mean Rq = 0.5, 0.03, 0.08, and 0.5; P-value = 0.0008, 0.003, 0.006, and 0.4, respectively), with statistical significance compared to controls except for miR-194 and miR-542. The functional enrichment of the selected top deregulated miRNAs revealed the highly enriched KEGG pathways and GO terms. Conclusion: Five of the studied miRNAs were greatly deregulated in RA and OA; they might be highly involved in the disease pathogenesis and so might be future therapeutic targets. Further functional studies are crucial to assess their roles and actual target genes.Keywords: MiRNAs, expression, rheumatoid arthritis, osteoarthritis
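A minimal sketch of the relative quantification step, assuming a single reference miRNA and illustrative Ct values that are not the study's data:

```python
def fold_change(ct_target_case, ct_ref_case, ct_target_ctrl, ct_ref_ctrl):
    """Relative quantification (Rq) by the Livak 2^-ΔΔCt method."""
    d_ct_case = ct_target_case - ct_ref_case  # ΔCt in the patient sample
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl  # ΔCt in the control sample
    dd_ct = d_ct_case - d_ct_ctrl             # ΔΔCt
    return 2.0 ** (-dd_ct)

# illustrative Ct values (target miRNA vs. a reference small RNA):
print(fold_change(26.1, 20.0, 24.0, 20.0))  # Rq ≈ 0.23 -> downregulated
print(fold_change(22.5, 20.0, 24.0, 20.0))  # Rq ≈ 2.83 -> upregulated
```

Rq values below 1 correspond to the downregulated miRNAs the abstract reports (e.g., Rq = 0.21 for miR-15b in RA) and values above 1 to the upregulated ones (e.g., Rq = 2.6 for miR-194).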
Procedia PDF Downloads 79558 A Conceptual Framework of the Individual and Organizational Antecedents to Knowledge Sharing
Authors: Muhammad Abdul Basit Memon
Abstract:
The importance of organizational knowledge sharing and knowledge management has been documented in numerous studies in the available literature, since knowledge sharing is recognized as a founding pillar of superior organizational performance and a source of competitive advantage. Accordingly, most successful organizations treat knowledge management and knowledge sharing as matters of high strategic importance and spend huge amounts on the effective management and sharing of organizational knowledge. However, despite some very serious endeavors, many firms fail to capitalize on the benefits of knowledge sharing because they are unaware of the individual characteristics and the interpersonal, organizational, and contextual factors that influence knowledge sharing; in short, the antecedents to knowledge sharing. The extant literature offers a wide range of such antecedents across numerous research articles and studies. Some previous studies examined antecedents to inter-organizational knowledge transfer; others focused on inter- and intra-organizational knowledge sharing, and still others investigated organizational factors. Some organizational antecedents relate to the characteristics and underlying aspects of the knowledge being shared, e.g., the specificity and complexity of the underlying knowledge to be transferred; others relate to specific organizational characteristics, e.g., the age and size of the organization, decentralization, and the absorptive capacity of the firm; and still others relate to the social relations and networks of organizations, such as social ties, trusting relationships, and value systems. In the same way, some researchers have highlighted only one aspect, such as organizational commitment, transformational leadership, knowledge-centred culture, learning and performance orientation, or social network-based relationships in organizations. The bulk of the existing research articles on antecedents to knowledge sharing has mainly discussed organizational or environmental factors. The focus, however, later shifted towards the analysis of individual or personal determinants as antecedents of the individual's engagement in knowledge-sharing activities, such as personality traits, attitude, and self-efficacy. For example, employees' goal orientations (i.e., learning orientation or performance orientation) are an important individual antecedent of knowledge-sharing behaviour. Consistent with the existing literature, therefore, the antecedents to knowledge sharing can be classified as individual and organizational. This paper is an endeavor to discuss a conceptual framework of the individual and organizational antecedents to knowledge sharing in the light of the available literature and empirical evidence. This model not only helps in gaining familiarity with and comprehension of the subject matter by presenting a holistic view of the antecedents to knowledge sharing as discussed in the literature, but can also help business managers, and especially human resource managers, find insights into the salient features of organizational knowledge sharing.
Moreover, this paper can provide a ground for research students and academicians to conduct both qualitative and quantitative research and to design an instrument for conducting a survey on the topic of individual and organizational antecedents to knowledge sharing.Keywords: antecedents to knowledge sharing, knowledge management, individual and organizational, organizational knowledge sharing
Procedia PDF Downloads 324557 Importance of Prostate Volume, Prostate Specific Antigen Density and Free/Total Prostate Specific Antigen Ratio for Prediction of Prostate Cancer
Authors: Aliseydi Bozkurt
Abstract:
Objectives: Benign prostatic hyperplasia (BPH) is the most common benign disease, and prostate cancer (PC) is a malignant disease, of the prostate gland. Transrectal ultrasound-guided biopsy (TRUS-bx) is one of the most important diagnostic tools in PC diagnosis. Identifying men at increased risk of having biopsy-detectable prostate cancer should consider prostate-specific antigen density (PSAD), the f/t PSA ratio, and an estimate of prostate volume. Method: We retrospectively studied 269 patients who had a prostate-specific antigen (PSA) level of 4 ng/mL or higher, or a suspicious rectal examination at any PSA level, and who underwent TRUS-bx between January 2015 and June 2018 in our clinic. TRUS-bx was performed by 12 experienced urologists using a 12-quadrant scheme. Prostate volume was calculated prior to biopsy using TRUS. Patients were classified as malignant or benign according to the final pathology. Age, PSA value, prostate volume on transrectal ultrasonography, the number of biopsy cores, the biopsy pathology result, the number of cancer-positive cores, and the Gleason score were evaluated in the study. The success of PV, PSAD, and f/t PSA in predicting prostate cancer was compared in all patients and in those with PSA 2.5-10 ng/mL and 10.1-30 ng/mL. Result: In the present study, in patients with PSA 2.5-10 ng/mL, the PV cut-off value was 43.5 mL (n=42 < 43.5 mL and n=102 > 43.5 mL), while in those with PSA 10.1-30 ng/mL the prostate volume (PV) cut-off value was 61.5 mL (n=31 < 61.5 mL and n=36 > 61.5 mL). In the group with PSA 2.5-10 ng/mL, total PSA values were lower in patients with PV < 43.5 mL (6.0 ± 1.3 vs. 6.7 ± 1.7); this difference was of borderline significance (p=0.043). In the group with PSA 10.1-30 ng/mL, no significant difference in total PSA was found between patients with PV < 61.5 mL and those with PV > 61.5 mL (p=0.117). In the group with PSA 2.5-10 ng/mL, the f/t PSA value was significantly lower in patients with PV < 43.5 mL than in those with PV > 43.5 mL (0.21 ± 0.09 vs. 0.26 ± 0.09, p < 0.001). Similarly, in the group with PSA 10.1-30 ng/mL, the f/t PSA value was significantly lower in patients with PV < 61.5 mL (0.16 ± 0.08 vs. 0.23 ± 0.10, p=0.003). In the group with PSA 2.5-10 ng/mL, the PSAD value was significantly higher in patients with PV < 43.5 mL than in those with PV > 43.5 mL (0.17 ± 0.06 vs. 0.10 ± 0.03, p < 0.001). Similarly, in the group with PSA 10.1-30 ng/mL, the PSAD value was significantly higher in patients with PV < 61.5 mL (0.47 ± 0.23 vs. 0.17 ± 0.08, p < 0.001). The biopsy results show that in the group with PSA 2.5-10 ng/mL, cancer was detected in 29 of the patients with PV < 43.5 mL (69%) and not detected in 13 (31%), whereas cancer was found in 19 patients with PV > 43.5 mL (18.6%) and not found in 83 (81.4%) (p < 0.001). In the group with PSA 10.1-30 ng/mL, cancer was observed in 21 patients with PV < 61.5 mL (67.7%) and not observed in 10 (32.3%), whereas cancer was found in 5 patients with PV > 61.5 mL (13.9%) and not found in 31 (86.1%) (p < 0.001). Conclusions: Identifying men at increased risk of having biopsy-detectable prostate cancer should consider PSA, the f/t PSA ratio, and an estimate of prostate volume; in this cohort, prostate volume was lower in patients with PC.Keywords: prostate cancer, prostate volume, prostate specific antigen, free/total PSA ratio
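A minimal sketch of the two derived markers the study compares, with illustrative values that are not taken from the cohort:

```python
def psa_density(total_psa_ng_ml, prostate_volume_ml):
    """PSA density (PSAD) = serum total PSA / TRUS-estimated prostate volume."""
    return total_psa_ng_ml / prostate_volume_ml

def free_to_total_ratio(free_psa_ng_ml, total_psa_ng_ml):
    """f/t PSA ratio; lower values were associated with cancer in this cohort."""
    return free_psa_ng_ml / total_psa_ng_ml

# illustrative patient in the PSA 2.5-10 ng/mL band (values assumed):
psad = psa_density(6.0, 40.0)          # 0.15 ng/mL/mL
ft = free_to_total_ratio(1.2, 6.0)     # 0.20
# in the study, smaller glands (< 43.5 mL in this PSA band) showed higher
# PSAD, lower f/t PSA, and a markedly higher biopsy-positive rate
```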
Procedia PDF Downloads 150556 Assessment of the Effects of Urban Development on Urban Heat Islands and Community Perception in Semi-Arid Climates: Integrating Remote Sensing, GIS Tools, and Social Analysis - A Case Study of the Aures Region (Khanchela), Algeria
Authors: Amina Naidja, Zedira Khammar, Ines Soltani
Abstract:
This study investigates the impact of urban development on the urban heat island (UHI) effect in the semi-arid Aures region of Algeria, integrating remote sensing data with statistical analysis and community surveys to examine the interconnected environmental and social dynamics. Using Landsat 8 satellite imagery, temporal variations in the Normalized Difference Vegetation Index (NDVI), Normalized Difference Built-up Index (NDBI), and land use/land cover (LULC) changes are analyzed to understand patterns of urbanization and environmental transformation. These environmental metrics are correlated with land surface temperature (LST) data derived from remote sensing to quantify the UHI effect. To incorporate the social dimension, a structured questionnaire survey is conducted among residents in selected urban areas. The survey assesses community perceptions of urban heat, its impacts on daily life, health concerns, and coping strategies. Statistical analysis is employed to analyze survey responses, identifying correlations between demographic factors, socioeconomic status, and perceived heat stress. Preliminary findings reveal significant correlations between built-up areas (NDBI) and higher LST, indicating the contribution of urbanization to local warming. Conversely, areas with higher vegetation cover (NDVI) exhibit lower LST, highlighting the cooling effect of green spaces. Social survey results provide insights into how UHI affects different demographic groups, with vulnerable populations experiencing greater heat-related challenges. By integrating remote sensing analysis with statistical modeling and community surveys, this study offers a comprehensive understanding of the environmental and social implications of urban development in semi-arid climates. The findings contribute to evidence-based urban planning strategies that prioritize environmental sustainability and social well-being. Future research should focus on policy recommendations and community engagement initiatives to mitigate UHI impacts and promote climate-resilient urban development.Keywords: urban heat island, remote sensing, social analysis, NDVI, NDBI, LST, community perception
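A minimal sketch of the two index computations, assuming Landsat 8 OLI surface-reflectance bands (band 4 = red, band 5 = NIR, band 6 = SWIR1); the array names are hypothetical:

```python
import numpy as np

def ndvi(nir, red):
    """NDVI from Landsat 8 OLI: NIR = band 5, red = band 4."""
    return (nir - red) / (nir + red + 1e-10)

def ndbi(swir1, nir):
    """NDBI from Landsat 8 OLI: SWIR1 = band 6, NIR = band 5."""
    return (swir1 - nir) / (swir1 + nir + 1e-10)

# with the two index rasters and an LST raster on the same grid, the
# correlations the abstract reports can be checked pixel-wise, e.g.:
# r_ndbi = np.corrcoef(ndbi_arr.ravel(), lst_arr.ravel())[0, 1]  # expected > 0
# r_ndvi = np.corrcoef(ndvi_arr.ravel(), lst_arr.ravel())[0, 1]  # expected < 0
```

A positive NDBI-LST correlation and a negative NDVI-LST correlation are exactly the warming and cooling patterns the preliminary findings describe.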
Procedia PDF Downloads 41555 Geospatial Analysis for Predicting Sinkhole Susceptibility in Greene County, Missouri
Authors: Shishay Kidanu, Abdullah Alhaj
Abstract:
Sinkholes in the karst terrain of Greene County, Missouri, pose significant geohazards, imposing challenges on construction and infrastructure development, with potential threats to lives and property. To address these issues, understanding the influencing factors and modeling sinkhole susceptibility is crucial for effective mitigation through strategic changes in land use planning and practices. This study utilizes geographic information system (GIS) software to collect and process diverse data, including topographic, geologic, hydrogeologic, and anthropogenic information. Nine key sinkhole-influencing factors, ranging from slope characteristics to proximity to geological structures, were carefully analyzed. The Frequency Ratio method establishes relationships between attribute classes of these factors and sinkhole events, deriving class weights to indicate their relative importance. Weighted integration of these factors is accomplished using the Analytic Hierarchy Process (AHP) and the Weighted Linear Combination (WLC) method in a GIS environment, resulting in a comprehensive sinkhole susceptibility index (SSI) model for the study area. Employing the Jenks natural breaks classification method, the SSI values are categorized into five distinct sinkhole susceptibility zones: very low, low, moderate, high, and very high. Validation of the model, conducted through the Area Under Curve (AUC) and Sinkhole Density Index (SDI) methods, demonstrates a robust correlation with sinkhole inventory data. The prediction rate curve yields an AUC value of 74%, indicating 74% validation accuracy. The SDI result further supports the success of the sinkhole susceptibility model. This model offers reliable predictions for the future distribution of sinkholes, providing valuable insights for planners and engineers in the formulation of development plans and land-use strategies. Its application extends to enhancing preparedness and minimizing the impact of sinkhole-related geohazards on both infrastructure and the community.Keywords: sinkhole, GIS, analytical hierarchy process, frequency ratio, susceptibility, Missouri
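A minimal sketch of the frequency-ratio and weighted-linear-combination steps described above, assuming the factor rasters and sinkhole inventory are held as NumPy arrays and that the AHP weights have already been derived; all variable names are hypothetical:

```python
import numpy as np

def frequency_ratio(factor_classes, sinkhole_mask):
    """Per-class frequency ratio: (% of sinkhole cells in a class) /
    (% of study-area cells in that class). FR > 1 marks classes where
    sinkholes are over-represented."""
    fr = {}
    total_cells = factor_classes.size
    total_sinks = sinkhole_mask.sum()
    for c in np.unique(factor_classes):
        in_class = factor_classes == c
        pct_area = in_class.sum() / total_cells
        pct_sink = (sinkhole_mask & in_class).sum() / total_sinks
        fr[c] = pct_sink / pct_area
    return fr

def susceptibility_index(fr_layers, ahp_weights):
    """Weighted linear combination: SSI = sum_i w_i * FR_i, where each
    fr_layers[i] is a raster of FR values for factor i and the weights
    come from an AHP pairwise-comparison step."""
    return sum(w * layer for w, layer in zip(ahp_weights, fr_layers))
```

The resulting SSI raster would then be sliced into the five zones with a natural-breaks classifier, as in the abstract.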
Procedia PDF Downloads 74554 Building a Framework for Digital Emergency Response System for Aged, Long Term Care and Chronic Disease Patients in Asia Pacific Region
Authors: Nadeem Yousuf Khan
Abstract:
This paper proposes the formation of a digital emergency response system (dERS) for aged-care, long-term-care, and chronic disease settings in the post-COVID healthcare ecosystem, focusing on the Asia Pacific market, where the aging population is increasing significantly. It focuses on the use of digital technologies such as wearables, the global positioning system (GPS), and mobile applications to build an integrated care system for older adults with comorbidities and other chronic diseases. The paper presents a conceptual framework of a connected digital health ecosystem that not only provides proactive care to registered patients but also prevents damage from sudden events such as strokes by alerting and treating patients in a digitally connected and coordinated manner. A detailed review of existing digital health technologies such as wearables, GPS, and mobile apps was conducted in the context of the new post-COVID healthcare paradigm, along with a literature review on digital health policies and usability. A good number of research papers are available on the application of digital health, but very few discuss the formation of a new framework for a connected digital ecosystem for the aged-care population, which is increasing around the globe. A connected digital emergency response system is proposed by the author, whereby all registered patients (chronic disease and aged/long-term care) are connected to the proposed dERS. In the proposed ecosystem, patients are provided with a tracking wristband and a mobile app through which the control room monitors mobility and vital signs and events such as atrial fibrillation (AF), blood sugar, and blood pressure. In addition, an alert raised if the patient falls adds further value to the system. In case of any deviation in the vitals, an alert is sent to the dERS 24/7; dERS clinical staff immediately act on that alert, which goes to the connected hospital and the ambulance service providers, and the patient is escorted to the nearest connected tertiary care hospital. By the time the patient reaches the hospital, the dERS team is ready to take appropriate clinical action to save the patient's life. Disaster can be averted for stroke or myocardial infarction patients if they have access to such connected, engaged healthcare. This dERS will play an effective role in saving the lives of aged patients and patients with chronic comorbidities.Keywords: aged care, atrial fibrillation, digital health, digital emergency response system, digital technology
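A minimal sketch of the alerting rule at the heart of the proposed framework, with entirely hypothetical field names and thresholds; in a real dERS the limits would be set per patient by the clinical team:

```python
from dataclasses import dataclass

@dataclass
class VitalsReading:
    patient_id: str
    heart_rate: int       # bpm, from the wristband
    systolic_bp: int      # mmHg
    blood_glucose: float  # mmol/L
    fall_detected: bool
    lat: float            # GPS position for ambulance dispatch
    lon: float

def needs_emergency_response(r: VitalsReading) -> bool:
    """Illustrative rule set for the dERS control room."""
    return (r.fall_detected
            or r.heart_rate < 40 or r.heart_rate > 140
            or r.systolic_bp < 90 or r.systolic_bp > 200
            or r.blood_glucose < 3.0 or r.blood_glucose > 20.0)

def on_reading(r: VitalsReading) -> None:
    if needs_emergency_response(r):
        # notify the control room, the nearest connected hospital,
        # and the ambulance service, with the patient's GPS location
        print(f"ALERT {r.patient_id}: dispatch to ({r.lat}, {r.lon})")
```

The value of the framework lies less in any single rule than in the fact that the alert, the hospital, and the ambulance dispatch are wired into one coordinated flow.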
Procedia PDF Downloads 122