Search results for: large scale maps
11679 In-vitro Metabolic Fingerprinting Using Plasmonic Chips by Laser Desorption/Ionization Mass Spectrometry
Authors: Vadanasundari Vedarethinam, Kun Qian
Abstract:
Metabolic analysis is more distal than proteomics and genomics in clinical engagement and requires rationally distinct techniques, designed materials, and devices for clinical diagnosis. Conventional techniques such as spectroscopic methods, biochemical analyzers, and electrochemical methods have been used for metabolic diagnosis. Currently, there are four major challenges: (I) long sample pretreatment processes; (II) difficulties in the direct metabolic analysis of biosamples due to their complexity; (III) accurate detection of low-molecular-weight metabolites; and (IV) construction of diagnostic tools on material- and device-based platforms for real-case use in biomedical applications. The development of chips with nanomaterials is promising for addressing these critical issues. Mass spectrometry (MS) has displayed high sensitivity, accuracy, throughput, reproducibility, and resolution for molecular analysis. In particular, laser desorption/ionization mass spectrometry (LDI MS) combined with devices affords desirable speed, with mass measurement in seconds, and high sensitivity at low cost toward large-scale use. We developed a plasmonic chip for clinical metabolic fingerprinting as a hot carrier in LDI MS, fabricating a series of chips with gold nanoshells on the surface through controlled particle synthesis, dip-coating, and gold sputtering for mass production. We integrated the optimized chip with microarrays for laboratory automation and nanoscale experiments, which afforded direct, high-performance metabolic fingerprinting by LDI MS using 500 nL of serum, urine, cerebrospinal fluid (CSF), or exosomes. Further, we demonstrated on-chip, direct in-vitro metabolic diagnosis of early-stage lung cancer patients using serum and exosomes without any pretreatment or purification.
To the best of our knowledge, this work initiates a bionanotechnology-based platform for advanced metabolic analysis toward large-scale diagnostic use.
Keywords: plasmonic chip, metabolic fingerprinting, LDI MS, in-vitro diagnostics
Procedia PDF Downloads 162
11678 X-Ray Diffraction, Microstructure, and Mössbauer Studies of Nanostructured Materials Obtained by High-Energy Ball Milling
Authors: N. Boudinar, A. Djekoun, A. Otmani, B. Bouzabata, J. M. Greneche
Abstract:
High-energy ball milling is a solid-state powder processing technique that allows synthesizing a variety of equilibrium and non-equilibrium alloy phases starting from elemental powders. The advantage of this process technology is that the powder can be produced in large quantities and the processing parameters can be easily controlled, making it a suitable method for commercial applications. It can also be used to produce amorphous and nanocrystalline materials in commercially relevant amounts and is amenable to the production of a variety of alloy compositions. Mechanical alloying (high-energy ball milling) provides an inter-dispersion of elements through repeated cold welding and fracture of free powder particles; the grain size decreases to the nanometric scale and the elements mix together. Progressively, the concentration gradients disappear and eventually the elements are mixed at the atomic scale. The end products depend on many parameters, such as the milling conditions and the thermodynamic properties of the milled system. Here, the mechanical alloying technique has been used to prepare nanocrystalline Fe–50 wt.% Ni and Fe–64 wt.% Ni alloys from powder mixtures. Scanning electron microscopy (SEM) with energy-dispersive X-ray analysis and Mössbauer spectroscopy were used to study the mixing at the nanometric scale. Mössbauer spectroscopy confirmed the ferromagnetic ordering and was used to calculate the hyperfine field distribution. The Mössbauer spectra of both alloys show the existence of a ferromagnetic phase attributed to a γ-Fe-Ni solid solution.
Keywords: nanocrystalline, mechanical alloying, X-ray diffraction, Mössbauer spectroscopy, phase transformations
Procedia PDF Downloads 437
11677 Topographic Mapping of Farmland by Integration of Multiple Sensors on Board Low-Altitude Unmanned Aerial System
Authors: Mengmeng Du, Noboru Noguchi, Hiroshi Okamoto, Noriko Kobayashi
Abstract:
This paper introduces a topographic mapping system, with the advantages of time saving and simplicity, based on the integration of Light Detection and Ranging (LiDAR) data and Post Processing Kinematic Global Positioning System (PPK GPS) data. This topographic mapping system used a low-altitude Unmanned Aerial Vehicle (UAV) as a platform to conduct a land survey in a low-cost, efficient, and fully autonomous manner. An experiment on a small-scale sugarcane farmland was conducted in Queensland, Australia. Subsequently, we synchronized LiDAR distance measurements, corrected using attitude information from the gyroscope, with PPK GPS coordinates to generate precision topographic maps, which could be further utilized for applications such as precise land leveling and drainage management. The results indicated that the LiDAR distance measurements and PPK GPS altitude achieved good accuracy, with errors of less than 0.015 m.
Keywords: land survey, light detection and ranging, post processing kinematic global positioning system, precision agriculture, topographic map, unmanned aerial vehicle
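The attitude-corrected fusion of LiDAR ranges with PPK GPS altitudes can be sketched as below; the function name and the simple tilt-projection geometry are illustrative assumptions, not the authors' implementation.

```python
import math

def ground_elevation(gps_alt_m, lidar_range_m, roll_deg, pitch_deg):
    """Project a nadir LiDAR range onto the vertical using gyroscope
    roll/pitch angles, then subtract it from the PPK GPS altitude to
    obtain the ground elevation in metres."""
    roll = math.radians(roll_deg)
    pitch = math.radians(pitch_deg)
    # A tilted beam measures a slant range; its vertical component shrinks
    # by the cosine of each tilt angle.
    vertical_range = lidar_range_m * math.cos(roll) * math.cos(pitch)
    return gps_alt_m - vertical_range
```

For a level platform the correction vanishes; any tilt shortens the effective vertical range, raising the estimated ground elevation.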
Procedia PDF Downloads 236
11676 Molecular Dynamics Simulation of Free Vibration of Graphene Sheets
Authors: Seyyed Feisal Asbaghian Namin, Reza Pilafkan, Mahmood Kaffash Irzarahimi
Abstract:
This paper considers the vibration of single-layered graphene sheets using molecular dynamics (MD) and nonlocal elasticity theory. For the MD simulations, the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS), an open-source software package, is used to obtain fundamental frequencies. On the other hand, governing equations are derived using nonlocal elasticity and first-order shear deformation theory (FSDT) and solved using the generalized differential quadrature method (GDQ). The small-scale effect is introduced into the governing equations of motion through the nonlocal parameter. The effects of different side lengths, boundary conditions, and nonlocal parameter values are inspected for the aforementioned methods. Results obtained from the MD simulations are compared with those of nonlocal elasticity theory to calibrate appropriate values for the nonlocal parameter. Nonlocal parameter values are suggested for graphene sheets with various boundary conditions. Furthermore, it is shown that the nonlocal elasticity approach using classical plate theory (CLPT) assumptions overestimates the natural frequencies.
Keywords: graphene sheets, molecular dynamics simulations, fundamental frequencies, nonlocal elasticity theory, nonlocal parameter
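For a simply supported rectangular nanoplate, the classical and nonlocal natural frequencies are related by a closed-form ratio; the sketch below shows that textbook relation only (the function name and the mode wavenumbers are illustrative, not values from the paper).

```python
import math

def nonlocal_frequency_ratio(mu, m, n, a, b):
    """Ratio of the nonlocal to the classical natural frequency of a
    simply supported rectangular nanoplate for mode (m, n).
    mu   : nonlocal parameter (e0*l)^2, in the same length^2 units as a, b
    a, b : plate side lengths"""
    # Squared mode wavenumber for a simply supported plate.
    k2 = (m * math.pi / a) ** 2 + (n * math.pi / b) ** 2
    return 1.0 / math.sqrt(1.0 + mu * k2)
```

Since the ratio is below one whenever mu > 0, the classical theory (mu = 0) necessarily overestimates the natural frequencies, consistent with the conclusion above.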
Procedia PDF Downloads 521
11675 Artificial Intelligence and Distributed System Computing: Application and Practice in Real Life
Authors: Lai Junzhe, Wang Lihao, Burra Venkata Durga Kumar
Abstract:
In recent years, owing to global technological advances, big data and artificial intelligence technologies have been widely used in various industries and fields, playing an important role in reducing costs and increasing efficiency. Through its continuous progress and the ongoing work of computer scientists, artificial intelligence has derived another branch: distributed artificial intelligence computing systems. Distributed AI is a method for solving complex learning, decision-making, and planning problems, characterized by the ability to take advantage of large-scale computation and the spatial distribution of resources; accordingly, it can handle problems with large data sets. Nowadays, distributed AI is widely used in military, medical, and everyday settings, bringing great convenience and efficient operation to daily life. In this paper, we discuss three areas of distributed AI computing systems, namely vision processing, blockchain, and smart home, to introduce the performance of distributed systems and the role of AI in distributed systems.
Keywords: distributed system, artificial intelligence, blockchain, IoT, visual information processing, smart home
Procedia PDF Downloads 113
11674 Development of a Nanocompound Based Fibre to Combat Insects
Authors: Merle Bischoff, Thomas Gries, Gunnar Seide
Abstract:
Pesticides, which harm crop pests but can also interfere with the human body, are nowadays mostly applied by crop spraying. Silica particles (SiO2) at the nanometer and micrometer scale offer a physical way to combat insects without harming humans and other mammals. They thereby allow pesticides, which can harm the environment, to be foregone. Because silica particles are supplied to farmers as a powder or in suspension, their use in large-scale agriculture is insufficient due to erosion by wind and rain. When silica is implemented in a textile's surface (a nanocompound), the particles are locally bound and resist erosion while still acting against insects. By choosing polypropylene as the matrix polymer, the production of an inexpensive agritextile with an 'anti-bug' effect is made possible. In the symposium, the results of the manufacturing and filament spinning of silica nanocomposites on a polypropylene basis are compared with the fabrication of nanocomposites based on polybutylene succinate, a biodegradable polymer. The investigation focuses on the difference between the degradable and the stable nanocomposite. Emphasis is placed on the filament characteristics as well as the degradation of the nanocompound, to underline their potential use and application as an agricultural textile.
Keywords: agriculture, environment, insects, protection, silica, textile, nanocomposite
Procedia PDF Downloads 249
11673 Effect of Low Level Laser Therapy versus Polarized Light Therapy on Oral Mucositis in Cancer Patients Receiving Chemotherapy
Authors: Andrew Anis Fakhrey Mosaad
Abstract:
The goal of this study is to compare the efficacy of polarized light therapy with low-level laser therapy in treating oral mucositis brought on by chemotherapy in cancer patients. Evaluation procedures were measurement of the WHO oral mucositis scale and the common toxicity criteria scale. Techniques: Forty cancer patients (men and women) receiving chemotherapy, aged 30 to 55 years, who had oral mucositis, ulceration, and discomfort, were separated into two groups. Twenty patients in Group (A) received low-level laser therapy (LLLT) along with their regular oral mucositis medication, while twenty patients in Group (B) received Bioptron light therapy (BLT) along with their regular oral mucositis medication. Both treatments were applied for 10 minutes each day for 30 days. Conclusion and results: This study showed that the use of both BLT and LLLT on oral mucositis in cancer patients following chemotherapy produced great improvement, as seen in the sharp falls in both the WHO oral mucositis scale (OMS) and the common toxicity criteria scale (CTCS). However, low-level laser therapy (LLLT) was superior to Bioptron light therapy (BLT) in terms of benefits.
Keywords: Bioptron light therapy, low level laser therapy, oral mucositis, WHO oral mucositis scale, common toxicity criteria scale
Procedia PDF Downloads 246
11672 Three Dimensional Large Eddy Simulation of Blood Flow and Deformation in an Elastic Constricted Artery
Authors: Xi Gu, Guan Heng Yeoh, Victoria Timchenko
Abstract:
In the current work, a three-dimensional geometry of a 75% stenosed blood vessel is analysed. Large eddy simulation (LES) with a dynamic subgrid-scale Smagorinsky model is applied to model the turbulent pulsatile flow. The geometry, the transmural pressure, and the properties of the blood and the elastic boundary were based on clinical measurement data. For the flexible wall model, a thin solid region is constructed around the 75% stenosed blood vessel. The deformation of this solid region was modelled as a deforming boundary to reduce the computational cost of the solid model. Fluid-structure interaction is realised via two-way coupling between the blood flow modelled via LES and the deforming vessel. Information on the flow pressure and the wall motion was exchanged continually during the cycle by an arbitrary Lagrangian-Eulerian method, with the boundary condition of the current time step depending on previous solutions. The fluctuation of the velocity in the post-stenotic region was analysed in the study; the axial velocity at normalised position Z=0.5 shows a negative value near the vessel wall. The displacement of the elastic boundary was also examined; in particular, the wall displacements at systole and diastole were compared. The negative displacement at the stenosis indicates a collapse at the maximum velocity and during the deceleration phase.
Keywords: large eddy simulation, fluid-structure interaction, constricted artery, computational fluid dynamics
Procedia PDF Downloads 293
11671 Constructivism and Situational Analysis as Background for Researching Complex Phenomena: Example of Inclusion
Authors: Radim Sip, Denisa Denglerova
Abstract:
It is impossible to capture complex phenomena, such as inclusion, with reductionism. The most common form of reductionism is the objectivist approach, in which processes and relationships are reduced to entities and clearly outlined phases, with a consequent search for relationships between them. Constructivism as a paradigm and situational analysis as a methodological research portfolio represent a way to avoid the dominant objectivist approach. They work with a situation, i.e. with the essential blending of actors and their environment. Primary transactions take place between actors and their surroundings. Researchers create constructs based on their need to solve a problem. Concepts therefore do not describe reality, but rather a complex of real needs in relation to the available options for how such needs can be met. Examining a complex problem requires corresponding methodological tools and an overall research design. Using original research on inclusion in the Czech Republic as an example, this contribution demonstrates that inclusion is not a substance easily described, but rather a relationship field that changes its forms in response to its actors' behaviour and current circumstances. Inclusion consists of a dynamic relationship between an ideal, real circumstances, and ways to achieve that ideal under the given circumstances. Such achievement takes many shapes and thus cannot be captured by a description of objects. It can be expressed in relationships within a situation defined by time and space. Situational analysis offers tools to examine such phenomena. It understands a situation as a complex of dynamically changing aspects and prefers relationships and positions in the given situation over a clear and final definition of actors, entities, etc. Situational analysis assumes the creation of constructs as a tool for solving the problem at hand.
It emphasizes the meanings that arise in the process of coordinating human actions, and the discourses through which these meanings are negotiated. Finally, it offers "cartographic tools" (situational maps, social worlds/arenas maps, positional maps) that are able to capture complexity in other than linear-analytical ways. This approach allows inclusion to be described as a complex of phenomena taking place with a certain historical preference, a complex that can be overlooked if analyzed with a more traditional approach.
Keywords: constructivism, situational analysis, objective realism, reductionism, inclusion
Procedia PDF Downloads 148
11670 Investigation of Shear Strength, and Dilative Behavior of Coarse-grained Samples Using Laboratory Test and Machine Learning Technique
Authors: Ehsan Mehryaar, Seyed Armin Motahari Tabari
Abstract:
Coarse-grained soils are well known and commonly used in a wide range of geotechnical projects, including high earth dams and embankments, for their high shear strength. The most important engineering property of these soils is the friction angle, which represents the interlocking between soil particles and is applied widely in designing and constructing these earth structures. The friction angle and dilative behavior of coarse-grained soils can be estimated from empirical correlations with in-situ testing and physical properties of the soil, or measured directly in the laboratory by performing direct shear or triaxial tests. Unfortunately, large-scale testing is difficult, challenging, and expensive and is not possible in most soil mechanics laboratories. It is therefore common to remove the large particles and test the remainder, which cannot be counted as an exact estimate of the parameters and behavior of the original soil. This paper describes a new methodology to scale down the particle grading distribution of a well-graded gravel sample so that it can be tested in an ordinary direct shear apparatus, estimating the stress-strain behavior, friction angle, and dilative behavior of the original coarse-grained soil, considering its confining pressure and relative density, using a machine learning method. A total of 72 direct shear tests were performed at 6 different sizes, 3 different confining pressures, and 4 different relative densities. The Multivariate Adaptive Regression Spline (MARS) technique was used to develop an equation that predicts shear strength and dilative behavior based on the size distribution of the coarse-grained soil particles. An uncertainty analysis was also performed to examine the reliability of the proposed equation.
Keywords: MARS, coarse-grained soil, shear strength, uncertainty analysis
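MARS builds its model from hinge basis functions max(0, ±(x − t)). The sketch below fits a least-squares model on a fixed knot set to illustrate that basis; real MARS selects knots adaptively in forward/backward passes, so this is an illustration, not the study's fitted equation.

```python
import numpy as np

def hinge(x, knot, sign):
    """MARS hinge basis function: max(0, sign * (x - knot))."""
    return np.maximum(0.0, sign * (x - knot))

def fit_mars_like(x, y, knots):
    """Least-squares fit on a fixed set of hinge pairs plus an intercept --
    a simplified stand-in for the adaptive passes of MARS."""
    basis = [np.ones_like(x)]
    for t in knots:
        basis.append(hinge(x, t, +1.0))   # rises to the right of the knot
        basis.append(hinge(x, t, -1.0))   # rises to the left of the knot
    B = np.column_stack(basis)
    coef, *_ = np.linalg.lstsq(B, y, rcond=None)
    return B @ coef
```

A piecewise-linear target with a kink at the knot is reproduced exactly, which is precisely the flexibility that makes MARS attractive for strength-behavior correlations.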
Procedia PDF Downloads 162
11669 Distribution Patterns of Trace Metals in Soils of Gbongan-Odeyinka-Orileowu Area, Southwestern Nigeria
Authors: T. A. Adesiyan, J. A. Adekoya A. Akinlua, N. Torto
Abstract:
One hundred and eighty-six in situ soil samples of the B-horizon were collected around the Gbongan-Odeyinka-Orileowu area, southwestern Nigeria, delineated by longitudes 4°15′ and 4°30′ and latitudes 7°14′ and 7°31′, for a reconnaissance geochemical soil survey. The objective was to determine the distribution patterns of some trace metals in the area with a view to discovering any indication of metallic mineralization. The samples were air-dried and sieved to obtain the minus 230 µm fractions, which were used for pH determinations and subjected to hot aqua regia acid digestion. The solutions obtained were analyzed for Ag, As, Au, Cd, Co, Cr, Cu, Fe, Mn, Ni, Pb, Sn, and Zn using atomic absorption spectrometric methods. The resulting data were subjected to simple statistical treatment and used in preparing distribution maps of the elements, with which the spatial distributions of the elements in the area were discussed. The pH of the soils ranges from 4.70 to 7.59, and this reflects the geochemical distribution patterns of the trace metals in the area. The spatial distribution maps of the elements showed similarity in the distributions of Co, Cr, Fe, Ni, Mn, and Pb. This suggests close associations between these elements, none of which showed any significant anomaly in the study. The associations might be due to the scavenging action of Fe-Mn oxides on the elements. Only Ag, Au, and Sn on one hand, and Zn on the other, showed significant anomalies, thought to be due to mineralization and anthropogenic activities respectively.
Keywords: distribution, metals, Gbongan, Nigeria, mineralization, anthropogenic
Procedia PDF Downloads 322
11668 Influence of the Coarse-Graining Method on a DEM-CFD Simulation of a Pilot-Scale Gas Fluidized Bed
Authors: Theo Ndereyimana, Yann Dufresne, Micael Boulet, Stephane Moreau
Abstract:
The DEM (Discrete Element Method) is widely used in industry to simulate large-scale flows of particles; in a fluidized bed, for instance, it allows the trajectory of every particle to be predicted. One of the main limits of the DEM is the computational time. The CGM (Coarse-Graining Method) has been developed to tackle this issue: the goal is to increase the particle size and, by this means, decrease the number of particles. The method leads to a reduction of the collision frequency due to the reduced number of particles, which affects multiple characteristics of the particle movement and the fluid flow when the DEM is coupled with CFD (Computational Fluid Dynamics). The main characteristic impacted is the energy dissipation of the system; to recover this dissipation, an ADM (Additional Dissipative Mechanism) can be added to the model. The objective of the current work is to observe the influence of the choice of ADM and of the coarse-graining factor on the numerical results. These results will be compared with experimental results from a fluidized bed and with a numerical model of the same fluidized bed that does not use the CGM. The numerical model is that of a 3D cylindrical fluidized bed with 9.6M Geldart B-type particles in a bubbling regime.
Keywords: additional dissipative mechanism, coarse-graining, discrete element method, fluidized bed
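The particle-count saving of the CGM follows from simple scaling: a parcel with coarse-grain factor k replaces k^3 real particles while conserving total solid mass. A minimal sketch under these standard scaling assumptions (names are illustrative):

```python
import math

def coarse_grain(n_particles, diameter, density, k):
    """Replace n_particles spheres by coarse-grained parcels of factor k:
    parcel diameter k*d, parcel count n/k^3, so total mass is conserved.
    Returns (parcel count, parcel diameter, coarse mass, original mass)."""
    sphere_mass = density * math.pi / 6.0 * diameter ** 3
    parcel_d = k * diameter
    parcel_n = n_particles / k ** 3
    parcel_mass = density * math.pi / 6.0 * parcel_d ** 3
    return parcel_n, parcel_d, parcel_n * parcel_mass, n_particles * sphere_mass
```

For the 9.6M-particle bed above, a modest factor k = 2 already cuts the count eightfold, to 1.2M parcels, which is exactly why the collision frequency (and the resolved dissipation) drops.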
Procedia PDF Downloads 70
11667 Cloning and Expression of Azurin: A Protein Having Antitumor and Cell Penetrating Ability
Authors: Mohsina Akhter
Abstract:
Cancer has become a widespread disease around the globe and takes many lives every year. Different treatments are practiced, but all have potential side effects and somewhat limited specificity toward target sites. Pseudomonas aeruginosa is known to secrete a protein, azurin, with a special anti-cancer function. It has a unique cell-penetrating peptide comprising 18 amino acids that has the ability to enter cancer cells specifically. The reported function of azurin is to stabilize p53 inside tumor cells and induce apoptosis through Bax-mediated cytochrome c release from mitochondria. At laboratory scale, we have produced recombinant azurin by cloning the recombinant pTZ57R/T-azu vector into E. coli strain DH-5α and subcloning the pET28-azu vector into E. coli BL21-CodonPlus (DE3). High expression was ensured with IPTG induction at different concentrations, and the expression level was optimized at a 1 mM concentration of IPTG for 5 hours. Purification was done using Ni2+ affinity chromatography. We conclude that azurin could be a remarkable improvement in cancer therapeutics if it is produced on a large scale. Azurin does not enter normal cells, so it should prove a safe and secure treatment for patients and protect them from hazardous anomalies.
Keywords: azurin, pseudomonas aeruginosa, cancer, therapeutics
Procedia PDF Downloads 311
11666 The Effects of Self-Efficacy on Life Satisfaction
Authors: Gao ya
Abstract:
The present study aims to find the relationship between self-efficacy and life satisfaction, and the effects of self-efficacy on life satisfaction, among Chinese people aged 27-32, born between 1990 and 1995. People born between 1990 and 1995 are worth more attention now because the post-90s generation has received a great deal of scrutiny and been labeled negatively since birth, and most existing research studies people in individualist societies. We therefore chose a specific population aged 27 to 32 living in a collectivist society. Demographic information was collected, including age, gender, education level, marital status, income level, and number of children. We used the General Self-Efficacy Scale (GSE) and the Satisfaction with Life Scale (SLS) to collect data. A total of 350 questionnaires were distributed in mainland China, and 261 valid questionnaires were returned, making a response rate of 74.57 percent. Statistical techniques including regression, correlation, ANOVA, t-tests, and the general linear model were used to measure the variables. The findings were that self-efficacy is positively related to life satisfaction and influences it significantly. The relationship between demographic information and life satisfaction was also analyzed.
Keywords: marital status, life satisfaction, number of children, self-efficacy, income level
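The correlation step in the analysis above reduces to a plain Pearson coefficient between the two scale scores; the helper below is a generic sketch of that computation, not the study's actual code or data.

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score
    lists, e.g. self-efficacy (GSE) and life-satisfaction (SLS) totals."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```

A value near +1 would correspond to the reported positive relationship between self-efficacy and life satisfaction.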
Procedia PDF Downloads 121
11665 A Feasibility and Implementation Model of Small-Scale Hydropower Development for Rural Electrification in South Africa: Design Chart Development
Authors: Gideon J. Bonthuys, Marco van Dijk, Jay N. Bhagwan
Abstract:
Small-scale hydropower used to play a very important role in the provision of energy to urban and rural areas of South Africa. The national electricity grid, however, expanded and offered cheap, coal-generated electricity, and a large number of hydropower systems were decommissioned. Unfortunately, large numbers of households and communities will not be connected to the national electricity grid for the foreseeable future, due to the high cost of transmission and distribution systems to remote communities, the relatively low electricity demand within rural communities, and the allocation of current expenditure to upgrading and constructing new coal-fired power stations. This necessitates the development of feasible alternative power generation technologies. A feasibility and implementation model was developed to assist in designing and financially evaluating small-scale hydropower (SSHP) plants. Several sites were identified using the model. SSHP plants were designed for the selected sites, and the designs were priced using pricing models covering civil, mechanical, and electrical aspects. Following feasibility studies on the designed and priced SSHP plants, a feasibility analysis was done and a design chart developed for future similar SSHP plant projects. The methodology followed in conducting the feasibility analysis for other potential sites consisted of developing cost and income/saving formulae, net present value (NPV) formulae, Capital Cost Comparison Ratio (CCCR) formulae, and levelised cost formulae for SSHP projects for the different types of plant installations. It included setting up a model for the development of an SSHP design chart; calculating the NPV, CCCR, and levelised cost for the different scenarios within the model by varying parameters within the developed formulae; setting up the design chart for those scenarios; and analyzing and interpreting the results.
From the interpretation of the developed design charts for feasible SSHP, it can be seen that turbine and distribution line costs are the major influences on the cost and feasibility of SSHP. High-head, short-transmission-line, and islanded mini-grid SSHP installations are the most feasible, and the levelised cost of SSHP is high for low-power-generation sites. The main conclusion of the study is that the levelised cost of SSHP for low energy generation is high compared to the levelised cost of grid-connected electricity supply; however, the remoteness of SSHP sites for rural electrification and the cost of the infrastructure needed to connect remote rural communities to the local or national electricity grid provide a low CCCR and render SSHP for rural electrification feasible on this basis.
Keywords: cost, feasibility, rural electrification, small-scale hydropower
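The NPV and levelised-cost formulae at the heart of the model take a standard discounted-cash-flow form; the sketch below shows that general form only (the function names and the flat yearly discounting convention are assumptions, not the study's exact formulae).

```python
def npv(rate, cashflows):
    """Net present value of yearly cash flows, year 0 first."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

def levelised_cost(rate, yearly_costs, yearly_energy):
    """Levelised cost of energy: discounted lifetime costs divided by
    discounted lifetime energy generation."""
    dc = sum(c / (1.0 + rate) ** t for t, c in enumerate(yearly_costs))
    de = sum(e / (1.0 + rate) ** t for t, e in enumerate(yearly_energy))
    return dc / de
```

A small generation site spreads a similar capital cost over far fewer kWh, which is why the levelised cost rises sharply at low-power-generation sites, as the design charts indicate.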
Procedia PDF Downloads 224
11664 E4D-MP: Time-Lapse Multiphysics Simulation and Joint Inversion Toolset for Large-Scale Subsurface Imaging
Authors: Zhuanfang Fred Zhang, Tim C. Johnson, Yilin Fang, Chris E. Strickland
Abstract:
A variety of geophysical techniques are available to image the opaque subsurface with little or no contact with the soil. It is common to conduct time-lapse surveys of different types at a given site for improved subsurface imaging. Regardless of the chosen survey methods, it is often a challenge to process the massive amount of survey data. Currently available software applications are generally based on a one-dimensional assumption and designed for desktop personal computers. Hence, they are usually incapable of imaging three-dimensional (3D) processes and variables in the subsurface at reasonable spatial scales, and the maximum amount of data that can be inverted simultaneously is often very small due to the limited capability of personal computers. High-performance, integrated software that enables real-time integration of multi-process geophysical methods is therefore needed. E4D-MP enables the integration and inversion of time-lapse, large-scale survey data from geophysical methods. Using supercomputing capability and parallel computation algorithms, E4D-MP is capable of processing data across vast spatiotemporal scales in near real time. The main code and the modules of E4D-MP for inverting individual or combined data sets of time-lapse 3D electrical resistivity, spectral induced polarization, and gravity surveys have been developed and demonstrated for subsurface imaging. E4D-MP provides the capability of imaging the processes (e.g., liquid or gas flow, solute transport, cavity development) and subsurface properties (e.g., rock/soil density, conductivity) critical for successful control of environmental engineering efforts such as environmental remediation, carbon sequestration, geothermal exploration, and mine land reclamation, among others.
Keywords: gravity survey, high-performance computing, sub-surface monitoring, electrical resistivity tomography
Procedia PDF Downloads 157
11663 Frequency Modulation Continuous Wave Radar Human Fall Detection Based on Time-Varying Range-Doppler Features
Authors: Xiang Yu, Chuntao Feng, Lu Yang, Meiyang Song, Wenhao Zhou
Abstract:
Existing two-dimensional micro-Doppler feature extraction ignores the correlation between spatial and temporal features. Here, the time dimension is introduced into the range-Doppler map, and a frequency modulation continuous wave (FMCW) radar human fall detection algorithm based on time-varying range-Doppler features is proposed. Firstly, range-Doppler sequence maps are generated from the echo signals of continuous human motion collected by the radar. Then the three-dimensional data cube composed of multiple frames of range-Doppler maps is input into a three-dimensional convolutional neural network (3D CNN). The spatial and temporal features of the time-varying range-Doppler maps are extracted simultaneously by the convolution and pooling layers. Finally, the extracted spatial and temporal features are input into a fully connected layer for classification. Experimental results show that the proposed fall detection algorithm achieves a detection accuracy of 95.66%.
Keywords: FMCW radar, fall detection, 3D CNN, time-varying range-Doppler features
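The range-Doppler maps in the first step are conventionally obtained with two FFTs over the beat-signal matrix: one along fast time (range) and one along slow time (Doppler). A minimal sketch of that standard processing, not the authors' exact pipeline:

```python
import numpy as np

def range_doppler_map(beat, n_range=None, n_doppler=None):
    """Range-Doppler map from an FMCW beat-signal matrix of shape
    (chirps, samples): range FFT along fast time (axis 1), Doppler FFT
    along slow time (axis 0), with the zero-Doppler bin shifted to the
    centre; returns the magnitude map."""
    rng = np.fft.fft(beat, n=n_range, axis=1)
    dop = np.fft.fftshift(np.fft.fft(rng, n=n_doppler, axis=0), axes=0)
    return np.abs(dop)
```

Stacking consecutive maps over frames then yields the (frames x Doppler x range) data cube that the 3D CNN consumes.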
Procedia PDF Downloads 122
11662 Chongqing, a Megalopolis Disconnected with Its Rivers: An Assessment of Urban-Waterside Disconnect in a Chinese Megacity and Proposed Improvement Strategies, Chongqing City as a Case Study
Authors: Jaime E. Salazar Lagos
Abstract:
Chongqing is located in southwest China and is becoming one of the most significant cities in the world. Its urban territories and metropolitan areas hold one of the largest urban populations in China and are partitioned and shaped by two of the biggest and longest rivers on Earth, the Yangtze and the Jialing, making Chongqing a megalopolis intersected by rivers. Historically, Chongqing City enjoyed fundamental connections with its rivers; however, the current urban development of Chongqing City has lost effective integration of the riverbanks into the urban space and structural dynamics of the city. There is therefore a critical lack of physical and urban space conjoined with the rivers, which diminishes the economic, tourist, and environmental development of Chongqing. Using multi-scale satellite-map site verification, the study confirmed the hypothesized urban-waterside disconnect. Collected data demonstrated that the Chongqing urban zone, an area of 5,292 square kilometers with a waterfront of 203.4 kilometers, has only 23.49 kilometers (just 11.5%) of high-quality physical and spatial urban-waterside connection. Compared with other metropolises around the world, this figure represents a significant lack of spatial development along the rivers, an issue that has not been successfully addressed in the last 10 years of urban development. On a macro scale, the study categorized the different kinds of relationships between the city and its riverbanks. These data were then utilized to create an urban-waterfront relationship map that can serve as a tool for future city-planning decisions and real estate development.
On a micro scale, we discovered three primary elements causing the urban-waterside disconnect: extensive highways along the densest areas and the city center, large private real estate developments that do not provide adequate riverside access, and large industrial complexes that almost completely lack riverside utilization. Finally, as part of the suggested strategies, the study concludes that the most efficient and practical way to improve this situation is to follow the historic master-planning of Chongqing and create connective nodes at critical urban locations along the river, a strategy that has been used for centuries to handle the same urban-waterside relationship. Reviewing and implementing this strategy will allow the city to better connect with its rivers, reducing the various impacts of the disconnect and of urban transformation.
Keywords: Chongqing City, megalopolis, nodes, riverbanks disconnection, urban
Procedia PDF Downloads 226
11661 SPARK: An Open-Source Knowledge Discovery Platform That Leverages Non-Relational Databases and Massively Parallel Computational Power for Heterogeneous Genomic Datasets
Authors: Thilina Ranaweera, Enes Makalic, John L. Hopper, Adrian Bickerstaffe
Abstract:
Data are the primary asset of biomedical researchers, and the engine for both discovery and research translation. As the volume and complexity of research datasets increase, especially with new technologies such as large single nucleotide polymorphism (SNP) chips, so too does the requirement for software to manage, process, and analyze the data. Researchers often need to execute complicated queries and conduct complex analyses of large-scale datasets. Existing tools to analyze such data, and other types of high-dimensional data, unfortunately suffer from one or more major problems. They typically require a high level of computing expertise, are too simplistic (i.e., do not fit realistic models that allow for complex interactions), are limited by computing power, do not exploit the computing power of large-scale parallel architectures (e.g., supercomputers, GPU clusters), or are limited in the types of analysis available, compounded by the fact that integrating new analysis methods is not straightforward. Solutions to these problems, such as those developed and implemented on parallel architectures, are currently available to only a relatively small portion of medical researchers with the necessary access and know-how. The past decade has seen a rapid expansion of data management systems for the medical domain. Much attention has been given to systems that manage phenotype datasets generated by medical studies. The introduction of heterogeneous genomic data for the research subjects that reside in these systems has highlighted the need for substantial improvements in software architecture. To address this problem, we have developed SPARK, an enabling and translational system for medical research that leverages existing high-performance computing resources and analysis techniques currently available or under development. It builds these into The Ark, an open-source web-based system designed to manage medical data.
SPARK provides a next-generation biomedical data management solution based upon a novel Micro-Service architecture and Big Data technologies. The system serves to demonstrate the applicability of Micro-Service architectures to the development of high-performance computing applications. When applied to high-dimensional medical datasets such as genomic data, relational data management approaches with normalized data structures suffer from infeasibly high execution times for basic operations such as insert (i.e., importing a GWAS dataset) and for the queries that are typical of the genomics research domain. SPARK resolves these problems by incorporating the non-relational NoSQL databases that have been driven by the emergence of Big Data. SPARK provides researchers across the world with user-friendly access to state-of-the-art data management and analysis tools while eliminating the need for high-level informatics and programming skills. The system will benefit health and medical research by eliminating the burden of large-scale data management, querying, cleaning, and analysis. SPARK represents a major advancement in genome research technologies, vastly reducing the burden of working with genomic datasets and enabling cutting-edge analysis approaches that have previously been out of reach for many medical researchers.
Keywords: biomedical research, genomics, information systems, software
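As a rough illustration of the storage problem the abstract describes, the following Python sketch contrasts a normalized relational layout (one row per subject-SNP genotype) with a document-style layout of the kind NoSQL databases support (one record per subject). The subject identifiers, SNP names, and genotype values are invented for the example; this is not SPARK's actual schema.

```python
# Illustrative only: why a normalized relational layout explodes for
# genomic data, versus a per-subject document layout. All names and
# values below are invented.

subjects = ["S1", "S2"]
snps = ["rs0001", "rs0002", "rs0003"]

# Normalized layout: |subjects| x |SNPs| rows. A real GWAS chip with
# ~1e6 SNPs and 1e4 subjects would need on the order of 1e10 rows,
# which is what makes a simple dataset import so slow.
rows = [(s, snp, 0) for s in subjects for snp in snps]

# Document layout: one record per subject holding all its genotypes,
# so an import touches |subjects| documents rather than
# |subjects| x |SNPs| rows.
documents = {s: {snp: 0 for snp in snps} for s in subjects}

print(len(rows), len(documents))  # row count vs. document count
```

The same asymmetry applies to typical genomics queries ("all genotypes for subject S1"), which read one document instead of scanning or joining millions of rows.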
Procedia PDF Downloads 270
11660 An Evaluative Approach for Successful Implementation of Lean and Green Manufacturing in Indian SMEs
Authors: Satya S. N. Narayana, P. Parthiban, T. Niranjan, N. Kannan
Abstract:
Enterprises adopt methodologies to increase their business performance and to stay competitive in the volatile global market. Lean manufacturing is one such manufacturing paradigm, focusing on cost reduction through the elimination of wastes, or non-value-added activities. With increased awareness of social responsibility and the necessity of complying with environmental policy, green manufacturing is becoming increasingly important for industries. Large plants, which have more resources, have started implementing lean and green practices and are obtaining good results, whereas small and medium scale enterprises (SMEs) face problems in implementing the lean and green concepts. This paper aims to identify the key issues for implementing the lean and green concepts in Indian SMEs. The key factors, identified from a literature review and expert opinions, are grouped into levels by Modified Interpretive Structural Modeling (MISM) to explore their relative importance for implementing lean and green manufacturing. Finally, the Fuzzy Analytic Network Process (FANP) method is used to determine the extent to which the main principles of lean and green manufacturing have been carried out in six Indian medium-scale manufacturing industries.
Keywords: lean manufacturing, green manufacturing, MISM, FANP
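For readers unfamiliar with the MISM step, the sketch below implements the standard level-partitioning procedure of Interpretive Structural Modeling in Python: the reachability matrix is closed transitively, and at each pass the factors whose reachability set equals its intersection with the antecedent set are peeled off as one level. The 4-factor matrix is invented for illustration and is not the study's factor data.

```python
# Minimal sketch of ISM level partitioning (the basis of the MISM step).
# The 4-factor reachability matrix is a hypothetical example.

def transitive_closure(m):
    """Warshall-style closure of a 0/1 reachability matrix."""
    n = len(m)
    r = [row[:] for row in m]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                r[i][j] = r[i][j] or (r[i][k] and r[k][j])
    return r

def ism_levels(m):
    """Partition factors into levels, top (most dependent) level first."""
    r = transitive_closure(m)
    remaining = set(range(len(m)))
    levels = []
    while remaining:
        level = set()
        for i in remaining:
            reach = {j for j in remaining if r[i][j]}
            ante = {j for j in remaining if r[j][i]}
            if reach & ante == reach:
                level.add(i)
        levels.append(sorted(level))
        remaining -= level
    return levels

# Factor 0 drives factors 1 and 2; factors 1 and 2 drive factor 3
# (1 on the diagonal: each factor reaches itself).
m = [[1, 1, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 1],
     [0, 0, 0, 1]]
levels = ism_levels(m)  # [[3], [1, 2], [0]]: factor 0 is the root driver
```

The resulting hierarchy (root drivers at the bottom) is what the grouping "into different levels" in the abstract refers to; FANP would then weight the factors within that structure.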
Procedia PDF Downloads 541
11659 A Network of Nouns and Their Features: A Neurocomputational Study
Authors: Skiker Kaoutar, Mounir Maouene
Abstract:
Neuroimaging studies indicate that a large fronto-parieto-temporal network supports nouns and their features, with some areas storing semantic knowledge (visual, auditory, olfactory, gustatory, …), others storing lexical representations, and still others implicated in general semantic processing. However, it is not well understood how this fronto-parieto-temporal network is modulated by different semantic tasks and different semantic relations between nouns. In this study, we combine a behavioral semantic network, functional MRI studies involving object-related nouns, and brain network studies to explain how different semantic tasks and different semantic relations between nouns can modulate activity within the brain network of nouns and their features. We first describe how nouns and their features form a large-scale brain network. To this end, we examine the connectivities between areas recruited during the processing of nouns to determine which configurations of interacting areas are possible. We can thus identify whether, for example, brain areas that store semantic knowledge communicate via functional/structural links with areas that store lexical representations. Second, we examine how this network is modulated by different semantic tasks involving nouns, and finally, we examine how category-specific activation may result from the semantic relations among nouns. The results indicate that the brain network of nouns and their features is highly flexible and strongly modulated by different semantic tasks and semantic relations. In the end, this study can serve as a guide to help neuroscientists interpret the pattern of fMRI activations detected in the semantic processing of nouns. Specifically, it can help to interpret the category-specific activations observed extensively in a large number of neuroimaging and clinical studies.
Keywords: nouns, features, network, category specificity
Procedia PDF Downloads 521
11658 Rethinking the History of an Expanding City through Its Images: Birmingham, England, the Nineteenth Century
Authors: Lin Chang
Abstract:
Birmingham, England was a town in the late-eighteenth century and became the nation's second largest city in the late nineteenth century. The city expanded rapidly in terms of both its population and its size. Three generations of artists from a local family, the Lines, made a large number of drawings and paintings depicting the growth and changes of their city. At first sight, the meaning of the pictures seems straightforward: providing records of what was torn down and newly built. However, beyond being read as maps, the pictures reveal a struggle in vision as to whether unsightly manufactories and their smoking chimneys should be visualized, and as to how far the borders of the town should be positioned and understood as they continued to grow and encroach upon the immediate countryside. This art-historical paper examines some topographic views by the Lines family and explores how they, through unusual depictions of rural and urban scenery, manage to give form to the borderlands between the country and the city. This paper argues that while the idea of the country and the city seems to be common sense, the two realms actually pose a difficulty for visual representation as to where exactly their borders lie, and the idea itself has dichotomized the way people consider landscape imagery.
Keywords: Birmingham, suburb, urban fringes, landscape
Procedia PDF Downloads 197
11657 The Determination of Self-Esteem, Life Satisfaction, Anxiety and Depression Levels among Patients with Stoma
Authors: Tugba Cinarli, Tugba Kavalali Erdogan, Sevil Masat, Dilek Kiymaz, Nida Kiyici, Zeliha Koc
Abstract:
This study was conducted in a descriptive and cross-sectional manner in order to determine the self-esteem, life satisfaction, anxiety, and depression levels of patients with stoma. The study was conducted between June 15, 2016 and June 15, 2017 among 196 oncology patients hospitalized in the general surgery clinic of a public hospital in Turkey. The case group consisted of 98 cancer patients with stoma and the control group of 98 cancer patients without stoma. The data were collected through the Coopersmith Self-Esteem Scale, the Life Satisfaction Scale, the Hospital Anxiety and Depression Scale, and a 21-question survey on the sociodemographic and clinical characteristics of the patients. The data were analyzed with percentage analysis, the Mann-Whitney U-test, the Chi-square test, and Spearman's correlation test. In the case group, 44.9% had colon cancer and 29.6% rectal cancer; 50% underwent temporary colostomy, 15.3% permanent colostomy, and 34.7% temporary ileostomy. The case group's median (range) scores on the Coopersmith Self-Esteem Scale, Life Satisfaction Scale, Anxiety Subscale, and Depression Subscale were 64 (20-84), 17 (5-38), 10 (1-18), and 9 (1-19), respectively. The control group's scores were 68 (32-92), 21 (7-31), 8.5 (1-18), and 8 (1-18), respectively. The Coopersmith Self-Esteem Scale, Life Satisfaction Scale, and Anxiety Subscale scores differed significantly between the case and control groups (p<0.05). Self-esteem levels were positively correlated with life satisfaction and negatively correlated with anxiety and depression; likewise, life satisfaction levels were negatively correlated with anxiety and depression.
It is suggested that nursing interventions be planned to improve the life satisfaction and self-esteem levels of the patients and to decrease depression and anxiety.
Keywords: anxiety, cancer, life satisfaction, self-esteem
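The correlations above were obtained with Spearman's test. As a sketch of how that statistic works, the pure-Python example below ranks two invented score lists (not the study's data) and applies Pearson's formula to the ranks.

```python
# Hedged illustration of Spearman rank correlation; the score lists
# are invented, not taken from the study.

def ranks(xs):
    """1-based ranks with average ranks for ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Pearson correlation applied to the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Invented scores: the higher the self-esteem, the lower the anxiety.
esteem = [64, 68, 55, 72, 60]
anxiety = [10, 8, 14, 6, 11]
rho = spearman(esteem, anxiety)  # exactly -1 here: ranks are perfectly inverted
```

A value near -1, as in this toy example, is the kind of negative self-esteem/anxiety relationship the abstract reports (though the study's actual coefficients are not given here).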
Procedia PDF Downloads 173
11656 Quantifying Uncertainties in an Archetype-Based Building Stock Energy Model by Use of Individual Building Models
Authors: Morten Brøgger, Kim Wittchen
Abstract:
Focus on reducing energy consumption in existing buildings at large scale, e.g. in cities or countries, has been increasing in recent years. In order to reduce energy consumption in existing buildings, political incentive schemes are put in place and large-scale investments are made by utility companies. Prioritising these investments requires a comprehensive overview of the energy consumption in the existing building stock, as well as of the potential energy savings. However, a building stock comprises thousands of buildings with different characteristics, making it difficult to model energy consumption accurately. Moreover, the complexity of the building stock makes it difficult to convey model results to policymakers and other stakeholders. In order to manage this complexity, building archetypes are often employed in building stock energy models (BSEMs). Building archetypes are formed by segmenting the building stock according to specific characteristics. Segmenting the building stock according to building type and building age is common, among other reasons because this information is often easily available. This segmentation makes it easy to convey results to non-experts. However, using a single archetypical building to represent all buildings in a segment of the building stock is associated with a loss of detail: thermal characteristics are aggregated, while other characteristics that could affect the energy efficiency of a building are disregarded. Thus, using a simplified representation of the building stock could come at the expense of the accuracy of the model. The present study evaluates the accuracy of a conventional archetype-based BSEM that segments the building stock according to building type and age. The accuracy is evaluated in terms of the archetypes' ability to accurately emulate the average energy demands of the buildings they are meant to represent.
This is done for the buildings' total energy demands as well as for relevant sub-demands, both evaluated in relation to the type and age of the building. This should provide researchers who use archetypes in BSEMs with an indication of the expected accuracy of the conventional archetype model, as well as of the accuracy lost in specific parts of the calculation due to the use of the archetype method.
Keywords: building stock energy modelling, energy-savings, archetype
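The evaluation idea can be sketched in a few lines: compare the single demand figure of an archetype with the mean demand of the individual buildings in its segment. The segment label and the demand values below are invented for illustration, not drawn from the study.

```python
# Illustrative sketch of the archetype-accuracy evaluation; the segment
# name and demand figures (kWh/m2/year) are hypothetical.

# Individual heating demands of buildings within one segment,
# e.g. "single-family houses, built 1961-1972".
segment_demands = [142.0, 118.0, 165.0, 131.0, 154.0]

# Demand of the single archetypical building representing the segment.
archetype_demand = 140.0

mean_demand = sum(segment_demands) / len(segment_demands)
error_pct = 100.0 * (archetype_demand - mean_demand) / mean_demand
```

Repeating this per segment and per sub-demand (heating, hot water, etc.) yields the kind of type- and age-resolved accuracy picture the abstract describes.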
Procedia PDF Downloads 154
11655 Flood Risk Management in the Semi-Arid Regions of Lebanon - Case Study “Semi Arid Catchments, Ras Baalbeck and Fekha”
Authors: Essam Gooda, Chadi Abdallah, Hamdi Seif, Safaa Baydoun, Rouya Hdeib, Hilal Obeid
Abstract:
Floods are a common natural disaster in the semi-arid regions of Lebanon, resulting in loss of human life and deterioration of the environment. Despite their destructive nature and their immense impact on the socio-economy of the region, flash floods have not received adequate attention from policy and decision makers, mainly because of poor understanding of the processes involved and of the measures needed to manage the problem. The current understanding of flash floods remains at the level of general concepts; most policy makers have yet to recognize that flash floods are distinctly different from normal riverine floods in terms of causes, propagation, intensity, impacts, predictability, and management. Flash floods are generally not investigated as a separate class of event but are rather reported as part of the overall seasonal flood situation. As a result, Lebanon generally lacks policies, strategies, and plans relating specifically to flash floods. The main objective of this research is to improve flash flood prediction by providing new knowledge and a better understanding of the hydrological processes governing flash floods in the eastern catchments of the El Assi River. This includes developing rainstorm time distribution curves that are unique to this type of study region, and analyzing, investigating, and developing a relationship between arid watershed characteristics (including urbanization) and flash flood frequency in the nearby villages of Ras Baalbeck and Fekha. This paper discusses different levels of integration approaches between GIS and hydrological models (HEC-HMS & HEC-RAS) and presents a case study covering all the tasks of creating model input, editing data, running the model, and displaying output results. The study area corresponds to the East Basin (Ras Baalbeck & Fekha), comprising nearly 350 km² in the Bekaa Valley of Lebanon.
The case study presented in this paper uses a database derived from Lebanese Army topographic maps of the region. ArcMap was used to digitize the contour lines, streams, and other features from the topographic maps, and a digital elevation model (DEM) grid was derived for the study area. The next steps in this research are to incorporate rainfall time series data from the Aarsal, Fekha, and Deir El Ahmar stations to build a hydrologic data model within a GIS environment, and to combine ArcGIS/ArcMap with the HEC-HMS and HEC-RAS models in order to produce a spatio-temporal model for floodplain analysis at a regional scale. In this study, HEC-HMS and the SCS method were chosen to build the hydrologic model of the watershed. The model was then calibrated using the flood event that occurred between the 7th and 9th of May 2014, which is considered exceptionally extreme both because of the length of time the flows lasted (15 hours) and because it covered the watersheds of both Aarsal and Ras Baalbeck; the strongest previously reported flood lasted for only 7 hours and covered only one watershed. The calibrated hydrologic model is then used to build the hydraulic model and to assess flood hazard maps for the region. HEC-RAS is used for the hydraulic model, and field trips to the catchments were carried out to calibrate both the hydrologic and hydraulic models. The presented models constitute a flexible procedure for an ungauged watershed: for some storm events they deliver good results, while for others no parameter vectors can be found. In order to obtain a general methodology based on these ideas, further calibration and analysis of how the results depend on flood event parameters and catchment properties are required.
Keywords: flood risk management, flash flood, semi-arid region, El Assi River, hazard maps
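As background for the loss model named above, the sketch below shows the SCS curve-number relation (in metric form) that converts a rainfall depth into a direct runoff depth. The curve number and storm depth are illustrative values, not calibrated parameters for the Ras Baalbeck or Aarsal catchments.

```python
# Sketch of the SCS curve-number rainfall-runoff relation (metric form);
# CN = 85 and P = 50 mm are hypothetical example values.

def scs_runoff_mm(p_mm, cn):
    """Direct runoff depth Q (mm) from rainfall depth P (mm) via SCS-CN."""
    s = 25400.0 / cn - 254.0   # potential maximum retention S (mm)
    ia = 0.2 * s               # initial abstraction (standard 0.2*S)
    if p_mm <= ia:
        return 0.0             # all rainfall absorbed before runoff starts
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# A 50 mm storm on a catchment with curve number 85:
q = scs_runoff_mm(50.0, 85)    # roughly 19-20 mm of direct runoff
```

Higher curve numbers (bare, urbanized, or crusted semi-arid surfaces) push more of the storm into runoff, which is one reason such catchments respond with flash floods.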
Procedia PDF Downloads 478
11654 Land Cover Mapping Using Sentinel-2, Landsat-8 Satellite Images, and Google Earth Engine: A Study Case of the Beterou Catchment
Authors: Ella Sèdé Maforikan
Abstract:
Accurate land cover mapping is essential for effective environmental monitoring and natural resources management. This study focuses on assessing the classification performance of two satellite datasets and evaluating the impact of different input feature combinations on classification accuracy in the Beterou catchment, situated in the northern part of Benin. Landsat-8 and Sentinel-2 images from June 1, 2020, to March 31, 2021, were utilized. Employing the Random Forest (RF) algorithm on Google Earth Engine (GEE), a supervised classification categorized the land into five classes: forest, savannas, cropland, settlement, and water bodies. GEE was chosen due to its high-performance computing capabilities, mitigating computational burdens associated with traditional land cover classification methods. By eliminating the need for individual satellite image downloads and providing access to an extensive archive of remote sensing data, GEE facilitated efficient model training on remote sensing data. The study achieved commendable overall accuracy (OA), ranging from 84% to 85%, even without incorporating spectral indices and terrain metrics into the model. Notably, the inclusion of additional input sources, specifically terrain features like slope and elevation, enhanced classification accuracy. The highest accuracy was achieved with Sentinel-2 (OA = 91%, Kappa = 0.88), slightly surpassing Landsat-8 (OA = 90%, Kappa = 0.87). This underscores the significance of combining diverse input sources for optimal accuracy in land cover mapping. The methodology presented herein not only enables the creation of precise, expeditious land cover maps but also demonstrates the prowess of cloud computing through GEE for large-scale land cover mapping with remarkable accuracy. The study emphasizes the synergy of different input sources to achieve superior accuracy. 
As a future recommendation, the application of Light Detection and Ranging (LiDAR) technology is proposed to enhance vegetation type differentiation in the Beterou catchment. Additionally, a cross-comparison between Sentinel-2 and Landsat-8 for assessing long-term land cover changes is suggested.
Keywords: land cover mapping, Google Earth Engine, random forest, Beterou catchment
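The two accuracy figures reported above, overall accuracy (OA) and Cohen's kappa, are both derived from a classification confusion matrix. The sketch below computes them for an invented 3-class matrix (the study itself used five classes).

```python
# Sketch of OA and Cohen's kappa from a confusion matrix; the 3-class
# matrix is a hypothetical example, not the study's validation data.

def oa_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a square confusion matrix."""
    n = sum(sum(row) for row in cm)
    oa = sum(cm[i][i] for i in range(len(cm))) / n
    # Expected chance agreement from row (reference) and column (map) marginals.
    pe = sum(sum(cm[i]) * sum(row[i] for row in cm) for i in range(len(cm))) / n ** 2
    kappa = (oa - pe) / (1 - pe)
    return oa, kappa

cm = [[50, 2, 3],   # rows: reference class, columns: mapped class
      [4, 40, 1],
      [2, 3, 45]]
oa, kappa = oa_and_kappa(cm)  # oa = 0.9, kappa roughly 0.85
```

Kappa discounts the agreement expected by chance, which is why it is always somewhat lower than OA (0.88 vs. 91% for the Sentinel-2 result reported above).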
Procedia PDF Downloads 63
11653 Investigation of Oscillation Mechanism of a Large-scale Solar Photovoltaic and Wind Hybrid Power Plant
Authors: Ting Kai Chia, Ruifeng Yan, Feifei Bai, Tapan Saha
Abstract:
This research presents a real-world power system oscillation incident that occurred in 2022, originating from a hybrid solar photovoltaic (PV) and wind renewable energy farm with a rated capacity of approximately 300 MW in Australia. The voltage and reactive power outputs recorded at the point of common coupling (PCC) oscillated in the sub-synchronous frequency region, and the oscillation was sustained for approximately five hours in the network. The reactive power oscillation gradually increased over time and reached a recorded maximum of approximately 250 MVar peak-to-peak (from inductive to capacitive). The network service provider was not able to quickly identify the location of the oscillation source because the issue was widespread across the network. After the incident, the original equipment manufacturer (OEM) concluded that the oscillation problem was caused by incorrect setting recovery of the hybrid power plant controller (HPPC) in the voltage and reactive power control loop after a loss-of-communication event. The voltage controller normally outputs a reactive power (Q) reference value to the Q controller, which controls the Q dispatch setpoint of the PV and wind plants in the hybrid farm, while a feed-forward (FF) configuration is used to bypass the Q controller in case of a loss of communication. Further study found that the FF control mode was still engaged when communication was re-established, which ultimately resulted in the oscillation event. However, there was no detailed explanation of why the FF control mode can cause instability in the hybrid farm, nor was the event duplicated in simulation to analyze its root cause. Therefore, this research aims to model and replicate the oscillation event in a simulation environment and investigate the underlying behavior of the HPPC and the consequent oscillation mechanism during the incident.
The outcome of this research will provide significant benefits to the safe operation of large-scale renewable energy generators and power networks.
Keywords: PV, oscillation, modelling, wind
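As a deliberately simplified illustration of why a stuck feed-forward path can destabilize a loop (a toy model, not the OEM's HPPC or the actual plant dynamics), consider a discrete-time proportional voltage loop: if both the FF and the restored feedback path drive the plant, the effective loop gain roughly doubles, and a gain beyond the stability limit turns a decaying response into a growing, sign-alternating oscillation.

```python
# Toy model only: in the loop e[k+1] = (1 - K) * e[k], the error decays
# for 0 < K < 2 and oscillates with growing amplitude for K > 2.
# A redundant control path acting like a doubled K can cross that limit.

def error_trace(k_gain, steps=6, e0=1.0):
    """Error sequence of a one-step proportional loop with gain k_gain."""
    e, trace = e0, []
    for _ in range(steps):
        e = (1.0 - k_gain) * e
        trace.append(e)
    return trace

normal = error_trace(1.2)    # stable: magnitude shrinks every step
ff_stuck = error_trace(2.4)  # doubled gain: growing, sign-alternating error
```

The point of the sketch is only the mechanism (excess loop gain from a duplicated path), not a claim about the incident's actual frequencies or amplitudes, which the research above sets out to reproduce in simulation.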
Procedia PDF Downloads 37
11652 Displaying Compostela: Literature, Tourism and Cultural Representation, a Cartographic Approach
Authors: Fernando Cabo Aseguinolaza, Víctor Bouzas Blanco, Alberto Martí Ezpeleta
Abstract:
Santiago de Compostela became a stable object of literary representation during the period between 1840 and 1915, approximately. This study offers a partial cartographical look at this process, suggesting that a cultural space like Compostela’s becoming an object of literary representation paralleled the first stages of its becoming a tourist destination. We use maps as a method of analysis to show the interaction between a corpus of novels and the emerging tradition of tourist guides on Compostela during the selected period. Often, the novels constitute ways to present a city to the outside, marking it for the gaze of others, as guidebooks do. That leads us to examine the ways of constructing and rendering communicable the local in other contexts. For that matter, we should also acknowledge the fact that a good number of the narratives in the corpus evoke the representation of the city through the figure of one who comes from elsewhere: a traveler, a student or a professor. The guidebooks coincide in this with the emerging fiction, of which the mimesis of a city is a key characteristic. The local cannot define itself except through a process of symbolic negotiation, in which recognition and self-recognition play important roles. Cartography shows some of the forms that these processes of symbolic representation take through the treatment of space. The research uses GIS to find significant models of representation. We used the program ArcGIS for the mapping, defining the databases starting from an adapted version of the methodology applied by Barbara Piatti and Lorenz Hurni’s team at the University of Zurich. First, we designed maps that emphasize the peripheral position of Compostela from a historical and institutional perspective using elements found in the texts of our corpus (novels and tourist guides). 
Second, other maps delve into the parallels between recurring techniques in the fictional texts and characteristic devices of the guidebooks (sketched itineraries, the selection of zones, and indexicalization), such as a foreigner's visit guided by someone who knows the city, or the description of one's first entrance into the city. Last, we offer a cartography that demonstrates the connection between the best known of the novels in our corpus (Alejandro Pérez Lugín's 1915 novel La casa de la Troya) and the first attempt to create package tours with Galicia as a destination, a joint venture of Galician and British business owners in the years immediately preceding the Great War. Literary cartography thus becomes a crucial instrument for digging deeply into the methods of the cultural production of places. Through maps, the interaction between discursive forms seemingly as far removed from each other as novels and tourist guides becomes obvious, and suggests the need to go deeper into the complex process through which a city like Compostela becomes visible on the contemporary cultural horizon.
Keywords: Compostela, literary geography, literary cartography, tourism
Procedia PDF Downloads 392
11651 High Level Expression of Fluorinase in Escherichia Coli and Pichia Pastoris
Authors: Lee A. Browne, K. Rumbold
Abstract:
The first fluorinating enzyme, 5'-fluoro-5'-deoxyadenosine synthase (fluorinase), was isolated from the soil bacterium Streptomyces cattleya. Such an enzyme, with the ability to catalyze C-F bond formation, presents great potential as a biocatalyst. Naturally fluorinated compounds are extremely rare in nature, and as a result the number of fluorinases identified remains relatively small; the field of fluorination is almost completely synthetic. However, with the increasing demand for fluorinated organic compounds of commercial value in the agrochemical, pharmaceutical, and materials industries, it has become necessary to utilize biologically based methods such as biocatalysts. A key step in this process is the production of the fluorinase enzyme in quantities sufficient for industrial applications. Thus, this study aimed to optimize expression of the fluorinase enzyme in both a prokaryotic and a eukaryotic expression system in order to obtain high protein yields. The fluorinase gene was cloned into the pET 41b(+) and pPinkα-HC vectors and used to transform the expression hosts E. coli BL21(DE3) and Pichia pastoris (PichiaPink™ strains), respectively. Expression trials were conducted to select optimal conditions for expression in both systems. Fluorinase catalyses a reaction between S-adenosyl-L-methionine (SAM) and fluoride ion to produce 5'-fluoro-5'-deoxyadenosine (5'-FDA) and L-methionine. The activity of the enzyme was determined by HPLC, measuring the reaction product 5'-FDA. A gradient mobile phase, from 95:5 v/v 50 mM potassium phosphate buffer to a final mobile phase of 80:20 v/v 50 mM potassium phosphate buffer and acetonitrile, was used. This resulted in the complete separation of SAM and 5'-FDA, which eluted at 1.3 minutes and 3.4 minutes, respectively, proving that the fluorinase enzyme was active.
Optimising expression of the fluorinase enzyme was successful in both E. coli and PichiaPink™, with high expression levels achieved in both systems. Protein production will be scaled up in PichiaPink™ using fermentation to achieve large-scale production. High-level expression of protein is essential in biocatalysis to make enzymes available for industrial applications.
Keywords: biocatalyst, expression, fluorinase, PichiaPink™
Procedia PDF Downloads 552
11650 Global Differences in Job Satisfaction of Healthcare Professionals
Authors: Jonathan H. Westover, Ruthann Cunningham, Jaron Harvey
Abstract:
Purpose: Job satisfaction is one of the most critical attitudes among employees. Understanding whether employees are satisfied with their jobs and what is driving that satisfaction is important for any employer, but particularly for healthcare organizations. This study looks at the question of job satisfaction and drivers of job satisfaction among healthcare professionals at a global scale, looking for trends that generalize across 37 countries. Study: This study analyzed job satisfaction responses to the 2015 Work Orientations IV wave of the International Social Survey Programme (ISSP) to understand differences in antecedents for and levels of job satisfaction among healthcare professionals. A total of 18,716 respondents from 37 countries participated in the annual survey. Findings: Respondents self-identified their occupational category based on corresponding International Standard Classification of Occupations (ISCO-08) codes. Results suggest that mean overall job satisfaction was highest among health service managers and generalist medical practitioners and lowest among environmental hygiene professionals and nursing professionals. Originality: Many studies have addressed the issue of job satisfaction in healthcare, examining small samples of specific healthcare workers. In this study, using a large international dataset, we are able to examine questions of job satisfaction across large groups of healthcare workers in different occupations within the healthcare field.
Keywords: job satisfaction, healthcare industry, global comparisons, workplace
Procedia PDF Downloads 145