Search results for: panel data method
35722 Analysis of the Engineering Judgement Influence on the Selection of Geotechnical Parameters Characteristic Values
Authors: K. Ivandic, F. Dodigovic, D. Stuhec, S. Strelec
Abstract:
A characteristic value of a geotechnical parameter results from an engineering assessment. Its selection has to be based on technical principles and standards of engineering practice. It has been shown that the results of engineering assessments by different authors for the same problem and input data are significantly dispersed. A survey was conducted in which participants had to estimate the force that causes a 10 cm displacement at the top of an axially compressed in-situ pile. Fifty experts from all over the world took part in it. The lowest estimated force value was 42% and the highest was 133% of the force measured in the mentioned static pile load test. These extreme values result in significantly different technical solutions to the same engineering task. When selecting a characteristic value of a geotechnical parameter, the influence of the engineering assessment can be reduced by using statistical methods. An informative annex of Eurocode 1 prescribes the method of selecting the characteristic values of material properties. This is followed by Eurocode 7, with certain specificities linked to selecting characteristic values of geotechnical parameters. The paper shows the procedure of selecting characteristic values of a geotechnical parameter by using a statistical method with different initial conditions. The aim of the paper is to quantify the engineering assessment in the example of determining a characteristic value of a specific geotechnical parameter. It is assumed that this assessment is a random variable and that its statistical features will be determined. For this purpose, a survey was conducted among relevant experts from the field of geotechnical engineering. Conclusively, the results of the survey and the application of the statistical method were compared.
Keywords: characteristic values, engineering judgement, Eurocode 7, statistical methods
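The statistical selection described in the abstract can be sketched as a simple sample-statistics computation of the form X_k = m − k_n·s, with k_n taken from the Student t distribution; the friction-angle values below are invented for illustration, and the two k_n variants are the common "5% fractile" and "cautious estimate of the mean" cases rather than the paper's own procedure.

```python
import numpy as np
from scipy import stats

def characteristic_value(samples, fractile=True):
    """Characteristic value X_k = m - k_n * s from n test results,
    with the coefficient of variation estimated from the sample.

    fractile=True  : 5% fractile (failure governed by a small soil volume)
    fractile=False : 95%-confidence estimate of the mean (large soil volume)
    """
    x = np.asarray(samples, dtype=float)
    n = x.size
    m, s = x.mean(), x.std(ddof=1)
    t95 = stats.t.ppf(0.95, n - 1)
    k_n = t95 * np.sqrt(1.0 + 1.0 / n) if fractile else t95 * np.sqrt(1.0 / n)
    return m - k_n * s

# Hypothetical friction-angle measurements (degrees) from one site:
phi = [29.5, 31.0, 30.2, 28.8, 32.1, 30.6]
print(round(characteristic_value(phi, fractile=False), 2))  # cautious mean estimate
```

The fractile variant gives a markedly lower (more conservative) value than the mean estimate on the same data, which is exactly the kind of spread the choice of initial conditions introduces.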
Procedia PDF Downloads 296
35721 Analytical Formulae for Parameters Involved in Side Slopes of Embankments Stability
Authors: Abdulrahman Abdulrahman, Abir Abdulrahman
Abstract:
The stability of slopes of earthen embankments is usually examined by the Swedish slip circle method or the method of slices. The factor of safety against sliding in the Fellenius procedure depends upon the angle ψ formed by the arc of sliding at the center and the radius r of the slip circle. The values of both parameters ψ and r are not precisely predicted because they are measured from the drawing. In this paper, analytical formulae are derived for finding the exact values of both ψ and r. The paper also presents the different conditions of intersection of the slip circle with the body of an earthen dam and the coordinates of the intersection points. Numerical examples are chosen to demonstrate the proposed solution.
Keywords: earthen dams stability, earthen embankments stability, Fellenius method, hydraulic structures, side slopes stability, slices method, Swedish slip circle
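The two quantities targeted here can also be recovered numerically once the centre of the slip circle and its two intersection points with the embankment surface are known: the radius is the centre-to-intersection distance, and ψ follows from the chord length. A small sketch with invented coordinates (not the paper's own formulae):

```python
import math

def slip_circle_params(center, p_entry, p_exit):
    """Radius r and central angle psi of a slip circle, given its center
    and the two points where it intersects the embankment surface.
    Points are (x, y) tuples; psi is returned in radians and assumes the
    sliding arc subtends less than a half circle."""
    xo, yo = center
    r = math.hypot(p_entry[0] - xo, p_entry[1] - yo)
    chord = math.hypot(p_exit[0] - p_entry[0], p_exit[1] - p_entry[1])
    psi = 2.0 * math.asin(chord / (2.0 * r))
    return r, psi

# Hypothetical geometry (metres): both surface points lie on the circle.
r, psi = slip_circle_params(center=(10.0, 12.0), p_entry=(2.0, 6.0), p_exit=(18.0, 6.0))
print(r, math.degrees(psi))
```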
Procedia PDF Downloads 165
35720 Elastohydrodynamic Lubrication Study Using Discontinuous Finite Volume Method
Authors: Prawal Sinha, Peeyush Singh, Pravir Dutt
Abstract:
Problems in elastohydrodynamic lubrication have attracted a lot of attention in the last few decades. Solving a two-dimensional problem has always been a big challenge. In this paper, a new discontinuous finite volume method (DVM) for the two-dimensional point contact elastohydrodynamic lubrication (EHL) problem has been developed and analyzed. A complete algorithm has been presented for solving such a problem. The method presented is robust and easily parallelized in an MPI architecture. The GMRES technique is implemented to solve the matrix obtained after the formulation. A new approach is followed in which discontinuous piecewise polynomials are used for the trial functions. It is natural to assume that the advantages of using discontinuous functions in finite element methods should also apply to finite volume methods. The nature of the discontinuity of the trial function is such that the elements in the corresponding dual partition have the smallest support compared with classical finite volume methods. Film thickness calculation is done using a singular quadrature approach. Results obtained have been presented graphically and discussed. This method is well suited for solving the EHL point contact problem and could potentially be developed into commercial software.
Keywords: elastohydrodynamic, lubrication, discontinuous finite volume method, GMRES technique
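The GMRES step mentioned above is available off the shelf in SciPy; a minimal sketch of the solver call, on a generic sparse system standing in for the discretised EHL equations (the actual DVM matrices are problem-specific), might look like:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres, spilu, LinearOperator

# Stand-in sparse system: a 1-D Poisson-like tridiagonal matrix.
n = 200
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# An incomplete-LU preconditioner usually helps on stiff EHL-type systems:
ilu = spilu(A)
M = LinearOperator((n, n), ilu.solve)

x, info = gmres(A, b, M=M)          # info == 0 signals convergence
print(info, np.linalg.norm(A @ x - b))
```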
Procedia PDF Downloads 257
35719 Sampling Two-Channel Nonseparable Wavelets and Its Applications in Multispectral Image Fusion
Authors: Bin Liu, Weijie Liu, Bin Sun, Yihui Luo
Abstract:
In order to solve the problems of lower spatial resolution and block effects that fusion methods based on separable wavelet transforms produce in the resulting fusion image, a new sampling mode based on the multi-resolution analysis of the two-channel nonseparable wavelet transform, whose dilation matrix is [1,1;1,-1], is presented, and a multispectral image fusion method based on this sampling mode is proposed. Filter banks related to this kind of wavelet are constructed, and multiresolution decompositions of the intensity of the MS image and the panchromatic image are performed in the sampled mode using the constructed filter bank. The low- and high-frequency coefficients are fused by different fusion rules. The experimental results show that this method has a good visual effect. The fusion performance has been noted to outperform the IHS fusion method, as well as the fusion methods based on DWT, IHS-DWT, IHS-Contourlet transform, and IHS-Curvelet transform, in preserving both spectral quality and high spatial resolution information. Furthermore, when compared with the fusion method based on the nonsubsampled two-channel nonseparable wavelet, the proposed method has been observed to have higher spatial resolution and good global spectral information.
Keywords: image fusion, two-channel sampled nonseparable wavelets, multispectral image, panchromatic image
Procedia PDF Downloads 440
35718 Experimental Approach for Determining Hemi-Anechoic Characteristics of Engineering Acoustical Test Chambers
Authors: Santiago Montoya-Ospina, Raúl E. Jiménez-Mejía, Rosa Elvira Correa Gutiérrez
Abstract:
An experimental methodology is proposed for determining the hemi-anechoic characteristics of an engineering acoustic room built at the facilities of Universidad Nacional de Colombia, in order to evaluate the free-field conditions inside the chamber. Experimental results were compared with theoretical ones for both the source and the sound propagation inside the chamber. The acoustic source was modeled using the monopole radiation pattern of point sources, and the image method was used to deal with the reflective plane of the room, that is, the floor without insulation. The finite-difference time-domain (FDTD) method was implemented to calculate the sound pressure value at every spatial point of the chamber. Comparison between theoretical and experimental data yields minimum error, giving satisfactory results for the hemi-anechoic characterization of the chamber.
Keywords: acoustic impedance, finite-difference time-domain, hemi-anechoic characterization
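The source model described here, a monopole plus its image for the rigid floor, reduces to summing two spherical waves. A compact sketch, with the source amplitude, frequency, and positions chosen arbitrarily for illustration:

```python
import numpy as np

def hemi_anechoic_pressure(src, rcv, k, A=1.0):
    """Complex sound pressure at rcv from a monopole at src above a rigid
    floor (z = 0), via the image-source method: the reflective floor is
    replaced by a mirrored in-phase source at (x, y, -z)."""
    src, rcv = np.asarray(src, float), np.asarray(rcv, float)
    img = src * np.array([1.0, 1.0, -1.0])      # image source below the floor
    r1 = np.linalg.norm(rcv - src)              # direct path
    r2 = np.linalg.norm(rcv - img)              # reflected path
    return A * np.exp(-1j * k * r1) / r1 + A * np.exp(-1j * k * r2) / r2

# 1 kHz source 1.5 m above the floor, receiver 3 m away at the same height:
k = 2 * np.pi * 1000 / 343.0                    # wavenumber, c = 343 m/s
p = hemi_anechoic_pressure([0, 0, 1.5], [3, 0, 1.5], k)
print(abs(p))
```

Comparing |p| against the free-field term alone (A/r1) is exactly the kind of deviation the chamber characterization looks for.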
Procedia PDF Downloads 163
35717 Authorization of Commercial Communication Satellite Grounds for Promoting Turkish Data Relay System
Authors: Celal Dudak, Aslı Utku, Burak Yağlioğlu
Abstract:
Uninterrupted and continuous satellite communication throughout the whole orbit is becoming more indispensable every day. Data relay systems are developed and built for various high/low data rate information exchanges, such as TDRSS of the USA and EDRS of Europe. In these missions, a couple of task-dedicated communication satellites exist. In this regard, a data relay system is proposed for Turkey, exchanging low data rate information (i.e., TTC) for Earth-observing LEO satellites by appointing commercial GEO communication satellites all over the world. First, justification of this attempt is given, demonstrating duration enhancements in the link. A discussion of the preference for RF communication instead of laser communication is also given. Then, the preferred communication GEOs, including TURKSAT 4A, already belonging to Turkey, are given, together with the coverage enhancements through STK simulations and the corresponding link budget. Also, a block diagram of the communication system on the LEO satellite is given.
Keywords: communication, GEO satellite, data relay system, coverage
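The dominant term in any such link budget is the free-space path loss; a one-line sketch, where the carrier frequency and slant range are assumptions for illustration rather than the paper's values:

```python
import math

def fspl_db(d_m, f_hz):
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 299_792_458.0
    return 20 * math.log10(4 * math.pi * d_m * f_hz / c)

# Assumed LEO-to-GEO hop: S-band TTC at 2.2 GHz over a ~40 000 km slant range.
print(round(fspl_db(40_000e3, 2.2e9), 1))
```

Each doubling of the slant range adds about 6 dB of loss, which is why the geometry from the STK coverage runs feeds directly into the budget.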
Procedia PDF Downloads 442
35716 Culvert Blockage Evaluation Using Australian Rainfall and Runoff 2019
Authors: Rob Leslie, Taher Karimian
Abstract:
The blockage of cross drainage structures is a risk that needs to be understood and managed or lessened through design. A blockage is a random event, influenced by site-specific factors, which needs to be quantified for design. Under- and overestimation of blockage can have major impacts on flood risk and on the cost associated with drainage structures. The importance of this matter is heightened for projects located within sensitive lands. It is a particularly complex problem for large linear infrastructure projects (e.g., rail corridors) located within floodplains, where blockage factors can influence flooding upstream and downstream of the infrastructure. The selection of appropriate blockage factors for hydraulic modeling has been subject to extensive research by hydraulic engineers. This paper reviews the current Australian Rainfall and Runoff 2019 (ARR 2019) methodology for blockage assessment by applying the method to a transport corridor brownfield upgrade case study in New South Wales. The results of applying the method are also validated against asset data and maintenance records. ARR 2019, Book 6, Chapter 6 includes advice and an approach for estimating the blockage of bridges and culverts. This paper concentrates specifically on the blockage of cross drainage structures. The method has been developed to estimate the blockage level for culverts affected by sediment or debris due to flooding. The objective of the approach is to evaluate a numerical blockage factor that can be utilized in a hydraulic assessment of cross drainage structures. The project included an assessment of over 200 cross drainage structures. In order to estimate a blockage factor for use in the hydraulic model, a process has been advanced that considers the qualitative factors (e.g., debris type, debris availability) and site-specific hydraulic factors that influence blockage.
A site rating associated with the debris potential (i.e., availability, transportability, mobility) at each crossing was completed using the method outlined in the ARR 2019 guidelines. The hydraulic inputs (i.e., flow velocity, flow depth) and qualitative factors at each crossing were entered into a spreadsheet, where the design blockage level for each cross drainage structure was determined based on the condition relating the inlet clear width, L10 (the average length of the longest 10% of the debris reaching the site), and the adjusted debris potential. Asset data, including site photos and maintenance records, were then reviewed and compared with the blockage assessment to check the validity of the results. The results of this assessment demonstrate that the blockage factors estimated at each crossing location using the ARR 2019 guidelines are well validated by the asset data. The primary finding of the study is that the ARR 2019 methodology is a suitable approach for culvert blockage assessment, validated here against a case study spanning a large geographical area and multiple sub-catchments. The study also found that the methodology can be effectively coded within a spreadsheet or similar analytical tool to automate its application.
Keywords: ARR 2019, blockage, culverts, methodology
Procedia PDF Downloads 362
35715 The Development of Encrypted Near Field Communication Data Exchange Format Transmission in an NFC Passive Tag for Checking the Genuine Product
Authors: Tanawat Hongthai, Dusit Thanapatay
Abstract:
This paper presents the development of encrypted near field communication (NFC) data exchange format transmission in an NFC passive tag, to assess the feasibility of implementing genuine product authentication. We organize the research on encryption and genuine product verification into four major categories: concept, infrastructure, development, and applications. The results show that a passive NFC Forum Type 2 tag can be configured to be compatible with the NFC data exchange format (NDEF), and can automatically perform partial data updates when an NFC field is present.
Keywords: near field communication, NFC data exchange format, checking the genuine product, encrypted NFC
Procedia PDF Downloads 280
35714 Construction Technology of Modified Vacuum Pre-Loading Method for Slurry Dredged Soil
Authors: Ali H. Mahfouz, Gao Ming-Jun, Mohamad Sharif
Abstract:
Slurry dredged soil in coastal areas has a high water content, poor permeability, and low surface strength. Hence, it is infeasible to use the conventional vacuum preloading method to treat this type of soil foundation. For the special case of super soft ground, a floating bridge is first constructed on the muddy soil and used as a service road and platform for implementing the modified vacuum preloading method. The modified technique of vacuum preloading and its construction process for super soft soil foundation improvement are then studied. Application of the modified vacuum preloading method shows that the technology and its construction process are highly suitable for improving super soft soil foundations in coastal areas.
Keywords: super soft foundation, dredger fill, vacuum preloading, foundation treatment, construction technology
Procedia PDF Downloads 609
35713 Spatial Patterns of Urban Expansion in Kuwait City between 1989 and 2001
Authors: Saad Algharib, Jay Lee
Abstract:
Urbanization is a complex phenomenon that occurs during a city’s development from one form to another. In other words, it is the process by which land use/land cover changes from rural to urban. Since the discovery of oil, Kuwait City has been growing rapidly due to urbanization and population growth, both by natural growth and by inward immigration. The main objective of this study is to detect changes in urban land use/land cover and to examine the changing spatial patterns of urban growth in and around Kuwait City between 1989 and 2001. In addition, this study evaluates the spatial patterns of the detected changes and how they relate to the spatial configuration of the city. Remote sensing and geographic information systems have become very useful and important tools in urban studies because their integration allows analysts and planners to detect, monitor, and analyze urban growth in a region effectively. Moreover, both planners and users can predict future trends of growth in urban areas with remotely sensed and GIS data, because these data can be updated efficiently to the required precision levels. In order to identify the new urban areas between 1989 and 2001, the study uses satellite images of the study area and remote sensing techniques to classify these images. An unsupervised classification method was applied to classify the images into land use and land cover data layers. After the unsupervised classification, a GIS overlay function was applied to the classified images to detect the locations and patterns of the new urban areas that developed during the study period. GIS was also utilized to evaluate the distribution of the spatial patterns. For example, Moran’s index was applied to all data inputs to examine the urban growth distribution.
Furthermore, this study assesses whether the spatial patterns and process of these changes take place in a random fashion or with certain identifiable trends. The results indicate that during the study period the urban area expanded by 10 percentage points, from 32.4% in 1989 to 42.4% in 2001. The results also revealed that the largest increase of the urban area occurred between the major highways beyond the fourth ring road from the center of Kuwait City. Moreover, the spatial distribution of urban growth occurred in a clustered manner.
Keywords: geographic information systems, remote sensing, urbanization, urban growth
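Moran’s index, used above to test whether growth is clustered or dispersed, can be computed directly in a few lines. The sketch below uses a toy four-cell rook-neighbour weights matrix and invented values, not the study's layers:

```python
import numpy as np

def morans_i(values, W):
    """Global Moran's I for values observed at n spatial units, with
    spatial-weights matrix W (W[i, j] > 0 when units i and j are neighbours)."""
    x = np.asarray(values, float)
    z = x - x.mean()                 # deviations from the mean
    n, S0 = len(x), W.sum()          # S0 = sum of all weights
    return (n / S0) * (z @ W @ z) / (z @ z)

# Four cells on a line, rook neighbours:
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(morans_i([1.0, 1.0, 5.0, 5.0], W))   # positive -> clustered growth
print(morans_i([1.0, 5.0, 1.0, 5.0], W))   # negative -> dispersed pattern
```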
Procedia PDF Downloads 171
35712 A Sparse Representation Speech Denoising Method Based on Adapted Stopping Residue Error
Authors: Qianhua He, Weili Zhou, Aiwu Chen
Abstract:
A sparse representation speech denoising method based on an adapted stopping residue error is presented in this paper. Firstly, the cross-correlation between the clean speech spectrum and the noise spectrum was analyzed, and an estimation method was proposed. In the denoising method, an over-complete dictionary of the clean speech power spectrum was learned with the K-singular value decomposition (K-SVD) algorithm. In the sparse representation stage, the stopping residue error was adaptively set according to the estimated cross-correlation and the adjusted noise spectrum, and the orthogonal matching pursuit (OMP) approach was applied to reconstruct the clean speech spectrum from the noisy speech. Finally, the clean speech was re-synthesised via the inverse Fourier transform with the reconstructed speech spectrum and the noisy speech phase. The experimental results show that the proposed method outperforms conventional methods in terms of subjective and objective measures.
Keywords: speech denoising, sparse representation, K-singular value decomposition, orthogonal matching pursuit
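A numpy-only sketch of the OMP stage with a residue-based stopping rule: the paper adapts the threshold per noisy frame from the estimated cross-correlation, whereas here it is just a fixed constant, and the dictionary and sparse signal are synthetic rather than learned from speech spectra:

```python
import numpy as np

def omp(D, y, max_atoms, stop_residue):
    """Orthogonal matching pursuit over dictionary D (unit-norm columns),
    stopping once the residue norm falls below stop_residue."""
    residual, support = y.copy(), []
    coef, sol = np.zeros(D.shape[1]), np.zeros(0)
    for _ in range(max_atoms):
        if np.linalg.norm(residual) <= stop_residue:
            break
        k = int(np.argmax(np.abs(D.T @ residual)))   # best-matching atom
        support.append(k)
        # jointly re-fit all selected atoms (the 'orthogonal' step)
        sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ sol
    coef[support] = sol
    return coef

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)                       # unit-norm atoms
truth = np.zeros(128)
truth[[3, 40, 77]] = [2.0, -1.5, 1.0]                # 3-sparse ground truth
y = D @ truth
coef = omp(D, y, max_atoms=10, stop_residue=1e-8)
print(np.nonzero(coef)[0])
```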
Procedia PDF Downloads 499
35711 Predicting Blockchain Technology Installation Cost in Supply Chain System through Supervised Learning
Authors: Hossein Havaeji, Tony Wong, Thien-My Dao
Abstract:
1. Research Problems and Research Objectives: A Blockchain Technology-enabled Supply Chain System (BT-enabled SCS) uses BT to drive SCS transparency, security, durability, and process integrity, as SCS data is not always visible, available, or trusted. The costs of operating BT in the SCS are a common problem in several organizations. These costs must be estimated, as they can impact existing cost control strategies. To account for system and deployment costs, the following hurdle must be overcome: the costs of developing and running a BT in SCS are not yet clear in most cases. Many industries aiming to use BT pay special attention to the BT installation cost, which has a direct impact on the total costs of the SCS. Predicting the BT installation cost in SCS may help managers decide whether BT offers an economic advantage. The first purpose of the research is to identify the main BT installation cost components in SCS needed for deeper cost analysis, and to categorize the main groups of cost components in more detail so that they can be utilized in the prediction process. The second objective is to determine the suitable supervised learning technique for predicting the costs of developing and running BT in SCS in a particular case study. The last aim is to investigate how the running BT cost can be incorporated into the total cost of the SCS. 2. Work Performed: Applied successfully in various fields, supervised learning is a method that frames the data, prepares it, and trains the chosen model. It is a learning approach directed at predicting an outcome measurement from a set of unseen input data. The following steps are conducted to pursue the objectives of our subject. The first step is a literature review to identify the different cost components of BT installation in SCS.
Based on the literature review, we choose supervised learning methods suitable for BT installation cost prediction in SCS. According to the literature review, supervised learning algorithms that provide a powerful tool to classify BT installation components and predict BT installation cost include the Support Vector Regression (SVR) algorithm, the Back Propagation (BP) neural network, and the Artificial Neural Network (ANN). Choosing a case study to feed data into the models is the third step. Finally, we will propose the best predictive performance to find the minimum BT installation costs in SCS. 3. Expected Results and Conclusion: This study aims to propose a cost prediction of BT installation in SCS with the help of supervised learning algorithms. At the first attempt, we will select a case study in the field of BT-enabled SCS, and then use supervised learning algorithms to predict the BT installation cost in SCS. We continue by finding the best predictive performance for developing and running BT in SCS. Finally, the paper will be presented at the conference.
Keywords: blockchain technology, blockchain technology-enabled supply chain system, installation cost, supervised learning
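The SVR step named above can be sketched in scikit-learn. Everything below is synthetic: the three features (number of network nodes, transactions per day, integration effort in person-months) and the cost function are invented purely to exercise the prediction step, not taken from any case study:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(42)
# 200 hypothetical installations: nodes in [5, 50], tx/day in [100, 5000],
# integration effort in [1, 24] person-months.
X = rng.uniform([5, 100, 1], [50, 5000, 24], size=(200, 3))
cost = 20_000 + 1_500 * X[:, 0] + 3.0 * X[:, 1] + 4_000 * X[:, 2]
cost += rng.normal(0, 2_000, size=200)                  # cost-estimation noise

model = make_pipeline(StandardScaler(), SVR(kernel="linear", C=1e6, epsilon=1000))
model.fit(X[:150], cost[:150])                          # train on 150 cases
mae = np.mean(np.abs(model.predict(X[150:]) - cost[150:]))
print(mae)                                              # held-out mean absolute error
```

In practice one would compare this against the BP and ANN candidates on the same held-out split, which is exactly the "best predictive performance" selection the abstract describes.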
Procedia PDF Downloads 122
35710 Investigation of the Relationship between Physical Activity and Stress and Mental Health in the Elderly
Authors: Mohamad Reza Khodabakhsh
Abstract:
Physical activity is important because it affects the stress and mental health of the elderly. The purpose of this research is to examine the relationship between the physical activity of the elderly and their stress and mental health. The current research is correlational, and the studied population includes all the elderly who exercised in the parks of Mashhad city in 2021, a community of 200 people. Sampling was done by the headcount (census) method. The instruments used in this research are questionnaires: the physical activity questionnaire uses a Likert scale, and the General Health Questionnaire (GHQ) is based on self-report. The study uses correlation analysis to find the relationship between the predictor and predicted variables, and the multiple regression method for the relationships between the sub-components. The results showed that physical activity reduces the stress of the elderly and improves their mental health. In general, the results of this research confirm the research hypotheses.
Keywords: relationship, physical activity, stress, mental health, elderly
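The correlational step this kind of study runs can be illustrated in a few lines; the scores below are invented for eight hypothetical participants, not the study's data:

```python
import numpy as np
from scipy import stats

# Weekly physical-activity score vs. perceived-stress score (toy data):
activity = np.array([2, 5, 8, 3, 9, 6, 1, 7])
stress = np.array([8, 6, 3, 7, 2, 4, 9, 3])

r, p = stats.pearsonr(activity, stress)
print(round(r, 2), p < 0.05)   # strong negative correlation in this toy data
```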
Procedia PDF Downloads 97
35709 Stuttering Persistence in Children: Effectiveness of the Psicodizione Method in a Small Italian Cohort
Authors: Corinna Zeli, Silvia Calati, Marco Simeoni, Chiara Comastri
Abstract:
Developmental stuttering affects about 10% of preschool children; despite the high percentage of natural recovery, a quarter of them will become adults who stutter. An effective early intervention should help those children with a high persistence risk. The Psicodizione method for early stuttering is an Italian indirect behavioral treatment for preschool children who stutter, in which parents act as good guides for communication, modeling their own fluency. In this study, we give a preliminary measure of the long-term effectiveness of the Psicodizione method on stuttering preschool children with a high persistence risk. Among all Italian children treated with the Psicodizione method between 2018 and 2019, we selected 8 children with at least 3 high-risk persistence factors from the Illinois Prediction Criteria proposed by Yairi and Seery. The factors chosen for the selection were: one parent who stutters (1 pt mother; 1.5 pt father), male gender, ≥ 4 years old at onset, and ≥ 12 months from onset of symptoms before treatment. For this study, the families were contacted after an average period of 14.7 months (range 3-26 months). Parental reports were gathered with a standard online questionnaire in order to obtain data reflecting fluency in a wide range of the children’s life situations. The minimum worthwhile outcome was set at "mild evidence" on a 5-point Likert scale (1 mild evidence - 5 high severity evidence). A second group of 6 children, among those treated with the Psicodizione method, was selected as having high potential for spontaneous remission (low persistence risk). The children in this group had to fulfill all the following criteria: female gender, symptoms for less than 12 months (before treatment), age of onset < 4 years old, and neither parent with persistent stuttering. At the time of this follow-up, the children in the high persistence risk group were aged 6-9 years, with a mean of 15 months post-treatment.
Among the children in the high persistence risk group, 2 (25%) no longer stuttered, and 3 (37.5%) stuttered mildly, based on parental reports. The children in the low persistence risk group were aged 4-6 years, with a mean of 14 months post-treatment, and 5 of them (84%) no longer stuttered (for the past 16 months on average). Thus, 62.5% of children at high risk of persistence showed at most mild evidence of stuttering after Psicodizione treatment, and 75% of parents reported better fluency than before the treatment. The low persistence risk group seemed to be representative of spontaneous recovery. This study’s design could help to better evaluate the success of proposed interventions for stuttering preschool children, and it provides a preliminary measure of the effectiveness of the Psicodizione method on children at high persistence risk.
Keywords: early treatment, fluency, preschool children, stuttering
Procedia PDF Downloads 218
35708 Secure Intelligent Information Management by Using a Framework of Virtual Phones-On Cloud Computation
Authors: Mohammad Hadi Khorashadi Zadeh
Abstract:
Many new applications and internet services have emerged since the innovation of mobile networks and devices. However, these applications have problems of security, management, and performance in business environments. Cloud systems provide information transfer, management facilities, and security for virtual environments. Therefore, an innovative internet service and a business model are proposed in the present study for creating a secure and consolidated environment for managing the mobile information of organizations, based on cloud virtual phone (CVP) infrastructures. Using this method, users can run Android and web applications in the cloud, which enhances performance by connecting to other CVP users and increases privacy. It is possible to combine the CVP with distributed protocols and central control, which mimics the behavior of human societies. This mix helps in dealing with sensitive data in mobile devices and facilitates data management with less application overhead.
Keywords: BYOD, mobile cloud computing, mobile security, information management
Procedia PDF Downloads 317
35707 Artificial Neural Network-Based Prediction of Effluent Quality of Wastewater Treatment Plant Employing Data Preprocessing Approaches
Authors: Vahid Nourani, Atefeh Ashrafi
Abstract:
Prediction of treated wastewater quality is a matter of growing importance in the water treatment procedure. The artificial neural network (ANN), as a robust data-driven approach, has been widely used for forecasting the effluent quality of wastewater treatment. However, developing an ANN model based on appropriate input variables is a major concern, due to the numerous parameters collected from the treatment process, whose number keeps increasing with the development of electronic sensors. Various studies have been conducted, using different clustering methods, in order to classify the most related and effective input variables. Still, the selection of dominant input variables among wastewater treatment parameters has often been overlooked, although it can effectively lead to more accurate prediction of water quality. In the presented study, two ANN models were developed with the aim of forecasting the effluent quality of Tabriz city’s wastewater treatment plant. Biochemical oxygen demand (BOD) was utilized as the target water quality parameter. Model A used principal component analysis (PCA), a linear variance-based clustering method, for input selection. Model B used the variables identified by the mutual information (MI) measure. The optimal ANN structure of model B showed up to a 15% increase in the determination coefficient (DC) compared with model A. Thus, this study highlights the advantage of the MI measure in selecting dominant input variables for ANN modeling of wastewater plant performance.
Keywords: artificial neural networks, biochemical oxygen demand, principal component analysis, mutual information, Tabriz wastewater treatment plant, wastewater treatment plant
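The contrast between the two selection strategies can be sketched on synthetic data. The five "sensors" and the BOD-like target below are invented stand-ins for the plant records; the point is that MI scores inputs against the target, while PCA ranks directions by input variance alone:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 5))                # 5 candidate input "sensors"
# Target depends non-linearly on sensors 0 and 3 only (plus small noise):
bod = np.sin(X[:, 0]) + 0.5 * X[:, 3] ** 2 + 0.05 * rng.standard_normal(500)

# MI-based selection: score each candidate input against the target...
mi = mutual_info_regression(X, bod, random_state=1)
print(np.argsort(mi)[::-1][:2])                  # the two most informative sensors

# ...whereas PCA looks only at the inputs, ignoring the target entirely:
pca = PCA(n_components=2).fit(X)
print(pca.explained_variance_ratio_)
```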
Procedia PDF Downloads 128
35706 Travel Behavior Simulation of Bike-Sharing System Users in Kaohsiung City
Authors: Hong-Yi Lin, Feng-Tyan Lin
Abstract:
In a bike-sharing system (BSS), users can easily rent bikes from any station in the city for mid-range or short-range trips. A BSS can also be integrated with other types of transport systems, especially green transportation systems such as rail transport and buses. Since a BSS records the time and place of each pickup and return, the operational data can reflect a more authentic and dynamic state of user behaviors. Furthermore, land uses around docking stations are highly associated with the origins and destinations of BSS users. As urban researchers, what concerns us more is to take the BSS into consideration during the urban planning process and enhance the quality of urban life. This research focuses on the simulation of the travel behavior of BSS users in Kaohsiung. First, rules of user behavior were derived by analyzing operational data and land use patterns near docking stations. Then, integrating the Monte Carlo method, these rules were embedded into a travel behavior simulation model implemented in NetLogo, an agent-based modeling tool. The simulation model allows us to foresee the rent-return behavior of the BSS in order to choose potential locations for the docking stations. It can also provide insights and recommendations about planning and policies for the future BSS.
Keywords: agent-based model, bike-sharing system, BSS operational data, simulation
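The Monte Carlo core of such a model can be sketched for a single docking station. The rental and return probabilities below are invented constants; in the study they would be calibrated from the operational data and the surrounding land use:

```python
import random

def simulate_station(capacity, start_bikes, p_rent, p_return, steps, seed=0):
    """Monte Carlo sketch of one docking station: at each time step a rental
    is attempted with probability p_rent and a return with p_return.
    Returns the rates at which users found no bike and no free dock."""
    rng = random.Random(seed)
    bikes, no_bike, no_dock = start_bikes, 0, 0
    for _ in range(steps):
        if rng.random() < p_rent:            # a user tries to rent
            if bikes > 0:
                bikes -= 1
            else:
                no_bike += 1                 # empty station: rental fails
        if rng.random() < p_return:          # a user tries to return
            if bikes < capacity:
                bikes += 1
            else:
                no_dock += 1                 # full station: return fails
    return no_bike / steps, no_dock / steps

# A station near offices, where returns dominate in the morning peak:
print(simulate_station(capacity=20, start_bikes=10, p_rent=0.2, p_return=0.5, steps=10_000))
```

Running this over candidate sites with land-use-specific rates is a minimal version of using the simulation to choose docking-station locations.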
Procedia PDF Downloads 333
35705 Psychometric Properties of the EQ-5D-3L and EQ-5D-5L Instruments for Health Related Quality of Life Measurement in Indonesian Population
Authors: Dwi Endarti, Susi a Kristina, Rizki Noorizzati, Akbar E Nugraha, Fera Maharani, Kika a Putri, Asninda H Azizah, Sausanzahra Angganisaputri, Yunisa Yustikarini
Abstract:
Cost-utility analysis is the most recommended pharmacoeconomic method, since it allows wide comparison of cost-effectiveness results across different interventions. The method uses the outcome of quality-adjusted life years (QALY) or disability-adjusted life years (DALY). Measurement of QALY requires data on utility and life years gained. Utility is measured with an instrument for quality of life measurement such as the EQ-5D. The EQ-5D is currently available in two versions, the EQ-5D-3L and the EQ-5D-5L. This study aimed to compare the EQ-5D-3L and EQ-5D-5L to determine the most suitable version for the Indonesian population. This was an observational study employing a cross-sectional approach. Data on quality of life measured with the EQ-5D-3L and EQ-5D-5L were collected from several population groups: respondents with chronic diseases, respondents with acute diseases, and respondents from the general population (without illness) in Yogyakarta Municipality, Indonesia. Convenience samples of hypertension patients (83), diabetes mellitus patients (80), osteoarthritis patients (47), acute respiratory tract infection patients (81), cephalgia patients (43), dyspepsia patients (42), and respondents from the general population (293) were recruited in this study. Responses on the 3L and 5L versions of the EQ-5D were compared by examining psychometric properties including agreement, internal consistency, ceiling effect, and convergent validity. Based on these psychometric tests, the EQ-5D-5L tended to have better psychometric properties than the EQ-5D-3L. Future health-related quality of life (HRQOL) measurements for pharmacoeconomic studies in Indonesia should therefore apply the EQ-5D-5L.
Keywords: EQ-5D, health-related quality of life, Indonesian population, psychometric properties
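One of the psychometric properties compared, the ceiling effect, is straightforward to compute from raw dimension scores; a toy sketch with invented responses (the real study used the samples listed above):

```python
import numpy as np

# Hypothetical responses, one row per respondent, columns = the five EQ-5D
# dimensions (level 1 = no problems). The extra levels of the 5L version
# typically lower the ceiling, i.e. fewer respondents look 'perfectly healthy'.
resp_3l = np.array([[1, 1, 1, 1, 1],
                    [1, 2, 1, 1, 1],
                    [1, 1, 1, 1, 1],
                    [2, 2, 1, 3, 1]])
resp_5l = np.array([[1, 1, 2, 1, 1],
                    [1, 2, 1, 1, 1],
                    [1, 1, 1, 1, 1],
                    [2, 3, 1, 4, 1]])

def ceiling(responses):
    """Share of respondents reporting level 1 on every dimension."""
    return float((np.asarray(responses) == 1).all(axis=1).mean())

print(ceiling(resp_3l), ceiling(resp_5l))   # 0.5 vs 0.25 on this toy data
```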
Procedia PDF Downloads 478
35704 An Integrated Approach to the Carbonate Reservoir Modeling: Case Study of the Eastern Siberia Field
Authors: Yana Snegireva
Abstract:
Carbonate reservoirs are known for their heterogeneity, which results from various geological processes such as diagenesis and fracturing. These complexities pose great challenges in understanding fluid flow behavior and predicting the production performance of naturally fractured reservoirs. The investigation of carbonate reservoirs is crucial, as many petroleum reservoirs are naturally fractured and difficult to characterize due to the complexity of their fracture networks; the resulting geological uncertainties matter for global petroleum reserves. The key challenges in carbonate reservoir modeling include the accurate representation of fractures and their connectivity, as well as capturing the impact of fractures on fluid flow and production. Traditional reservoir modeling techniques often oversimplify fracture networks, leading to inaccurate predictions. There is therefore a need for a modern approach that can capture the complexities of carbonate reservoirs and provide reliable predictions for effective reservoir management and production optimization. The modern approach to carbonate reservoir modeling involves a hybrid fracture modeling approach, combining the discrete fracture network (DFN) method and an implicit fracture network, which offers enhanced accuracy and reliability in characterizing complex fracture systems within these reservoirs. This study focuses on the application of the hybrid method in the Nepsko-Botuobinskaya anticline of the Eastern Siberia field, aiming to demonstrate the suitability of this method in these geological conditions. The DFN method is adopted to model the fracture network within the carbonate reservoir. This method treats fractures as discrete entities, capturing their geometry, orientation, and connectivity. However, the method has a significant disadvantage: the number of fractures in the field can be very high.
Due to limitations in the amount of main memory, it is very difficult to represent these fractures explicitly. By integrating data from image logs (formation micro-imager), core data, and fracture density logs, a discrete fracture network (DFN) model can be constructed to represent the characteristics of the hydraulically relevant fractures. The results obtained from the DFN modeling approach provide valuable insights into the behavior of the East Siberia field's carbonate reservoir. The DFN model accurately captures the fracture system, allowing for a better understanding of fluid flow pathways, connectivity, and potential production zones. The analysis of simulation results enables the identification of zones of increased fracturing and optimization opportunities for reservoir development, with the potential application of enhanced oil recovery techniques, which were considered in further simulations on dual porosity and dual permeability models. This approach treats fractures as separate, interconnected flow paths within the reservoir matrix, allowing for the characterization of dual-porosity media. The case study of the East Siberia field demonstrates the effectiveness of the hybrid modeling method in accurately representing fracture systems and predicting reservoir behavior. The findings from this study contribute to improved reservoir management and production optimization in carbonate reservoirs with the use of enhanced and improved oil recovery methods. Keywords: carbonate reservoir, discrete fracture network, fracture modeling, dual porosity, enhanced oil recovery, implicit fracture model, hybrid fracture model
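A minimal sketch of the DFN idea, fractures generated as discrete entities with their own centre, orientation, and length, might look like the following. The 2-D geometry, uniform orientation distribution, and exponential length distribution are illustrative assumptions, not the field's calibrated fracture statistics:

```python
import math
import random

def generate_dfn(n_fractures, region=(0.0, 100.0), mean_length=5.0, seed=42):
    """Generate a toy 2-D discrete fracture network: each fracture is a line
    segment with a random centre, a uniform orientation in [0, pi), and an
    exponentially distributed length (all illustrative assumptions)."""
    rng = random.Random(seed)
    fractures = []
    for _ in range(n_fractures):
        cx = rng.uniform(*region)
        cy = rng.uniform(*region)
        theta = rng.uniform(0.0, math.pi)              # fracture orientation
        half = rng.expovariate(1.0 / mean_length) / 2.0  # half-length
        dx, dy = half * math.cos(theta), half * math.sin(theta)
        fractures.append(((cx - dx, cy - dy), (cx + dx, cy + dy)))
    return fractures

net = generate_dfn(1000)
print(len(net))  # 1000 explicit fracture segments
```

Even this toy version shows the memory issue the abstract raises: the explicit representation grows linearly with the fracture count, which motivates the implicit/hybrid treatment for dense sets.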
Procedia PDF Downloads 75
35703 Medical Waste Management in Nigeria: A Case Study
Authors: Y. Y. Babanyara, D. B. Ibrahim, T. Garba
Abstract:
Proper management of medical waste is a crucial issue for maintaining human health and the environment. The waste generated in hospitals has the potential for spreading infections and causing diseases. The study is aimed at assessing medical waste management practices in Nigeria. Three instruments for data collection were adopted in the study: questionnaire administration, in-depth interviews, and the observation method. The results revealed that the hospital does not quantify medical waste. Segregation of medical wastes is not conducted according to definite rules and standards. Wheeled trolleys are used for on-site transportation of waste from the points of production to the temporary storage area. Off-site transportation of the hospital waste is undertaken by a private waste management company. Small pickups are mainly used to transport waste daily to an off-site area for treatment and disposal. The main treatment method used in the final disposal of infectious waste is incineration. Non-infectious waste is disposed of using the land disposal method. The study showed that the hospital does not have a policy and plan in place for managing medical waste. The study revealed a number of problems the hospital faces in terms of medical waste management, including: lack of necessary rules, regulations, and instructions on the different aspects of collection and disposal of waste; failure to quantify the waste generated in reliable records; use of a single colour of bag for all waste instead of colour-coded bags; the absence of a dedicated waste manager; and the lack of a committee responsible for monitoring the management of medical waste. Recommendations are given with the aim of improving medical waste management in the hospital. Keywords: medical waste, treatment, disposal, public health
Procedia PDF Downloads 318
35702 Different Data-Driven Bivariate Statistical Approaches to Landslide Susceptibility Mapping (Uzundere, Erzurum, Turkey)
Authors: Azimollah Aleshzadeh, Enver Vural Yavuz
Abstract:
The main goal of this study is to produce landslide susceptibility maps using different data-driven bivariate statistical approaches, namely the entropy weight method (EWM), evidence belief function (EBF), and information content model (ICM), for Uzundere county, Erzurum province, in the north-eastern part of Turkey. Past landslide occurrences were identified and mapped from an interpretation of high-resolution satellite images and earlier reports, as well as by carrying out field surveys. In total, 42 landslide incidence polygons were mapped using ArcGIS 10.4.1 software and randomly split into a construction dataset of 70% (30 landslide incidences) for building the EWM, EBF, and ICM models; the remaining 30% (12 landslide incidences) were used for verification purposes. Twelve layers of landslide-predisposing parameters were prepared, including total surface radiation, maximum relief, soil groups, standard curvature, distance to stream/river sites, distance to the road network, surface roughness, land use pattern, engineering geological rock group, topographical elevation, orientation of slope, and terrain slope gradient. The relationships between the landslide-predisposing parameters and the landslide inventory map were determined using the different statistical models (EWM, EBF, and ICM). The model results were validated with landslide incidences that were not used during model construction. In addition, receiver operating characteristic curves were applied, and the area under the curve (AUC) was determined for the different susceptibility maps using the success (construction data) and prediction (verification data) rate curves. The results revealed that the AUC values for the success rates are 0.7055, 0.7221, and 0.7368, while the prediction rates are 0.6811, 0.6997, and 0.7105 for the EWM, EBF, and ICM models, respectively.
Consequently, the landslide susceptibility maps were classified into five susceptibility classes: very low, low, moderate, high, and very high. Additionally, the portion of construction and verification landslide incidences falling in the high and very high landslide susceptibility classes of each map was determined. The results showed that the EWM, EBF, and ICM models produced satisfactory accuracy. The obtained landslide susceptibility maps may be useful for future natural hazard mitigation studies and planning purposes for environmental protection. Keywords: entropy weight method, evidence belief function, information content model, landslide susceptibility mapping
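The AUC of a success or prediction rate curve can be approximated with the trapezoidal rule once the cumulative curve is tabulated. The curve values below are hypothetical, not the study's results:

```python
def auc_trapezoid(x, y):
    """Area under a success/prediction rate curve by the trapezoidal rule.
    x and y must be the same length and sorted by x."""
    return sum((x[i + 1] - x[i]) * (y[i + 1] + y[i]) / 2.0
               for i in range(len(x) - 1))

# Hypothetical cumulative curve: x = fraction of map area classified as
# susceptible (most to least susceptible), y = fraction of landslides captured.
x = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
y = [0.0, 0.45, 0.62, 0.78, 0.93, 1.0]
print(round(auc_trapezoid(x, y), 4))
```

An AUC of 0.5 would mean the map is no better than chance; values around 0.7, as reported above, indicate moderate predictive skill.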
Procedia PDF Downloads 132
35701 A Deep Learning Based Approach for Dynamically Selecting Pre-processing Technique for Images
Authors: Revoti Prasad Bora, Nikita Katyal, Saurabh Yadav
Abstract:
Pre-processing plays an important role in various image processing applications. Most of the time, due to the similar nature of the images, a particular pre-processing step or a set of pre-processing steps is sufficient to produce the desired results. However, in the education domain there is a wide variety of images in various respects, such as images with line-based diagrams, chemical formulas, mathematical equations, etc. Hence a single pre-processing step or set of steps may not yield good results. Therefore, a deep learning based approach for dynamically selecting a relevant pre-processing technique for each image is proposed. The proposed method works as a classifier that detects hidden patterns in the images and predicts the relevant pre-processing technique needed for each image. This approach was evaluated on an image similarity matching problem, but it can be adapted to other use cases as well. Experimental results showed a significant improvement in average similarity ranking with the proposed method as opposed to static pre-processing techniques. Keywords: deep-learning, classification, pre-processing, computer vision, image processing, educational data mining
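Once a classifier predicts a label for an image, a simple dispatch table can apply the matching pre-processing step. The labels and steps below are hypothetical placeholders for illustration, not the authors' actual model or pipeline:

```python
# Toy pre-processing steps on images represented as lists of pixel rows (0-255).
def binarize(img):
    return [[1 if p > 127 else 0 for p in row] for row in img]

def invert(img):
    return [[255 - p for p in row] for row in img]

def identity(img):
    return img

# Hypothetical label -> pre-processing mapping; in the paper this label would
# come from the deep learning classifier.
PIPELINES = {"diagram": binarize, "formula": invert, "photo": identity}

def preprocess(img, predicted_label):
    """Dispatch to the pre-processing step matching the predicted class."""
    return PIPELINES.get(predicted_label, identity)(img)

img = [[0, 200], [130, 10]]
print(preprocess(img, "diagram"))  # [[0, 1], [1, 0]]
```

The classifier itself is out of scope here; the point is only that the predicted class, rather than a fixed rule, selects the transformation.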
Procedia PDF Downloads 163
35700 New Concept for Real Time Selective Harmonics Elimination Based on Lagrange Interpolation Polynomials
Authors: B. Makhlouf, O. Bouchhida, M. Nibouche, K. Laidi
Abstract:
A variety of methods for selective harmonic elimination pulse width modulation have been developed; the ones most frequently used for real-time implementation are based on look-up tables. To address real-time requirements, a method based on a modified carrier signal is proposed in the present work, with a general formulation for real-time harmonics control/elimination in switched inverters. The proposed method is first demonstrated for a single value of the modulation index. In reality, however, this parameter is variable as a consequence of voltage (amplitude) variability. In this context, a simple interpolation method for calculating the modified sine carrier signal is proposed. The method allows a continuous adjustment in both amplitude and frequency of the fundamental. To assess the performance of the proposed method, software simulations and hardware experiments have been carried out for the case of a single-phase inverter. The obtained results are very satisfactory. Keywords: harmonic elimination, Particle Swarm Optimisation (PSO), polynomial interpolation, pulse width modulation, real-time harmonics control, voltage inverter
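The Lagrange interpolation named in the title can be sketched in a few lines: given a small table of pre-computed values at discrete modulation indices, the polynomial through those points yields a value at any intermediate index. The table below is a hypothetical stand-in for the pre-computed carrier parameters, not data from the paper:

```python
def lagrange(xs, ys, x):
    """Evaluate the Lagrange interpolation polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Hypothetical table: carrier-signal parameter vs. modulation index.
m_idx = [0.2, 0.4, 0.6, 0.8]
param = [0.31, 0.58, 0.79, 0.95]
print(round(lagrange(m_idx, param, 0.5), 4))
```

Because the polynomial is cheap to evaluate, a sparse table plus interpolation replaces the dense look-up tables that a fully tabulated approach would need.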
Procedia PDF Downloads 503
35699 Comparison of Bioelectric and Biomechanical Electromyography Normalization Techniques in Disparate Populations
Authors: Drew Commandeur, Ryan Brodie, Sandra Hundza, Marc Klimstra
Abstract:
The amplitude of raw electromyography (EMG) is affected by recording conditions and often requires normalization to make meaningful comparisons. Bioelectric methods normalize with an EMG signal recorded during a standardized task or from the experimental protocol itself, while biomechanical methods often involve measurements with an additional sensor such as a force transducer. Common bioelectric normalization techniques for treadmill walking include maximum voluntary isometric contraction (MVIC), dynamic EMG peak (EMGPeak) or dynamic EMG mean (EMGMean). There are several concerns with using MVICs to normalize EMG, including poor reliability and potential discomfort. A limitation of bioelectric normalization techniques is that they could result in a misrepresentation of the absolute magnitude of force generated by the muscle and impact the interpretation of EMG between functionally disparate groups. Additionally, methods that normalize to EMG recorded during the task may eliminate some real inter-individual variability due to biological variation. This study compared biomechanical and bioelectric EMG normalization techniques during treadmill walking to assess the impact of the normalization method on the functional interpretation of EMG data. For the biomechanical method, we normalized EMG to a target torque (EMGTS) and the bioelectric methods used were normalization to the mean and peak of the signal during the walking task (EMGMean and EMGPeak). The effect of normalization on muscle activation pattern, EMG amplitude, and inter-individual variability were compared between disparate cohorts of OLD (76.6 yrs N=11) and YOUNG (26.6 yrs N=11) adults. Participants walked on a treadmill at a self-selected pace while EMG was recorded from the right lower limb. 
EMG data from the soleus (SOL), medial gastrocnemius (MG), tibialis anterior (TA), vastus lateralis (VL), and biceps femoris (BF) were phase-averaged into 16 bins (phases) representing the gait cycle, with bins 1-10 associated with right stance and bins 11-16 with right swing. Pearson's correlations showed that activation patterns across the gait cycle were similar between all methods, ranging from r = 0.86 to r = 1.00 with p < 0.05. This indicates that each method can characterize the muscle activation pattern during walking. Repeated measures ANOVA showed a main effect for age in MG for EMGPeak, but no other main effects were observed. Interactions between age and phase of EMG amplitude between YOUNG and OLD with each method resulted in different statistical interpretations between methods. EMGTS normalization characterized the fewest differences (four phases across all 5 muscles), while EMGMean (11 phases) and EMGPeak (19 phases) showed considerably more differences between cohorts. The second notable finding was that the coefficient of variation, the representation of inter-individual variability, was greatest for EMGTS and lowest for EMGMean, while EMGPeak was slightly higher than EMGMean for all muscles. This finding supports our expectation that EMGTS normalization would retain inter-individual variability, which may be desirable; however, it also suggests that even when large differences are expected, a larger sample size may be required to observe them. Our findings clearly indicate that the interpretation of EMG is highly dependent on the normalization method used, and it is essential to consider the strengths and limitations of each method when drawing conclusions. Keywords: electromyography, EMG normalization, functional EMG, older adults
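The two bioelectric normalizations compared above, EMGPeak and EMGMean, reduce to dividing the signal by its peak or its mean over the task. A minimal sketch with a hypothetical rectified EMG envelope:

```python
def normalize_peak(emg):
    """EMGPeak: divide each sample by the peak of the signal during the task."""
    peak = max(emg)
    return [v / peak for v in emg]

def normalize_mean(emg):
    """EMGMean: divide each sample by the mean of the signal during the task."""
    mean = sum(emg) / len(emg)
    return [v / mean for v in emg]

# Hypothetical rectified EMG envelope (arbitrary units).
signal = [0.2, 0.5, 1.0, 0.5, 0.3]
print(max(normalize_peak(signal)))              # peak-normalized signal tops out at 1.0
print(sum(normalize_mean(signal)) / len(signal))  # mean-normalized signal averages 1.0
```

By construction, both methods scale away absolute amplitude, which is exactly why they can mask real between-subject differences in force, the concern raised in the abstract.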
Procedia PDF Downloads 91
35698 Relationship between Growth of Non-Performing Assets and Credit Risk Management Practices in Indian Banks
Authors: Sirus Sharifi, Arunima Haldar, S. V. D. Nageswara Rao
Abstract:
The study attempts to analyze the impact of the credit risk management practices of Indian scheduled commercial banks on their non-performing assets (NPAs). The data on credit risk practices were collected by administering a questionnaire to risk managers/executives at different banks. The data on NPAs (from 2012 to 2016) are sourced from Prowess, a database compiled by the Centre for Monitoring Indian Economy (CMIE). The model was estimated using the cross-sectional regression method. As expected, the findings suggest that there is a negative relationship between credit risk management and NPA growth in Indian banks. The study has implications for Indian banks given their high level of losses and the implementation of Basel III norms by the central bank, i.e., the Reserve Bank of India (RBI). It contributes evidence on credit risk management in Indian banks and its relationship with the non-performing assets held by them. Keywords: credit risk, identification, Indian Banks, NPAs, ownership
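The cross-sectional regression used here can be illustrated with a single-regressor ordinary least squares fit. The practice scores and NPA growth figures below are invented, chosen only to show a negative slope of the kind the study reports:

```python
def ols_fit(x, y):
    """Ordinary least squares fit y = a + b*x for one regressor."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

# Hypothetical cross-section: credit-risk-practice score vs. NPA growth (%).
score = [1, 2, 3, 4, 5]
growth = [9.0, 7.5, 6.2, 4.8, 3.5]
a, b = ols_fit(score, growth)
print(round(b, 3))  # negative slope: stronger practices, lower NPA growth
```

In the actual study the regression includes the questionnaire-derived practice measures as regressors over the bank cross-section; this sketch shows only the estimation mechanics.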
Procedia PDF Downloads 408
35697 Occupational Health Services (OHS) in Hong Kong Hospitals and the Experience of Nurses: A Mixed Methods Study
Authors: Wong Yat Cheung Maggie
Abstract:
Since the Occupational Safety and Health Ordinance (OS&HO) (Chap 509) was enacted in 1997, OHS in Hong Kong should have been growing and maturing, with a holistic approach to occupational health and safety in the workplace covering physical, mental, social, and spiritual well-being. The question is: "How effective are OHSPs in meeting the current needs of HK health workers?" This study was designed to explore the issue for the first time, to empirically analyse the views of those who work in the system. The study employed a mixed method approach to collect data from Occupational Health Service Providers (OHSPs) and Occupational Health Service Consumers (OHSCs): registered nurses working in the hospital setting. The study was designed in two phases, each with two stages. Phase I Stage I was a paper survey to collect data on OHSPs, followed by Phase I Stage II, a follow-up interview. Phase II Stage I was a paper survey to collect data on OHSCs, followed by Phase II Stage II, a follow-up focus group study on OHSCs for further clarification of the Phase II Stage I results. The Phase I results reflect the HK OHSPs' point of view and their experience of existing OHS practice in local hospitals. They reflect various styles of reporting systems, staff profile backgrounds, and resources for providing OHS in HK hospitals. However, the basic OHS concerns are similar between hospitals. In general, OHS policies and procedures are available on site, even though they may have different foci. The Phase II results, reflecting the HK OHSCs, echo the OHSP feedback on the provision of OHS and the availability of OHS concerns and related policies and procedures on site. However, the most significant feedback from the OHSCs at Phase II Stage II shows that nurses experienced various OHS concerns, most commonly work stress, workplace harassment, and back strain, without formal or official reports to the related parties.
The lack of reporting was attributed to management's handling attitude; stakeholders' compliance and the definition of terms also still have room for improvement, even though the related policies and procedures are available on site. Keywords: occupational health service, registered nurse, Hong Kong hospital, mixed method
Procedia PDF Downloads 332
35696 Integration Process and Analytic Interface of Different Environmental Open Data Sets with Java/Oracle and R
Authors: Pavel H. Llamocca, Victoria Lopez
Abstract:
The main objective of our work is the comparative analysis of environmental data from Open Data bases belonging to different governments, which means integrating data from various different sources. Nowadays, many governments intend to publish thousands of data sets for people and organizations to use, and the number of applications based on Open Data is increasing accordingly. However, each government has its own procedures for publishing its data, which causes a variety of data set formats, because there are no international standards specifying the formats of data sets from Open Data bases. Due to this variety of formats, we must build a data integration process that is able to put together all kinds of formats. There are some software tools developed to support the integration process, e.g. Data Tamer and Data Wrangler. The problem with these tools is that they need a data scientist to take part in the integration process as a final step. In our case, we do not want to depend on a data scientist, because environmental data are usually similar and these processes can be automated by programming. The main idea of our tool is to build Hadoop procedures adapted to the data sources of each government in order to achieve automated integration. Our work focuses on environmental data such as temperature, energy consumption, air quality, solar radiation, wind speed, etc. For the past two years, the government of Madrid has been publishing its Open Data bases relative to environment indicators in real time. In the same way, other governments have published Open Data sets relative to the environment (such as Andalucia or Bilbao). But all of those data sets have different formats, and our solution is able to integrate all of them; furthermore, it allows the user to make and visualize analyses over the real-time data.
Once the integration task is done, all the data from any government have the same format, and the analysis process can be initiated in a computationally better way. So the tool presented in this work has two goals: 1. an integration process; and 2. a graphic and analytic interface. As a first approach, the integration process was developed using Java and Oracle, and the graphic and analytic interface with Java (JSP). However, in order to open up our software tool, as a second approach, we also developed an implementation in the R language as a mature open source technology. R is a really powerful open source programming language that allows us to process and analyze a huge amount of data with high performance. There are also R libraries for building a graphic interface, such as Shiny. A performance comparison between both implementations was made, and no significant differences were found. In addition, our work provides an official real-time integrated data set about environment data in Spain to any developer so that they can build their own applications. Keywords: open data, R language, data integration, environmental data
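The core of the integration step, mapping each government's field names onto one common schema, can be sketched as follows. The field names and source labels are hypothetical placeholders, not the actual Madrid or Bilbao formats, and the production tool builds this as Hadoop procedures rather than in-memory Python:

```python
# Common target schema for all integrated records (illustrative).
COMMON_FIELDS = ("city", "timestamp", "temperature_c")

# One field mapping per data source; keys are hypothetical source field names.
SOURCE_MAPPINGS = {
    "madrid": {"ciudad": "city", "fecha": "timestamp", "temperatura": "temperature_c"},
    "bilbao": {"town": "city", "date": "timestamp", "temp": "temperature_c"},
}

def integrate(record, source):
    """Rename a source record's fields into the common schema, filling gaps with None."""
    mapping = SOURCE_MAPPINGS[source]
    renamed = {mapping[k]: v for k, v in record.items() if k in mapping}
    return {f: renamed.get(f) for f in COMMON_FIELDS}

r1 = integrate({"ciudad": "Madrid", "fecha": "2016-05-01", "temperatura": 21.5}, "madrid")
r2 = integrate({"town": "Bilbao", "date": "2016-05-01", "temp": 17.0}, "bilbao")
print(r1["temperature_c"], r2["temperature_c"])  # 21.5 17.0
```

Adding a new government then means adding one mapping entry, which is what makes the process automatable without a data scientist in the loop.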
Procedia PDF Downloads 315
35695 Time Series Modelling and Prediction of River Runoff: Case Study of Karkheh River, Iran
Authors: Karim Hamidi Machekposhti, Hossein Sedghi, Abdolrasoul Telvari, Hossein Babazadeh
Abstract:
The rainfall-runoff phenomenon is a chaotic and complex outcome of nature which requires sophisticated modelling and simulation methods for explanation and use. Time series modelling allows runoff data analysis and can be used as a forecasting tool. In this paper, an attempt is made to model river runoff data and predict the future behavioural pattern of the river based on past annual observations of river runoff. The river runoff analysis and prediction are done using an ARIMA model. For evaluating the efficiency of prediction for hydrological events such as rainfall and runoff, we use the applicable statistical formulae. The good agreement between predicted and observed river runoff, expressed by the coefficient of determination (R2), shows that ARIMA (4,1,1) is a suitable model for predicting Karkheh River runoff in Iran. Keywords: time series modelling, ARIMA model, river runoff, Karkheh River, CLS method
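As a much-simplified illustration of the ARIMA idea, the sketch below applies first differencing (the "I" with d=1, as in the paper's ARIMA(4,1,1)) and then fits only an AR(1) term by least squares, on made-up runoff values; the paper's actual model has four AR lags and a moving-average term:

```python
def difference(series, d=1):
    """Apply d-th order differencing (the 'I' in ARIMA)."""
    for _ in range(d):
        series = [b - a for a, b in zip(series, series[1:])]
    return series

def fit_ar1(series):
    """Least-squares AR(1) coefficient for a zero-ish-mean series: x_t = phi * x_{t-1}."""
    num = sum(series[t] * series[t - 1] for t in range(1, len(series)))
    den = sum(v * v for v in series[:-1])
    return num / den

# Hypothetical annual runoff observations (arbitrary units).
runoff = [10.0, 12.0, 11.0, 13.0, 12.5, 14.0, 13.2, 15.1]
phi = fit_ar1(difference(runoff, d=1))
forecast = runoff[-1] + phi * (runoff[-1] - runoff[-2])  # one-step-ahead level forecast
print(round(forecast, 3))
```

The forecast is produced in the differenced domain and then added back to the last observed level, which is how an integrated model returns to the original scale.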
Procedia PDF Downloads 341
35694 A Narrative of Nationalism in Mainstream Media: The US, China, and COVID-19
Authors: Rachel Williams, Shiqi Yang
Abstract:
Our research explores the influence nationalism has had on United States media coverage of the COVID-19 pandemic as it relates to China, through an inclusive qualitative analysis of two US news networks, Fox News and CNN. In total, the transcripts of sixteen videos uploaded on YouTube, each with more than 100,000 views, were gathered for data processing. Co-occurrence networks generated by KH Coder illuminate the themes and narratives underpinning the reports from Fox News and CNN. The results of an in-depth content analysis with keywords suggest that the pandemic has been framed in an ethnopopulist nationalist manner, although to varying degrees between networks. Specifically, the authors found that Fox News is more likely to report hypotheses or statements as fact; on the contrary, CNN is more likely to quote data and statements from official institutions. Future research into how nationalist narratives have developed in China and in other US news coverage, with a more systematic and quantitative method, could be conducted to expand on these findings. Keywords: nationalism, media studies, US and China, COVID-19, social media, communication studies
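The co-occurrence networks produced by tools such as KH Coder are built on counts of word pairs appearing in the same document; a minimal sketch of that pair-counting step, with hypothetical transcript snippets rather than the study's data:

```python
from collections import Counter
from itertools import combinations

def cooccurrence(transcripts):
    """Count how often each word pair appears together in the same transcript,
    the basic statistic behind a co-occurrence network's edge weights."""
    pairs = Counter()
    for text in transcripts:
        words = sorted(set(text.lower().split()))  # unique words, sorted for stable pairs
        pairs.update(combinations(words, 2))
    return pairs

# Hypothetical transcript snippets (illustrative only).
docs = ["china virus travel ban", "virus response china", "election poll"]
net = cooccurrence(docs)
print(net[("china", "virus")])  # 2
```

In a real pipeline the pair counts would be filtered by frequency and rendered as a graph; only the counting is shown here.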
Procedia PDF Downloads 58
35693 Human Absorbed Dose Estimation of a New In-111 Imaging Agent Based on Rat Data
Authors: H. Yousefnia, S. Zolghadri
Abstract:
The measurement of organ radiation exposure dose is one of the most important initial steps in developing a new radiopharmaceutical. In this study, dosimetric studies of a novel agent for SPECT imaging of bone metastasis, the 111In-1,4,7,10-tetraazacyclododecane-1,4,7,10-tetraethylene phosphonic acid (111In-DOTMP) complex, have been carried out to estimate the dose in human organs based on data derived from rats. The radiolabeled complex was prepared with high radiochemical purity under the optimal conditions. Biodistribution studies of the complex were carried out in male Syrian rats at selected times after injection (2, 4, 24 and 48 h). The human absorbed dose estimation for the complex was made based on data derived from the rats by the radiation absorbed dose assessment resource (RADAR) method. The 111In-DOTMP complex was prepared with high radiochemical purity of >99% (ITLC). The total body effective absorbed dose for 111In-DOTMP was 0.061 mSv/MBq. This value is comparable to those of other clinically used 111In complexes. The results show that the doses to the critical organs are satisfactory, within the acceptable range for diagnostic nuclear medicine procedures. Generally, 111In-DOTMP has interesting characteristics and can be considered a viable agent for SPECT imaging of bone metastasis in the near future. Keywords: In-111, DOTMP, Internal Dosimetry, RADAR
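Dose estimation methods such as RADAR start from the area under the organ time-activity curve. The sketch below integrates hypothetical organ uptake fractions, sampled at the study's time points, by the trapezoidal rule; the fractions are invented for illustration and decay correction and the full RADAR dose factors are omitted:

```python
def time_activity_area(times_h, activity_fraction):
    """Approximate the area under the organ time-activity curve (trapezoidal
    rule); input is the fraction of injected activity in the organ at each
    sampling time, in hours."""
    return sum((times_h[i + 1] - times_h[i])
               * (activity_fraction[i + 1] + activity_fraction[i]) / 2.0
               for i in range(len(times_h) - 1))

# Hypothetical rat biodistribution at 2, 4, 24 and 48 h post-injection.
t = [2.0, 4.0, 24.0, 48.0]
frac = [0.30, 0.25, 0.10, 0.04]
print(round(time_activity_area(t, frac), 3))
```

Multiplying such per-organ areas by nuclide- and organ-specific dose factors is, in outline, how a total body value like 0.061 mSv/MBq is assembled.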
Procedia PDF Downloads 407