Search results for: sequential confidence estimation
2767 PostureCheck with the Kinect and Proficio: Posture Modeling for Exercise Assessment
Authors: Elham Saraee, Saurabh Singh, Margrit Betke
Abstract:
Evaluation of a person’s posture while exercising is important in physical therapy. During a therapy session, a physical therapist or a monitoring system must ensure that the person is performing an exercise correctly to achieve the desired therapeutic effect. In this work, we introduce a system called POSTURECHECK for exercise assessment in physical therapy. POSTURECHECK assesses the posture of a person who is exercising with the Proficio robotic arm while being recorded by the Microsoft Kinect interface. POSTURECHECK extracts unique features from the person’s upper body during the exercise, and classifies the sequence of postures as correct or incorrect using Bayesian estimation and majority voting. If POSTURECHECK recognizes an incorrect posture, it specifies what the user can do to correct it. The results of our experiment show that POSTURECHECK is capable of recognizing incorrect postures in real time while the user is performing an exercise. Keywords: Bayesian estimation, majority voting, Microsoft Kinect, PostureCheck, Proficio robotic arm, upper body physical therapy
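The classify-then-vote idea described in this abstract can be illustrated with a minimal sketch. It is only an illustration under assumed inputs: the Kinect feature extraction is replaced by synthetic per-frame feature vectors, and a Gaussian naive Bayes classifier stands in for the paper's Bayesian estimator.

```python
# Minimal sketch: per-frame Bayesian classification (Gaussian naive Bayes)
# followed by majority voting over the frames of one exercise repetition.
# The Kinect feature extraction is replaced by synthetic data (assumption).
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)

# Training data: 200 frames x 6 upper-body features, labelled 1 = correct, 0 = incorrect posture.
X_train = rng.normal(size=(200, 6))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] > 0).astype(int)

clf = GaussianNB().fit(X_train, y_train)

# One recorded exercise: a sequence of 30 frames.
X_sequence = rng.normal(size=(30, 6))
frame_labels = clf.predict(X_sequence)          # Bayesian decision per frame
votes = np.bincount(frame_labels, minlength=2)  # tally the per-frame decisions
sequence_label = votes.argmax()                 # majority vote for the whole sequence

print("correct posture" if sequence_label == 1 else "incorrect posture")
```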
Procedia PDF Downloads 282
2766 Comparison of Petrophysical Relationship for Soil Water Content Estimation at Peat Soil Area Using GPR Common-Offset Measurements
Authors: Nurul Izzati Abd Karim, Samira Albati Kamaruddin, Rozaimi Che Hasan
Abstract:
The appropriate petrophysical relationship is needed for Soil Water Content (SWC) estimation, especially when using Ground Penetrating Radar (GPR). Ground penetrating radar is a geophysical tool that provides an indirect estimate of SWC. This paper examines the performance of several published petrophysical relationships used to obtain SWC estimates from in-situ GPR common-offset survey measurements, compared against gravimetric measurements at a peat soil area. Gravimetric measurements were conducted to support the GPR measurements for the accuracy assessment. Further, GPR with dual frequencies (250 MHz and 700 MHz) was used in the survey measurements to obtain the dielectric permittivity. Three empirical equations (i.e., Roth’s equation, Schaap’s equation and Idi’s equation) were selected for the study and used to compute the soil water content from the dielectric permittivity of the GPR profile. The results indicate that Schaap’s equation provides the strongest correlation between the SWC measured by the GPR data sets and the gravimetric measurements. Keywords: common-offset measurements, ground penetrating radar, petrophysical relationship, soil water content
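As a rough illustration of how a petrophysical relationship converts GPR-derived dielectric permittivity into SWC, the sketch below uses Topp's classic third-order polynomial as a stand-in, since the coefficients of the Roth, Schaap, and Idi equations compared in the paper are not reproduced in the abstract; all permittivity and reference values are made up.

```python
# Sketch: converting dielectric permittivity (from GPR travel-time analysis)
# into volumetric soil water content with an empirical petrophysical relationship.
# Topp et al. (1980) is used here only as a stand-in for the equations in the paper.
import numpy as np

def swc_topp(eps):
    """Volumetric water content (m3/m3) from relative dielectric permittivity."""
    return -5.3e-2 + 2.92e-2 * eps - 5.5e-4 * eps**2 + 4.3e-6 * eps**3

eps_gpr = np.array([10.0, 18.0, 30.0, 45.0])             # hypothetical permittivities along a profile
theta_gpr = swc_topp(eps_gpr)                            # GPR-based SWC estimates
theta_gravimetric = np.array([0.22, 0.33, 0.48, 0.62])   # hypothetical reference values

# Simple accuracy assessment against the gravimetric reference.
r = np.corrcoef(theta_gpr, theta_gravimetric)[0, 1]
rmse = np.sqrt(np.mean((theta_gpr - theta_gravimetric) ** 2))
print(f"r = {r:.3f}, RMSE = {rmse:.3f} m3/m3")
```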
Procedia PDF Downloads 252
2765 Estimation of Mobility Parameters and Threshold Voltage of an Organic Thin Film Transistor Using an Asymmetric Capacitive Test Structure
Authors: Rajesh Agarwal
Abstract:
Carrier mobility at the organic/insulator interface is essential to the performance of organic thin film transistors (OTFT). The present work describes the estimation of field-dependent mobility (FDM) parameters and the threshold voltage of an OTFT using a simple, easy-to-fabricate, two-terminal asymmetric capacitive test structure and admittance measurements. Conventionally, transfer characteristics are used to estimate the threshold voltage in an OTFT with field-independent mobility (FIDM). Yet, this technique fails to give accurate results for devices with high contact resistance and field-dependent mobility. In this work, a new technique is presented for the characterization of a long channel organic capacitor (LCOC). The proposed technique helps in the accurate estimation of the mobility enhancement factor (γ), the threshold voltage (V_th) and the band mobility (µ₀) using capacitance-voltage (C-V) measurements on an OTFT. This technique also removes the need to fabricate short-channel OTFTs or metal-insulator-metal (MIM) structures for C-V measurements. To understand the behavior of the devices and for ease of analysis, a transmission line compact model is developed. A 2-D numerical simulation was carried out to illustrate the correctness of the model. Results show that the proposed technique estimates device parameters accurately even in the presence of contact resistance and field-dependent mobility. Pentacene/poly(4-vinyl phenol) based top-contact bottom-gate OTFTs were fabricated to illustrate the operation and advantages of the proposed technique. A small signal with frequency varying from 1 kHz to 5 kHz and a gate potential ranging from +40 V to -40 V was applied to the devices for the measurements. Keywords: capacitance, mobility, organic, thin film transistor
Procedia PDF Downloads 165
2764 Communication Training about Depression and Suicide Prevention for Pharmacists: A Hungarian Pilot Study
Authors: Mónika Ditta Tóth, Ádám Fritz, Balázs Hankó, György Purebl
Abstract:
Background: Suicide rates in Hungary have been among the highest in the European Union. Depression is one of the main risk factors for suicide, and recognizing and treating depression is an effective way to prevent suicidal behaviour. In their daily practice, pharmacists meet patients with a high risk of mental health problems. Therefore, they have a key role in the prevention of depression and suicide. Aim: The main aim of this study is to raise pharmacists’ awareness of depression and suicide to enable better recognition of the verbal and non-verbal signs of these conditions. Another important objective is to reduce their stigma about depression and increase their confidence in communication with depressed and/or suicidal patients. Methods: A 3-hour communication workshop was delivered in this pilot study about the reasons, trigger factors, and verbal and non-verbal signs of depression and suicide. The training includes communication techniques tailored to patients’ needs, as well as role-playing scenarios. The Depression Stigma and Morris Confidence Scales were applied before, after, and 6 weeks following the training. The results of the training group were then compared with two other pharmacist groups: 1. written material only (N=15), 2. no material (N=15). Results: One-way ANOVA revealed significant differences in the training group regarding the level of confidence in treating and communicating with patients with depression and/or suicidality following the training and after 6 weeks (F(2, 24) = 7.135, p = .004; baseline: 20.37, after training: 30.00, follow-up: 27.66). After the 3-hour workshop, the personal stigma about depression decreased (baseline: 19.75, after training: 17.00, p = 0.075) in the training group (N=9), whilst the perceived stigma did not change (before: 33.54, after: 33.44, p = NS). Trainees assessed the workshop as ‘useful’ and ‘gap filling’. No significant differences were found in the group of pharmacists who received written material only. Conclusions: Despite the high rates of depression and suicide in Hungary, pharmacists do not receive lectures or seminars about mental health during their university studies. Such half-day workshops could fill this gap and give practical help in recognizing and communicating with depressed and/or suicidal patients in a more effective way. This way pharmacists, as community gate-keepers, could contribute to a more effective suicide prevention programme in Hungary. Keywords: communication training, pharmacists, depression, suicide
Procedia PDF Downloads 186
2763 A Targeted Maximum Likelihood Estimation for a Non-Binary Causal Variable: An Application
Authors: Mohamed Raouf Benmakrelouf, Joseph Rynkiewicz
Abstract:
Targeted maximum likelihood estimation (TMLE) is a well-established method for causal effect estimation with desirable statistical properties. TMLE is a doubly robust, maximum-likelihood-based approach that includes a secondary targeting step that optimizes the target statistical parameter. A causal interpretation of the statistical parameter requires the assumptions of the Rubin causal framework. The causal effect of a binary variable, E, on an outcome, Y, is defined in terms of a comparison between two potential outcomes, E[Y(E=1) − Y(E=0)]. Our aim in this paper is to present an adaptation of the TMLE methodology to estimate the causal effect of a non-binary categorical variable, providing a large application. We propose a coding of the initial data in order to binarize the variable of interest. For each category, we obtain a transformation of the non-binary interest variable into a binary variable, taking the value 1 to indicate the presence of the category (or group of categories) for an individual, and 0 otherwise. Such a dummy variable makes it possible to have a pair of potential outcomes and to oppose a category (or group of categories) to another category (or group of categories). Let E be a non-binary interest variable. We propose a complete disjunctive coding of our variable E. We transform the initial variable to obtain a set of binary vectors (dummy variables), E = (Eₑ : e ∈ {1, ..., |E|}), where each vector (variable) Eₑ takes the value 0 when its category is not present, and the value 1 when its category is present, which allows us to compute a pairwise TMLE comparing the difference in the outcome between one category and all remaining categories. In order to illustrate the application of our strategy, first, we present the implementation of TMLE to estimate the causal effect of a non-binary variable on an outcome using simulated data. Secondly, we apply our TMLE adaptation to survey data from the French Political Barometer (CEVIPOF) to estimate the causal effect of education level (a five-level variable) on a potential vote in favor of the French extreme-right candidate Jean-Marie Le Pen. Counterfactual reasoning requires us to consider some causal questions (additional causal assumptions). This leads to a different coding of E as a set of binary vectors, E = (Eₑ : e ∈ {2, ..., |E|}), where each vector (variable) Eₑ takes the value 0 when the first category (reference category) is present, and the value 1 when its own category is present, which allows us to apply a pairwise TMLE comparing the difference in the outcome between the first level (fixed) and each remaining level. We confirmed that an increase in the level of education decreases the voting rate for the extreme-right party. Keywords: statistical inference, causal inference, super learning, targeted maximum likelihood estimation
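The complete disjunctive coding described above can be sketched as follows. The estimate_tmle routine here is a deliberate placeholder standing in for a real TMLE fit (e.g., one based on a super learner); only the one-versus-rest dummy coding and the pairwise loop reflect the strategy in the abstract, and all data are simulated.

```python
# Sketch of the coding strategy: a non-binary exposure E is turned into a set of
# dummy variables, and a (placeholder) TMLE estimate is computed for each contrast.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "E": rng.integers(1, 6, size=n),     # five-level exposure, e.g. education level
    "W": rng.normal(size=n),             # baseline covariate(s)
})
df["Y"] = (rng.random(n) < 0.2 + 0.05 * df["E"] / 5).astype(int)  # binary outcome

def estimate_tmle(A, Y, W):
    """Placeholder for a proper TMLE fit; here just an unadjusted risk difference."""
    return Y[A == 1].mean() - Y[A == 0].mean()

# Complete disjunctive coding: one dummy E_e per category e.
dummies = pd.get_dummies(df["E"], prefix="E")

# Pairwise comparisons: each category versus all remaining categories.
for col in dummies.columns:
    A = dummies[col].to_numpy()
    effect = estimate_tmle(A, df["Y"].to_numpy(), df[["W"]].to_numpy())
    print(f"{col} vs rest: estimated effect = {effect:+.3f}")
```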
Procedia PDF Downloads 103
2762 Role of Spatial Variability in the Service Life Prediction of Reinforced Concrete Bridges Affected by Corrosion
Authors: Omran M. Kenshel, Alan J. O'Connor
Abstract:
Estimating the service life of Reinforced Concrete (RC) bridge structures located in corrosive marine environments is of great importance to their owners/engineers. Traditionally, bridge owners/engineers relied more on subjective engineering judgment, e.g. visual inspection, in their estimation approach. However, because financial resources are often limited, rational calculation methods of estimation are needed to aid in making reliable and more accurate predictions for the service life of RC structures. This is in order to direct funds to the bridges found to be the most critical. Criticality of the structure can be considered either from the Structural Capacity (i.e. Ultimate Limit State) or from the Serviceability viewpoint, whichever is adopted. This paper considers the service life of the structure only from the Structural Capacity viewpoint. Considering the great variability associated with the parameters involved in the estimation process, the probabilistic approach is most suited. The probabilistic modelling adopted here used the Monte Carlo simulation technique to estimate the Reliability (i.e. Probability of Failure) of the structure under consideration. In this paper the authors used their own experimental data for the Correlation Length (CL) of the most important deterioration parameters. The CL is a parameter of the Correlation Function (CF) by which the spatial fluctuation of a certain deterioration parameter is described. The CL data used here were produced by analyzing 45 chloride profiles obtained from a 30-year-old RC bridge located in a marine environment. The service life of the structure was predicted in terms of the load carrying capacity of an RC bridge beam girder. The analysis showed that the influence of spatial variability (SV) is only evident if the reliability of the structure is governed by Flexure failure rather than by Shear failure. Keywords: chloride-induced corrosion, Monte-Carlo simulation, reinforced concrete, spatial variability
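A minimal numerical sketch of the probabilistic ingredients mentioned above follows: a deterioration parameter is sampled as a spatially correlated random field along the girder using an exponential correlation function with a given correlation length, and Monte Carlo simulation estimates the probability that the load effect exceeds the corroded capacity. The limit-state function and every number below are illustrative assumptions, not the authors' data.

```python
# Sketch: Monte Carlo estimate of the probability of failure of a girder whose
# section loss is a spatially correlated lognormal field (exponential correlation
# function with correlation length CL). All numbers are illustrative only.
import numpy as np

rng = np.random.default_rng(42)
n_sims, n_segments = 20_000, 20
x = np.linspace(0.0, 10.0, n_segments)          # positions along the girder (m)
CL = 2.0                                        # assumed correlation length (m)

# Exponential correlation function rho(d) = exp(-d / CL).
dist = np.abs(x[:, None] - x[None, :])
corr = np.exp(-dist / CL)
L = np.linalg.cholesky(corr + 1e-10 * np.eye(n_segments))

# Spatially correlated section loss (fraction), lognormal marginals.
z = rng.standard_normal((n_sims, n_segments)) @ L.T
loss = np.exp(np.log(0.10) + 0.3 * z)            # median 10 % loss, COV ~ 30 %

capacity0 = 1500.0                               # intact flexural capacity (kN·m)
capacity = capacity0 * (1.0 - loss).min(axis=1)  # governed by the worst segment
demand = rng.normal(900.0, 150.0, n_sims)        # load effect (kN·m)

pf = np.mean(demand > capacity)                  # probability of failure
print(f"Estimated Pf = {pf:.4f}")
```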
Procedia PDF Downloads 473
2761 Parameter Estimation of Additive Genetic and Unique Environment (AE) Model on Diabetes Mellitus Type 2 Using Bayesian Method
Authors: Andi Darmawan, Dewi Retno Sari Saputro, Purnami Widyaningsih
Abstract:
Diabetes mellitus (DM) is a chronic disease in humans that occurs when the pancreas cannot produce enough insulin or when the body uses insulin ineffectively, which raises the level of glucose in the blood; this condition is called hyperglycemia. In Indonesia, DM is a serious health problem because it can cause blindness, kidney disease, diabetic foot (gangrene), and stroke. DM can also be classified by its main causes into type 1, type 2, and gestational diabetes. Diabetes type 1, previously known as insulin-dependent diabetes, is due to a lack of insulin production. Diabetes type 2, previously known as non-insulin-dependent diabetes, is due to the ineffective use of insulin, while gestational diabetes is hyperglycemia that is first found during pregnancy. The type most commonly found in patients is DM type 2. The main factors of this disease are genetic (A) and life style (E). A disease with these two factors can be modeled with the additive genetic and unique environment (AE) model. This article discusses parameter estimation of the AE model using the Bayesian method and a simulation of the inheritance of the trait from parent to offspring. In the AE model, there are a response variable, predictor variables, and parameters that represent the population under study. The population can be measured through a random sample. The response and predictor variables can be determined from the sample, while the parameters are unknown, so the parameters must be estimated from the sample. Estimation of the AE model parameters was obtained based on a joint posterior distribution. The simulation was conducted to obtain the values of the genetic variance and the life style variance. The results of the simulation are 0.3600 for the genetic variance and 0.0899 for the life style variance. Therefore, the variance of the genetic factor in DM type 2 is greater than that of life style. Keywords: AE model, Bayesian method, diabetes mellitus type 2, genetic, life style
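As a worked check on the reported simulation output, the variance components quoted above imply that the genetic factor accounts for roughly 80% of the total variance; the snippet below simply carries out that arithmetic (the reading of this ratio as a heritability-like share is ours, not stated in the abstract).

```python
# From the simulated AE variance components quoted in the abstract.
var_genetic = 0.3600    # additive genetic variance (A)
var_lifestyle = 0.0899  # unique environment / life style variance (E)

share_genetic = var_genetic / (var_genetic + var_lifestyle)
print(f"Genetic share of total variance: {share_genetic:.2%}")   # about 80 %
```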
Procedia PDF Downloads 284
2760 Approximating Maximum Speed on Road from Curvature Information of Bezier Curve
Authors: M. Yushalify Misro, Ahmad Ramli, Jamaludin M. Ali
Abstract:
Bezier curves have useful properties for the path generation problem; for instance, they can generate the reference trajectory for vehicles to satisfy path constraints. Such algorithms join cubic Bezier curve segments smoothly to generate the path. One of the useful properties of a Bezier curve is its curvature. In mathematics, the curvature is the amount by which a geometric object deviates from being flat, or straight in the case of a line. A simple example is a circle, where the curvature is equal to the reciprocal of its radius at any point on the circle. The smaller the radius, the higher the curvature and the more sharply the vehicle needs to turn. In this study, we use a Bezier curve to fit a highway-like curve. We use a different approach to find the best approximation for the curve so that it will resemble the highway-like curve. We compute the curvature value by analytical differentiation of the Bezier curve. We then compute the maximum driving speed using the curvature information obtained. Our work rests on some assumptions: first, the Bezier curve estimates the real shape of the curve, which can be verified visually. Even though the fitted Bezier curve does not interpolate the curve of interest exactly, we believe that the estimation of speed is acceptable. We verified our result against a manual calculation of the curvature from the map. Keywords: speed estimation, path constraints, reference trajectory, Bezier curve
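The curvature-to-speed step can be sketched as follows: the curvature of a cubic Bezier segment is obtained by analytical differentiation, κ = |B'×B''| / |B'|³, and a maximum speed is read off from an assumed lateral-acceleration limit via v = sqrt(a_lat,max / κ). The control points and the 2.0 m/s² comfort limit are illustrative assumptions, not the paper's highway data.

```python
# Sketch: curvature of a cubic Bezier curve by analytical differentiation and the
# corresponding speed limit from an assumed lateral-acceleration bound.
import numpy as np

P = np.array([[0.0, 0.0], [40.0, 5.0], [80.0, 30.0], [120.0, 30.0]])  # control points (m)

def bezier_derivatives(t, P):
    """First and second derivatives of a cubic Bezier curve at parameter t."""
    d1 = 3*(1-t)**2*(P[1]-P[0]) + 6*(1-t)*t*(P[2]-P[1]) + 3*t**2*(P[3]-P[2])
    d2 = 6*(1-t)*(P[2]-2*P[1]+P[0]) + 6*t*(P[3]-2*P[2]+P[1])
    return d1, d2

def curvature(t, P):
    d1, d2 = bezier_derivatives(t, P)
    cross = d1[0]*d2[1] - d1[1]*d2[0]
    return abs(cross) / np.linalg.norm(d1)**3

a_lat_max = 2.0                                  # assumed comfortable lateral acceleration (m/s^2)
ts = np.linspace(0.0, 1.0, 201)
kappa_max = max(curvature(t, P) for t in ts)     # sharpest point governs the speed
v_max = np.sqrt(a_lat_max / kappa_max)           # from v^2 * kappa <= a_lat_max
print(f"max curvature = {kappa_max:.5f} 1/m -> max speed = {3.6 * v_max:.1f} km/h")
```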
Procedia PDF Downloads 375
2759 Estimating CO₂ Storage Capacity under Geological Uncertainty Using 3D Geological Modeling of Unconventional Reservoir Rocks in Block nv32, Shenvsi Oilfield, China
Authors: Ayman Mutahar Alrassas, Shaoran Ren, Renyuan Ren, Hung Vo Thanh, Mohammed Hail Hakimi, Zhenliang Guan
Abstract:
The significant effect of CO₂ on global climate and the environment has gained more concern worldwide. Enhanced oil recovery (EOR) associated with sequestration of CO₂, particularly into depleted oil reservoirs, is considered a viable approach under financial limitations since it improves oil recovery from the existing oil reservoir and strengthens the link between global-scale CO₂ capture and geological sequestration. Consequently, practical measurements are required to attain large-scale CO₂ emission reduction. This paper presents an integrated modeling workflow to construct an accurate 3D reservoir geological model to estimate the storage capacity of CO₂ under geological uncertainty in an unconventional oil reservoir of the Paleogene Shahejie Formation (Es1) in the block Nv32, Shenvsi oilfield, China. In this regard, geophysical data, including well logs of twenty-two well locations and seismic data, were combined with geological and engineering data and used to construct a 3D reservoir geological model. The geological modeling focused on four tight reservoir units of the Shahejie Formation (Es1-x1, Es1-x2, Es1-x3, and Es1-x4). The validated 3D reservoir models were subsequently used to calculate the theoretical CO₂ storage capacity in the block Nv32, Shenvsi oilfield. Well logs were utilized to predict petrophysical properties such as porosity, permeability, and lithofacies, and indicate that the Es1 reservoir units are mainly sandstone, shale, and limestone with proportions of 38.09%, 32.42%, and 29.49%, respectively. Well-log-based petrophysical results also show that the Es1 reservoir units generally exhibit 2–36% porosity, 0.017 mD to 974.8 mD permeability, and moderate to good net-to-gross ratios. These estimated values of porosity, permeability, lithofacies, and net-to-gross were up-scaled and distributed laterally using the Sequential Gaussian Simulation (SGS) and Sequential Indicator Simulation (SIS) methods to generate the 3D reservoir geological models. The reservoir geological models show that there are lateral heterogeneities of the reservoir properties and lithofacies, and the best reservoir rocks exist in the Es1-x4, Es1-x3, and Es1-x2 units, respectively. In addition, the reservoir volumetrics of the Es1 units in block Nv32 were also estimated based on the petrophysical property models and found to be between 0.554368 Keywords: CO₂ storage capacity, 3D geological model, geological uncertainty, unconventional oil reservoir, block Nv32
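The theoretical CO₂ storage capacity mentioned above is typically obtained from a volumetric calculation; a generic sketch is shown below. The relation M = A·h·φ·(1−Sw)·ρ_CO₂·E is a standard volumetric screening formula, and every input value here is a placeholder, not a figure from the block Nv32 model.

```python
# Generic volumetric screening estimate of CO2 storage capacity (all inputs hypothetical).
area = 12.0e6           # reservoir area A (m^2)
thickness = 25.0        # average net thickness h (m)
porosity = 0.18         # average effective porosity (fraction)
sw = 0.35               # average water saturation (fraction)
rho_co2 = 650.0         # CO2 density at reservoir conditions (kg/m^3)
eff = 0.10              # storage efficiency factor E (fraction of pore volume usable)

pore_volume = area * thickness * porosity * (1.0 - sw)      # m^3 of oil-bearing pore space
mass_co2 = pore_volume * rho_co2 * eff                       # kg
print(f"Theoretical CO2 storage capacity ≈ {mass_co2 / 1e9:.2f} Mt")
```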
Procedia PDF Downloads 179
2758 Times2D: A Time-Frequency Method for Time Series Forecasting
Authors: Reza Nematirad, Anil Pahwa, Balasubramaniam Natarajan
Abstract:
Time series data consist of successive data points collected over a period of time. Accurate prediction of future values is essential for informed decision-making in several real-world applications, including electricity load demand forecasting, lifetime estimation of industrial machinery, traffic planning, weather prediction, and the stock market. Due to their critical relevance and wide application, there has been considerable interest in time series forecasting in recent years. However, the proliferation of sensors and IoT devices, real-time monitoring systems, and high-frequency trading data introduce significant intricate temporal variations, rapid changes, noise, and non-linearities, making time series forecasting more challenging. Classical methods such as Autoregressive integrated moving average (ARIMA) and Exponential Smoothing aim to extract pre-defined temporal variations, such as trends and seasonality. While these methods are effective for capturing well-defined seasonal patterns and trends, they often struggle with more complex, non-linear patterns present in real-world time series data. In recent years, deep learning has made significant contributions to time series forecasting. Recurrent Neural Networks (RNNs) and their variants, such as Long short-term memory (LSTMs) and Gated Recurrent Units (GRUs), have been widely adopted for modeling sequential data. However, they often suffer from the locality, making it difficult to capture local trends and rapid fluctuations. Convolutional Neural Networks (CNNs), particularly Temporal Convolutional Networks (TCNs), leverage convolutional layers to capture temporal dependencies by applying convolutional filters along the temporal dimension. Despite their advantages, TCNs struggle with capturing relationships between distant time points due to the locality of one-dimensional convolution kernels. Transformers have revolutionized time series forecasting with their powerful attention mechanisms, effectively capturing long-term dependencies and relationships between distant time points. However, the attention mechanism may struggle to discern dependencies directly from scattered time points due to intricate temporal patterns. Lastly, Multi-Layer Perceptrons (MLPs) have also been employed, with models like N-BEATS and LightTS demonstrating success. Despite this, MLPs often face high volatility and computational complexity challenges in long-horizon forecasting. To address intricate temporal variations in time series data, this study introduces Times2D, a novel framework that parallelly integrates 2D spectrogram and derivative heatmap techniques. The spectrogram focuses on the frequency domain, capturing periodicity, while the derivative patterns emphasize the time domain, highlighting sharp fluctuations and turning points. This 2D transformation enables the utilization of powerful computer vision techniques to capture various intricate temporal variations. To evaluate the performance of Times2D, extensive experiments were conducted on standard time series datasets and compared with various state-of-the-art algorithms, including DLinear (2023), TimesNet (2023), Non-stationary Transformer (2022), PatchTST (2023), N-HiTS (2023), Crossformer (2023), MICN (2023), LightTS (2022), FEDformer (2022), FiLM (2022), SCINet (2022a), Autoformer (2021), and Informer (2021) under the same modeling conditions. The initial results demonstrated that Times2D achieves consistent state-of-the-art performance in both short-term and long-term forecasting tasks. 
Furthermore, the generality of the Times2D framework allows it to be applied to various tasks such as time series imputation, clustering, classification, and anomaly detection, offering potential benefits in any domain that involves sequential data analysis. Keywords: derivative patterns, spectrogram, time series forecasting, times2D, 2D representation
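The 2D transformation at the heart of Times2D can be illustrated with a small sketch: a frequency-domain view from a spectrogram and a time-domain view from first- and second-derivative "heatmaps" are computed for the same series, and these 2D arrays could then be fed to a vision-style model. This is only an illustration of the idea under assumed parameters; the exact transforms, window sizes, and model used by Times2D are not specified in the abstract.

```python
# Sketch of a Times2D-style 2D representation of a 1D series:
# (a) a spectrogram capturing periodicity, (b) derivative "heatmaps" capturing
# sharp fluctuations and turning points. Parameters are illustrative assumptions.
import numpy as np
from scipy.signal import spectrogram

rng = np.random.default_rng(0)
t = np.arange(2048)
series = np.sin(2*np.pi*t/24) + 0.5*np.sin(2*np.pi*t/168) + 0.1*rng.standard_normal(t.size)

# Frequency-domain view: short-time Fourier magnitude (freq x time bins).
freqs, times, spec = spectrogram(series, nperseg=128, noverlap=64)

# Time-domain view: first and second differences folded into equal-length segments,
# giving two "heatmap" channels of shape (segment length x number of segments).
d1 = np.diff(series, n=1, prepend=series[0])
d2 = np.diff(series, n=2, prepend=series[:2])
seg_len = 128
n_seg = series.size // seg_len
deriv_heatmap = np.stack([
    d1[: n_seg * seg_len].reshape(n_seg, seg_len).T,
    d2[: n_seg * seg_len].reshape(n_seg, seg_len).T,
])  # shape: (2, seg_len, n_seg)

print("spectrogram:", spec.shape, " derivative heatmap:", deriv_heatmap.shape)
```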
Procedia PDF Downloads 42
2757 The Impact of Board Characteristics on Firm Performance: Evidence from Banking Industry in India
Authors: Manmeet Kaur, Madhu Vij
Abstract:
The Board of Directors in a firm performs the primary role of an internal control mechanism. This study seeks to understand the relationship between internal governance and the performance of banks in India. The research paper investigates the effect of board structure (proportion of non-executive directors, gender diversity, board size, and meetings per year) on firm performance. This paper evaluates the impact of corporate governance mechanisms on banks’ financial performance using panel data for 28 banks listed on the National Stock Exchange of India for the period 2008-2014. Return on Assets, Return on Equity, Tobin’s Q, and Net Interest Margin were used as the financial performance indicators. To estimate the relationship between governance and bank performance, the study initially uses Pooled Ordinary Least Squares (OLS) estimation and Generalized Least Squares (GLS) estimation. Then a panel Generalized Method of Moments (GMM) estimator is developed to investigate the dynamic nature of the performance and governance relationship. The study empirically confirms that the two-step system GMM approach controls the problems of unobserved heterogeneity and endogeneity better than the OLS and GLS approaches. The results suggest that banks with small boards, boards with female members, and boards that meet more frequently tend to be more efficient and subsequently have a positive impact on the performance of banks. The study offers insights to policy makers interested in enhancing the quality of governance of banks in India. Also, the findings suggest that board structure plays a vital role in the improvement of the corporate governance mechanism for financial institutions. There is a need to have efficient boards in banks to improve the overall health of the financial institutions and the economic development of the country. Keywords: board of directors, corporate governance, GMM estimation, Indian banking
Procedia PDF Downloads 260
2756 Count Data Regression Modeling: An Application to Spontaneous Abortion in India
Authors: Prashant Verma, Prafulla K. Swain, K. K. Singh, Mukti Khetan
Abstract:
Objective: In India, around 20,000 women die every year due to abortion-related complications. In the modelling of count variables, there is sometimes a preponderance of zero counts. This article concerns the estimation of various count regression models to predict the average number of spontaneous abortion among women in the Punjab state of India. It also assesses the factors associated with the number of spontaneous abortions. Materials and methods: The study included 27,173 married women of Punjab obtained from the DLHS-4 survey (2012-13). Poisson regression (PR), Negative binomial (NB) regression, zero hurdle negative binomial (ZHNB), and zero-inflated negative binomial (ZINB) models were employed to predict the average number of spontaneous abortions and to identify the determinants affecting the number of spontaneous abortions. Results: Statistical comparisons among four estimation methods revealed that the ZINB model provides the best prediction for the number of spontaneous abortions. Antenatal care (ANC) place, place of residence, total children born to a woman, woman's education and economic status were found to be the most significant factors affecting the occurrence of spontaneous abortion. Conclusions: The study offers a practical demonstration of techniques designed to handle count variables. Statistical comparisons among four estimation models revealed that the ZINB model provided the best prediction for the number of spontaneous abortions and is recommended to be used to predict the number of spontaneous abortions. The study suggests that women receive institutional Antenatal care to attain limited parity. It also advocates promoting higher education among women in Punjab, India.Keywords: count data, spontaneous abortion, Poisson model, negative binomial model, zero hurdle negative binomial, zero-inflated negative binomial, regression
Procedia PDF Downloads 155
2755 Production Factor Coefficients Transition through the Lens of State Space Model
Authors: Kanokwan Chancharoenchai
Abstract:
Economic growth can be considered an important element of a country’s development process. For developing countries, like Thailand, to ensure continuous growth of the economy, the Thai government usually implements various policies to stimulate economic growth. They may take the form of fiscal, monetary, trade, and other policies. Because of these different aspects, understanding the factors relating to economic growth could allow the government to introduce the proper plan for future economic stimulus schemes. Consequently, this issue has caught the interest of not only policymakers but also academics. This study, therefore, investigates explanatory variables for economic growth in Thailand from 2005 to 2017, with a total of 52 quarters. The findings would contribute to the field of economic growth and become helpful information to policymakers. The investigation is estimated through the production function with a non-linear Cobb-Douglas equation. The rate of growth is indicated by the change of GDP in natural logarithmic form. The relevant factors included in the estimation cover three traditional means of production and implicit effects, such as human capital, international activity and technological transfer from developed countries. Besides, this investigation takes internal and external instabilities into account, as proxied by the unobserved inflation estimate and the real effective exchange rate (REER) of the Thai baht, respectively. The unobserved inflation series is obtained from an AR(1)-ARCH(1) model, while the unobserved REER of the Thai baht is gathered from a naive OLS-GARCH(1,1) model. According to the empirical results, the AR(|2|) equation, which includes seven significant variables, namely capital stock, labor, the imports of capital goods, trade openness, the REER of Thai baht uncertainty, one-period-lagged GDP, and the 2009 world financial crisis dummy, presents the most suitable model. The autoregressive model assumes constant coefficients, which could somehow cause bias. However, this is not the case for the recursive coefficient model from the state space framework, which allows the transition of coefficients. The powerful state space model provides the productivity or effect of each significant factor in more detail. The state coefficients are estimated based on the AR(|2|), with the exception of the one-period-lagged GDP and the 2009 world financial crisis dummy. The findings shed light on the fact that those factors seem to be stable through time since the occurrence of the world financial crisis together with the political situation in Thailand. These two events could lower confidence in the Thai economy. Moreover, the state coefficients highlight the sluggish rate of machinery replacement and the quite low technology of capital goods imported from abroad. The Thai government should apply proactive policies via taxation and a specific credit policy to improve technological advancement, for instance. Another interesting piece of evidence is the issue of trade openness, which shows a negative transition effect along the sample period. This could be explained by the loss of price competitiveness to imported goods, especially under the widespread implementation of free trade agreements. The Thai government should carefully handle regulations and the investment incentive policy by focusing on strengthening small and medium enterprises. Keywords: autoregressive model, economic growth, state space model, Thailand
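The "transition of coefficients" idea behind the state space approach can be sketched with a one-regressor time-varying-parameter model, y_t = β_t·x_t + ε_t with β_t following a random walk, estimated by a hand-rolled Kalman filter. The data and noise variances below are synthetic; the paper's actual model has several regressors and is estimated differently.

```python
# Sketch: Kalman filter for a time-varying coefficient (random-walk state),
# illustrating how a state space model lets a production-factor coefficient drift.
import numpy as np

rng = np.random.default_rng(7)
T = 52                                    # e.g. 52 quarters
x = rng.normal(1.0, 0.3, T)               # regressor (e.g. log capital growth)
beta_true = np.cumsum(rng.normal(0, 0.05, T)) + 0.5
y = beta_true * x + rng.normal(0, 0.1, T)

q, r = 0.05**2, 0.1**2                    # state and observation noise variances (assumed known)
beta, P = 0.0, 1.0                        # diffuse-ish initial state
beta_filtered = np.empty(T)

for t in range(T):
    # Predict: random-walk transition beta_t = beta_{t-1} + eta_t.
    P = P + q
    # Update with observation y_t = x_t * beta_t + eps_t.
    S = x[t] * P * x[t] + r               # innovation variance
    K = P * x[t] / S                      # Kalman gain
    beta = beta + K * (y[t] - x[t] * beta)
    P = (1.0 - K * x[t]) * P
    beta_filtered[t] = beta

print("final filtered coefficient:", round(beta_filtered[-1], 3),
      "true value:", round(beta_true[-1], 3))
```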
Procedia PDF Downloads 151
2754 Effect of Genuine Missing Data Imputation on Prediction of Urinary Incontinence
Authors: Suzan Arslanturk, Mohammad-Reza Siadat, Theophilus Ogunyemi, Ananias Diokno
Abstract:
Missing data is a common challenge in statistical analyses of most clinical survey datasets. A variety of methods have been developed to enable analysis of survey data to deal with missing values. Imputation is the most commonly used among the above methods. However, in order to minimize the bias introduced due to imputation, one must choose the right imputation technique and apply it to the correct type of missing data. In this paper, we have identified different types of missing values: missing data due to skip pattern (SPMD), undetermined missing data (UMD), and genuine missing data (GMD) and applied rough set imputation on only the GMD portion of the missing data. We have used rough set imputation to evaluate the effect of such imputation on prediction by generating several simulation datasets based on an existing epidemiological dataset (MESA). To measure how well each dataset lends itself to the prediction model (logistic regression), we have used p-values from the Wald test. To evaluate the accuracy of the prediction, we have considered the width of 95% confidence interval for the probability of incontinence. Both imputed and non-imputed simulation datasets were fit to the prediction model, and they both turned out to be significant (p-value < 0.05). However, the Wald score shows a better fit for the imputed compared to non-imputed datasets (28.7 vs. 23.4). The average confidence interval width was decreased by 10.4% when the imputed dataset was used, meaning higher precision. The results show that using the rough set method for missing data imputation on GMD data improve the predictive capability of the logistic regression. Further studies are required to generalize this conclusion to other clinical survey datasets.Keywords: rough set, imputation, clinical survey data simulation, genuine missing data, predictive index
Procedia PDF Downloads 168
2753 Wind Energy Resources Assessment and Micrositting on Different Areas of Libya: The Case Study in Darnah
Authors: F. Ahwide, Y. Bouker, K. Hatem
Abstract:
This paper presents a long-term wind data analysis in terms of annual and diurnal variations at different areas of Libya. Wind speed and direction data, recorded every ten minutes over a period of at least two years, are used in the analysis. ‘WindPRO’ software and an Excel workbook were used for the wind statistics and energy calculations. As for Derna, the average speeds at 10 m, 20 m, and 40 m are, respectively, 6.57 m/s, 7.18 m/s, and 8.09 m/s. The highest wind speeds are observed at SSW, followed by the S, WNW and NW sectors. The lowest wind speeds are observed between the N and E sectors. The most frequent wind directions are NW and NNW. Hence, wind turbines can be installed against these directions. The most powerful sector is NW (29.4% of total expected wind energy), followed by SSW (19.9%), NNW (11.9%), WNW (8.6%) and S (8.2%). Furthermore, in Al-Maqrun the most powerful sector is W (26.8% of total expected wind energy), followed by WSW (12.3%) and WNW (9.5%), while in Goterria the most powerful sector is S (14.8% of total expected wind energy), followed by SSE, SE, and WSW. In Misalatha, the most powerful sector is S, which by far represents 28.5% of the expected power, followed by SSE and SE. As for Tarhuna, it is by far SSE and SE, each representing twice the expected energy of the third most powerful sector (NW). In Al-Asaaba, it is SSE, which by far represents 50% of the expected power, followed by S. It can be noted that the high frequency of south-direction winds, which come from the desert, could cause a high frequency of dust episodes. This fact, then, should be taken into account in order to take appropriate measures to prevent wind turbine deterioration. In the Excel workbook, an estimation of the annual energy yield at the positions of the Derna, Al-Maqrun, Tarhuna, and Al-Asaaba meteorological masts has been done, considering a generic 1.65 MW wind turbine (mTORRES TWT 82-1.65 MW) at the position of the meteorological mast. Three other turbines have been tested. At 80 m, the estimated energy yield for Derna, Al-Maqrun, Tarhuna, and Al-Asaaba is 6.78 GWh or 3390 equivalent hours, 5.80 GWh or 2900 equivalent hours, 4.91 GWh or 2454 equivalent hours, and 5.08 GWh or 2541 equivalent hours, respectively. This seems a fair value in the context of a possible development of a wind energy project in these areas, considering a value of 2400 equivalent hours as an approximate limit for a wind farm to be economically profitable. Furthermore, an estimation of the annual energy yield at the positions of the Misalatha, Azizyah and Goterria meteorological masts has been done, considering a generic wind turbine of 2 MW. We found that, at 80 m, the estimated energy yield is 3.12 GWh or 1557 equivalent hours, 4.47 GWh or 2235 equivalent hours, and 4.07 GWh or 2033 equivalent hours, respectively. These seem very poor values in the context of a possible development of a wind energy project in these areas, considering the same 2400 equivalent hours as an approximate limit for a wind farm to be economically profitable. Anyway, more data and a detailed wind farm study would be necessary to draw conclusions. Keywords: wind turbines, wind data, energy yield, micrositting
Procedia PDF Downloads 187
2752 In vitro Estimation of Genotoxic Lesions in Peripheral Blood Lymphocytes of Rat Exposed to Organophosphate Pesticides
Authors: A. Ojha, Y. K. Gupta
Abstract:
Organophosphate (OP) pesticides are among the most widely used synthetic chemicals for controlling a wide variety of pests throughout the world. Chlorpyrifos (CPF), methyl parathion (MPT), and malathion (MLT) are among the most extensively used OP pesticides in India. DNA strand breaks and DNA-protein crosslinks (DPC) are toxic lesions associated with the mechanisms of toxicity of genotoxic compounds. In the present study, we have examined the potential of CPF, MPT, and MLT, individually and in combination, to cause DNA strand breakage and DPC formation. Peripheral blood lymphocytes of rat were exposed to 1/4 and 1/10 of the LC50 dose of CPF, MPT, and MLT for 2, 4, 8, and 12 h. The DNA strand breaks were measured by the comet assay and expressed as a DNA damage index, while DPC estimation was done by fluorescence emission. There was a marked, significant increase in DNA damage and DNA-protein crosslink formation in a time- and dose-dependent manner. It was also observed that MPT caused the highest level of DNA damage compared to the other studied OP compounds. Thus, from the present study, we can conclude that the studied pesticides have genotoxic potential. The pesticide mixture does not potentiate the toxicity of the individual compounds. Nonetheless, additional in vivo data are required before a definitive conclusion can be drawn regarding hazard prediction in humans. Keywords: organophosphate, pesticides, DNA damage, DNA protein crosslink, genotoxic
Procedia PDF Downloads 356
2751 Parameter Estimation for the Oral Minimal Model and Parameter Distinctions Between Obese and Non-obese Type 2 Diabetes
Authors: Manoja Rajalakshmi Aravindakshana, Devleena Ghosha, Chittaranjan Mandala, K. V. Venkateshb, Jit Sarkarc, Partha Chakrabartic, Sujay K. Maity
Abstract:
Oral Glucose Tolerance Test (OGTT) is the primary test used to diagnose type 2 diabetes mellitus (T2DM) in a clinical setting. Analysis of OGTT data using the Oral Minimal Model (OMM), along with the rate of appearance of ingested glucose (Ra), is performed to study differences in model parameters between control and T2DM groups. The differentiation of the parameters of the model gives insight into the behaviour and physiology of T2DM. The model is also studied to find parameter differences between obese and non-obese T2DM subjects, and the sensitive parameters were correlated with known physiological findings. Sensitivity analysis is performed to understand changes in parameter values with model output, and to support the findings, appropriate statistical tests are done. This seems to be the first preliminary application of the OMM with obesity as a distinguishing factor in understanding T2DM from the estimated parameters of the insulin-glucose model and relating the statistical differences in parameters to diabetes pathophysiology. Keywords: oral minimal model, OGTT, obese and non-obese T2DM, mathematical modeling, parameter estimation
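For readers unfamiliar with the OMM, the sketch below integrates the standard single-compartment oral glucose minimal model, dG/dt = −(SG + X)·G + SG·Gb + Ra(t)/V and dX/dt = −p2·[X − SI·(I(t) − Ib)]. In parameter-estimation work these parameters (SG, SI, p2, V) would be fitted to the OGTT data rather than fixed; the parameter values, the insulin profile, and the Ra(t) shape below are illustrative assumptions, not the study's estimates.

```python
# Sketch: forward simulation of the oral glucose minimal model (OMM).
# In practice SG, SI, p2 and V are estimated from OGTT data; here they are fixed guesses.
import numpy as np
from scipy.integrate import solve_ivp

SG, SI, p2, V = 0.02, 7e-4, 0.02, 1.45       # illustrative parameter values
Gb, Ib = 90.0, 10.0                          # basal glucose (mg/dL) and insulin (uU/mL)

def Ra(t):        # rate of appearance of ingested glucose (mg/kg/min), assumed triangular
    return np.interp(t, [0, 30, 240], [0.0, 8.0, 0.0])

def insulin(t):   # assumed post-load plasma insulin excursion above basal
    return Ib + 60.0 * np.exp(-((t - 45.0) / 40.0) ** 2)

def omm(t, state):
    G, X = state
    dG = -(SG + X) * G + SG * Gb + Ra(t) / V
    dX = -p2 * (X - SI * (insulin(t) - Ib))
    return [dG, dX]

sol = solve_ivp(omm, [0, 240], [Gb, 0.0], t_eval=np.arange(0, 241, 10))
print("peak glucose ≈", round(sol.y[0].max(), 1), "mg/dL")
```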
Procedia PDF Downloads 92
2750 Leveraging Deep Q Networks in Portfolio Optimization
Authors: Peng Liu
Abstract:
Deep Q networks (DQNs) represent a significant advancement in reinforcement learning, utilizing neural networks to approximate the optimal Q-value for guiding sequential decision processes. This paper presents a comprehensive introduction to reinforcement learning principles, delves into the mechanics of DQNs, and explores their application in portfolio optimization. By evaluating the performance of DQNs against traditional benchmark portfolios, we demonstrate their potential to enhance investment strategies. Our results underscore the advantages of DQNs in dynamically adjusting asset allocations, offering a robust portfolio management framework. Keywords: deep reinforcement learning, deep Q networks, portfolio optimization, multi-period optimization
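The core DQN update described above, regressing an online Q-network toward the Bellman target computed with a frozen target network, can be sketched in a few lines. The network sizes, the three-action rebalancing space, and the random transition batch are illustrative assumptions, not the paper's setup.

```python
# Sketch of one DQN learning step: y = r + gamma * max_a' Q_target(s', a'),
# with an MSE loss between y and Q_online(s, a). Shapes and data are illustrative.
import torch
import torch.nn as nn

state_dim, n_actions, batch = 8, 3, 64          # e.g. 3 discrete rebalancing actions
q_online = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
q_target = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
q_target.load_state_dict(q_online.state_dict())  # target network starts as a copy
optimizer = torch.optim.Adam(q_online.parameters(), lr=1e-3)
gamma = 0.99

# A random replay-buffer batch of transitions (s, a, r, s', done).
s = torch.randn(batch, state_dim)
a = torch.randint(0, n_actions, (batch,))
r = torch.randn(batch)
s_next = torch.randn(batch, state_dim)
done = torch.zeros(batch)

with torch.no_grad():
    target = r + gamma * (1 - done) * q_target(s_next).max(dim=1).values

q_sa = q_online(s).gather(1, a.unsqueeze(1)).squeeze(1)   # Q(s, a) for the taken actions
loss = nn.functional.mse_loss(q_sa, target)

optimizer.zero_grad()
loss.backward()
optimizer.step()
print("TD loss:", float(loss))
```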
Procedia PDF Downloads 32
2749 An Application of Sinc Function to Approximate Quadrature Integrals in Generalized Linear Mixed Models
Authors: Altaf H. Khan, Frank Stenger, Mohammed A. Hussein, Reaz A. Chaudhuri, Sameera Asif
Abstract:
This paper discusses a novel approach to approximating the quadrature integrals that arise in the estimation of likelihood parameters for generalized linear mixed models (GLMM). Bayesian methodology also requires the computation of multidimensional integrals with respect to posterior distributions, and these computations are not only tedious and cumbersome but in some situations impossible to solve because of singularities, irregular domains, etc. An attempt has been made in this work to apply Sinc function based quadrature rules to approximate such intractable integrals, as there are several advantages of using Sinc based methods; for example, the order of convergence is exponential, they work very well in the neighborhood of singularities, they are in general quite stable, and they provide highly accurate, double-precision estimates. To our knowledge, the Sinc function based approach is utilized for the first time in the statistical domain, and its viability and future scope have been discussed for application to the estimation of parameters for GLMM models as well as some other statistical areas. Keywords: generalized linear mixed model, likelihood parameters, quadrature, Sinc function
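A tiny sketch of the Sinc (trapezoidal-on-sinc-points) quadrature idea follows: after a variable transformation that maps the finite interval to the real line, the integral is approximated by an equally spaced sum, and the error decays rapidly in the number of nodes even when the integrand has an endpoint singularity. The test integral ∫₀¹ x^(−1/2) dx = 2, the transformation, and the step size are chosen only for illustration and are not taken from the paper.

```python
# Sketch of Sinc quadrature: integrate f(x) = 1/sqrt(x) on (0, 1), exact value 2.
# The transform x = 1 / (1 + exp(-t)) maps (0, 1) to the real line; the integral becomes
# an equally spaced sum h * sum f(x(kh)) * x'(kh) over the sinc points t = kh.
import numpy as np

def f(x):
    return 1.0 / np.sqrt(x)

h, N = 0.1, 400                       # step size and number of nodes on each side
k = np.arange(-N, N + 1)
t = k * h
x = 1.0 / (1.0 + np.exp(-t))          # transformed nodes in (0, 1)
dxdt = x * (1.0 - x)                  # derivative of the transformation

approx = h * np.sum(f(x) * dxdt)
print(approx, "error:", abs(approx - 2.0))   # error is tiny despite the x = 0 singularity
```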
Procedia PDF Downloads 395
2748 Estimation of Ribb Dam Catchment Sediment Yield and Reservoir Effective Life Using Soil and Water Assessment Tool Model and Empirical Methods
Authors: Getalem E. Haylia
Abstract:
The Ribb dam is one of the irrigation projects in the Upper Blue Nile basin, Ethiopia, to irrigate the Fogera plain. Reservoir sedimentation is a major problem because it reduces the useful reservoir capacity by the accumulation of sediments coming from the watersheds. Estimates of sediment yield are needed for studies of reservoir sedimentation and planning of soil and water conservation measures. The objective of this study was to simulate the Ribb dam catchment sediment yield using SWAT model and to estimate Ribb reservoir effective life according to trap efficiency methods. The Ribb dam catchment is found in North Western part of Ethiopia highlands, and it belongs to the upper Blue Nile and Lake Tana basins. Soil and Water Assessment Tool (SWAT) was selected to simulate flow and sediment yield in the Ribb dam catchment. The model sensitivity, calibration, and validation analysis at Ambo Bahir site were performed with Sequential Uncertainty Fitting (SUFI-2). The flow data at this site was obtained by transforming the Lower Ribb gauge station (2002-2013) flow data using Area Ratio Method. The sediment load was derived based on the sediment concentration yield curve of Ambo site. Stream flow results showed that the Nash-Sutcliffe efficiency coefficient (NSE) was 0.81 and the coefficient of determination (R²) was 0.86 in calibration period (2004-2010) and, 0.74 and 0.77 in validation period (2011-2013), respectively. Using the same periods, the NS and R² for the sediment load calibration were 0.85 and 0.79 and, for the validation, it became 0.83 and 0.78, respectively. The simulated average daily flow rate and sediment yield generated from Ribb dam watershed were 3.38 m³/s and 1772.96 tons/km²/yr, respectively. The effective life of Ribb reservoir was estimated using the developed empirical methods of the Brune (1953), Churchill (1948) and Brown (1958) methods and found to be 30, 38 and 29 years respectively. To conclude, massive sediment comes from the steep slope agricultural areas, and approximately 98-100% of this incoming annual sediment loads have been trapped by the Ribb reservoir. In Ribb catchment, as well as reservoir systematic and thorough consideration of technical, social, environmental, and catchment managements and practices should be made to lengthen the useful life of Ribb reservoir.Keywords: catchment, reservoir effective life, reservoir sedimentation, Ribb, sediment yield, SWAT model
Procedia PDF Downloads 187
2747 Deep Learning Based Fall Detection Using Simplified Human Posture
Authors: Kripesh Adhikari, Hamid Bouchachia, Hammadi Nait-Charif
Abstract:
Falls are one of the major causes of injury and death among elderly people aged 65 and above. A support system to identify such abnormal activities has become extremely important with the increase in the ageing population. Pose estimation is a challenging task, and it is even more challenging when the estimation is performed on the difficult poses that may occur during a fall. The location of the body provides a clue to where the person is at the time of the fall. This paper presents a vision-based tracking strategy where the available joints are grouped into three different feature points depending upon the section of the body in which they are located. The three feature points, derived from different joint combinations, represent the upper or head region, the mid or torso region, and the lower or leg region. Tracking is always challenging when motion is involved. Hence, the idea is to locate these regions of the body in every frame and use them as the tracking strategy. Grouping these joints can be beneficial for achieving a stable region for tracking. The location of the body parts provides crucial information to distinguish normal activities from falls. Keywords: fall detection, machine learning, deep learning, pose estimation, tracking
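The grouping of skeletal joints into three regional feature points can be sketched as simple centroids of joint subsets. The joint names and coordinates below are made up for illustration; the paper's exact grouping and the downstream deep-learning classifier are not shown.

```python
# Sketch: collapse per-frame 2D joint estimates into three regional feature points
# (head/upper, torso/mid, legs/lower) used as a simplified posture representation.
import numpy as np

# Hypothetical joint coordinates for one frame (pixel units).
joints = {
    "nose": (320, 90), "neck": (320, 130),
    "l_shoulder": (290, 140), "r_shoulder": (350, 140),
    "l_hip": (300, 260), "r_hip": (340, 260),
    "l_knee": (298, 340), "r_knee": (342, 340),
    "l_ankle": (296, 420), "r_ankle": (344, 420),
}

regions = {
    "upper": ["nose", "neck", "l_shoulder", "r_shoulder"],
    "mid":   ["l_shoulder", "r_shoulder", "l_hip", "r_hip"],
    "lower": ["l_hip", "r_hip", "l_knee", "r_knee", "l_ankle", "r_ankle"],
}

def region_points(joints, regions):
    """Centroid of the available joints in each body region."""
    feats = {}
    for name, members in regions.items():
        pts = np.array([joints[j] for j in members if j in joints])
        feats[name] = pts.mean(axis=0)
    return feats

features = region_points(joints, regions)
for name, (u, v) in features.items():
    print(f"{name}: ({u:.1f}, {v:.1f})")
```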
Procedia PDF Downloads 189
2746 Technological Innovations and African Export Performances
Authors: Lukman Oyelami
Abstract:
Studies have identified trade as a veritable tool for inclusive economic growth and poverty reduction in developing countries. However, contrary to the overwhelming evidence of the Asian tigers as a success story of beneficial trade, many African countries still experience poverty unabated despite active engagement in trade. Consequently, this study seeks to investigate the contributory effect of technological innovation on the total export performance, and specifically the manufacturing exports, of African countries. This is with a view to exploring manufacturing exports as a viable option for diversification. To carry out the empirical investigation, the system Generalized Method of Moments (sys-GMM) estimation technique was adopted based on the econometric realities inherent in the data utilized. However, the static panel estimation technique of the Fixed Effects (FE) model was utilized for the baseline analysis and robustness check. The conclusion from this study is that innovation generally impacts the export performance of African countries positively; however, manufacturing exports show more sensitivity to innovation than total exports. This provides a clear pathway to export diversification for the many African countries that run a resource-based economy. Keywords: innovation, export, GMM, Africa
Procedia PDF Downloads 220
2745 Missing Link Data Estimation with Recurrent Neural Network: An Application Using Speed Data of Daegu Metropolitan Area
Authors: JaeHwan Yang, Da-Woon Jeong, Seung-Young Kho, Dong-Kyu Kim
Abstract:
In intelligent transportation systems (ITS), information on link characteristics is an essential factor for planning and operation. But in practical cases, not every link has sensors installed on it. A link that does not have data on it is called a “missing link”. The purpose of this study is to impute the data of these missing links. To obtain these data, this study applies a machine learning method. With the machine learning process, especially the deep learning process, missing link data can be estimated from the data of the present links. For the deep learning process, this study uses a Recurrent Neural Network to handle the time-series data of the road. As input data, Dedicated Short-Range Communications (DSRC) data of Dalgubul-daero in the Daegu Metropolitan Area were fed into the learning process. The neural network structure takes 17 links with present data as input, uses 2 hidden layers, and outputs the data of 1 missing link. As a result, the forecasted data of the target link show about 94% accuracy compared with the actual data. Keywords: data estimation, link data, machine learning, road network
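A minimal sketch of the kind of recurrent network described above, mapping a sequence of speeds on 17 observed links to the speed of one missing link, is shown below in PyTorch. The layer sizes, sequence length, and synthetic data are assumptions; the paper's exact architecture and DSRC inputs are only approximated.

```python
# Sketch: an RNN (LSTM) that maps time-series speeds of 17 observed links
# to the speed of 1 missing link. Data here are synthetic stand-ins for DSRC speeds.
import torch
import torch.nn as nn

class MissingLinkRNN(nn.Module):
    def __init__(self, n_links_in=17, hidden=32):
        super().__init__()
        self.rnn = nn.LSTM(input_size=n_links_in, hidden_size=hidden,
                           num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, 1)         # speed of the single missing link

    def forward(self, x):                        # x: (batch, time, 17)
        out, _ = self.rnn(x)
        return self.head(out[:, -1, :]).squeeze(-1)

model = MissingLinkRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.rand(256, 12, 17) * 80.0               # 12 time steps of speeds (km/h) on 17 links
y = x[:, -1, :3].mean(dim=1)                     # synthetic "missing link" target

for _ in range(50):                              # a few training steps
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
print("training MSE:", float(loss))
```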
Procedia PDF Downloads 510
2744 Formulation of Extended-Release Gliclazide Tablet Using a Mathematical Model for Estimation of Hypromellose
Authors: Farzad Khajavi, Farzaneh Jalilfar, Faranak Jafari, Leila Shokrani
Abstract:
Formulation of gliclazide as an extended-release tablet in 30 and 60 mg dosage forms was performed using hypromellose (HPMC K4M) as a retarding agent. Drug-release profiles were investigated in comparison with the reference Diamicron MR 30 and 60 mg tablets. The effects of powder particle size, the amount of hypromellose in the formulation, tablet hardness, and halving the tablets on the drug release profile were investigated. A mathematical model which describes hypromellose behavior at the initial times of drug release was proposed for the estimation of the hypromellose content in the modified-release gliclazide 60 mg tablet. This model is based on the erosion of hypromellose in the dissolution media. The model is applicable to describing the release profiles of insoluble drugs. Therefore, by using the amount of drug dissolved at the initial times of dissolution together with the model, the amount of hypromellose in the formulation can be predicted. The model was used to predict the HPMC K4M content in modified-release gliclazide 30 mg and extended-release quetiapine 200 mg tablets. Keywords: gliclazide, hypromellose, drug release, modified-release tablet, mathematical model
Procedia PDF Downloads 222
2743 Downtime Estimation of Building Structures Using Fuzzy Logic
Authors: M. De Iuliis, O. Kammouh, G. P. Cimellaro, S. Tesfamariam
Abstract:
Community Resilience has gained a significant attention due to the recent unexpected natural and man-made disasters. Resilience is the process of maintaining livable conditions in the event of interruptions in normally available services. Estimating the resilience of systems, ranging from individuals to communities, is a formidable task due to the complexity involved in the process. The most challenging parameter involved in the resilience assessment is the 'downtime'. Downtime is the time needed for a system to recover its services following a disaster event. Estimating the exact downtime of a system requires a lot of inputs and resources that are not always obtainable. The uncertainties in the downtime estimation are usually handled using probabilistic methods, which necessitates acquiring large historical data. The estimation process also involves ignorance, imprecision, vagueness, and subjective judgment. In this paper, a fuzzy-based approach to estimate the downtime of building structures following earthquake events is proposed. Fuzzy logic can integrate descriptive (linguistic) knowledge and numerical data into the fuzzy system. This ability allows the use of walk down surveys, which collect data in a linguistic or a numerical form. The use of fuzzy logic permits a fast and economical estimation of parameters that involve uncertainties. The first step of the method is to determine the building’s vulnerability. A rapid visual screening is designed to acquire information about the analyzed building (e.g. year of construction, structural system, site seismicity, etc.). Then, a fuzzy logic is implemented using a hierarchical scheme to determine the building damageability, which is the main ingredient to estimate the downtime. Generally, the downtime can be divided into three main components: downtime due to the actual damage (DT1); downtime caused by rational and irrational delays (DT2); and downtime due to utilities disruption (DT3). In this work, DT1 is computed by relating the building damageability results obtained from the visual screening to some already-defined components repair times available in the literature. DT2 and DT3 are estimated using the REDITM Guidelines. The Downtime of the building is finally obtained by combining the three components. The proposed method also allows identifying the downtime corresponding to each of the three recovery states: re-occupancy; functional recovery; and full recovery. Future work is aimed at improving the current methodology to pass from the downtime to the resilience of buildings. This will provide a simple tool that can be used by the authorities for decision making.Keywords: resilience, restoration, downtime, community resilience, fuzzy logic, recovery, damage, built environment
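The fuzzy-logic step described above can be illustrated with a hand-rolled sketch: triangular membership functions turn a numeric damageability score into linguistic grades, a small rule base maps those grades to repair times, and a weighted (centroid-like) defuzzification returns a crisp estimate of the damage-repair component DT1. Membership breakpoints, rules, and representative repair times are all illustrative assumptions, not values from the guidelines cited in the abstract.

```python
# Sketch: a tiny Mamdani-style fuzzy estimate of the damage-repair downtime DT1.
# Triangular memberships -> rule strengths -> weighted-average defuzzification.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

damageability = 0.55                      # crisp input in [0, 1] from the visual screening

# Linguistic grades of damageability (assumed breakpoints).
mu = {
    "slight":   tri(damageability, 0.0, 0.0, 0.4),
    "moderate": tri(damageability, 0.2, 0.5, 0.8),
    "severe":   tri(damageability, 0.6, 1.0, 1.0),
}

# Rule base: each damage grade implies a representative repair time (days, assumed).
repair_days = {"slight": 15.0, "moderate": 90.0, "severe": 360.0}

weights = np.array([mu[g] for g in repair_days])
values = np.array([repair_days[g] for g in repair_days])
dt1 = float((weights * values).sum() / (weights.sum() + 1e-9))   # defuzzified DT1
print({g: round(float(m), 2) for g, m in mu.items()})
print(f"estimated DT1 ≈ {dt1:.0f} days")
```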
Procedia PDF Downloads 160
2742 Numerical Simulation of the Fractional Flow Reserve in the Coronary Artery with Serial Stenoses of Varying Configuration
Authors: Mariia Timofeeva, Andrew Ooi, Eric K. W. Poon, Peter Barlis
Abstract:
Atherosclerotic plaque build-up, commonly known as stenosis, limits blood flow and hence oxygen and nutrient supplies to the heart muscle. Thus, assessment of its severity is of great interest to health professionals. Numerical simulation of the fractional flow reserve (FFR) has proved to be well correlated with invasively measured FFR used for physiological assessment of the severity of coronary stenosis in arteries. Atherosclerosis may impact the diseased artery in several locations causing serial stenoses, which is a complicated subset of coronary artery disease that requires careful treatment planning. However, hemodynamic of the serial sequential stenoses in coronary arteries has not been extensively studied. The hemodynamics of the serial stenoses is complex because the stenoses in the series interact and affect the flow through each other. To address this, serial stenoses in a 3.4 mm left anterior descending (LAD) artery are examined in this study. Two diameter stenoses (DS) are considered, 30 and 50 percent of the reference diameter. Serial stenoses configurations are divided into three groups based on the order of the stenoses in the series, spacing between them, and deviation of the stenoses’ symmetry (eccentricity). A patient-specific pulsatile waveform is used in the simulations. Blood flow within the stenotic artery is assumed to be laminar, Newtonian, and incompressible. Results for the FFR are reported. Based on the simulation results, it can be deduced that the larger drop in pressure (smaller value of the FFR) is expected when the percentage of the second stenosis in the series is bigger. Varying the distance between the stenoses affects the location of the maximum drop in the pressure, while the minimal FFR in the artery remains unchanged. Eccentric serial stenoses are characterized by a noticeably larger decrease in pressure through the stenoses and by the development of the chaotic flow downstream of the stenoses. The largest drop in the pressure (about 4% difference compared to the axisymmetric case) is obtained for the serial stenoses, where both the stenoses are highly eccentric with the centerlines deflected to the different sides of the LAD. In conclusion, varying configuration of the sequential serial stenoses results in a different distribution of FFR through the LAD. Results presented in this study provide insight into the clinical assessment of the severity of the coronary serial stenoses, which is proved to depend on the relative position of the stenoses and the deviation of the stenoses’ symmetry.Keywords: computational fluid dynamics, coronary artery, fractional flow reserve, serial stenoses
Procedia PDF Downloads 182
2741 Artificial Neural Network and Satellite Derived Chlorophyll Indices for Estimation of Wheat Chlorophyll Content under Rainfed Condition
Authors: Muhammad Naveed Tahir, Wang Yingkuan, Huang Wenjiang, Raheel Osman
Abstract:
Numerous models used in prediction and decision-making process but most of them are linear in natural environment, and linear models reach their limitations with non-linearity in data. Therefore accurate estimation is difficult. Artificial Neural Networks (ANN) found extensive acceptance to address the modeling of the complex real world for the non-linear environment. ANN’s have more general and flexible functional forms than traditional statistical methods can effectively deal with. The link between information technology and agriculture will become more firm in the near future. Monitoring crop biophysical properties non-destructively can provide a rapid and accurate understanding of its response to various environmental influences. Crop chlorophyll content is an important indicator of crop health and therefore the estimation of crop yield. In recent years, remote sensing has been accepted as a robust tool for site-specific management by detecting crop parameters at both local and large scales. The present research combined the ANN model with satellite-derived chlorophyll indices from LANDSAT 8 imagery for predicting real-time wheat chlorophyll estimation. The cloud-free scenes of LANDSAT 8 were acquired (Feb-March 2016-17) at the same time when ground-truthing campaign was performed for chlorophyll estimation by using SPAD-502. Different vegetation indices were derived from LANDSAT 8 imagery using ERADAS Imagine (v.2014) software for chlorophyll determination. The vegetation indices were including Normalized Difference Vegetation Index (NDVI), Green Normalized Difference Vegetation Index (GNDVI), Chlorophyll Absorbed Ratio Index (CARI), Modified Chlorophyll Absorbed Ratio Index (MCARI) and Transformed Chlorophyll Absorbed Ratio index (TCARI). For ANN modeling, MATLAB and SPSS (ANN) tools were used. Multilayer Perceptron (MLP) in MATLAB provided very satisfactory results. For training purpose of MLP 61.7% of the data, for validation purpose 28.3% of data and rest 10% of data were used to evaluate and validate the ANN model results. For error evaluation, sum of squares error and relative error were used. ANN model summery showed that sum of squares error of 10.786, the average overall relative error was .099. The MCARI and NDVI were revealed to be more sensitive indices for assessing wheat chlorophyll content with the highest coefficient of determination R²=0.93 and 0.90 respectively. The results suggested that use of high spatial resolution satellite imagery for the retrieval of crop chlorophyll content by using ANN model provides accurate, reliable assessment of crop health status at a larger scale which can help in managing crop nutrition requirement in real time.Keywords: ANN, chlorophyll content, chlorophyll indices, satellite images, wheat
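The index-then-regress workflow described above can be sketched as follows: NDVI and GNDVI are computed from red, green, and near-infrared reflectances, and a small multilayer perceptron regresses SPAD-style chlorophyll readings on the indices. The band values, SPAD targets, and network size are synthetic placeholders, not the Landsat 8 data or the MATLAB/SPSS models used in the study.

```python
# Sketch: satellite-derived vegetation indices feeding an ANN chlorophyll regression.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n = 300
green = rng.uniform(0.04, 0.12, n)       # hypothetical surface reflectances
red = rng.uniform(0.03, 0.15, n)
nir = rng.uniform(0.2, 0.5, n)

ndvi = (nir - red) / (nir + red)
gndvi = (nir - green) / (nir + green)
X = np.column_stack([ndvi, gndvi])

# Synthetic SPAD-like chlorophyll readings loosely tied to the indices.
spad = 20 + 35 * ndvi + 10 * gndvi + rng.normal(0, 1.5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, spad, test_size=0.3, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=3000, random_state=0)
ann.fit(X_tr, y_tr)
print("R^2 on held-out samples:", round(ann.score(X_te, y_te), 3))
```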
Procedia PDF Downloads 1462740 Analysis of the Predictive Performance of Value at Risk Estimations in Times of Financial Crisis
Authors: Alexander Marx
Abstract:
Measuring and mitigating market risk is essential for the stability of enterprises, especially for major banking corporations and investment banks. For these risk measurement and mitigation processes, Value at Risk (VaR) is the risk metric most commonly used by practitioners. In past years, however, the VaR has shown significant weaknesses in its predictive performance in times of financial market crisis. To address this issue, the purpose of this study is to investigate VaR estimation models and their predictive performance by applying a series of backtesting methods to the stock market indices of the G7 countries (Canada, France, Germany, Italy, Japan, the UK, and the US) and of Europe. The study employs parametric, non-parametric, and semi-parametric VaR estimation models and is conducted over three periods that cover the most recent financial market crises: the overall period (2006–2022), the global financial crisis (2008–2009), and the COVID-19 period (2020–2022). Since the regulatory authorities have introduced and mandated the Conditional Value at Risk (Expected Shortfall) as an additional regulatory risk management metric, the study also analyzes and compares both risk metrics with respect to their predictive performance.Keywords: value at risk, financial market risk, banking, quantitative risk management
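As a minimal sketch of one parametric model and one backtest, the code below computes a rolling-window normal (variance-covariance) one-day 99% VaR and applies the Kupiec proportion-of-failures test. The simulated fat-tailed returns, window length, and confidence level are assumptions; the study's full set of parametric, non-parametric, and semi-parametric models and backtests is not reproduced here.

    # Rolling normal VaR plus Kupiec backtest on simulated index returns.
    import numpy as np
    from scipy import stats

    def parametric_var(returns, alpha=0.99, window=250):
        """One-day-ahead VaR (reported as a positive loss) from a rolling normal fit."""
        var = np.full(len(returns), np.nan)
        for t in range(window, len(returns)):
            mu = returns[t - window:t].mean()
            sigma = returns[t - window:t].std(ddof=1)
            var[t] = -(mu + sigma * stats.norm.ppf(1 - alpha))
        return var

    def kupiec_pof(returns, var, alpha=0.99):
        """Kupiec proportion-of-failures LR test (assumes at least one exception)."""
        mask = ~np.isnan(var)
        exceptions = -returns[mask] > var[mask]
        n, x, p = int(mask.sum()), int(exceptions.sum()), 1 - alpha
        phat = x / n
        lr = -2 * ((n - x) * np.log(1 - p) + x * np.log(p)
                   - (n - x) * np.log(1 - phat) - x * np.log(phat))
        return x, n, 1 - stats.chi2.cdf(lr, df=1)

    rng = np.random.default_rng(42)
    rets = rng.standard_t(df=5, size=2500) * 0.01   # simulated fat-tailed daily returns
    var99 = parametric_var(rets)
    print(kupiec_pof(rets, var99))                  # (exceptions, observations, p-value)

A low p-value indicates that the exception rate is inconsistent with the nominal 1% level, which is the kind of predictive failure the study examines in crisis periods.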
Procedia PDF Downloads 942739 A Case Study Approach on Co-Constructing the Idea of 'Safety' with Children
Authors: Beng Zhen Yeow
Abstract:
In most work that involves children, the voice of the children is often not heard. This is ironic, since many of the discussions concern their welfare and safety. It might seem natural for professionals to hear from children what they wish for instead of deciding what is best for them; unfortunately, this is more the exception than the norm, and in many instances children are merely 'subjects' in conversations about safety instead of active participants in the construction or creation of safety in the family. There might be many reasons why this does not happen in our work. Firstly, professionals have learnt to 'socialise' into their professional roles and in the process have become 'un-childlike'. Secondly, there is a lack of professional training in how to talk with children. Finally, there might also be a lack of concrete tools and techniques developed to facilitate the process. In this paper, the case study method is used to show how the idea of safety can be concretised and discussed with children and their family members, making them active participants and co-creators of their own safety. Specific skills and techniques are highlighted through the case study. In this case, there were improvements in outcomes, such as no repeated offence or abuse. In addition, after six months of intervention the children were able to advocate for their own safety, and the family members could state explicitly what they can do to improve it. The professionals in the safety network reported significant improvements. Moreover, the abused child who had been removed because of child protection concerns verbalised her observations of the change in her mother's parenting abilities and requested that home leave begin, reflecting her ownership of the safety planning and her confidence in co-creating safety for her siblings and herself together with the professionals in the safety network. Children becoming active participants in the co-creation of safety not only gives them a 'voice' but also gives them greater confidence to protect themselves at home and in other contexts outside the home.Keywords: partnering for safety, collaborative social work, family and systemic psychotherapy, child protection
Procedia PDF Downloads 1202738 Repeatable Scalable Business Models: Can Innovation Drive an Entrepreneurs Un-Validated Business Model?
Authors: Paul Ojeaga
Abstract:
Can the level of innovation drive un-validated business models across regions? To what extent does industrial sector attractiveness drive firms' success across regions at the time of start-up? This study examines the role of innovation in start-up success in six regions of the world (namely Sub-Saharan Africa, the Middle East and North Africa, Latin America, South East Asia and the Pacific, the European Union, and the United States representing North America) using macroeconomic variables. While there have been studies using firm-level data, results from such studies are not suitable for national policy decisions; the need to inform regional innovation policy also calls for an answer, providing further room for this study. Results using dynamic panel estimation show that innovation matters in the early infancy stage of the new-business life cycle. The results are robust even after controlling for time fixed effects, and the study reports robust variance-covariance (standard error) estimates.Keywords: industrial economics, un-validated business models, scalable models, entrepreneurship
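For illustration only, the sketch below runs a simplified dynamic panel regression: a lagged dependent variable with region and time fixed effects and clustered (robust) standard errors via the linearmodels package. The region and variable names and the synthetic data are hypothetical, and this within-estimator stand-in does not reproduce the study's actual dynamic-panel specification.

    # Simplified dynamic panel sketch with fixed effects and clustered SEs.
    # Hypothetical variables and synthetic data; not the study's estimator or dataset.
    import numpy as np
    import pandas as pd
    from linearmodels.panel import PanelOLS

    rng = np.random.default_rng(7)
    regions = [f"region_{i}" for i in range(6)]
    years = list(range(2000, 2016))
    idx = pd.MultiIndex.from_product([regions, years], names=["region", "year"])
    df = pd.DataFrame({
        "startup_success": rng.normal(size=len(idx)),
        "innovation": rng.normal(size=len(idx)),
        "sector_attractiveness": rng.normal(size=len(idx)),
    }, index=idx)
    df["lag_success"] = df.groupby(level="region")["startup_success"].shift(1)
    df = df.dropna()

    mod = PanelOLS(df["startup_success"],
                   df[["lag_success", "innovation", "sector_attractiveness"]],
                   entity_effects=True, time_effects=True)
    res = mod.fit(cov_type="clustered", cluster_entity=True)   # robust (clustered) SEs
    print(res.params)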
Procedia PDF Downloads 281