Search results for: generalized Chebyshev polynomial
98 Occasional Word-Formation in Postfeminist Fiction: Cognitive Approach
Authors: Kateryna Nykytchenko
Abstract:
Modern fiction and non-fiction writers commonly use their own lexical and stylistic devices to capture readers’ attention and convey particular thoughts and feelings to them. Among such devices are individual authorial formations: occasionalisms, or nonce words. To a significant extent, the host of examples of new words occurs in the chick lit genre, which has experienced exponential growth in recent years. Chick lit is a new-millennial postfeminist fiction which focuses primarily on twenty- to thirtysomething middle-class women. It brings into focus the image of 'a new woman' of the 21st century who is always fallible and funny. This paper aims to investigate different types of occasional word-formation which reflect cognitive mechanisms of conveying women’s perception of the world. The chick lit novels of Irish author Marian Keyes present a genuinely innovative mixture of forms, both literary and nonliterary, displayed in different types of occasional word-formation processes such as blending, compounding, creative respelling, etc. Crossing existing mental and linguistic boundaries, and adapting to new and overlapping linguistic spaces, the chick lit author creates new words which demonstrate the development and progress of language and the relationship between language, thought, and new reality, ultimately resulting in hybrid word-formation (e.g., affixation or pseudoborrowing). Moreover, this article attempts to present the main characteristics of the chick lit genre with the help of Marian Keyes’s novels and their influence on occasionalisms. There has been a lack of research concerning the cognitive nature of occasionalisms. The current paper intends to account for occasional word-formation as a set of interconnected cognitive mechanisms, operations, and procedures that meld together to create a new word.
The results of the generalized analysis solidify the argument that the kind of new knowledge an occasionalism manifests is inextricably linked with the cognitive procedure underlying it, which results in a corresponding type of word-formation process. In addition, the findings of the study reveal that the necessity of creating occasionalisms in postmodern fiction arises from the need to write in a new way, keeping up with a perpetually developing world, and thus with the evolution of the speaker herself and her perception of the world.
Keywords: Chick Lit, occasionalism, occasional word-formation, cognitive linguistics
97 Rigorous Photogrammetric Push-Broom Sensor Modeling for Lunar and Planetary Image Processing
Authors: Ahmed Elaksher, Islam Omar
Abstract:
Accurate geometric relation algorithms are imperative in Earth and planetary satellite and aerial image processing, particularly for high-resolution images used for topographic mapping. Most of these satellites carry push-broom sensors: optical scanners equipped with linear arrays of CCDs. Such sensors have been deployed on most Earth observation satellites (EOSs). In addition, the Lunar Reconnaissance Orbiter Camera (LROC) is equipped with two push-broom Narrow Angle Cameras (NACs) that provide 0.5 meter-scale panchromatic images over a 5 km swath of the Moon. The HiRISE carried by the MRO and the HRSC carried by MEX are examples of push-broom sensors that produce images of the surface of Mars. Sensor models developed in photogrammetry relate image-space coordinates in two or more images with the 3D coordinates of ground features. Rigorous sensor models use the actual interior and exterior orientation parameters of the camera, unlike approximate models. In this research, we generate a generic push-broom sensor model to process imagery acquired through linear array cameras and investigate its performance, advantages, and disadvantages in generating topographic models for the Earth, Mars, and the Moon. We also compare and contrast the utilization, effectiveness, and applicability of available photogrammetric techniques and softcopies with the developed model. We start by defining an image reference coordinate system to unify image coordinates from all three arrays. The transformation from an image coordinate system to a reference coordinate system involves a translation and three rotations. For any image point within the linear array, its image reference coordinates, the coordinates of the exposure center of the array in the ground coordinate system at the imaging epoch (t), and the corresponding ground point coordinates are related through the collinearity condition, which states that these three points must lie on the same line.
The rotation angles for each CCD array at the epoch t are defined and included in the transformation model. The exterior orientation parameters of an image line, i.e., the coordinates of the exposure station and the rotation angles, are computed by a polynomial interpolation function in time (t). The parameter t is the time at a certain epoch from a certain orbit position. Depending on the types of observations, coordinates and parameters may be treated as knowns or unknowns in various situations. The unknown coefficients are determined in a bundle adjustment. The orientation process starts by extracting the sensor position, orientation, and raw images from the PDS. The parameters of each image line are then estimated and imported into the push-broom sensor model. We also define tie points between image pairs to aid the bundle adjustment model, determine the refined camera parameters, and generate highly accurate topographic maps. The model was tested on different satellite images such as IKONOS, QuickBird, WorldView-2, and HiRISE. It was found that the accuracy of our model is comparable to that of commercial and open-source software, the computational efficiency of the developed model is high, and the model can be used in different environments with various sensors, although the implementation process is more cost- and effort-consuming.
Keywords: photogrammetry, push-broom sensors, IKONOS, HiRISE, collinearity condition
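As a rough illustration of the model described above (not the authors' implementation), the sketch below interpolates exterior orientation parameters polynomially in time and projects a ground point through the collinearity condition; the nadir-pointing test values and focal length are hypothetical.

```python
import math

def rotation_matrix(omega, phi, kappa):
    """Photogrammetric rotation matrix (omega-phi-kappa convention, radians)."""
    co, so = math.cos(omega), math.sin(omega)
    cp, sp = math.cos(phi), math.sin(phi)
    ck, sk = math.cos(kappa), math.sin(kappa)
    return [
        [cp * ck, co * sk + so * sp * ck, so * sk - co * sp * ck],
        [-cp * sk, co * ck - so * sp * sk, so * ck + co * sp * sk],
        [sp, -so * cp, co * cp],
    ]

def eo_at_time(coeffs, t):
    """Evaluate one polynomial trajectory model per EO parameter at epoch t.
    coeffs is a list of coefficient lists [c0, c1, ...] (lowest order first)."""
    return [sum(c * t ** i for i, c in enumerate(poly)) for poly in coeffs]

def collinearity(ground, eo, f):
    """Image coordinates of a ground point for an image line with exterior
    orientation eo = (Xs, Ys, Zs, omega, phi, kappa) and focal length f."""
    Xs, Ys, Zs, omega, phi, kappa = eo
    R = rotation_matrix(omega, phi, kappa)
    dX, dY, dZ = ground[0] - Xs, ground[1] - Ys, ground[2] - Zs
    den = R[2][0] * dX + R[2][1] * dY + R[2][2] * dZ
    x = -f * (R[0][0] * dX + R[0][1] * dY + R[0][2] * dZ) / den
    y = -f * (R[1][0] * dX + R[1][1] * dY + R[1][2] * dZ) / den
    return x, y
```

In a full adjustment, the exterior orientation passed to `collinearity` would come from `eo_at_time`, and the polynomial coefficients would be the unknowns solved in the bundle adjustment.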
96 The Impact of Research Anxiety on Research Orientation and Interest in Research Courses in Social Work Students
Authors: Daniel Gredig, Annabelle Bartelsen-Raemy
Abstract:
Social work professionals should underpin their decisions with scientific knowledge and research findings. Hence, research is used as a framework for social work education, and research courses have become a taken-for-granted component of study programmes. However, it has been acknowledged that social work students hold negative beliefs and attitudes towards research and frequently fear research courses. Against this background, the present study aimed to establish the relationship between students’ fear of research courses, their research orientation, and their interest in research courses. We hypothesized that fear predicts interest in research courses. Further, we hypothesized that research orientation (perceived importance of research, attributed usefulness of research for social work practice, and perceived unbiased nature of research) was a mediating variable. In 2014, 2015, and 2016, we invited students enrolled in a bachelor programme in social work in Switzerland to participate in the study during their introduction day to the school, which took place two weeks before their programme started. For data collection, we used an anonymous self-administered online questionnaire filled in on site. Data were analysed using descriptive statistics and structural equation modelling (generalized least squares estimation). The sample included 708 students enrolled in a social work bachelor programme: 501 female, 184 male, and 5 intersex, aged 19-56, with various entitlements to study, registered for three different programme modes (full-time programme; part-time study with field placements in blocks; part-time study involving concurrent field placement). Analysis showed that interest in research courses was predicted by fear of research courses (β = -0.29) as well as by the perceived importance (β = 0.27), attributed usefulness (β = 0.15), and perceived unbiased nature of research (β = 0.08).
These variables were predicted, in turn, by fear of research courses (β = -0.10, β = -0.23, and β = -0.13). Moreover, interest was predicted by age (β = 0.13). Fear of research courses was predicted by age (β = -0.10), female gender (β = 0.28), and having completed a general baccalaureate (β = -0.09) (GFI = 0.997, AGFI = 0.988, SRMR = 0.016, CMIN/df = 0.946, adj. R² = 0.312). The findings evidence a direct as well as a mediated impact of fear on interest in research courses among entering first-year students in a social work bachelor programme. This highlights one of the challenges social work education within a research framework has to meet. There seem to have been considerable efforts to address the research orientation of students. However, these findings point out that research anxiety, in the form of fear of research courses, should additionally be considered and addressed by teachers when conceptualizing research courses.
Keywords: research anxiety, research courses, research interest, research orientation, social work students, teaching
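The standardized path coefficients reported above can be combined with the usual path-analysis product rule (indirect effect along a route = product of its coefficients; total effect = direct + sum of indirect effects). The snippet below is an illustrative back-of-the-envelope calculation based on the reported betas, not part of the original analysis.

```python
# Standardized path coefficients reported in the abstract.
direct = -0.29                       # fear -> interest
paths = {
    "importance": (-0.10, 0.27),     # fear -> importance -> interest
    "usefulness": (-0.23, 0.15),     # fear -> usefulness -> interest
    "unbiased":   (-0.13, 0.08),     # fear -> unbiased nature -> interest
}

# Indirect effect of fear on interest through each mediator is the product
# of the two coefficients on that path; the total effect adds the direct path.
indirect = sum(a * b for a, b in paths.values())
total = direct + indirect
```

With the reported values, the mediated portion (about -0.07) adds roughly a quarter on top of the direct effect of -0.29, consistent with the claim of both a direct and a mediated impact of fear.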
95 The Influence of Cognitive Load in the Acquisition of Words through Sentence or Essay Writing
Authors: Breno Barrreto Silva, Agnieszka Otwinowska, Katarzyna Kutylowska
Abstract:
Research comparing lexical learning following the writing of sentences and of longer texts with keywords is limited and contradictory. One possibility is that the recursivity of writing may enhance processing and increase lexical learning; another is that the higher cognitive load of complex-text writing (e.g., essays), at least when timed, may hinder the learning of words. In our study, we selected two sets of 10 academic keywords matched for part of speech, length (number of characters), frequency (SUBTLEXus), and concreteness, and we asked 90 L1-Polish advanced-level English majors to use the keywords when writing sentences, timed essays (60 minutes), or untimed essays. First, all participants wrote a timed Control essay (60 minutes) without keywords. Then different groups produced Timed essays (60 minutes; n=33), Untimed essays (n=24), or Sentences (n=33) using the two sets of glossed keywords (counterbalanced). The comparability of the participants in the three groups was ensured by matching them for proficiency in English (LexTALE) and for several measures derived from the Control essay: VocD (assessing productive lexical diversity), normed errors (assessing productive accuracy), words per minute (assessing productive written fluency), and holistic scores (assessing overall quality of production). We measured lexical learning (depth and breadth) via an adapted Vocabulary Knowledge Scale (VKS) and a free association test. Cognitive load was measured in the three essays (Control, Timed, Untimed) using the normed number of errors and holistic scores (TOEFL criteria). The number of errors and essay scores were obtained from two raters (interrater reliability: Pearson’s r = .78-.91). Generalized linear mixed models showed no difference in the breadth and depth of keyword knowledge after writing Sentences, Timed essays, and Untimed essays.
The task-based measurements showed that Control and Timed essays had similar holistic scores, but that Untimed essays were of higher quality than Timed essays. Untimed essays were also the most accurate, and Timed essays the most error-prone. In conclusion, using keywords in Timed, but not Untimed, essays increased cognitive load, leading to more errors and lower quality. Still, writing sentences and essays yielded similar lexical learning, and the differences in cognitive load between Timed and Untimed essays did not affect lexical acquisition.
Keywords: learning academic words, writing essays, cognitive load, English as an L2
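The interrater reliability reported above (Pearson’s r = .78-.91) is the ordinary product-moment correlation between the two raters' scores. A minimal sketch with hypothetical rater scores:

```python
import math

def pearson_r(x, y):
    """Product-moment correlation between two raters' score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical holistic scores from two raters for the same five essays.
rater1 = [3.0, 4.0, 2.5, 5.0, 3.5]
rater2 = [3.5, 4.0, 2.0, 4.5, 3.0]
reliability = pearson_r(rater1, rater2)
```

Values in the .78-.91 range, as in the study, are conventionally taken as acceptable to strong agreement between raters.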
94 Q-Efficient Solutions of Vector Optimization via Algebraic Concepts
Authors: Elham Kiyani
Abstract:
In this paper, we first introduce the concept of Q-efficient solutions in a real linear space not necessarily endowed with a topology, where Q is some nonempty (not necessarily convex) set. We also use the scalarization technique, including the Gerstewitz function generated by a nonconvex set, to characterize these Q-efficient solutions. The algebraic concepts of interior and closure are useful for studying optimization problems without topology. Studying nonconvex vector optimization is valuable since the topological interior is equal to the algebraic interior for a convex cone. So, we use the algebraic concepts of interior and closure to define Q-weak efficient solutions and Q-Henig proper efficient solutions of set-valued optimization problems, where Q is not a convex cone. Optimization problems with set-valued maps have a wide range of applications, so a useful analytical tool for set-valued maps is to be expected in optimization theory. These kinds of optimization problems are closely related to stochastic programming, control theory, and economic theory. The paper focuses on nonconvex problems; the results are obtained under generalized non-convexity assumptions on the data of the problem. In convex problems, the main mathematical tools are convex separation theorems, alternative theorems, and algebraic counterparts of some usual topological concepts, while in nonconvex problems we need a nonconvex separation function. Thus, we consider the Gerstewitz function generated by a general set in a real linear space and re-examine its properties in this more general setting. A useful approach for solving a vector problem is to reduce it to a scalar problem. In general, scalarization means the replacement of a vector optimization problem by a suitable scalar problem, which tends to be an optimization problem with a real-valued objective function. The Gerstewitz function is well known and widely used in optimization as the basis of scalarization.
The essential properties of the Gerstewitz function, which are well known in the topological framework, are studied by using algebraic counterparts rather than the topological concepts of interior and closure. Thus, the properties of the Gerstewitz function when it takes values just in a real linear space are studied, and we use it to characterize Q-efficient solutions of vector problems whose image space is not endowed with any particular topology. Accordingly, we deal with a constrained vector optimization problem in a real linear space without assuming any topology, and Q-weak efficient and Q-proper efficient solutions in the sense of Henig are defined. Moreover, by means of the Gerstewitz function, we provide some necessary and sufficient optimality conditions for set-valued vector optimization problems.
Keywords: algebraic interior, Gerstewitz function, vector closure, vector optimization
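In the classical special case where Q is the nonnegative orthant and the scalarizing direction e has strictly positive components, the Gerstewitz (Tammer-Weidner) function has the closed form φ(y) = inf{t ∈ ℝ : y ∈ te - Q} = max_i y_i / e_i. The sketch below illustrates this convex-cone case only; the paper itself treats general nonconvex Q with algebraic interior and closure, for which no such closed form is available.

```python
def gerstewitz_orthant(y, e):
    """Gerstewitz scalarization for Q = nonnegative orthant and e with e_i > 0:
    phi(y) = inf{t : y <= t * e componentwise} = max_i y_i / e_i."""
    return max(yi / ei for yi, ei in zip(y, e))

# Scalarization in action: the outcome vector with the smaller phi-value
# dominates in the scalarized problem.
phi_a = gerstewitz_orthant([2.0, -1.0], [1.0, 1.0])
phi_b = gerstewitz_orthant([3.0, 4.0], [1.0, 2.0])
```

A key property visible in this formula is translation along e: φ(y + t·e) = φ(y) + t, which is what makes the function a nonconvex separation tool.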
93 Feasibility of Online Health Coaching for Canadian Armed Forces Personnel Receiving Treatment for Depression, Anxiety and PTSD
Authors: Noah Wayne, Andrea Tuka, Adrian Norbash, Bryan Garber, Paul Ritvo
Abstract:
Program/Intervention Description: The Canadian Armed Forces (CAF) Mental Health Clinics treat a full spectrum of mental disorders, addictions, and psychosocial issues, including Major Depressive Disorder, Post-Traumatic Stress Disorder, Generalized Anxiety Disorder, and other diagnoses. We evaluated the feasibility of an online health coach intervention delivering mindfulness-based cognitive behavioral therapy (M-CBT) and behaviour change support for individuals receiving treatment at CAF clinics. Participants were provided accounts on NexJ Connected Wellness, a digital health platform, and 16 weeks of phone-based health coaching emphasizing mild to moderate aerobic exercise, a healthy diet, and M-CBT content. The primary objective was to assess the feasibility of the online delivery with CAF members. Evaluation Methods: Feasibility was evaluated in terms of recruitment, engagement, and program satisfaction. We additionally evaluated health behavior change, program completion, and mental health symptoms (i.e., PHQ-9, GAD-7, PCL-5) at three time points. Results: Service members were referred from Vancouver, Esquimalt, and Edmonton CAF bases between August 2020 and January 2021. N=106 CAF personnel were referred, and n=77 consented. N=66 participated, and n=44 completed 4-month and follow-up measures. The platform received a mean rating of 76.5 on the System Usability Scale, and health coaching was judged the most helpful program feature (95.2% endorsement), while reminders (53.7%), secure messaging (51.2%), and notifications (51.2%) were also identified. Improvements in mental health status during the active intervention were observed on the PHQ-9 (-5.4, p<0.001), GAD-7 (-4.0, p<0.001), and PCL-5 (-4.1, p<0.05). Conclusion: Online health coaching was well received amidst the COVID-19 pandemic and related lockdowns. Uptake and engagement were positively reported. Participants valued contacts and reported strong therapeutic alliances with coaches.
A healthy diet, regular exercise, and mindfulness practice are important for physical and mental health, and engagement in these behaviors is associated with reduced symptoms. An online health coach program appears feasible for assisting Canadian Armed Forces personnel.
Keywords: coaching, CBT, military, depression, mental health, digital
92 The Effect of Early Skin-To-Skin Contact with Fathers on Their Supporting Breastfeeding
Authors: Shu-Ling Wang
Abstract:
Background: Multiple studies have shown that early skin-to-skin contact (SSC) with mothers benefits newborns, for example in breastfeeding and maternal childcare. When newborns cannot have early SSC with their mothers, fathers’ involvement allows early SSC to continue without interruption. However, few studies have explored the effects of early SSC by fathers in comparison to early SSC with mothers. Paternal involvement in early SSC should be equally important in terms of childcare and breastfeeding. The purpose of this study was to evaluate the efficacy of early SSC by fathers, in particular their support of breastfeeding. Methods: A quasi-experimental design was employed. One hundred and forty-four father-infant pairs participated in the study, in which infants were assigned either to SSC with their fathers (n = 72) or to routine care (n = 72) as the control group. The study was conducted at a regional hospital in northern Taiwan. Participants included parents of both vaginal delivery (VD) and caesarean section (CS) infants. To be eligible for inclusion, infants had to be of more than 37 weeks gestational age. Data were collected twice: as a pretest upon admission and as a posttest with an online questionnaire during the first, second, and third postpartum months. The questionnaire included items for Breastfeeding Social Support, methods of feeding, and the mother-infant 24-hour rooming-in rate. The efficacy of early SSC with fathers was evaluated using generalized estimating equation (GEE) modeling. Results: The primary finding was that SSC with fathers had a positive impact on fathers’ support of breastfeeding. Analysis of the online questionnaire indicated that early SSC with fathers improved support of breastfeeding relative to the control group (VD: t = -4.98, p < .001; CS: t = -2.37, p = .02).
Analysis of the mother-infant 24-hour rooming-in rate showed that SSC with fathers after CS had a positive impact on the rooming-in rate (χ² = 5.79, p = .02); with VD, however, the difference between early SSC with fathers and the control group was not significant (χ² = .23, p = .63). Analysis of the rate of exclusive breastfeeding indicated that early SSC with fathers yielded a higher rate than the control group during the first three postpartum months for both delivery methods (VD: χ² = 12.51, p < .001 in the 1st postpartum month, χ² = 8.13, p < .05 in the 2nd, χ² = 4.43, p < .05 in the 3rd; CS: χ² = 6.92, p < .05 in the 1st postpartum month, χ² = 7.41, p < .05 in the 2nd, χ² = 6.24, p < .05 in the 3rd). No significant difference was found in the rate of exclusive breastfeeding between the two groups during hospitalization for either method of delivery (VD: χ² = 2.00, p = .16; CS: χ² = .73, p = .39). Conclusion: Implementing early SSC with fathers has many benefits for both parents. The results of this study showed increased fathers’ support of breastfeeding. This encourages our nursing personnel to focus on the needs of fathers during breastfeeding, further enhancing the quality of parental care and the rate and duration of breastfeeding.
Keywords: breastfeeding, skin-to-skin contact, support of breastfeeding, rooming-in
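The group comparisons above are Pearson chi-square tests of 2x2 tables (group by outcome). A minimal sketch of the statistic, using hypothetical counts rather than the study's data, is below; the study's longitudinal analysis additionally used GEE modeling to handle repeated measures.

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 table [[a, b], [c, d]],
    e.g. exclusive breastfeeding yes/no by SSC group vs control."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    expected = [row1 * col1 / n, row1 * col2 / n,
                row2 * col1 / n, row2 * col2 / n]
    observed = [a, b, c, d]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical counts: 30/72 vs 10/72 infants exclusively breastfed.
stat = chi_square_2x2(30, 42, 10, 62)
```

With 1 degree of freedom, a statistic above 3.84 corresponds to p < .05, which is how reported values such as χ² = 5.79 reach significance.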
91 Modelling Agricultural Commodity Price Volatility with Markov-Switching Regression, Single Regime GARCH and Markov-Switching GARCH Models: Empirical Evidence from South Africa
Authors: Yegnanew A. Shiferaw
Abstract:
Background: Commodity price volatility originating from excessive commodity price fluctuation has been a global problem, especially after the recent financial crises. Volatility is a measure of risk or uncertainty in financial analysis. It plays a vital role in risk management, portfolio management, and equity pricing. Objectives: The core objective of this paper is to examine the relationship between the prices of agricultural commodities and the oil price, gas price, coal price, and exchange rate (USD/Rand). In addition, the paper tries to fit an appropriate model that best describes the log-return price volatility and to estimate Value-at-Risk and expected shortfall. Data and methods: The data used in this study are the daily returns of agricultural commodity prices, namely white maize, yellow maize, wheat, sunflower, soya, corn, and sorghum, from 2 January 2007 to 31 October 2016. The paper applies the three-state Markov-switching (MS) regression, the standard single-regime GARCH, and the two-regime Markov-switching GARCH (MS-GARCH) models. Results: To choose the best-fitting model, the log-likelihood function, Akaike information criterion (AIC), Bayesian information criterion (BIC), and deviance information criterion (DIC) are employed under three distributions for the innovations. The results indicate that: (i) the prices of agricultural commodities were found to be significantly associated with the price of coal, the price of natural gas, the price of oil, and the exchange rate; (ii) for all agricultural commodities except sunflower, k=3 had higher log-likelihood values and lower AIC and BIC values.
Thus, the three-state MS regression model outperformed the two-state MS regression model; (iii) MS-GARCH(1,1) with generalized error distribution (GED) innovations performs best for white maize and yellow maize; MS-GARCH(1,1) with Student-t distribution (std) innovations performs better for sorghum; MS-gjrGARCH(1,1) with GED innovations performs better for wheat, sunflower, and soya; and MS-GARCH(1,1) with std innovations performs better for corn. In conclusion, this paper provides a practical guide to modelling agricultural commodity prices with MS regression and MS-GARCH processes and can serve as a reference when modelling agricultural commodity price problems.
Keywords: commodity prices, MS-GARCH model, MS regression model, South Africa, volatility
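The model selection procedure above trades off fit (log-likelihood) against parameter count via AIC and BIC. The sketch below shows the criteria themselves; the log-likelihood values for the single-regime GARCH(1,1) and MS-GARCH(1,1) candidates are made up for illustration and are not taken from the study.

```python
import math

def aic(loglik, k):
    """Akaike information criterion: 2k - 2 ln L (lower is better)."""
    return 2 * k - 2 * loglik

def bic(loglik, k, n):
    """Bayesian information criterion: k ln n - 2 ln L; penalizes extra
    parameters more heavily than AIC for large samples."""
    return k * math.log(n) - 2 * loglik

# Hypothetical fits: (log-likelihood, number of parameters) on n observations.
n = 2500
candidates = {
    "GARCH(1,1)":    (-3105.0, 4),  # single regime
    "MS-GARCH(1,1)": (-3080.0, 9),  # two regimes: more parameters, better fit
}
best = min(candidates, key=lambda m: bic(*candidates[m], n))
```

Here the regime-switching model wins despite its extra parameters because the log-likelihood gain outweighs the BIC penalty, mirroring the kind of comparison reported in result (ii).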
90 School and Family Impairment Associated with Childhood Anxiety Disorders: Examining Differences in Parent and Child Report
Authors: Melissa K. Hord, Stephen P. Whiteside
Abstract:
Impairment in functioning is a requirement for diagnosing psychopathology, identifying individuals in need of treatment, and documenting improvement with treatment. Further, identifying different types of functional impairment can guide educators and treatment providers. However, most assessment tools focus on symptom severity, and few measures assess impairment associated with childhood anxiety disorders. The child- and parent-report versions of the Child Sheehan Disability Scale (CSDS) are measures that may provide useful information regarding impairment. The purpose of the present study is to examine whether children diagnosed with different anxiety disorders have greater impairment in school or home functioning based on self or parent report. The sample consisted of 844 children aged 5 to 19 years (mean 13.43, 61% female, 90.9% Caucasian), including 281 children diagnosed with obsessive compulsive disorder (OCD), 200 with generalized anxiety disorder (GAD), 176 with social phobia, 83 with separation anxiety, 61 with anxiety not otherwise specified (NOS), 30 with panic disorder, and 13 with panic with agoraphobia. To assess whether children and parents reported greater impairment in school or home functioning, a multivariate analysis of variance was conducted (the assumptions of independence and homogeneity of variance were checked and met). A significant difference was found, Pillai's trace = .143, F(4, 28) = 4.19, p < .001, partial eta squared = .04. Post hoc comparisons using the Tukey HSD test indicated that children report significantly greater school impairment with panic disorder (M=5.18, SD=3.28), social phobia (M=4.95, SD=3.20), and OCD (M=4.62, SD=3.32) compared to other diagnoses, whereas parents endorse significantly greater school impairment when their child has a social phobia diagnosis (M=5.70, SD=3.39).
Interestingly, both children and parents reported greater impairment in family functioning for an OCD diagnosis (child report M=5.37, SD=3.20; parent report M=5.59, SD=3.38) compared to other anxiety diagnoses. (Additional findings on the anxiety disorders associated with less impairment will also be presented.) The results of the current study have important implications for educators and treatment providers who work with anxious children. First, understanding that children and parents differ in how they view impairment related to childhood anxiety can help those working with these families be more sensitive during interactions. Second, the evidence suggests that difficulties in one environment do not necessarily translate to another environment, so caregivers may benefit from careful explanation of observations obtained by educators. Third, the results support the use of the CSDS by treatment providers to identify impairment across environments in order to target interventions more effectively.
Keywords: anxiety, childhood, impairment, school functioning
89 Generalized Up-downlink Transmission using Black-White Hole Entanglement Generated by Two-level System Circuit
Authors: Muhammad Arif Jalil, Xaythavay Luangvilay, Montree Bunruangses, Somchat Sonasang, Preecha Yupapin
Abstract:
Black and white holes form the entangled pair ⟨BH│WH⟩, where a white hole occurs when the particle moves at the same speed as light. The entangled black-white hole pair is at the center, with the radian gap between them. When the speed of particle motion is slower than light, the black hole is gravitational (positive gravity), and the white hole is smaller than the black hole. On the downstream side, the black hole of the entangled pair appears to grow outside the gap until the white hole disappears, which is the emptiness paradox. On the upstream side, when moving faster than light, white holes form time tunnels, with black holes becoming smaller; moving faster and further still, the black hole disappears and becomes a wormhole (singularity) that is only a white hole in emptiness. This research studies the use of black and white holes generated by a two-level system circuit as carriers for communication transmission, with which high data-transmission ability and capacity can be obtained. The black-white hole pair can be generated by the two-level system circuit when the speed of a particle in the circuit is equal to the speed of light. The black hole forms when the particle speed increases from slower than to equal to the speed of light, while the white hole is established when the particle comes down from faster than light. They are bound as the entangled pair of signal and idler, ⟨Signal│Idler⟩, with the virtual ones for the white hole, which has an angular displacement of half of π radian. A two-level system is made from an electronic circuit to create black and white holes bound by entangled bits that are immune to theft, i.e., cloning-free. The process starts by creating wave-particle behavior; when the particle speed equals that of light, the black hole is in the middle of the entangled pair, which forms the two-bit gate. The required information can be input into the system and wrapped by the black hole carrier.
A timeline (tunnel) occurs when the wave-particle speed is faster than light, upon which the entangled pair collapses. The transmitted information is kept safely in the time tunnel. The required time and space can be modulated via the input for the downlink operation. The downlink is established when the particle speed, given in frequency (energy) form, comes down and enters the entangled gap, where this time the white hole is established. The information with the required destination is wrapped by the white hole and retrieved by the clients at the destination. The black and white holes then disappear, and the information can be recovered and used.
Keywords: cloning free, time machine, teleportation, two-level system
88 Using Arellano-Bover/Blundell-Bond Estimator in Dynamic Panel Data Analysis – Case of Finnish Housing Price Dynamics
Authors: Janne Engblom, Elias Oikarinen
Abstract:
A panel dataset is one that follows a given sample of individuals over time, and thus provides multiple observations on each individual in the sample. Panel data models include a variety of fixed and random effects models which form a wide range of linear models. A special class of panel data models is dynamic in nature. A complication regarding a dynamic panel data model that includes the lagged dependent variable is the endogeneity bias of estimates. Several approaches have been developed to account for this problem. In this paper, the panel models were estimated using the Arellano-Bover/Blundell-Bond generalized method of moments (GMM) estimator, which is an extension of the Arellano-Bond estimator in which past values and different transformations of past values of the potentially problematic independent variable are used as instruments together with other instrumental variables. The Arellano-Bover/Blundell-Bond estimator augments Arellano-Bond by making the additional assumption that first differences of the instrument variables are uncorrelated with the fixed effects. This allows the introduction of more instruments and can dramatically improve efficiency. It builds a system of two equations, the original equation and the transformed one, and is also known as system GMM. In this study, Finnish housing price dynamics were examined empirically using the Arellano-Bover/Blundell-Bond estimation technique together with ordinary least squares (OLS). The aim of the analysis was to provide a comparison between conventional fixed-effects panel data models and dynamic panel data models.
The Arellano-Bover/Blundell-Bond estimator is suitable for this analysis for a number of reasons: it is a general estimator designed for situations with 1) a linear functional relationship; 2) one left-hand-side variable that is dynamic, depending on its own past realizations; 3) independent variables that are not strictly exogenous, meaning they are correlated with past and possibly current realizations of the error; 4) fixed individual effects; and 5) heteroskedasticity and autocorrelation within individuals but not across them. Based on data on 14 Finnish cities over 1988-2012, the estimates of short-run housing price dynamics varied considerably when different models and instrumenting were used. In particular, the use of different instrumental variables caused variation in the model estimates and their statistical significance. This was particularly clear when comparing the OLS estimates with those of the different dynamic panel data models. The estimates provided by the dynamic panel data models were more in line with the theory of housing price dynamics.
Keywords: dynamic model, fixed effects, panel data, price dynamics
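The instrument structure that distinguishes difference GMM (Arellano-Bond) from system GMM (Arellano-Bover/Blundell-Bond) can be sketched for a single panel unit as follows. This is a simplified illustration of which lags serve as instruments, not the full stacked-moment estimator.

```python
def diff_gmm_instruments(y):
    """Arellano-Bond: for the first-differenced equation at period t
    (t >= 2, 0-indexed), the levels y_0 .. y_{t-2} are valid instruments,
    since they predate the differenced error. Returns {t: [levels]}."""
    return {t: y[: t - 1] for t in range(2, len(y))}

def system_gmm_level_instruments(y):
    """Blundell-Bond addition: the levels equation at period t is
    instrumented with the lagged first difference dy_{t-1} = y_{t-1} - y_{t-2},
    assumed uncorrelated with the fixed effect. Returns {t: dy_{t-1}}."""
    return {t: y[t - 1] - y[t - 2] for t in range(2, len(y))}
```

The extra moment conditions from the levels equation are what allow system GMM to "dramatically improve efficiency", particularly when the series is persistent and lagged levels are weak instruments for differences.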
Procedia PDF Downloads 1504
87 A Mixed Finite Element Formulation for Functionally Graded Micro-Beam Resting on Two-Parameter Elastic Foundation
Authors: Cagri Mollamahmutoglu, Aykut Levent, Ali Mercan
Abstract:
Micro-beams are among the most common components of Nano-Electromechanical Systems (NEMS) and Micro-Electromechanical Systems (MEMS). For this reason, static bending, buckling, and free vibration analysis of micro-beams have been the subject of many studies. In addition, micro-beams restrained by elastic foundations have been of particular interest. In the analysis of microstructures, closed-form solutions are proposed when available, but most of the time solutions are based on numerical methods due to the complex nature of the resulting differential equations. Thus, a robust and efficient solution method is of great importance. In this study, a mixed finite element formulation is obtained for a functionally graded Timoshenko micro-beam resting on a two-parameter elastic foundation. In the formulation, modified couple stress theory is utilized to capture the micro-scale effects. The equation of motion and boundary conditions are derived according to Hamilton's principle. A functional, derived through a systematic procedure based on the Gâteaux differential, is proposed for the bending and buckling analysis; it is equivalent to the governing equations and boundary conditions. The most important advantage of the formulation is that it allows the use of C₀-continuous shape functions, so shear-locking is avoided in a built-in manner. Also, the element matrices are sparsely populated and can be easily calculated with closed-form integration. In this framework, results concerning the effects of the micro-scale length parameter, power-law parameter, aspect ratio and coefficients of a partially or fully continuous elastic foundation on the static bending, buckling, and free vibration response of the FG micro-beam under various boundary conditions are presented and compared with the existing literature. 
The performance of the presented formulation was evaluated against other numerical methods such as the generalized differential quadrature method (GDQM). It is found that similar convergence characteristics were obtained with less computational burden. Moreover, the formulation also allows a direct calculation of the micro-scale contributions to the structural response.
Keywords: micro-beam, functionally graded materials, two-parameter elastic foundation, mixed finite element method
Procedia PDF Downloads 159
86 Improving the Technology of Assembly by Use of Computer Calculations
Authors: Mariya V. Yanyukina, Michael A. Bolotov
Abstract:
Assembly accuracy is the degree of accordance between the actual values of the parameters obtained during assembly and the values specified in the assembly drawings and technical specifications. However, assembly accuracy depends not only on the quality of the production process but also on the correctness of the assembly process. Therefore, preliminary calculations of the assembly stages are carried out to verify the correspondence of the real geometric parameters to their acceptable values. In the aviation industry, most calculations involve interacting dimensional chains, which greatly complicates the task; solving such problems requires a special approach. The purpose of this article is to improve the assembly technology of aviation units by means of computer calculations. A representative example of an assembly unit containing an interacting dimensional chain is the turbine wheel of a gas turbine engine. The dimensional chain of the turbine wheel is formed by the geometric parameters of the disk and the set of blades. The interaction consists in the formation of two chains. The first chain is formed by the dimensions that determine the location of the grooves for the installation of the blades, together with the dimensions of the blade roots. The second chain is formed by the dimensions of the airfoil shroud platform. The interaction of the two chains of the turbine wheel is their interdependence by means of power circuits formed by the middle parts of the turbine blades. The calculation of the dimensional chain of the turbine wheel is motivated by the need to improve the assembly technology of this unit. 
The task at hand contains geometric and mathematical components; therefore, its solution can be implemented following this algorithm: 1) research and analysis of the production errors of the geometric parameters; 2) development of a parametric model in a CAD system; 3) creation of a set of CAD models of the parts taking into account actual or generalized distributions of the errors of the geometric parameters; 4) calculation of the model in a CAE system, loading various combinations of the part models; 5) accumulation of statistics and analysis. The main task is to pre-simulate the assembly process by calculating the interacting dimensional chains. The article describes the approach to the solution from the point of view of mathematical statistics, implemented in MATLAB. Within the framework of the study, measurement data on the components of the turbine wheel (blades and disks) are available, on the basis of which it is expected that the assembly process of the unit will be optimized by solving the dimensional chains.
Keywords: accuracy, assembly, interacting dimension chains, turbine
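Step 5 of the algorithm (accumulating statistics over distributions of geometric errors) can be illustrated with a Monte Carlo tolerance stack-up. The study itself works in MATLAB with measured blade and disk data; the sketch below uses Python with purely hypothetical nominal dimensions, standard deviations and tolerance limits for a three-link chain.

```python
import random
import statistics

random.seed(42)

# Hypothetical closing link of a three-link chain: gap = A - (B + C), where
# A stands for a disk groove width and B, C for blade-root dimensions.
# All nominals, sigmas and tolerance limits are illustrative (mm).
links = {"A": (20.00, 0.020), "B": (12.00, 0.015), "C": (7.90, 0.015)}

def sample(name):
    nominal, sigma = links[name]
    return random.gauss(nominal, sigma)   # draw one manufactured dimension

gaps = [sample("A") - (sample("B") + sample("C")) for _ in range(100_000)]

mean_gap = statistics.fmean(gaps)
sd_gap = statistics.stdev(gaps)
within = sum(0.02 <= g <= 0.18 for g in gaps) / len(gaps)
print(f"gap: mean {mean_gap:.4f} mm, sd {sd_gap:.4f} mm, "
      f"share within tolerance {within:.3%}")
```

Replacing the normal draws with the actual measured error distributions of each link is what turns this sketch into the statistical pre-simulation the abstract describes.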
Procedia PDF Downloads 372
85 Analysis of the Statistical Characterization of Significant Wave Data Exceedances for Designing Offshore Structures
Authors: Rui Teixeira, Alan O’Connor, Maria Nogal
Abstract:
The statistical theory of extreme events is a topic of growing interest in all fields of science and engineering. The economic and environmental changes currently experienced by the world have emphasized the importance of dealing with extreme occurrences with improved accuracy. When it comes to the design of offshore structures, particularly offshore wind turbines, efficiently characterizing extreme events is of major relevance. Extreme events are commonly characterized by extreme value theory. As an alternative, accurate modeling of the tails of statistical distributions, and thus the characterization of low-occurrence events, can be achieved with the Peak-Over-Threshold (POT) methodology. The POT methodology allows for a more refined fit of the statistical distribution by truncating the data at a predefined threshold u. For mathematically approximating the tail of the empirical statistical distribution, the Generalized Pareto is widely used. However, in the case of exceedances of significant wave data (H_s), the two-parameter Weibull and the Exponential distribution, the latter a specific case of the Generalized Pareto distribution, are frequently used as alternatives. The Generalized Pareto, despite the practical cases where it is applied, is not universally recognized as the adequate solution for modeling exceedances over a threshold u; references that treat the Generalized Pareto distribution as a secondary solution in the case of significant wave data can be identified in the literature. In this framework, the current study tackles the discussion of the application of statistical models to characterize exceedances of wave data. Comparisons of the Generalized Pareto, the two-parameter Weibull and the Exponential distribution are presented for different values of the threshold u. 
Real wave data obtained from four buoys along the Irish coast were used in the comparative analysis. Results show that the application of statistical distributions to characterize significant wave data needs to be addressed carefully: in each particular case, one of the statistical models mentioned fits the data better than the others, and different results are obtained depending on the value of the threshold u. Other variables of the fit, such as the number of points and the estimation of the model parameters, are analyzed, and the respective conclusions are drawn. Some guidelines on the application of the POT method are presented. Modeling the tail of the distributions proves to be, for the present case, a highly non-linear task and, due to its growing importance, should be addressed carefully for an efficient estimation of very low occurrence events.
Keywords: extreme events, offshore structures, peak-over-threshold, significant wave data
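A minimal sketch of the POT comparison described above, assuming SciPy and using a synthetic Weibull-like wave record in place of the (unavailable) buoy data; the threshold choice and candidate tail models mirror the abstract.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic wave-height record standing in for the buoy data; a Weibull-like
# parent distribution is assumed (parameters are illustrative).
hs = stats.weibull_min.rvs(c=1.6, scale=2.0, size=20_000, random_state=rng)

u = np.quantile(hs, 0.95)        # threshold: the 95th percentile
exceed = hs[hs > u] - u          # peaks-over-threshold excesses

# Candidate tail models from the abstract, location fixed at zero:
gpd_p = stats.genpareto.fit(exceed, floc=0)      # Generalized Pareto
exp_p = stats.expon.fit(exceed, floc=0)          # Exponential (GPD, shape 0)
wei_p = stats.weibull_min.fit(exceed, floc=0)    # two-parameter Weibull

# Compare by maximized log-likelihood (higher is better):
ll = {"GPD": stats.genpareto.logpdf(exceed, *gpd_p).sum(),
      "Exponential": stats.expon.logpdf(exceed, *exp_p).sum(),
      "Weibull": stats.weibull_min.logpdf(exceed, *wei_p).sum()}
print(ll)
```

Repeating the fits over a grid of thresholds u and comparing log-likelihoods (or quantile plots) reproduces the kind of threshold-sensitivity analysis the abstract reports.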
Procedia PDF Downloads 272
84 Equity, Bonds, Institutional Debt and Economic Growth: Evidence from South Africa
Authors: Ashenafi Beyene Fanta, Daniel Makina
Abstract:
Economic theory predicts that finance promotes economic growth. Although the finance-growth link is among the most researched areas in financial economics, our understanding of the link between the two is still incomplete. This is caused by, among other things, incorrect econometric specifications, weak proxies of financial development, and failure to address the endogeneity problem. Studies on the finance-growth link in South Africa consistently report economic growth driving financial development: early studies found this, and recent studies have confirmed it using different econometric models. However, the monetary aggregate (i.e., M2) used in these studies is considered a weak proxy for financial development. Furthermore, the fact that the models employed do not address the endogeneity problem in the finance-growth link casts doubt on the validity of the conclusions. For this reason, the current study examines the finance-growth link in South Africa using data for the period 1990 to 2011 by employing a generalized method of moments (GMM) technique that is capable of addressing endogeneity, simultaneity and omitted variable bias problems. Unlike previous cross-country and country case studies that have used the same technique, our contribution is that we account for the development of bond markets and non-bank financial institutions rather than being limited to stock market and banking sector development. We find that bond market development affects economic growth in South Africa, and no similar effect is observed for bank and non-bank financial intermediaries or the stock market. Our findings show that examination of the individual elements of the financial system is important in understanding the unique effect of each on growth. 
The observation that bond markets, rather than private credit and stock market development, promote economic growth in South Africa raises an intriguing question as to what unique roles bond markets play that the intermediaries and equity markets are unable to play. Crucially, our results support observations in the literature that using appropriate measures of financial development is critical for policy advice. They also support the suggestion that the individual elements of the financial system need to be studied separately to consider their unique roles in advancing economic growth. We believe that understanding the channels through which bond markets contribute to growth would be a fertile ground for future research.
Keywords: bond market, finance, financial sector, growth
Procedia PDF Downloads 421
83 An Assessment of Redevelopment of Cessed Properties in the Island City of Mumbai, India
Authors: Palak Patel
Abstract:
Mumbai is one of the largest cities of the country, with a population of 12.44 million over 437 sq. km, and is known as the financial hub of India. In the early 20th century, with the expansion of industrialization and the growth of the port, a huge demand for housing was created. In response, the government enacted rent controls. Over time, due to the rent controls, the existing rental housing stock has deteriorated. Therefore, in the last 25 years, the government has been focusing on the redevelopment of these rental buildings, also called 'cessed buildings', in order to provide a better standard of living to the tenants and to supply new housing units to the market. In India, developers are the main players in the housing market, as they supply the majority of dwelling units. Hence, government efforts are inclined toward facilitating developers in cessed building redevelopment projects by incentivizing them through special provisions in the development control regulations. This research examines the entire redevelopment process undertaken by developers and the issues faced by the related stakeholders, with the aim of reducing the stress on housing. It also highlights the loopholes in the current system and the inefficient functioning of the process. The research was carried out by interviewing various developers, tenants and landlords in the island city who have already gone through redevelopment. From the case studies, it is evident that redevelopment is undoubtedly a hugely profitable business; in some cases, developers make a profit of almost double the investment. Yet satisfactory results are not seen on the ground, which clearly indicates that there are issues faced by developers that have not been addressed. 
Some of these issues include cumbersome legal procedures, negotiations with landlords and tenants, congestion and narrow roads, small plot sizes, informal practice of the 'Pagdi system' and the financial viability of the project. This research recommends upgrading the existing cessed buildings by sharing the repair and maintenance costs between landlords and tenants; in addition, the income levels of tenants can be traced and housing vouchers or incentives provided to those who actually need them, so that landlords do not have to subsidize the tenants. The current redevelopment interventions are generalized in nature and do not take on-the-ground issues into consideration. There is a need to identify local issues and give area-specific solutions. The government should also play the role of mediator to ensure that all the stakeholders are satisfied and the project gets completed on time.
Keywords: cessed buildings, developers, government interventions, redevelopment, rent controls, tenants
Procedia PDF Downloads 184
82 Exploratory Study of Individual User Characteristics That Predict Attraction to Computer-Mediated Social Support Platforms and Mental Health Apps
Authors: Rachel Cherner
Abstract:
Introduction: The current study investigates several user characteristics that may predict the adoption of digital mental health supports. The extent to which individual characteristics predict preferences for functional elements of computer-mediated social support (CMSS) platforms and mental health (MH) apps is relatively unstudied. Aims: The present study seeks to illuminate the relationship between broad user characteristics and perceived attraction to CMSS platforms and MH apps. Methods: Participants (n=353) were recruited using convenience sampling methods (i.e., digital flyers, email distribution, and online survey forums). The sample was 68% male and 32% female, with a mean age of 29. The participant racial and ethnic breakdown was 75% White, 7%, 5% Asian, and 5% Black or African American. Participants were asked to complete a 25-minute self-report questionnaire that included empirically validated measures assessing a battery of characteristics (i.e., subjective levels of anxiety/depression via the PHQ-9 (Patient Health Questionnaire 9-item) and GAD-7 (Generalized Anxiety Disorder 7-item); attachment style via the MAQ (Measure of Attachment Qualities); personality type via the TIPI (Ten-Item Personality Inventory); growth mindset and mental health-seeking attitudes via the GM (Growth Mindset Scale) and MHSAS (Mental Help Seeking Attitudes Scale)) and subsequent attitudes toward CMSS platforms and MH apps. Results: A stepwise linear regression was used to test whether user characteristics significantly predicted attitudes towards key features of CMSS platforms and MH apps. The overall regression was statistically significant (R² = .20, F(1, 344) = 14.49, p < .001). Conclusion: This original study examines the clinical and sociocultural factors influencing decisions to use CMSS platforms and MH apps. Findings provide valuable insight for increasing adoption of and engagement with digital mental health support. 
Fostering a growth mindset may be a method of increasing participant/patient engagement. In addition, CMSS platforms and MH apps may empower under-resourced and minority groups to gain basic access to mental health support. We do not assume this final model contains the best predictors of use; this is merely a preliminary step toward understanding the psychology and attitudes of CMSS platform/MH app users.
Keywords: computer-mediated social support platforms, digital mental health, growth mindset, health-seeking attitudes, mental health apps, user characteristics
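The stepwise regression mentioned above can be sketched as a forward-selection loop. The data below are simulated stand-ins for the questionnaire scores (the real predictors and outcome are those described in the abstract), and the 0.02 R-squared threshold is an arbitrary stopping rule chosen for the illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated stand-ins: four standardized predictor scores and one outcome.
# Only columns 0 and 2 truly influence the outcome.
n = 353
X = rng.normal(size=(n, 4))
y = 0.8 * X[:, 0] + 0.3 * X[:, 2] + rng.normal(size=n)

def r2(cols):
    """R-squared of an OLS fit of y on an intercept plus the chosen columns."""
    A = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
    resid = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

# Forward stepwise selection: greedily add the predictor with the largest
# R-squared gain, stopping when the gain drops below a chosen threshold.
selected = []
while len(selected) < X.shape[1]:
    gains = {j: r2(selected + [j]) - r2(selected)
             for j in range(X.shape[1]) if j not in selected}
    best = max(gains, key=gains.get)
    if gains[best] < 0.02:
        break
    selected.append(best)

print("selected predictors:", selected, " R^2 =", round(r2(selected), 2))
```

In practice the stopping rule is usually an F-test or information criterion rather than a raw R-squared threshold; the greedy structure is the same.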
Procedia PDF Downloads 90
81 Determination of Optimal Stress Locations in 2D–9 Noded Element in Finite Element Technique
Authors: Nishant Shrivastava, D. K. Sehgal
Abstract:
In the finite element technique, nodal stresses are calculated from the displacements at the nodes. In this process, the displacement calculated at the nodes is sufficiently accurate, but the stresses calculated at the nodes are not. The accuracy of stress computation in FEM models based on the displacement technique is therefore a matter of concern, particularly for computational time in the shape optimization of engineering problems. The present work focuses on finding unique points within the element, as well as on its boundary, at which good accuracy in stress computation can be achieved. Generally, the major optimal stress points are located in the domain of the element, but some points on the boundary of the element also yield stresses that are fairly accurate compared to nodal values. It is concluded that there exist unique points within the element where stresses have higher accuracy than at other points. The main aim is therefore to develop a generalized procedure for the determination of the optimal stress locations inside the element as well as on its boundaries, and to verify the procedure against results from numerical experimentation. The results for quadratic 9-noded serendipity elements are presented, and the locations of the distinct optimal stress points are determined inside the element as well as on the boundaries. The theoretical results indicate optimal stress locations, in local coordinates, at the origin and at a distance of 0.577 (i.e., 1/√3) from the origin in both directions. On the boundaries, the optimal stress locations are at the midpoints of the element edges and at a distance of 0.577 from the origin in both directions. These findings were verified through experimentation and authenticated. 
For numerical experimentation, five engineering problems were identified, and the numerical results of the 9-noded element were compared to those obtained using the same order of 25-noded Lagrangian elements, which are considered the standard. Root mean square errors were then plotted with respect to various locations within the elements as well as on the boundaries, and conclusions were drawn. After numerical verification, it is noted that in a 9-noded element, the origin and the locations at a distance of 0.577 from the origin in both directions are the best sampling points for the stresses. It was also noted that stresses calculated along the boundary line enclosed by the ±0.577 points are also very good, with very small error; when the sampling points move away from these points, the error increases rapidly. Thus, it is established that there are unique points at the boundary of the element where stresses are accurate, which can be utilized in solving various engineering problems and are also useful in shape optimization.
Keywords: finite elements, Lagrangian, optimal stress location, serendipity
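The reported locations at ±0.577 are the 2-point Gauss-Legendre abscissae ±1/√3, at which derivatives of the FE interpolant are superconvergent. A one-dimensional analogue (a quadratic element interpolating a cubic displacement field) makes the ±0.577 result concrete; it does not cover the 2D role of the origin:

```python
import numpy as np

g = 1 / np.sqrt(3)                        # 0.5773..., the reported location

exact   = lambda x: x**3                  # cubic "displacement" field
d_exact = lambda x: 3 * x**2              # its exact derivative ("strain")

# Quadratic FE interpolant through the element nodes -1, 0, 1:
nodes = np.array([-1.0, 0.0, 1.0])
coeffs = np.polyfit(nodes, exact(nodes), 2)
dcoeffs = np.polyder(coeffs)              # derivative of the interpolant

for x in (g, -g, 1.0, -1.0):
    err = abs(np.polyval(dcoeffs, x) - d_exact(x))
    print(f"x = {x:+.4f}   strain error = {err:.2e}")
```

The strain error vanishes (to round-off) at ±1/√3 while it is largest at the nodes, which is the 1D counterpart of the sampling-point result reported above.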
Procedia PDF Downloads 104
80 Effect of Velocity-Slip in Nanoscale Electroosmotic Flows: Molecular and Continuum Transport Perspectives
Authors: Alper T. Celebi, Ali Beskok
Abstract:
Electroosmotic (EO) slip flows in nanochannels are investigated using non-equilibrium molecular dynamics (MD) simulations, and the results are compared with the analytical solution of the Poisson-Boltzmann and Stokes (PB-S) equations with a slip contribution. The ultimate objective of this study is to show that the well-known continuum flow model can accurately predict the EO velocity profiles in nanochannels using the slip lengths and apparent viscosities obtained from force-driven flow simulations performed at various liquid-wall interaction strengths. EO flow of an aqueous NaCl solution in silicon nanochannels is simulated under realistic electrochemical conditions within the validity region of Poisson-Boltzmann theory. A physical surface charge density is determined for the nanochannels based on the dissociation of silanol functional groups on the channel surfaces at known salt concentration, temperature and local pH. First, we present results of density profiles and ion distributions from equilibrium MD simulations, ensuring that the desired thermodynamic state and ionic conditions are satisfied. Next, force-driven nanochannel flow simulations are performed to predict the apparent viscosity of the ionic solution between charged surfaces and the slip lengths. Parabolic velocity profiles obtained from the force-driven flow simulations are fitted to a second-order polynomial equation, where the viscosity and slip lengths are quantified by comparing the coefficients of the fitted equation with the continuum flow model. The presence of a charged surface increases the viscosity of the ionic solution while decreasing the velocity-slip at the wall. Afterwards, EO flow simulations are carried out under a uniform electric field for different liquid-wall interaction strengths. Velocity profiles present finite slip near the walls, followed by a conventional viscous flow profile in the electrical double layer that reaches a bulk flow region in the center of the channel. 
The EO flow is enhanced with increased slip at the walls, which depends on the wall-liquid interaction strength and the surface charge. The MD velocity profiles are compared with the predictions of analytical solutions of the slip-modified PB-S equation, where the slip length and apparent viscosity values are obtained from the force-driven flow simulations in charged silicon nanochannels. Our MD results show good agreement with the analytical solutions at various slip conditions, verifying the validity of the PB-S equation in nanochannels as small as 3.5 nm. In addition, the continuum model normalizes the slip length with the Debye length instead of the channel height, which implies that the enhancement in EO flows is independent of the channel height. Further MD simulations performed at different channel heights also show that the flow enhancement due to slip is independent of the channel height. This is important because slip-enhanced EO flow is observable even in microchannel experiments, by using a hydrophobic channel with large slip and high-conductivity solutions with a small Debye length. The present study provides an advanced understanding of EO flows in nanochannels. Correct characterization of nanoscale EO slip flow is crucial to discover the extent of well-known continuum models, which is required for various applications spanning from ion separation to drug delivery and bio-fluidic analysis.
Keywords: electroosmotic flow, molecular dynamics, slip length, velocity-slip
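The coefficient-matching step (quantifying apparent viscosity and slip length from a fitted parabola) can be sketched as follows, with made-up reduced-unit numbers standing in for the MD velocity data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "MD" velocity profile for force-driven flow in a channel of
# height h (reduced units; all numbers illustrative, not from the paper).
h, b = 3.5, 0.5          # channel height and true slip length, nm
k = 0.5                  # k = f / (2 * mu): body force over twice the viscosity

y = np.linspace(0.0, h, 40)
u_true = k * y * (h - y) + b * k * h             # parabola + slip velocity b*u'(0)
u_md = u_true + 0.02 * rng.normal(size=y.size)   # add thermal noise

# Fit a second-order polynomial, then read off the continuum parameters by
# matching coefficients with u(y) = k*(h*y - y^2) + b*k*h:
a2, a1, a0 = np.polyfit(y, u_md, 2)
k_fit = -a2              # recovers f/(2*mu), hence the apparent viscosity
b_fit = a0 / a1          # slip length from u(0) / u'(0)
print(f"k = {k_fit:.3f} (true {k}), slip length = {b_fit:.3f} nm (true {b})")
```

The quadratic coefficient fixes the apparent viscosity while the intercept-to-slope ratio at the wall fixes the slip length, which is exactly the comparison with the continuum model described above.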
Procedia PDF Downloads 156
79 A Study on Inverse Determination of Impact Force on a Honeycomb Composite Panel
Authors: Hamed Kalhori, Lin Ye
Abstract:
In this study, an inverse method was developed to reconstruct the magnitude and duration of impact forces exerted on a rectangular carbon fibre-epoxy composite honeycomb sandwich panel. The dynamic signals captured by piezoelectric (PZT) sensors installed on the panel remotely from the impact locations were utilized to reconstruct the impact force generated by an instrumented hammer through an extended deconvolution approach. Two discretized forms of the convolution integral are considered: the traditional one with an explicit transfer function and a modified one without an explicit transfer function. Deconvolution, usually applied to reconstruct the time history (e.g. magnitude) of a stochastic force at a defined location, is extended here to identify both the location and magnitude of the impact force among a number of potential impact locations. It is assumed that impact forces are simultaneously exerted at all potential locations, but that the magnitude of all forces except one is zero, implying that the impact occurs at only one location. The extended deconvolution is then applied to determine the magnitude as well as the location (among the potential ones), incorporating the linear superposition of the responses resulting from impact at each potential location. The problem can be categorized as under-determined (the number of sensors is less than that of impact locations), even-determined (the number of sensors equals that of impact locations), or over-determined (the number of sensors is greater than that of impact locations). The under-determined case considered here comprises three potential impact locations and one PZT sensor on the rectangular carbon fibre-epoxy composite honeycomb sandwich panel. Assessments are conducted to evaluate the factors affecting the precision of the reconstructed force. 
Truncated singular value decomposition (TSVD) and Tikhonov regularization are independently applied to regularize the problem, in order to find the most suitable method for this system. The selection of the optimal value of the regularization parameter is investigated through the L-curve and Generalized Cross Validation (GCV) methods. In addition, the effect of different signal window widths on the reconstructed force is examined. It is observed that the impact force generated by the instrumented hammer is sensitive to the impact location on the structure, with a shape ranging from a simple half-sine to a complicated one. The accuracy of the reconstructed impact force is evaluated using the correlation coefficient between the reconstructed force and the actual one. Based on this criterion, it is concluded that the forces reconstructed using the extended deconvolution without an explicit transfer function, together with Tikhonov regularization, match the actual forces well in terms of magnitude and duration.
Keywords: honeycomb composite panel, deconvolution, impact localization, force reconstruction
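The regularized deconvolution itself can be sketched on a toy problem. The impulse-response matrix, half-sine force and noise level below are all hypothetical, but the SVD filter-factor form of Tikhonov regularization is the same one compared in the study.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy setup: a hypothetical impulse response (decaying oscillation) and a
# half-sine impact force to be recovered from a noisy response signal.
n = 200
t = np.arange(n)
h_imp = np.exp(-t / 25.0) * np.cos(2 * np.pi * t / 40.0)
H = np.array([[h_imp[i - j] if i >= j else 0.0 for j in range(n)]
              for i in range(n)])               # lower-triangular convolution matrix

f_true = np.where(t < 30, np.sin(np.pi * t / 30.0), 0.0)   # half-sine impact
y = H @ f_true + 0.01 * rng.normal(size=n)                 # noisy "sensor" signal

# Tikhonov regularization via the SVD: the filter factors s/(s^2 + lam^2)
# damp the small singular values that would otherwise amplify the noise.
U, s, Vt = np.linalg.svd(H)
lam = 0.1                                       # regularization parameter
f_tik = Vt.T @ ((s / (s**2 + lam**2)) * (U.T @ y))

corr = np.corrcoef(f_true, f_tik)[0, 1]
print(f"correlation with the true force: {corr:.3f}")
```

TSVD replaces the smooth filter factors with a hard cut-off (keep s above a threshold, zero the rest), and sweeping `lam` while plotting the residual against the solution norm yields the L-curve used for parameter selection.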
Procedia PDF Downloads 533
78 Seasonal Short-Term Effect of Air Pollution on Cardiovascular Mortality in Belgium
Authors: Natalia Bustos Sierra, Katrien Tersago
Abstract:
It is well established that both extremes of temperature are associated with increased mortality and that air pollution is associated with temperature. This relationship is complex, and in countries with important seasonal variations in weather, such as Belgium, some effects can appear non-significant when the analysis is done over the entire year. We therefore analyzed the effect of short-term outdoor air pollution exposure on cardiovascular mortality during the warmer and colder months separately. We used daily cardiovascular deaths from acute cardiovascular diagnostics according to the International Classification of Diseases, 10th Revision (ICD-10: I20-I24, I44-I49, I50, I60-I66) during the period 2008-2013. The environmental data were population-weighted concentrations of particulates with an aerodynamic diameter less than 10 µm (PM₁₀) and less than 2.5 µm (PM₂.₅) (daily average), nitrogen dioxide (NO₂) (daily maximum of the hourly average) and ozone (O₃) (daily maximum of the 8-hour running mean). A generalized linear model was applied, adjusting for the confounding effects of season, temperature, dew point temperature, the day of the week, public holidays and the incidence of influenza-like illness (ILI) per 100,000 inhabitants. The relative risks (RR) were calculated for an increase of one interquartile range (IQR) of the air pollutant (μg/m³). These are presented for the four hottest months (June, July, August, September) and coldest months (November, December, January, February) in Belgium. We applied both individual lag models and unconstrained distributed lag models. The cumulative effect of a four-day exposure (day of exposure and three consecutive days) was calculated from the unconstrained distributed lag model. The IQRs for PM₁₀, PM₂.₅, NO₂, and O₃ were respectively 8.2, 6.9, 12.9 and 25.5 µg/m³ during warm months and 18.8, 17.6, 18.4 and 27.8 µg/m³ during cold months. 
The association with CV mortality was statistically significant for all four pollutants during warm months and only for NO₂ during cold months. During the warm months, the cumulative effect of an IQR increase of ozone for the age groups 25-64, 65-84 and 85+ was 1.066 (95% CI: 1.002-1.135), 1.041 (1.008-1.075) and 1.036 (1.013-1.058), respectively. The cumulative effect of an IQR increase of NO₂ for the age group 65-84 was 1.066 (1.020-1.114) during warm months and 1.096 (1.030-1.166) during cold months. The cumulative effect of an IQR increase of PM₁₀ during warm months reached 1.046 (1.011-1.082) and 1.038 (1.015-1.063) for the age groups 65-84 and 85+, respectively. Similar results were observed for PM₂.₅. The short-term effect of air pollution on cardiovascular mortality is thus greater during warm months, at lower pollutant concentrations, than during cold months. Spending more time outside during warm months increases population exposure to air pollution and can, therefore, be a confounding factor for this association. Age can also affect the length of time spent outdoors and the type of physical activity exercised. This study supports the deleterious effect of air pollution on cardiovascular (CV) mortality, which varies according to season and age group in Belgium. Public health measures should, therefore, be adapted to seasonality.
Keywords: air pollution, cardiovascular, mortality, season
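The reported effect sizes are relative risks per IQR increase, i.e. RR = exp(β·IQR) for a log-linear (Poisson-family) model coefficient β per µg/m³. A small illustration of the conversion, with a made-up coefficient and standard error (only the IQR matches the warm-month O₃ value quoted above):

```python
import math

# Illustrative values only: beta and se are hypothetical, not from the study.
beta, se = 0.00135, 0.00042      # log-RR per ug/m3 and its standard error
iqr = 25.5                       # warm-month O3 IQR (ug/m3), as quoted above

rr = math.exp(beta * iqr)                    # relative risk per IQR increase
lo = math.exp((beta - 1.96 * se) * iqr)      # lower 95% confidence limit
hi = math.exp((beta + 1.96 * se) * iqr)      # upper 95% confidence limit
print(f"RR per IQR = {rr:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```

This is why the same coefficient yields different reported RRs in warm and cold months: the IQR used for scaling differs by season.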
Procedia PDF Downloads 164
77 Comparison of Parametric and Bayesian Survival Regression Models in Simulated and HIV Patient Antiretroviral Therapy Data: Case Study of Alamata Hospital, North Ethiopia
Authors: Zeytu G. Asfaw, Serkalem K. Abrha, Demisew G. Degefu
Abstract:
Background: HIV/AIDS remains a major public health problem in Ethiopia, heavily affecting people of productive and reproductive age. We aimed to compare the performance of parametric survival analysis and Bayesian survival analysis using simulations and a real-dataset application focused on determining predictors of HIV patient survival. Methods: Parametric survival models with Exponential, Weibull, Log-normal, Log-logistic, Gompertz and Generalized gamma distributions were considered. A simulation study was carried out with two different settings: informative and noninformative priors. A retrospective cohort study was implemented for HIV-infected patients under highly active antiretroviral therapy in Alamata General Hospital, North Ethiopia. Results: A total of 320 HIV patients were included in the study, 52.19% female and 47.81% male. According to the Kaplan-Meier survival estimates for the two sex groups, females showed better survival time than their male counterparts. The median survival time of HIV patients was 79 months. During the follow-up period, 89 (27.81%) deaths and 231 (72.19%) censored individuals were registered. The average baseline cluster of differentiation 4 (CD4) cell count for HIV/AIDS patients was 126.01, but after a three-year antiretroviral therapy follow-up the average CD4 cell count was 305.74, which was quite encouraging. Age, functional status, tuberculosis screening, past opportunistic infection, baseline CD4 cells, World Health Organization clinical stage, sex, marital status, employment status, occupation type and baseline weight were found to be statistically significant factors for longer survival of HIV patients. The standard error of every covariate in the Bayesian log-normal survival model is smaller than in the classical one. 
Hence, Bayesian survival analysis showed better performance than classical parametric survival analysis when subjective data analysis was performed by incorporating expert opinion and historical knowledge about the parameters. Conclusions: Thus, the HIV/AIDS patient mortality rate could be reduced through timely antiretroviral therapy with special attention to the potential factors. Moreover, the Bayesian log-normal survival model was preferable to the classical log-normal survival model for determining predictors of HIV patients' survival.
Keywords: antiretroviral therapy (ART), Bayesian analysis, HIV, log-normal, parametric survival models
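The Kaplan-Meier estimates reported above come from the standard product-limit construction: at each death time, survival is multiplied by one minus the fraction of at-risk patients who died. A minimal sketch in plain Python, using an invented toy cohort rather than the Alamata data:

```python
# Kaplan-Meier product-limit estimator, sketched in plain Python.
# 'times' are follow-up durations (e.g. months); 'events' flags death (1)
# versus censoring (0). Returns (time, survival probability) pairs at deaths.
def kaplan_meier(times, events):
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    survival = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        at_this_time = sum(1 for tt, e in data if tt == t)
        if deaths > 0:
            survival *= 1.0 - deaths / n_at_risk
            curve.append((t, survival))
        n_at_risk -= at_this_time
        i += at_this_time
    return curve

# Toy cohort: deaths at t = 1, 3, 4; censoring at t = 2, 5.
print(kaplan_meier([1, 2, 3, 4, 5], [1, 0, 1, 1, 0]))
```

Censored observations leave the curve unchanged but still shrink the risk set, which is why, with 72.19% censoring as in this cohort, the estimator differs markedly from a naive death-count proportion.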
Procedia PDF Downloads 195
76 Development of Risk Index and Corporate Governance Index: An Application on Indian PSUs
Authors: M. V. Shivaani, P. K. Jain, Surendra S. Yadav
Abstract:
Public Sector Undertakings (PSUs), being government-owned organizations, have commitments to the economic and social wellbeing of society; this commitment needs to be reflected in their risk-taking, decision-making and governance structures. Therefore, the primary objective of the study is to suggest measures that may lead to improvement in the performance of PSUs. To achieve this objective, two normative frameworks (one relating to risk levels and the other relating to governance structure) are put forth. The risk index is based on nine risks, such as solvency risk, liquidity risk and accounting risk, each scored on a scale of 1 to 5. The governance index is based on eleven variables, such as board independence, diversity and the presence of a risk management committee, each scored on a scale of 1 to 5. The sample consists of 39 PSUs that featured in the Nifty 500 index, and the study covers a 10-year period from April 1, 2005 to March 31, 2015. Return on assets (ROA) and return on equity (ROE) have been used as proxies for firm performance. The control variables used in the model include age of firm, growth rate of firm and size of firm. A dummy variable has also been used to factor in the effects of recession. Given the panel nature of the data and the possibility of endogeneity, dynamic panel data generalized method of moments (Diff-GMM) regression has been used. It is worth noting that the corporate governance index is positively related to both ROA and ROE, indicating that with improvement in governance structure, PSUs tend to perform better. Considering the components of the CGI, it may be suggested that PSUs (i) ensure adequate representation of women on the board, (ii) appoint a Chief Risk Officer, and (iii) constitute a risk management committee. The results also indicate a negative association between the risk index and returns.
These results not only validate the framework used to develop the risk index but also provide a yardstick against which PSUs can benchmark their risk-taking if they want to maximize their ROA and ROE. While constructing the CGI, certain non-compliances were observed, even with mandatory requirements such as the proportion of independent directors. Such infringements call for stringent penal provisions and better monitoring of PSUs. Further, if the Securities and Exchange Board of India (SEBI) and the Ministry of Corporate Affairs (MCA) bring about such reforms in the PSUs and mandate adherence to the normative frameworks put forth in the study, PSUs may have more effective and efficient decision-making, lower risks and hassle-free management, all ultimately leading to better ROA and ROE.
Keywords: PSU, risk governance, diff-GMM, firm performance, risk index
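Both indices above aggregate components scored on a 1-to-5 scale. A minimal sketch of such an additive index (the component names, equal weighting, and [0, 1] rescaling below are illustrative assumptions, not the authors' exact instrument):

```python
# Composite index as the mean of components each scored on a 1-5 scale,
# rescaled to [0, 1]. Component names are illustrative only.
def composite_index(scores):
    if any(not 1 <= s <= 5 for s in scores.values()):
        raise ValueError("each component must be scored on a 1-5 scale")
    mean = sum(scores.values()) / len(scores)
    return (mean - 1) / 4  # map the [1, 5] mean onto [0, 1]

risk_scores = {"solvency": 2, "liquidity": 4, "accounting": 3}
print(composite_index(risk_scores))  # mean score 3.0 maps to 0.5
```

Equal weighting is the simplest choice; a study may instead weight components by perceived materiality, which would replace the plain mean with a weighted one.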
Procedia PDF Downloads 157
75 Discrete Element Simulations of Composite Ceramic Powders
Authors: Julia Cristina Bonaldo, Christophe L. Martin, Severine Romero Baivier, Stephane Mazerat
Abstract:
Alumina refractories are commonly used in the steel and foundry industries. These refractories are prepared through a powder metallurgy route. They are a mixture of hard alumina particles and graphite platelets embedded in a soft carbonic matrix (binder). The powder can be cold pressed isostatically or uniaxially, depending on the application. The compact is then fired to obtain the final product. The quality of the product is governed by the microstructure of the composite and by the process parameters. The compaction behavior and the mechanical properties of the fired product depend greatly on the amount of each phase, on their morphology and on the initial microstructure. In order to better understand the link between these parameters and the macroscopic behavior, we use the Discrete Element Method (DEM) to simulate the compaction process and the fracture behavior of the fired composite. These simulations are coupled with well-designed experiments. Four mixes with various amounts of Al₂O₃ and binder were tested both experimentally and numerically. In DEM, each particle is modelled and the interactions between particles are taken into account through appropriate contact or bonding laws. Here, we model a bimodal mixture of large Al₂O₃ particles and small Al₂O₃ particles covered with a soft binder. This composite is itself mixed with graphite platelets. X-ray tomography images are used to analyze the morphologies of the different components. Large Al₂O₃ particles and graphite platelets are modelled in DEM as sets of particles bonded together. The binder is modelled as a soft shell that covers both large and small Al₂O₃ particles. When two particles with binder indent each other, they first interact through this soft shell. Once a critical indentation is reached (towards the end of compaction), hard Al₂O₃-Al₂O₃ contacts appear. In accordance with experimental data, DEM simulations show that the amounts of Al₂O₃ and binder play a major role in the compaction behavior.
The graphite platelets bend and break during compaction, also contributing to the macroscopic stress. The firing step is modelled in DEM by ascribing bonds to particles that are in contact after compaction. The fracture behavior of the compacted mixture is also simulated and compared with experimental data. Both diametrical compression tests (Brazilian tests) and triaxial tests are carried out. Again, the link between the amount of Al₂O₃ particles and the fracture behavior is investigated. The methodology described here can be generalized to other particulate materials used in the ceramic industry.
Keywords: cold compaction, composites, discrete element method, refractory materials, X-ray tomography
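The two-stage contact described above (deformation of the soft binder shell first, then a hard grain-on-grain contact once a critical indentation is reached) can be sketched as a piecewise normal force law. The stiffnesses and critical indentation below are illustrative assumptions, not the authors' calibrated DEM parameters:

```python
# Piecewise normal contact force for binder-coated grains (illustrative).
# Below the critical indentation delta_c, only the soft binder shell deforms;
# beyond it, the much stiffer Al2O3-Al2O3 contact takes over.
def contact_force(delta, k_soft=1.0e4, k_hard=1.0e7, delta_c=1.0e-5):
    if delta <= 0.0:
        return 0.0  # particles not in contact
    if delta <= delta_c:
        return k_soft * delta  # binder shell response only
    # soft response up to delta_c, then the hard response for the excess overlap
    return k_soft * delta_c + k_hard * (delta - delta_c)

# The force rises gently through the binder, then stiffens sharply,
# mimicking the hard contacts that appear towards the end of compaction.
print(contact_force(5e-6), contact_force(2e-5))
```

Real DEM contact laws are typically nonlinear (e.g. Hertzian, force proportional to indentation to the 3/2 power) and include tangential and bonding terms; the linear piecewise form above only illustrates the soft-shell/hard-core transition.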
Procedia PDF Downloads 137
74 Changes in Cognition of Elderly People: A Longitudinal Study in Kanchanaburi Province, Thailand
Authors: Natchaphon Auampradit, Patama Vapattanawong, Sureeporn Punpuing, Malee Sunpuwan, Tawanchai Jirapramukpitak
Abstract:
Longitudinal studies related to cognitive impairment in the elderly are necessary for health promotion and development. The purposes of this study were (1) to examine changes in the cognition of the elderly over time and (2) to examine the impacts of changes in social determinants of health (SDH) on changes in the cognition of the elderly, using secondary data derived from the Kanchanaburi Demographic Surveillance System (KDSS) of the Institute for Population and Social Research (IPSR), which contained longitudinal data on individuals, households, and villages. The two selected projects were the Health and Social Support for Elderly in KDSS in 2007 and the Population, Economic, Social, Cultural, and Long-term Care Surveillance for Thai Elderly People's Health Promotion in 2011. The sample consisted of 586 elderly people who participated in both projects. SDH included living arrangement; social relationships with children, relatives, and friends; household asset-based wealth index; household monthly income; loans for living; loans for investment; and working status. Cognitive impairment was measured by category fluency and delayed recall. This study employed a Generalized Estimating Equation (GEE) model to investigate changes in cognition, taking SDH and other variables such as age, gender, marital status, education, and depression into the model. The unstructured correlation structure was selected for the analysis. The results revealed that 24 percent of the elderly had cognitive impairment at baseline. About 13 percent still had cognitive impairment during 2007 until 2011. About 21 percent and 11 percent of the elderly showed cognitive decline and cognitive improvement, respectively. The cross-sectional analysis showed that household asset-based wealth index, social relationship with friends, working status, age, marital status, education, and depression were significantly associated with cognitive impairment.
The GEE model revealed longitudinal effects of household asset-based wealth index and working status on cognition during 2007 until 2011. There was no longitudinal effect of social conditions on cognition. Elderly people living in households with a richer asset-based wealth index, those still employed, and those who were younger were less likely to have cognitive impairment. The results strongly suggested that a poorer household asset-based wealth index and being unemployed serve as risk factors for cognitive impairment over time. Increasing age remained the major risk factor for cognitive impairment as well.
Keywords: changes in cognition, cognitive impairment, elderly, KDSS, longitudinal study
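The four change categories reported above (stable impairment, decline, improvement, stable normal cognition) follow from cross-tabulating binary impairment status at the two waves. A minimal sketch, assuming impairment is already coded 0/1 at each wave (the toy cohort below is invented):

```python
# Classify cognitive change between two waves from binary impairment flags.
def change_category(impaired_2007, impaired_2011):
    if impaired_2007 and impaired_2011:
        return "stable impaired"
    if not impaired_2007 and impaired_2011:
        return "decline"
    if impaired_2007 and not impaired_2011:
        return "improvement"
    return "stable normal"

# Tally the categories over a cohort of (wave-1, wave-2) flag pairs.
def tabulate(pairs):
    counts = {}
    for a, b in pairs:
        cat = change_category(a, b)
        counts[cat] = counts.get(cat, 0) + 1
    return counts

print(tabulate([(1, 1), (0, 1), (1, 0), (0, 0), (0, 1)]))
```

This tabulation only describes the transitions; estimating which SDH variables predict them is what the GEE model with its unstructured within-person correlation structure is for.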
Procedia PDF Downloads 139
73 Particle Size Characteristics of Aerosol Jets Produced by a Low Powered E-Cigarette
Authors: Mohammad Shajid Rahman, Tarik Kaya, Edgar Matida
Abstract:
Electronic cigarettes, also known as e-cigarettes, may have become a tool to improve smoking cessation due to their ability to provide nicotine at a selected rate. Unlike traditional cigarettes, which produce toxic elements from tobacco combustion, e-cigarettes generate aerosols by heating a liquid solution (commonly a mixture of propylene glycol, vegetable glycerin, nicotine and some flavoring agents). However, caution still needs to be taken when using e-cigarettes due to the presence of addictive nicotine and some harmful substances produced by the heating process. The particle size distribution (PSD) and the associated velocities generated by e-cigarettes have a significant influence on aerosol deposition in different regions of the human respiratory tract. Furthermore, low actuation power is beneficial in aerosol-generating devices since it results in reduced emission of toxic chemicals. For e-cigarettes, low heating powers can be considered powers below 10 W, compared to the wide range of powers (0.6 to 70.0 W) studied in the literature. Given its importance for inhalation risk reduction, a deeper understanding of the particle size characteristics of e-cigarettes demands thorough investigation. However, a comprehensive study of the PSD and velocities of e-cigarettes under standard testing conditions at relatively low heating powers is still lacking. The present study aims to measure the particle number count and size distribution of undiluted aerosols of a latest fourth-generation e-cigarette at low powers, up to 6.5 W, using a real-time particle counter (time-of-flight method). In addition, the temporal and spatial evolution of the particle size and velocity distributions of the aerosol jets is examined using the phase Doppler anemometry (PDA) technique. To the authors' best knowledge, the application of PDA to e-cigarette aerosol measurement is rarely reported.
In the present study, preliminary results on the particle number count of undiluted aerosols measured by the time-of-flight method showed that an increase of heating power from 3.5 W to 6.5 W resulted in an enhanced asymmetry in the PSD, deviating from the log-normal distribution. This can be considered an artifact of the rapid vaporization, condensation and coagulation processes acting on the aerosols at higher heating power. A novel mathematical expression, combining exponential, Gaussian and polynomial (EGP) distributions, is proposed to describe the asymmetric PSD successfully. The count median aerodynamic diameter and geometric standard deviation lay within ranges of about 0.67 μm to 0.73 μm and 1.32 to 1.43, respectively, while the power varied from 3.5 W to 6.5 W. Laser Doppler velocimetry (LDV) and PDA measurements suggested a typical decay of the centerline streamwise mean velocity of the aerosol jet along with a reduction in particle sizes. In the final submission, a thorough literature review, a detailed description of the experimental procedure and a discussion of the results will be provided. The particle size and turbulence characteristics of the aerosol jets will be further examined by analyzing the arithmetic mean diameter, volumetric mean diameter, volume-based mean diameter, streamwise mean velocity and turbulence intensity. The present study has potential implications for PSD simulation and the validation of aerosol dosimetry models, leading to improvements in related aerosol-generating devices.
Keywords: e-cigarette aerosol, laser Doppler velocimetry, particle size distribution, particle velocity, phase Doppler anemometry
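The two summary statistics quoted above, the count median aerodynamic diameter (CMAD) and the geometric standard deviation (GSD), are conventionally obtained as the count median of the diameters and the exponential of the standard deviation of the log-diameters. A minimal sketch with invented diameters (not the measured data of this study):

```python
import math
import statistics

# CMAD: median diameter by particle count (same units as the input).
# GSD: exp of the standard deviation of ln(diameter); 1.0 means monodisperse.
def cmad_and_gsd(diameters_um):
    cmad = statistics.median(diameters_um)
    log_d = [math.log(d) for d in diameters_um]
    gsd = math.exp(statistics.pstdev(log_d))
    return cmad, gsd

cmad, gsd = cmad_and_gsd([0.5, 0.6, 0.7, 0.8, 1.1])
print(round(cmad, 2), round(gsd, 2))
```

For a perfectly log-normal PSD these two numbers characterize the whole distribution, which is why a deviation from log-normality, as observed here at higher power, motivates a richer fitting form such as the proposed EGP expression.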
Procedia PDF Downloads 47
72 mHealth-based Diabetes Prevention Program among Mothers with Abdominal Obesity: A Randomized Controlled Trial
Authors: Jia Guo, Qinyuan Huang, Qinyi Zhong, Yanjing Zeng, Yimeng Li, James Wiley, Kin Cheung, Jyu-Lin Chen
Abstract:
Context: Mothers with abdominal obesity, particularly in China, face challenges in managing their health due to family responsibilities. Existing diabetes prevention programs do not cater specifically to this demographic. Research Aim: To assess the feasibility, acceptability, and efficacy of an mHealth-based diabetes prevention program tailored for Chinese mothers with abdominal obesity in reducing weight-related variables and diabetes risk. Methodology: A randomized controlled trial was conducted in Changsha, China, in which the mHealth group received personalized modules and health messages, while the control group received general health education. Data were collected at baseline, 3 months, and 6 months. Findings: The mHealth intervention significantly improved waist circumference, modifiable diabetes risk scores, daily steps, self-efficacy for physical activity, social support for physical activity, and physical health satisfaction compared to the control group. However, no differences were found in BMI or certain other variables. Theoretical Importance: The study demonstrates the feasibility and efficacy of a tailored mHealth intervention for Chinese mothers with abdominal obesity, emphasizing the potential for such programs to improve health outcomes in this population. Data Collection: Data on variables including weight-related measures, diabetes risk scores, and behavioral and psychological factors were collected at baseline, 3 months, and 6 months from participants in the mHealth and control groups. Analysis Procedures: Generalized estimating equations were used to analyze the data collected from the mHealth and control groups at the different time points of the study period. Question Addressed: The study addressed the effectiveness of an mHealth-based diabetes prevention program tailored for Chinese mothers with abdominal obesity in improving various health outcomes compared to traditional general health education approaches.
Conclusion: The tailored mHealth intervention proved to be feasible and effective in improving weight-related variables, physical activity, and physical health satisfaction among Chinese mothers with abdominal obesity, highlighting its potential for delivering diabetes prevention programs to this population.
Keywords: type 2 diabetes, mHealth, obesity, prevention, mothers
Procedia PDF Downloads 56
71 Cognitive Control Moderates the Concurrent Effect of Autistic and Schizotypal Traits on Divergent Thinking
Authors: Julie Ramain, Christine Mohr, Ahmad Abu-Akel
Abstract:
Divergent thinking, a cognitive component of creativity, and particularly the ability to generate unique and novel ideas, has been linked to both autistic and schizotypal traits. However, to our knowledge, the concurrent effect of these trait dimensions on divergent thinking has not been investigated. Moreover, it has been suggested that creativity is associated with different types of attention and cognitive control, and consequently with how information is processed in a given context. Intriguingly, consistent with the diametric model, autistic and schizotypal traits have been associated with contrasting attentional and cognitive control styles. Positive schizotypal traits have been associated with reactive cognitive control and attentional flexibility, while autistic traits have been associated with proactive cognitive control and an increased focus of attention. The current study investigated the relationship between divergent thinking, autistic and schizotypal traits, and cognitive control in a non-clinical sample of 83 individuals (males = 42%; mean age = 22.37, SD = 2.93), sufficient to detect a medium effect size. Divergent thinking was evaluated with an adapted version of the Figural Torrance Test of Creative Thinking. Crucially, since we were interested in testing divergent thinking productivity across contexts, participants were asked to generate items from basic shapes in four different contexts. The variance of the proportion of unique to total responses across contexts represented a measure of context adaptability, with lower variance indicating greater context adaptability. Cognitive control was estimated with the Behavioral Proactive Index of the AX-CPT task, with higher scores representing the ability to actively maintain goal-relevant information in a sustained, anticipatory manner. Autistic and schizotypal traits were assessed with the Autism Quotient (AQ) and the Community Assessment of Psychic Experiences (CAPE-42).
Generalized linear models revealed a three-way interaction of autistic traits, positive schizotypal traits, and proactive cognitive control associated with increased context adaptability. Specifically, the concurrent effect of autistic and positive schizotypal traits on increased context adaptability was moderated by the level of proactive control and was only significant when proactive cognitive control was high. Our study reveals that autistic and positive schizotypal traits interactively facilitate the capacity to generate unique ideas across various contexts. However, this effect depends on cognitive control mechanisms indicative of the ability to proactively maintain attention when needed. The current results point to a unique profile of divergent thinkers who are able to tap both systematic and flexible processing modes within and across contexts. This is particularly intriguing, as such a combination of phenotypes has been proposed to explain the genius of Beethoven, Nash, and Newton.
Keywords: autism, schizotypy, creativity, cognitive control
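The context adaptability measure described above, the variance across the four contexts of the proportion of unique to total responses, can be sketched directly (the response counts below are invented for illustration):

```python
import statistics

# Each context contributes a (unique_responses, total_responses) pair.
def context_adaptability(counts):
    proportions = [u / t for u, t in counts]
    # Population variance across contexts; lower variance = higher adaptability.
    return statistics.pvariance(proportions)

stable = [(4, 10), (5, 10), (4, 10), (5, 10)]   # similar uniqueness everywhere
uneven = [(1, 10), (9, 10), (2, 10), (8, 10)]   # uniqueness swings by context
print(context_adaptability(stable) < context_adaptability(uneven))
```

Note that the measure is deliberately insensitive to overall productivity: a participant who is uniformly original in every context scores the same low variance whether their unique-response rate is 40% or 90%.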
Procedia PDF Downloads 135
70 Anonymity and Irreplaceability: Gross Anatomical Practices in Japanese Medical Education
Authors: Ayami Umemura
Abstract:
Without exception, all the bodies dissected in gross anatomical practice are bodies that have lived irreplaceable lives, laughing and talking with family and friends. While medical education aims to cultivate medical knowledge that is universally applicable to all human bodies, it relies on unique, irreplaceable, and singular entities. In this presentation, we explore the 'irreplaceable relationship' that is cultivated between medical students and anonymous cadavers during gross anatomical practice, drawing on Emmanuel Levinas's 'ethics of the face' and Martin Buber's discussion of 'I-Thou.' Through this, we aim to present 'a different ethic' that emerges only in the context of face-to-face relationships, one that differs from the generalized, institutionalized, mass-produced ethics seen in so-called 'ethics codes.' Since the 1990s, there has been a movement around the world to use gross anatomical practice as an 'educational tool' for medical professionalism and medical ethics, and some educational institutions have started disclosing the actual names, occupations, and birthplaces of the cadavers to medical students. These efforts have also been criticized on the grounds that they undermine medical calmness. In any case, the issue here is that this information is all about a past that medical students never know directly. The critical fact that medical students build relationships from scratch and spend precious time together without any information about the cadavers' lives before death is overlooked. Amid gross anatomical practice, a medical student is exposed to anonymous cadavers with faces, touching and feeling them. In this presentation, we examine a collection of essays on gross anatomical practice written by medical students and collected by the Japanese Association for Volunteer Body Donation from medical students across the country since 1978.
There, we see the students calling out to the cadaver, being called out to, being encouraged, superimposing the cadavers on their own immediate family, regretting the parting, and shedding tears. Then, medical students can be seen addressing the dead body in the second person singular, 'you.' These behaviors reveal an irreplaceable relationship between the anonymous cadavers and the medical students. The moment they become involved in an irreplaceable relationship between 'I and you,' an accidental and anonymous encounter becomes inevitable. When medical students notice that they are the inevitable recipients of voluntary and addressless gifts, they pledge to become 'good doctors' who owe a debt to these anonymous persons. This presentation aims to present 'a different ethic' based on the uniqueness and irreplaceability that come from the faces of others embedded in each context, an ethic different from 'routine' and 'institutionalized' ethics. It can only be realized 'because of anonymity.'
Keywords: anonymity, irreplaceability, uniqueness, singularity, Emmanuel Levinas, Martin Buber, Alain Badiou, medical education
Procedia PDF Downloads 60
69 Bank Internal Controls and Credit Risk in Europe: A Quantitative Measurement Approach
Authors: Ellis Kofi Akwaa-Sekyi, Jordi Moreno Gené
Abstract:
Managerial actions that negatively profile banks and impair corporate reputation are addressed through effective internal control systems. Disregard for acceptable standards and procedures for granting credit has affected bank loan portfolios and could be cited as a cause of the crises in some European countries. The study intends to determine the effectiveness of internal control systems, investigate whether perceived agency problems exist on the part of board members, and establish the relationship between internal controls and credit risk among listed banks in the European Union. Drawing theoretical support from the behavioural compliance and agency theories, about seventeen internal control variables (drawn from the revised COSO framework), together with bank-specific, country, stock market and macro-economic variables, will be involved in the study. A purely quantitative approach will be employed to model internal control variables covering the control environment, risk management, control activities, information and communication, and monitoring. Panel data from 2005-2014 on listed banks from 28 European Union countries will be used for the study. Hypotheses will be tested, and Generalized Least Squares (GLS) regression will be run to establish the relationship between dependent and independent variables. The Hausman test will be used to select between the random and fixed effects models. It is expected that listed banks will have sound internal control systems, but their effectiveness cannot be confirmed. A perceived agency problem on the part of the board of directors is expected to be confirmed. The study expects a significant effect of internal controls on credit risk. The study will uncover another perspective on internal controls: not only an operational risk issue but a credit risk issue too.
Banks will be cautious to observe effective internal control systems as an ethical and socially responsible act, since the collapse of financial institutions as a result of excessive default is a major source of contagion. This study deviates from the usual primary data approach to measuring internal control variables and instead models internal control variables quantitatively for the panel data. Thus, a grey area in approaching the revised COSO framework for internal controls is opened for further research. Most bank failures and crises could be averted if effective internal control systems were religiously adhered to.
Keywords: agency theory, credit risk, internal controls, revised COSO framework
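The Hausman test mentioned above compares the fixed effects and random effects estimates of the same coefficient: if they diverge by more than their sampling variability allows, random effects is rejected. A minimal single-coefficient sketch (real applications use the full coefficient vectors and covariance matrices, and the values below are invented):

```python
# Scalar Hausman statistic: H = (b_fe - b_re)^2 / (var_fe - var_re).
# Under the null (random effects is consistent and efficient), H follows a
# chi-squared distribution with 1 df, so H > 3.84 rejects at the 5% level.
def hausman_scalar(b_fe, var_fe, b_re, var_re):
    if var_fe <= var_re:
        raise ValueError("fixed effects variance should exceed random effects variance")
    return (b_fe - b_re) ** 2 / (var_fe - var_re)

h = hausman_scalar(b_fe=0.42, var_fe=0.010, b_re=0.30, var_re=0.004)
print(round(h, 2), "prefer fixed effects" if h > 3.84 else "random effects acceptable")
```

The variance difference appears in the denominator because, under the null, the random effects estimator is efficient, so the variance of the difference of the two estimators reduces to the difference of their variances.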
Procedia PDF Downloads 315