Search results for: local-global bias
93 An Inquiry of the Impact of Flood Risk on Housing Market with Enhanced Geographically Weighted Regression
Authors: Lin-Han Chiang Hsieh, Hsiao-Yi Lin
Abstract:
This study aims to determine the impact of the disclosure of a flood potential map on housing prices. The disclosure is supposed to mitigate market failure by reducing information asymmetry. Opponents, on the other hand, argue that official disclosure of simulated results will only create unnecessary disturbances in the housing market. This study identifies the impact of the disclosure of the flood potential map by comparing the hedonic price of flood potential before and after the disclosure. The flood potential map used in this study was published by the Taipei municipal government in 2015 and is the result of a comprehensive simulation based on geographical, hydrological, and meteorological factors. Residential property sales data from 2013 to 2016 are used, collected from the actual sales price registration system of the Department of Land Administration (DLA). The results show that the impact of flood potential on the residential real estate market is statistically significant both before and after the disclosure, but the trend is clearer after the disclosure, suggesting that the disclosure does have an impact on the market. The results also show that the impact of flood potential differs by the severity and frequency of precipitation: the negative impact of a relatively mild, high-frequency flood potential is stronger than that of a heavy, low-probability flood potential, indicating that home buyers are more concerned with the frequency than with the intensity of flooding. Another contribution of this study is methodological. Classic hedonic price analysis with OLS regression suffers from two spatial problems: an endogeneity problem caused by omitted spatially related variables, and a heterogeneity concern arising from the presumption that regression coefficients are spatially constant. These two problems are seldom considered in a single model. This study deals with the endogeneity and heterogeneity problems together by combining a spatial fixed-effect model with geographically weighted regression (GWR). A body of literature applying GWR indicates that the hedonic price of certain environmental assets varies spatially. Since the endogeneity problem is usually not considered in typical GWR models, it is arguable that omitted spatially related variables might bias the results of GWR models. By combining the spatial fixed-effect model and GWR, this study concludes that the effect of the flood potential map is highly sensitive to location, even after controlling for spatial autocorrelation. The main policy implication is that it is improper to estimate the potential benefit of a flood prevention policy by simply multiplying the hedonic price of flood risk by the number of houses, as the effect of flood prevention may vary dramatically by location.
Keywords: flood potential, hedonic price analysis, endogeneity, heterogeneity, geographically-weighted regression
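To make the combined fixed-effect/GWR step above concrete, here is a minimal Python sketch of geographically weighted regression run on district-demeaned prices; the synthetic data, variable names, Gaussian kernel, and bandwidth are all illustrative assumptions rather than details from the study.

```python
import numpy as np

def gwr_coefficients(X, y, coords, bandwidth):
    """Geographically weighted regression: fit a separate weighted OLS
    at each observation location using a Gaussian distance kernel."""
    n, k = X.shape
    betas = np.empty((n, k))
    for i in range(n):
        # Gaussian kernel weights based on distance to location i
        d = np.linalg.norm(coords - coords[i], axis=1)
        w = np.exp(-0.5 * (d / bandwidth) ** 2)
        # Weighted least squares: (X'WX)^-1 X'Wy
        XtW = X.T * w
        betas[i] = np.linalg.solve(XtW @ X, XtW @ y)
    return betas

# Hypothetical example: price on flood potential with district fixed effects
rng = np.random.default_rng(0)
n = 200
coords = rng.uniform(0, 10, (n, 2))      # property locations
flood = rng.uniform(0, 1, n)             # flood potential score
district = rng.integers(0, 4, n)         # spatial fixed-effect groups
price = 5 - 2 * flood + 0.5 * district + rng.normal(0, 0.3, n)

# Demean prices within districts (a crude fixed-effect step), then run GWR
fe = np.array([price[district == d].mean() for d in range(4)])
y_tilde = price - fe[district]
X = np.column_stack([np.ones(n), flood - flood.mean()])
local_beta = gwr_coefficients(X, y_tilde, coords, bandwidth=2.0)
print("flood coefficient varies by location:",
      local_beta[:, 1].min(), local_beta[:, 1].max())
```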
92 Ethical Artificial Intelligence: An Exploratory Study of Guidelines
Authors: Ahmad Haidar
Abstract:
The rapid adoption of artificial intelligence (AI) technology carries unforeseen risks such as privacy violations, unemployment, and algorithmic bias, prompting research institutions, governments, and companies to develop principles of AI ethics. The extensive and diverse literature on AI lacks an analysis of how these principles have evolved in recent years. This paper has two fundamental purposes. The first is to provide insights into how the principles of AI ethics have changed recently, including concepts like risk management and public participation; to this end, a NOISE (Needs, Opportunities, Improvements, Strengths, & Exceptions) analysis is presented. The second is to offer a framework for building ethical AI linked to sustainability. This research adopts an explorative, more specifically inductive, approach to address the theoretical gap. Consequently, the paper tracks the different efforts toward “trustworthy AI” and “ethical AI,” compiling a list of 12 documents released from 2017 to 2022. The analysis of this list unifies the different approaches toward trustworthy AI in two steps: first, splitting the principles into two categories, technical and net benefit, and second, testing the frequency of each principle, yielding the technical principles that may be useful for stakeholders considering the lifecycle of AI, or what is known as sustainable AI. Sustainable AI is the third wave of AI ethics and a movement to drive change throughout the entire lifecycle of AI products (i.e., idea generation, training, re-tuning, implementation, and governance) in the direction of greater ecological integrity and social fairness. In this vein, the results suggest transparency, privacy, fairness, safety, autonomy, and accountability as recommended technical principles to include in the lifecycle of AI. Another contribution is to capture the different bases that aid the process of AI for sustainability (e.g., towards the sustainable development goals). The results indicate data governance, do no harm, human well-being, and risk management as crucial AI-for-sustainability principles. The study’s last contribution clarifies how the principles evolved. To illustrate, in 2018, the Montreal Declaration mentioned principles including well-being, autonomy, privacy, solidarity, democratic participation, equity, and diversity. In 2021, notions emerged from the European Commission proposal, including public trust, public participation, scientific integrity, risk assessment, flexibility, benefit and cost, and interagency coordination. The study design strengthens the validity of previous studies, and we advance knowledge in trustworthy AI by considering recent documents, linking principles with sustainable AI and AI for sustainability, and shedding light on the evolution of guidelines over time.
Keywords: artificial intelligence, AI for sustainability, declarations, framework, regulations, risks, sustainable AI
91 New Gas Geothermometers for the Prediction of Subsurface Geothermal Temperatures: An Optimized Application of Artificial Neural Networks and Geochemometric Analysis
Authors: Edgar Santoyo, Daniel Perez-Zarate, Agustin Acevedo, Lorena Diaz-Gonzalez, Mirna Guevara
Abstract:
Four new gas geothermometers have been derived from a multivariate geochemometric analysis of a geothermal fluid chemistry database; two of them use the natural logarithm of CO₂ and H₂S concentrations (mmol/mol), respectively, and the other two use the natural logarithm of the H₂S/H₂ and CO₂/H₂ ratios. As a strict compilation criterion, the database was created with the gas-phase composition of fluids and bottomhole temperatures (BHTM) measured in producing wells. The calibration of the geothermometers was based on the geochemical relationship existing between the gas-phase composition of well discharges and the equilibrium temperatures measured at bottomhole conditions. Multivariate statistical analysis together with artificial neural networks (ANN) was successfully applied for correlating the gas-phase compositions and the BHTM. The predicted or simulated bottomhole temperatures (BHTANN), defined as output neurons or simulation targets, were statistically compared with the measured temperatures (BHTM). The coefficients of the new geothermometers were obtained from an optimized self-adjusting training algorithm applied to approximately 2,080 ANN architectures with 15,000 simulation iterations each. The self-adjusting training algorithm used the well-known Levenberg-Marquardt model to determine: (i) the number of neurons in the hidden layer; (ii) the training factor and the training patterns of the ANN; (iii) the linear correlation coefficient, R; (iv) the synaptic weighting coefficients; and (v) the root mean squared error (RMSE), used to evaluate the prediction performance between the BHTM and the simulated BHTANN. The prediction performance of the new gas geothermometers, together with predictions inferred from sixteen previously developed and well-known gas geothermometers, was statistically evaluated using an external database to avoid a bias problem. The statistical evaluation was performed through the analysis of the lowest RMSE values computed among the predictions of all the gas geothermometers. The new gas geothermometers developed in this work have been successfully used for predicting subsurface temperatures in high-temperature geothermal systems of Mexico (e.g., Los Azufres, Mich., Los Humeros, Pue., and Cerro Prieto, B.C.) as well as in a blind geothermal system (known as Acoculco, Puebla). The latest results of the gas geothermometers (inferred from gas-phase compositions of soil-gas bubble emissions) compare well with the temperatures measured in two wells of the blind geothermal system of Acoculco, Puebla (Mexico). Details of this new development are outlined in the present research work. Acknowledgements: The authors acknowledge the funding received from the CeMIE-Geo P09 project (SENER-CONACyT).
Keywords: artificial intelligence, gas geochemistry, geochemometrics, geothermal energy
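The abstract's core workflow, fitting a small neural network to gas-ratio data with the Levenberg-Marquardt optimizer and scoring it by RMSE, can be illustrated with the Python sketch below; the network size, the synthetic ln(CO₂/H₂)-style data, and all coefficients are assumptions for illustration, not the study's actual geothermometer.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical training data: a ln gas ratio vs. bottomhole temperature (°C)
rng = np.random.default_rng(1)
x = rng.uniform(2.0, 8.0, 120)                    # ln gas ratio (assumed)
t = 150.0 + 25.0 * x + rng.normal(0, 5.0, 120)    # synthetic BHT

def net(params, x, hidden=4):
    """One-hidden-layer network with tanh activation."""
    w1 = params[:hidden]; b1 = params[hidden:2*hidden]
    w2 = params[2*hidden:3*hidden]; b2 = params[-1]
    h = np.tanh(np.outer(x, w1) + b1)             # shape (n, hidden)
    return h @ w2 + b2

def residuals(params):
    return net(params, x) - t

p0 = rng.normal(0, 0.5, 3 * 4 + 1)
fit = least_squares(residuals, p0, method='lm')   # Levenberg-Marquardt
rmse = np.sqrt(np.mean(fit.fun ** 2))
print(f"training RMSE: {rmse:.2f} °C")
```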
90 The Influence of Operational Changes on Efficiency and Sustainability of Manufacturing Firms
Authors: Dimitrios Kafetzopoulos
Abstract:
Nowadays, companies are increasingly concerned with adopting their own strategies for greater efficiency and sustainability. Dynamic environments are fertile fields for developing operational changes. For this purpose, organizations need to implement an advanced management philosophy that fosters changes in companies' operations. Changes refer to new applications of knowledge, ideas, methods, and skills that can generate unique capabilities and leverage an organization's competitiveness. Thus, in order to survive and compete in global and niche markets, companies should incorporate the adoption of operational changes into their strategy with regard to both their products and their processes. Creating the appropriate culture for changes in products and processes helps companies gain a sustainable competitive advantage in the market. The purpose of this study is therefore to investigate the role of both incremental and radical changes in a company's operations, taking into consideration not only product changes but also process changes, and to measure the impact of these two types of change on the business efficiency and sustainability of Greek manufacturing companies. The above discussion leads to the following hypotheses: H1: Radical operational changes have a positive impact on firm efficiency. H2: Incremental operational changes have a positive impact on firm efficiency. H3: Radical operational changes have a positive impact on firm sustainability. H4: Incremental operational changes have a positive impact on firm sustainability. To achieve the objectives of the present study, a research study was carried out in Greek manufacturing firms. A total of 380 valid questionnaires were received, and a seven-point Likert scale was used to measure all the questionnaire items of the constructs (radical changes, incremental changes, efficiency, and sustainability). The constructs of radical and incremental operational changes, each treated as one variable, were subdivided into product and process changes. Non-response bias, common method variance, multicollinearity, multivariate normal distribution, and outliers were checked. Moreover, the unidimensionality, reliability, and validity of the latent factors were assessed. Exploratory factor analysis and confirmatory factor analysis were applied to check the factorial structure of the constructs and the factor loadings of the items. To test the research hypotheses, the SEM technique was applied (maximum likelihood method); the goodness of fit of the basic structural model indicates an acceptable fit of the proposed model. According to the present study's findings, radical operational changes and incremental operational changes significantly influence both the efficiency and the sustainability of Greek manufacturing firms. However, it is in the dimension of radical operational changes, meaning those in process and product, that the most significant contributors to firm efficiency are to be found, while their influence on sustainability is low albeit statistically significant. On the contrary, incremental operational changes influence sustainability more than firms' efficiency. From the above, it is apparent that embodying the concept of change in a firm's product and process operational practices has direct and positive consequences for what it achieves from an efficiency and sustainability perspective.
Keywords: incremental operational changes, radical operational changes, efficiency, sustainability
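One step named above, assessing the reliability of the Likert-scale constructs, can be illustrated with a short Python sketch of Cronbach's alpha; the study itself used SEM software, and the item matrix below is a purely hypothetical example.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) Likert matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical 7-point Likert responses for a 4-item "radical changes" scale
rng = np.random.default_rng(2)
latent = rng.normal(4, 1, 300)
responses = np.clip(
    np.round(latent[:, None] + rng.normal(0, 0.8, (300, 4))), 1, 7)
print(f"alpha = {cronbach_alpha(responses):.2f}")
```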
89 An Adaptive Oversampling Technique for Imbalanced Datasets
Authors: Shaukat Ali Shahee, Usha Ananthakumar
Abstract:
A data set exhibits the class imbalance problem when one class has very few examples compared to the other class; this is also referred to as between-class imbalance. Traditional classifiers fail to classify minority class examples correctly due to their bias towards the majority class. Apart from between-class imbalance, within-class imbalance, where classes are composed of different numbers of sub-clusters containing different numbers of examples, also deteriorates the performance of the classifier. Many methods have previously been proposed for handling the imbalanced dataset problem; they can be classified into four categories: data preprocessing, algorithm-based methods, cost-based methods, and classifier ensembles. Data preprocessing techniques have shown great potential as they attempt to improve the data distribution rather than the classifier. A data preprocessing technique handles class imbalance either by increasing the minority class examples or by decreasing the majority class examples. Decreasing the majority class examples leads to loss of information, and when the minority class has an absolute rarity, removing majority class examples is generally not recommended. Existing methods for handling class imbalance do not address both between-class imbalance and within-class imbalance simultaneously. In this paper, we propose a method that handles between-class imbalance and within-class imbalance simultaneously for binary classification problems. Removing both imbalances simultaneously eliminates the biases of the classifier towards bigger sub-clusters by minimizing the error domination of bigger sub-clusters in the total error. The proposed method uses model-based clustering to find the presence of sub-clusters or sub-concepts in the dataset. The number of examples oversampled among the sub-clusters is determined based on the complexity of the sub-clusters. The method also takes into consideration the scatter of the data in the feature space and adaptively copes with unseen test data using the Lowner-John ellipsoid to increase the accuracy of the classifier. In this study, a neural network is used, as it is one classifier where the total error is minimized, and removing the between-class and within-class imbalance simultaneously helps the classifier give equal weight to all sub-clusters irrespective of the classes. The proposed method is validated on 9 publicly available data sets and compared with three existing oversampling techniques that rely on the spatial location of minority class examples in the Euclidean feature space. The experimental results show the proposed method to be statistically significantly superior to the other methods in terms of various accuracy measures. The proposed method can thus serve as a good alternative for handling problem domains such as credit scoring, customer churn prediction, and financial distress that typically involve imbalanced data sets.
Keywords: classification, imbalanced dataset, Lowner-John ellipsoid, model based clustering, oversampling
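A simplified Python sketch of the clustering-then-oversampling idea described above follows; it uses a BIC-selected Gaussian mixture as the model-based clustering step and allocates synthetic points inversely to sub-cluster size, which is an assumed stand-in for the paper's complexity-based allocation and omits the Lowner-John ellipsoid step.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def oversample_minority(X_min, n_target, max_components=3, seed=0):
    """Find sub-clusters in the minority class with a Gaussian mixture
    (BIC-selected), then draw synthetic points from each fitted component,
    giving smaller sub-clusters a proportionally larger share (assumed
    proxy for the paper's complexity-based allocation)."""
    rng = np.random.default_rng(seed)
    models = [GaussianMixture(k, random_state=seed).fit(X_min)
              for k in range(1, max_components + 1)]
    gmm = min(models, key=lambda m: m.bic(X_min))
    labels = gmm.predict(X_min)
    sizes = np.bincount(labels, minlength=gmm.n_components)
    # Inverse-size allocation: rarer sub-concepts get more synthetic points
    alloc = 1.0 / np.maximum(sizes, 1)
    alloc = np.round(alloc / alloc.sum() * n_target).astype(int)
    synth = [rng.multivariate_normal(gmm.means_[c], gmm.covariances_[c], n)
             for c, n in enumerate(alloc) if n > 0]
    return np.vstack(synth)

# Hypothetical imbalanced minority class with two sub-clusters
rng = np.random.default_rng(3)
X_min = np.vstack([rng.normal([0, 0], 0.3, (40, 2)),
                   rng.normal([3, 3], 0.3, (8, 2))])
X_new = oversample_minority(X_min, n_target=60)
print(X_new.shape)
```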
88 Effective Service Provision and Multi-Agency Working in Service Providers for Children and Young People with Special Educational Needs and Disabilities: A Mixed Methods Systematic Review
Authors: Natalie Tyldesley-Marshall, Janette Parr, Anna Brown, Yen-Fu Chen, Amy Grove
Abstract:
It is widely recognised in policy and research that the provision of services for children and young people (CYP) with special educational needs and disabilities (SEND) is enhanced when health and social care and education services collaborate and interact effectively. In the UK, there have been significant changes to policy and provisions which support and improve collaboration. However, professionals responsible for implementing these changes face multiple challenges, including a lack of specific implementation guidance or a framework to illustrate how effective multi-agency working could or should work. This systematic review identifies the key components of effective multi-agency working in services for CYP with SEND and the most effective forms of partnership working in this setting. The review highlights interventions that lead to service improvements and the conditions in the local area that support and encourage success. A protocol was written and registered with PROSPERO (registration: CRD42022352194). Searches were conducted on several health, care, education, and applied social science databases from the year 2012 onwards. Citation chaining was undertaken, as well as broader grey literature searching to enrich the findings. Qualitative, quantitative, and mixed methods studies and systematic reviews were included, assessed independently, and critically appraised or assessed for risk of bias using appropriate tools based on study design. Data were extracted in NVivo software and checked by a more experienced researcher. A convergent segregated approach to synthesis and integration was used, in which the quantitative and qualitative data were synthesised independently and then integrated using a joint display integration matrix. The findings demonstrate the key ingredients of effective partnership working for services delivering SEND. Interventions deemed effective are described, and lessons learned across interventions are summarised. The results will be of interest to educators and health and social care professionals who provide services to those with SEND. They will also be used to develop policy recommendations for how UK healthcare, social care, and education services for CYP with SEND aged 0-25 can most effectively collaborate and achieve service improvement. The review also identifies gaps in the literature to recommend areas for future research. Funding for this review was provided by the Department for Education.
Keywords: collaboration, joint commissioning, service delivery, service improvement
87 Study on the Geometric Similarity in Computational Fluid Dynamics Calculation and the Requirement of Surface Mesh Quality
Authors: Qian Yi Ooi
Abstract:
At present, airfoil parameters are still designed and optimized according to the scale of conventional aircraft, with some slight deviations arising from scale differences. However, insufficient parameters or poor surface mesh quality are likely to occur if these small deviations are carried over to a future civil aircraft whose size is quite different from conventional aircraft, such as a blended-wing-body (BWB) aircraft with future potential, resulting in large deviations in geometric similarity in computational fluid dynamics (CFD) simulations. To avoid this situation, this study examines the geometric similarity of airfoil parameters and the required surface mesh quality in CFD calculations, in order to establish how well different parameterization methods apply at different airfoil scales. The research objects are three airfoil scales, comprising the wing root and wingtip of a conventional civil aircraft and the wing root of a giant hybrid wing, each parameterized by three methods to compare the calculation differences between airfoils of different sizes. The study holds constant the NACA 0012 airfoil, a Reynolds number of 10 million, an angle of attack of zero, a C-grid for meshing, and the k-epsilon (k-ε) turbulence model. The experimental variables are three airfoil parameterization methods: the point cloud method, the B-spline curve method, and the class function/shape function transformation (CST) method. The airfoil dimensions are set to 3.98 meters, 17.67 meters, and 48 meters, respectively. In addition, the study uses different numbers of edge mesh divisions with the same bias factor in the CFD simulation. The results show that, as the airfoil scale changes, different parameterization methods, numbers of control points, and mesh division counts should be used to maintain the accuracy of the wing's aerodynamic performance. When the airfoil scale increases, the most basic point cloud parameterization method requires more and larger data to support the accuracy of the airfoil's aerodynamic performance, which faces the severe test of insufficient computer capacity. When using the B-spline curve method, the number of control points and mesh divisions must be set appropriately to obtain higher accuracy; this quantitative balance cannot be defined directly but must be found iteratively by adding and subtracting. Lastly, when using the CST method, a limited number of control points is found to be enough to accurately parameterize the larger wing, and a higher degree of accuracy and stability can be obtained even with a lower-performance computer.
Keywords: airfoil, computational fluid dynamics, geometric similarity, surface mesh quality
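The CST parameterization named above has a standard closed form, a class function x^N1 (1-x)^N2 multiplied by a Bernstein-polynomial shape function, which the Python sketch below illustrates; the weights shown are rough assumed values for a NACA-0012-like upper surface, not the study's fitted parameters.

```python
import numpy as np
from math import comb

def cst_surface(x, weights, n1=0.5, n2=1.0, dz_te=0.0):
    """Class/shape function transformation (CST) airfoil surface.
    The class function x^n1 (1-x)^n2 gives a round nose and sharp trailing
    edge; the shape function is a Bernstein-polynomial sum over weights."""
    n = len(weights) - 1
    shape = sum(w * comb(n, i) * x**i * (1 - x)**(n - i)
                for i, w in enumerate(weights))
    return x**n1 * (1 - x)**n2 * shape + x * dz_te

# Assumed weights roughly approximating a NACA-0012-like upper surface
x = np.linspace(0.0, 1.0, 101)
y_upper = cst_surface(x, weights=[0.17, 0.15, 0.15, 0.14])
print(f"max half-thickness/chord: {y_upper.max():.3f}")
```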
86 Factors Affecting Air Surface Temperature Variations in the Philippines
Authors: John Christian Lequiron, Gerry Bagtasa, Olivia Cabrera, Leoncio Amadore, Tolentino Moya
Abstract:
Changes in air surface temperature play an important role in the Philippine economy, industry, health, and food production. While the increase in global mean temperature over recent decades has prompted a number of climate change and variability studies in the Philippines, most studies still focus on rainfall and tropical cyclones. This study aims to investigate the trend and variability of observed air surface temperature and to determine its major influencing factors in the Philippines. A non-parametric Mann-Kendall trend test was applied to the monthly mean temperatures of 17 synoptic stations covering the 56 years from 1960 to 2015, and a mean change of 0.58 °C, or a positive trend of 0.0105 °C/year (p < 0.05), was found. In addition, wavelet decomposition was used to determine the frequency of temperature variability, showing 12-month, 30-80-month, and more-than-120-month cycles. This indicates strong annual variations, interannual variations that coincide with ENSO events, and interdecadal variations that are attributed to the PDO and CO₂ concentrations. Air surface temperature was also correlated with the smoothed sunspot number and galactic cosmic rays; the results show little to no effect. The influence of the ENSO teleconnection on temperature, wind pattern, cloud cover, and outgoing longwave radiation in different ENSO phases had significant effects on regional temperature variability. In particular, an anomalous anticyclonic (cyclonic) flow east of the Philippines during the peak and decay phases of El Niño (La Niña) events leads to the advection of warm southeasterly (cold northeasterly) air masses over the country. Furthermore, an apparent increasing cloud cover trend is observed over the West Philippine Sea, including portions of the Philippines, which is believed to lessen the effect of the increasing air surface temperature. However, relative humidity was also found to be increasing, especially in the central part of the country, resulting in a strongly positive trend in the heat index and exacerbating human discomfort. Finally, an assessment of gridded temperature datasets was done to examine the viability of using three high-resolution datasets in future climate analysis and model calibration and verification. Several error statistics (i.e., Pearson correlation, bias, MAE, and RMSE) were used for this validation. The results show that the gridded temperature datasets generally follow the observed surface temperature changes and anomalies; however, they are more representative of regional temperature and are not a substitute for station-observed air temperature.
Keywords: air surface temperature, carbon dioxide, ENSO, galactic cosmic rays, smoothed sunspot number
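The trend analysis described above rests on the non-parametric Mann-Kendall test, sketched below in Python (without tie correction) on a synthetic temperature series with roughly the reported 0.0105 °C/year trend; the data are illustrative, not the station records.

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Non-parametric Mann-Kendall trend test (no tie correction).
    Returns the S statistic, Z score, and two-sided p-value."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    p = 2 * (1 - norm.cdf(abs(z)))
    return s, z, p

# Hypothetical annual-mean temperature series with a +0.0105 °C/yr trend
rng = np.random.default_rng(4)
years = np.arange(1960, 2016)
temps = 27.0 + 0.0105 * (years - 1960) + rng.normal(0, 0.2, len(years))
s, z, p = mann_kendall(temps)
print(f"S={s:.0f}, Z={z:.2f}, p={p:.4f}")
```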
85 Lying in a Sender-Receiver Deception Game: Effects of Gender and Motivation to Deceive
Authors: Eitan Elaad, Yeela Gal-Gonen
Abstract:
Two studies examined gender differences in lying when the truth-telling bias prevailed and when the setting encouraged lying and distrust. The first study used 156 participants from the community (78 pairs). First, participants completed the Narcissistic Personality Inventory, the Lie- and Truth Ability Assessment Scale (LTAAS), and the Rational-Experiential Inventory. Then, they participated in a deception game in which they performed as senders and receivers of true and false communications. Their goal was to retain as many points as possible according to a payoff matrix that specified the reward they would gain for each possible outcome. The results indicated that males in the sender position lied more and were more successful tellers of lies and truths than females. On the other hand, males as receivers trusted less than females but were no better at detecting lies and truths. We explain the results by (a) males' high perceived lie-telling ability: we observed that confidence in telling lies guided participants to increase their use of lies, and males' lie-telling confidence corresponds with earlier accounts showing a consistent association between high self-assessed lying ability, reports of frequent lying, and predictions of actual lying in experimental settings; (b) males' narcissistic features: earlier accounts described positive relations between narcissism and reported lying or unethical behavior in everyday life, predictions about the association between narcissism and frequent lying received support in the present study, and males scored higher than females on the narcissism scale; and (c) males' experiential thinking style: males scored higher than females on the experiential thinking style scale, and the results confirmed our further hypothesis that the experiential thinking style predicts frequent lying in the deception game. The second study used one hundred volunteers (40 females) who underwent the same procedure, except that the payoff matrix encouraged lying and distrust. The results showed that male participants lied more than females. We found no gender differences in trust, and males and females did not differ in their success at telling and detecting lies and truths. Participants also completed the LTAAS questionnaire; males assessed their lie-telling ability higher than females, but the ability assessment did not predict lying frequency. A final note: the present design is limited to low stakes. Participants knew that they were participating in a game and would not experience any consequences from their deception. Therefore, we advise caution when applying the present results to lying under high stakes.
Keywords: gender, lying, detection of deception, information processing style, self-assessed lying ability
84 Social Perspective of Gender Biasness Among Rural Children in Haryana State of India
Authors: Kamaljeet Kaur, Vinod Kumari, Jatesh Kathpalia, Bas Kaur
Abstract:
Gender bias against the girl child is pervasive across the world. It is seen in all strata of society and manifests in various forms, though the nature and extent of these inequalities are not uniform. Generally, these inequalities are more prevalent in patriarchal societies. Despite emerging and increasing opportunities for women, inequalities between men and women persist in every sphere, including education, health, the economy, polity, and social life. Patriarchal ideology as a cultural norm enforces a gender construction oriented toward hierarchical relations between the sexes and the neglect of women in Indian society. Discrimination against girls may also vary by age and may be tied to birth order and the sex composition of elder surviving siblings. The present study was conducted to examine gender discrimination among rural children in India. The respondents were selected from three generations as per AICRP age groups, viz., 18-30 years (3rd generation), 31-60 years (2nd generation), and above 60 years (1st generation). The total sample was 600 respondents, half male and half female, from different villages of two districts of Haryana state. Data were collected using a personal interview schedule and analysed with SPSS software. Among total births, 46.35 per cent were girls and 53.64 per cent were boys. The dropout rate was higher among female children than male children: nearly one-third (31.09%) of female children dropped out of school, compared with 21.17% of male children. It was quite surprising that nearly two-thirds (61.16%) of female children and more than half (59.22%) of male children dropped out of school. Cooking was mainly performed by adult females (overall mean score 2.0, ranked first), followed by female children (mean score 1.7), clearly indicating that cooking was an activity performed mainly by females, while purchases of fruits and vegetables, cereals, and pulses were mainly done by adult males. First preference was given to the male child in the serving of costly and special food. Regarding the professional aspirations of children in the respondents' families, 20.10% of male children wanted to become engineers, whereas only 3.89% of female children did. The ratio of male children was high across generations irrespective of district. School dropouts were more frequent among females in both the 1st and 2nd generations; the main reasons for dropping out were lack of interest, lack of resources, and early marriage in both generations. Female enrolment was higher in the faculty of arts, whereas male enrolment was higher in the non-medical and medical faculties, showing that female children were receiving a traditional type of education. It is suggested that girls and boys be given equal opportunities at home as well as outside the home for the smooth functioning of society.
Keywords: gender biasness, male child, female child, education, home
83 Minding the Gap: Consumer Contracts in the Age of Online Information Flow
Authors: Samuel I. Becher, Tal Z. Zarsky
Abstract:
The digital world has become part of our DNA. The ways e-commerce, human behavior, and law interact and affect one another are changing rapidly and significantly. Among other things, the internet equips consumers with a variety of platforms for sharing information in a volume we could not have imagined before. As part of this development, online information flows allow consumers to learn about businesses and their contracts quickly and efficiently. Consumers can become informed by the impressions that other, experienced consumers share and spread; in other words, consumers may familiarize themselves with the contents of contracts through the experiences other consumers have had. Online and offline, the relationships between consumers and businesses are most frequently governed by consumer standard form contracts. For decades, such contracts have been assumed to be one-sided and biased against consumers. Consumer law seeks to alleviate this bias and empower consumers. Legislatures, consumer organizations, scholars, and judges are constantly looking for clever ways to protect consumers from unscrupulous firms and unfair behaviors. While consumer-business relationships are theoretically administered by standardized contracts, firms do not always follow these contracts in practice. At times, there is a significant disparity between what the written contract stipulates and what consumers experience de facto. That is, there is a crucial gap (“the Gap”) between how firms draft their contracts, on the one hand, and how firms actually treat consumers, on the other. Interestingly, the Gap is frequently manifested in deviation from the written contract in favor of consumers: firms often exercise a lenient approach in spite of the stringent written contracts they draft. This essay examines whether, counter-intuitively, policy makers should add firms' leniency to the growing list of firms' suspicious behaviors. At first glance, firms should be allowed, if not encouraged, to exercise leniency. Many legal regimes are looking for ways to cope with unfair contract terms in consumer contracts; naturally, therefore, consumer law should enable, if not encourage, firms' lenient practices. Firms' willingness to deviate from their strict contracts in order to benefit consumers seems like a sensible approach that, apparently, should not be second-guessed. However, at times online tools, firms' behaviors, and human psychology result in a toxic mix. Beneficial and helpful online information should be treated with due respect, as it may occasionally have surprising and harmful qualities. In this essay, we illustrate that technological changes turn the Gap into a key component in consumers' understanding, or misunderstanding, of consumer contracts. In short, a Gap may distort consumers' perception and undermine rational decision-making. Consequently, this essay explores whether, counter-intuitively, consumer law should sanction firms that create a Gap and use it. It examines when firms' leniency should be considered manipulative or exercised in bad faith. It then investigates whether firms should be allowed to enforce the written contract even if they deliberately and consistently deviated from it.
Keywords: consumer contracts, consumer protection, information flow, law and economics, law and technology, paper deal v firms' behavior
82 Computerized Adaptive Testing for Ipsative Tests with Multidimensional Pairwise-Comparison Items
Authors: Wen-Chung Wang, Xue-Lan Qiu
Abstract:
Ipsative tests have been widely used in vocational and career counseling (e.g., the Jackson Vocational Interest Survey). Pairwise-comparison items are a typical item format of ipsative tests. When the two statements in a pairwise-comparison item measure two different constructs, the item is referred to as a multidimensional pairwise-comparison (MPC) item. A typical MPC item would be: Which activity do you prefer? (A) playing with young children, or (B) working with tools and machines. These two statements aim at the constructs of social interest and investigative interest, respectively. Recently, new item response theory (IRT) models for ipsative tests with MPC items have been developed. Among them, the Rasch ipsative model (RIM) deserves special attention because of its good measurement properties: the log-odds of preferring statement A to statement B are defined as a competition between two parts, the sum of a person's latent trait on the dimension statement A measures plus statement A's utility, and the sum of the person's latent trait on the dimension statement B measures plus statement B's utility. The RIM has been extended to polytomous responses, such as preferring statement A strongly, preferring statement A, preferring statement B, and preferring statement B strongly. To promote these new initiatives, in this study we developed computerized adaptive testing algorithms for MPC items and evaluated their performance using simulations and two real tests. Both the RIM and its polytomous extension are multidimensional, which calls for multidimensional computerized adaptive testing (MCAT). A particular issue in MCAT for MPC items is within-person statement exposure (WPSE); that is, a respondent may keep seeing the same statement (e.g., my life is empty) many times, which is certainly annoying. In this study, we implemented two methods to control the WPSE rate. In the first control method, items were frozen when their statements had been administered more than a prespecified number of times. In the second control method, a random component was added to control the contribution of the information at different stages of the MCAT. The second control method was found to outperform the first in our simulation studies. In addition, we investigated four item selection methods: (a) random selection (as a baseline), (b) the maximum Fisher information method without WPSE control, (c) the maximum Fisher information method with the first control method, and (d) the maximum Fisher information method with the second control method. These four methods were applied to two real tests: a work survey with dichotomous MPC items and a career interests survey with polytomous MPC items. There were three dependent variables: the bias and root mean square error across person measures, and measurement efficiency, defined as the number of items needed to achieve the same degree of test reliability. Both applications indicated that the proposed MCAT algorithms were successful and that there was no loss in measurement efficiency when the control methods were implemented; among the four methods, the last performed best.
Keywords: computerized adaptive testing, ipsative tests, item response theory, pairwise comparison
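A minimal Python sketch of the RIM response function and a maximum-information item selection step with statement-exposure freezing follows; the item bank, trait values, exposure bookkeeping (indexed by dimension as a proxy for statements), and the exposure limit are all assumptions for illustration, not the authors' algorithm.

```python
import numpy as np

def p_prefer_a(theta_a, theta_b, util_a, util_b):
    """Rasch ipsative model: the log-odds of preferring statement A over B
    are (theta_a + util_a) - (theta_b + util_b)."""
    eta = (theta_a + util_a) - (theta_b + util_b)
    return 1.0 / (1.0 + np.exp(-eta))

def select_item(theta, items, exposure, max_exposure=3):
    """Pick the MPC item with maximum Fisher information, freezing items
    whose statements exceeded the assumed exposure limit."""
    best, best_info = None, -np.inf
    for idx, (dim_a, dim_b, u_a, u_b) in enumerate(items):
        if exposure[dim_a] > max_exposure or exposure[dim_b] > max_exposure:
            continue  # within-person statement exposure control
        p = p_prefer_a(theta[dim_a], theta[dim_b], u_a, u_b)
        info = p * (1 - p)  # Fisher information of a binary Rasch-type item
        if info > best_info:
            best, best_info = idx, info
    return best

# Hypothetical item bank: (dimension of A, dimension of B, utilities)
items = [(0, 1, 0.2, -0.1), (1, 2, 0.0, 0.3), (0, 2, -0.2, 0.1)]
theta = np.array([0.5, -0.3, 0.1])    # current trait estimates
exposure = np.zeros(3, dtype=int)     # per-statement administration counts
print("next item:", select_item(theta, items, exposure))
```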
81 Honneth, Feenberg, and the Redemption of Critical Theory of Technology
Authors: David Schafer
Abstract:
Critical Theory is in sore need of a workable account of technology. It had one in the writings of Herbert Marcuse, or so it seemed until Jürgen Habermas mounted a critique in 'Technology and Science as Ideology' (Habermas, 1970) that decisively put it away. Ever since, Marcuse's work has been regarded as outdated, a 'philosophy of consciousness' no longer seriously tenable. But with Marcuse's view has gone the important insight that technology is no norm-free system (as Habermas portrays it) but can be laden with social bias. Andrew Feenberg is among the few serious scholars who have perceived this problem in post-Habermasian critical theory and has sought to revive a basically Marcusean account of technology. On his view, while the so-called 'technical elements' that physically make up technologies are neutral with regard to social interests, there is a sense in which we may speak of a normative grammar or 'technical code' built into technology that can be socially biased in favor of certain groups over others (Feenberg, 2002). According to Feenberg, perspectives on technology are reified when they consider technology only in terms of technical elements, to the neglect of technical codes. Nevertheless, Feenberg's account fails to explain what is normatively problematic about such reified views of technology. His plausible claim that they represent false perspectives on technology does not by itself explain how such views may be oppressive, even though Feenberg clearly intends that stronger level of normative theorizing. Perceiving this deficit in his own account of reification, he tries to adopt Habermas's version of systems theory to ground his own critical theory of technology (Feenberg, 1999). But this is a curious move in light of Feenberg's own legitimate critiques of Habermas's portrayals of technology as reified or 'norm-free.' This paper argues that a better foundation may be found in Axel Honneth's recent text, Freedom's Right (Honneth, 2014). Though Honneth there says little explicitly about technology, he offers an implicit account of reification formulated in opposition to Habermas's systems-theoretic approach. On this 'normative functionalist' account of reification, social spheres are reified when participants prioritize individualist ideals of freedom (moral and legal freedom) to the neglect of an intersubjective form of freedom-through-recognition that Honneth calls 'social freedom.' Such misprioritization is ultimately problematic because it is unsustainable: individual freedom is philosophically and institutionally dependent upon social freedom. The main difficulty in adopting Honneth's social theory for the purposes of a theory of technology, however, is that the notion of social freedom is predicable only of social institutions, whereas it appears difficult to conceive of technology as an institution. Nevertheless, in light of Feenberg's work, the idea that technology includes within itself a normative grammar (technical code) takes on much plausibility. To the extent that this normative grammar may be understood through the category of social freedom, Honneth's dialectical account of the relationship between individual and social forms of freedom provides a more solid basis on which to ground the normative claims of Feenberg's sociological account of technology than Habermas's systems theory.
Keywords: Habermas, Honneth, technology, Feenberg
80 The Adaptive Role of Negative Emotions in Optimal Functioning
Authors: Brianne Nichols, John A. Parkinson
Abstract:
Positive Psychology has provided a rich understanding of the beneficial effects of positive emotions on optimal functioning, and research has been devoted to promoting states of positive feeling and thinking. While this is a worthwhile pursuit, positive emotions are not useful in all contexts; some situations may require the individual to make use of negative emotions to reach a desired end state. To account for the potential value of the wider range of emotional experiences common to the human condition, Positive Psychology needs to expand its horizons and investigate how individuals achieve positive outcomes by varied means. The current research seeks to understand the positive psychology of fear of failure (FF), a commonly experienced negative emotion relevant to most life domains. On the one hand, this emotion has been linked with avoidance motivation and self-handicapping behaviours; on the other, FF has been shown to act as a drive that moves the individual forward. To fully capture the depth of this highly subjective emotional experience and understand the circumstances under which FF may be adaptive, this study adopted a mixed methods design using SenseMaker, a web-based tool that combines the richness of narratives with the objectivity of numerical data. Two hundred participants, consisting mostly of undergraduate university students, shared a story of a time in the recent past when they feared failing to achieve a valued goal. To avoid researcher bias in the interpretation of narratives, participants self-signified their stories in a tagging system based on the researchers' aim to explore the role of past failures, the cognitive, emotional, and behavioural profiles of individuals high and low in FF, and the relationships between these factors. In addition, the roles of perceived personal control and self-esteem in relation to FF were investigated using self-report questionnaires. Results from the quantitative analyses indicated that individuals with high levels of FF, compared to those with low levels, were strongly influenced by past failures and preoccupied with their thoughts and emotions relating to the fear. This group also reported an unwillingness to accept their internal experiences, which in turn was associated with withdrawal from goal pursuit. Furthermore, self-esteem was found to mediate the relationship between perceived control and FF, suggesting that self-esteem, with or without control beliefs, may have the potential to buffer against high FF. It is hoped that the insights provided by the current study will inspire future research to explore the ways in which 'acceptance' may help individuals keep moving towards a goal despite the presence of FF, and whether cultivating a non-contingent self-esteem is the key to resilience in the face of failure.
Keywords: fear of failure, goal-pursuit, negative emotions, optimal functioning, resilience
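The mediation finding reported above (perceived control, via self-esteem, on fear of failure) is the kind of result typically checked with a bootstrapped indirect effect; the Python sketch below illustrates that generic procedure on synthetic data and is not the study's actual analysis.

```python
import numpy as np

def bootstrap_indirect(x, m, y, n_boot=5000, seed=0):
    """Simple mediation: indirect effect a*b, where a is the slope of
    m on x and b is the slope of y on m controlling for x, with a
    percentile bootstrap confidence interval."""
    rng = np.random.default_rng(seed)
    n = len(x)
    est = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)
        xb, mb, yb = x[idx], m[idx], y[idx]
        a = np.polyfit(xb, mb, 1)[0]
        # b: coefficient of m in the regression y ~ intercept + x + m
        X = np.column_stack([np.ones(n), xb, mb])
        b = np.linalg.lstsq(X, yb, rcond=None)[0][2]
        est[i] = a * b
    return est.mean(), np.percentile(est, [2.5, 97.5])

# Hypothetical data: control -> self-esteem -> (lower) fear of failure
rng = np.random.default_rng(5)
control = rng.normal(0, 1, 200)
esteem = 0.5 * control + rng.normal(0, 1, 200)
ff = -0.6 * esteem + rng.normal(0, 1, 200)
ab, ci = bootstrap_indirect(control, esteem, ff)
print(f"indirect effect = {ab:.2f}, 95% CI = [{ci[0]:.2f}, {ci[1]:.2f}]")
```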
79 Gender Specific Differences in Clinical Outcomes of Knee Osteoarthritis Treated with Micro-Fragmented Adipose Tissue
Authors: Tiffanie-Marie Borg, Yasmin Zeinolabediny, Nima Heidari, Ali Noorani, Mark Slevin, Angel Cullen, Stefano Olgiati, Alberto Zerbi, Alessandro Danovi, Adrian Wilson
Abstract:
Knee Osteoarthritis (OA) is a major cause of disability globally. In recent years, there has been growing interest in non-invasive treatments, such as intra-articular injection of micro-fragmented adipose tissue (MFAT), which show great potential in treating OA. Mesenchymal stem cells (MSCs), originating from the pericytes of micro-vessels in MFAT, can differentiate into mesenchymal lineage cells such as cartilage cells, osteocytes, adipocytes, and osteoblasts. Growth factors and cytokines secreted by MSCs can inhibit T cell growth, reduce pain and inflammation, and create a micro-environment that, through paracrine signaling, can promote joint repair and cartilage regeneration. Here we show, for the first time, data supporting the hypothesis that women respond better to MFAT injection than men in terms of improvements in pain and function. Historically, women have been underrepresented in studies, and studies with both sexes regularly fail to analyse results by sex. To mitigate and quantify this bias, we describe a technique using reproducible statistical analysis and replicable results with the open-access statistical software R to calculate the magnitude of this difference. Genetic, hormonal, environmental, and age factors may all play a role in the observed difference between the sexes. This observational, intention-to-treat study included the complete sample of 456 patients who agreed to be scored for pain (visual analogue scale, VAS) and function (Oxford knee score, OKS) at baseline, regardless of subsequent changes to adherence or status during follow-up. We report that a significantly larger proportion of women responded to treatment than men (90% vs. 60% change in VAS scores and 87% vs. 65% change in OKS scores, respectively). Women overall had a stronger positive response to treatment, with reduced pain and improved mobility and function. Pre-injection, the women in our cohort were in more pain and had worse joint function, which is common in orthopaedics; however, during the 2-year follow-up, they consistently maintained a lower incidence of discomfort and superior joint function. These data identify a clear need for further studies to establish the cellular, molecular, and other bases for these differences and to use this information for stratification in order to improve outcomes for both women and men.
Keywords: gender differences, micro-fragmented adipose tissue, knee osteoarthritis, stem cells
78 From Clients to Colleagues: Supporting the Professional Development of Survivor Social Work Students
Authors: Stephanie Jo Marchese
Abstract:
This oral presentation is a reflective piece on current social work teaching methods that value and devalue the lived experiences of survivor students. The presentation grounds the term 'survivor' in feminist frameworks: a survivor-defined approach to feminist advocacy assumes an individual's agency, considers each case and its needs independent of generalizations, and provides resources and support to empower victims. Feminist ideologies are ripe arenas from which to update and influence the rapport-building schools of social work have with these students. Survivor-based frameworks are rooted in nuanced understandings of intersectional realities, staunchly combat both conscious and unconscious deficit lenses wielded against victims, elevate lived experience to the realm of experiential expertise, and offer alternatives to traditional power structures and knowledge exchanges. Actively importing a survivor framework into the methodology of social work teaching breaks down barriers that many survivor students, this author included, have faced in institutional settings. The profession of social work is at an important crossroads, both in the United States and globally. The United States is currently undergoing a radical change in its citizenry, and outlier communities have taken to the streets again in opposition to their othered-ness. New waves of students are entering the field, emboldened by their survival of personal and systemic oppressions and heavily influenced by third-wave feminism, critical race theory, queer theory, and other post-structuralist ideologies. Traditional models of sociological and psychological study are actively being challenged. The profession of social work was not founded on the diagnosis of disorders but rather on grassroots-level activism that heralded and demanded resources for oppressed communities. Institutional and classroom acceptance and celebration of survivor narratives can catapult the resurgence of these values in the profession's service-delivery models and put social workers back in the driver's seat of social change (a combined advocacy and policy perspective), moving away from outsider-based intervention models. Survivor students should be viewed as agents of change, not solely as former victims and clients. The ideas of this presentation proposal are supported by various qualitative interviews, as well as reviews of 'best practices' in the field of education that incorporate feminist methods of inclusion and empowerment. Curriculum and policy recommendations are also offered.
Keywords: deficit lens bias, empowerment theory, feminist praxis, inclusive teaching models, strengths-based approaches, social work teaching methods
77 Laparoscopic Resection Shows Comparable Outcomes to Open Thoracotomy for Thoracoabdominal Neuroblastomas: A Meta-Analysis and Systematic Review
Authors: Peter J. Fusco, Dave M. Mathew, Chris Mathew, Kenneth H. Levy, Kathryn S. Varghese, Stephanie Salazar-Restrepo, Serena M. Mathew, Sofia Khaja, Eamon Vega, Mia Polizzi, Alyssa Mullane, Adham Ahmed
Abstract:
Background: Laparoscopic (LS) removal of neuroblastomas in children has been reported to offer favorable outcomes compared to the conventional open thoracotomy (OT) procedure. Critical perioperative measures such as blood loss, operative time, length of stay, and time to postoperative chemotherapy have all favored laparoscopy over its more invasive counterpart. Herein, a pairwise meta-analysis was performed comparing perioperative outcomes between LS and OT in thoracoabdominal neuroblastoma cases. Methods: A comprehensive literature search was performed on the PubMed, Ovid EMBASE, and Scopus databases to identify studies comparing the outcomes of pediatric patients with thoracoabdominal neuroblastomas undergoing resection via OT or LS. After deduplication, 4,227 studies were identified and subjected to initial title screening with exclusion and inclusion criteria to ensure relevance. When studies contained overlapping cohorts, only the larger series were included. Primary outcomes were estimated blood loss (EBL), hospital length of stay (LOS), and mortality, while secondary outcomes were tumor recurrence, post-operative complications, and operation length. The “meta” and “metafor” packages were used in R, version 4.0.2, to pool risk ratios (RR) or standardized mean differences (SMD), together with their 95% confidence intervals, in a random effects model via the Mantel-Haenszel method. Heterogeneity between studies was assessed using the I² test, while publication bias was assessed via funnel plot. Results: The pooled analysis included 209 patients from 5 studies (141 OT, 68 LS). Of the included studies, 2 originated from the United States, 1 from Toronto, 1 from China, and 1 from a Japanese center. Mean age across study cohorts ranged from 2.4 to 5.3 years, with female patients comprising between 30.8% and 50% of the study populations. No statistically significant difference was found between the two groups for LOS (SMD -1.02; p=0.083), mortality (RR 0.30; p=0.251), recurrence (RR 0.31; p=0.162), post-operative complications (RR 0.73; p=0.732), or operation length (SMD -0.07; p=0.648). Of note, LS appeared protective in the analysis of EBL, although this did not reach statistical significance (SMD -0.4174; p=0.051). Conclusion: Despite promising literature assessing LS removal of pediatric neuroblastomas, the results showed it to be non-superior to OT for all explored perioperative outcomes. Given the limited comparative data on the subject, randomized trials are clearly necessary to strengthen the conclusions reached.
Keywords: laparoscopy, neuroblastoma, thoracoabdominal, thoracotomy
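The pooling described above was done with R's meta/metafor packages; as a generic illustration of random-effects pooling of risk ratios, the Python sketch below implements DerSimonian-Laird weighting with an I² estimate on made-up event counts (a simplified analogue, not the study's Mantel-Haenszel computation).

```python
import numpy as np

def random_effects_rr(events_a, n_a, events_b, n_b):
    """DerSimonian-Laird random-effects pooling of log risk ratios.
    Returns the pooled RR, its 95% CI, and the I^2 heterogeneity statistic."""
    # Per-study log risk ratios and variances (0.5 continuity correction)
    a, b = events_a + 0.5, events_b + 0.5
    log_rr = np.log((a / (n_a + 0.5)) / (b / (n_b + 0.5)))
    var = 1/a - 1/(n_a + 0.5) + 1/b - 1/(n_b + 0.5)
    w = 1 / var                                    # fixed-effect weights
    q = np.sum(w * (log_rr - np.sum(w * log_rr) / np.sum(w)) ** 2)
    df = len(log_rr) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                  # between-study variance
    w_star = 1 / (var + tau2)                      # random-effects weights
    mu = np.sum(w_star * log_rr) / np.sum(w_star)
    se = np.sqrt(1 / np.sum(w_star))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return np.exp(mu), np.exp([mu - 1.96 * se, mu + 1.96 * se]), i2

# Hypothetical event counts (LS vs. OT) from three small series
rr, ci, i2 = random_effects_rr(np.array([1, 2, 0]), np.array([20, 25, 23]),
                               np.array([3, 4, 2]), np.array([45, 50, 46]))
print(f"pooled RR = {rr:.2f}, 95% CI = [{ci[0]:.2f}, {ci[1]:.2f}], "
      f"I2 = {i2:.0f}%")
```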
76 Case Study Analysis of 2017 European Railway Traffic Management Incident: The Application of System for Investigation of Railway Interfaces Methodology
Authors: Sanjeev Kumar Appicharla
Abstract:
This paper presents the results of modelling and analysing a safety-critical European Rail Traffic Management System (ERTMS) incident on the Cambrian Railway in the UK, using report RAIB 17/2019 as the primary input, to raise awareness of biases in the systems engineering process. The RAIB, the UK's independent accident investigator, published Report RAIB 17/2019 giving the details of its investigation of the focal event in the form of the immediate cause, causal factors, and underlying factors, together with recommendations to prevent a repeat of the safety-critical incident on the Cambrian Line. The System for Investigation of Railway Interfaces (SIRI) is the methodology used to model and analyse the safety-critical incident. The SIRI methodology uses the Swiss cheese model to model the incident and identifies latent failure conditions (potentially less-than-adequate conditions) by means of the management oversight and risk tree (MORT) technique. The benefits of the SIRI methodology are threefold. First, it incorporates the "heuristics and biases" approach, advanced by the 2002 Nobel laureate in Economic Sciences, Prof. Daniel Kahneman, into the management oversight and risk tree technique to identify systematic errors. Civil engineering and programme management railway professionals are aware of the role "optimism bias" plays in programme cost overruns and of bow-tie (fault and event tree) model-based safety risk modelling techniques; however, the role of systematic errors due to heuristics and biases is not yet appreciated. Incorporating this approach overcomes the problem of omitting human and organizational factors from accident analysis. Second, the scope of the investigation includes all levels of the socio-technical system, including government, regulatory and railway safety bodies, duty holders, signalling firms, and transport planners, as well as front-line staff, so that lessons are learned at the decision-making and implementation levels too. Third, the author's past accident case studies are supplemented with evidence drawn from practitioner and academic publications. This serves to discuss the role of systems thinking in improving decision-making and risk management processes and practices, both in the IEC 15288 systems engineering standard and in industrial contexts such as GB railways and artificial intelligence (AI).
Keywords: accident analysis, AI algorithm internal audit, bounded rationality, Byzantine failures, heuristics and biases approach
75 Comparison of Two Home Sleep Monitors Designed for Self-Use
Authors: Emily Wood, James K. Westphal, Itamar Lerner
Abstract:
Background: Polysomnography (PSG) recordings are regularly used in research and clinical settings to study sleep and sleep-related disorders. Typical PSG studies are conducted in professional laboratories and performed by qualified researchers. However, the number of sleep labs worldwide is disproportionate to the increasing number of individuals with sleep disorders like sleep apnea and insomnia. Consequently, there is a growing need for cheaper yet reliable means of measuring sleep, preferably autonomously by subjects in their own homes. Over the last decade, a variety of devices for self-monitoring of sleep have become available on the market; however, very few have been directly validated against PSG to demonstrate their ability to perform reliable automatic sleep scoring. Two popular mobile EEG-based systems that have published validation results, the DREEM 3 headband and the Z-Machine, have never been directly compared to each other by independent researchers. The current study aimed to compare the performance of the DREEM 3 and the Z-Machine to help investigators and clinicians decide which of these devices may be more suitable for their studies. Methods: 26 participants completed the study for credit or monetary compensation. Exclusion criteria included any history of sleep, neurological, or psychiatric disorders. Eligible participants arrived at the lab in the afternoon and received the two devices. They then spent two consecutive nights monitoring their sleep at home. Participants were also asked to keep a sleep log, indicating the time they fell asleep, the time they woke up, and the number of awakenings occurring during the night. Data from both devices, including detailed sleep hypnograms in 30-second epochs (differentiating wake, combined N1/N2, N3, and rapid eye movement sleep), were extracted and aligned upon retrieval. For the analysis, the number of awakenings each night was defined as four or more consecutive wake epochs between sleep onset and termination. Total sleep time (TST) and the number of awakenings were compared to subjects' sleep logs to measure consistency with the subjective reports. In addition, the sleep scores from each device were compared epoch by epoch to calculate the agreement between the two devices using Cohen's kappa. All analysis was performed using Matlab 2021b and SPSS 27. Results/Conclusion: Subjects consistently reported longer times spent asleep than the times reported by each device (M = 448 minutes for sleep logs compared to M = 406 and M = 345 minutes for the DREEM and Z-Machine, respectively; both ps < 0.05). Linear correlations between the sleep log and each device were higher for the DREEM than for the Z-Machine for both TST and the number of awakenings, and, likewise, the mean absolute bias between the sleep logs and each device was higher for the Z-Machine for both TST (p < 0.001) and awakenings (p < 0.04). There was some indication that these effects were stronger for the second night than for the first. Epoch-by-epoch comparisons showed that the main discrepancies between the devices were in detecting N2 and REM sleep, while N3 had high agreement. Overall, the DREEM headband seems superior for reliably scoring sleep at home.
Keywords: DREEM, EEG, sleep monitoring, Z-machine
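The epoch-by-epoch agreement measure named above, Cohen's kappa, is straightforward to illustrate; the Python sketch below computes it for two hypothetical hypnograms, with the stage labels and the simulated disagreement rate as assumptions.

```python
import numpy as np

def cohens_kappa(scores_a, scores_b, stages=("W", "N1/N2", "N3", "REM")):
    """Epoch-by-epoch agreement between two hypnograms via Cohen's kappa:
    kappa = (p_observed - p_chance) / (1 - p_chance)."""
    a, b = np.asarray(scores_a), np.asarray(scores_b)
    po = np.mean(a == b)                                  # observed agreement
    pe = sum(np.mean(a == s) * np.mean(b == s) for s in stages)
    return (po - pe) / (1 - pe)

# Hypothetical 30-second-epoch hypnograms from the two devices (8 hours)
rng = np.random.default_rng(6)
stages = ["W", "N1/N2", "N3", "REM"]
device1 = rng.choice(stages, 960, p=[0.1, 0.5, 0.2, 0.2])
device2 = np.where(rng.random(960) < 0.85, device1,
                   rng.choice(stages, 960))
print(f"kappa = {cohens_kappa(device1, device2):.2f}")
```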
Procedia PDF Downloads 107
74 Just Child Protection Practice for Immigrant and Racialized Families in Multicultural Western Settings: Considerations for Context and Culture
Authors: Sarah Maiter
Abstract:
Heightened globalization, migration, displacement of citizens, and refugee needs are placing increasing demands on social services for diverse populations to respond to families in ways that ensure the safety and protection of vulnerable members while providing supports and services. Along with this, social work's re-focus on socially just approaches to practice increasingly asks social workers to consider the challenging circumstances of families when providing services, rather than focusing on individual shortcomings alone. Child protection workers then struggle to ensure the safety of children while assessing the needs of families. This assessment can prove difficult when providing services to immigrant, refugee, and racially diverse families, as understanding of and familiarity with these families is often limited. Furthermore, child protection intervention in Western countries is state-mandated, carrying legal authority to intervene in the lives of families where child protection concerns have been identified. Within this context, racialized immigrant and refugee families are at risk of misunderstandings that can result in interventions that are overly intrusive, unhelpful, and harsh. Research shows disproportionality and overrepresentation of racial and ethnic minorities and immigrant families in the child protection system. Reasons noted include: a) possible racial bias in reporting and substantiating abuse; b) struggles on the part of workers when working with families who come from diverse ethno-racial backgrounds, are immigrants, and may have limited proficiency in the national language of the country; c) intervention during crisis and differential ongoing services for these families; d) the diverse contexts of these families, which pose additional challenges for families and children; and e) possibly differing definitions of child maltreatment. While cultural and ethnic diversity in child-rearing approaches has been cited as a contributor to child protection concerns, this approach should be viewed cautiously, as it can result in stereotyping and generalizing that then produce inappropriate assessment and intervention. However, poverty and the lack of social supports, both well-known contributors to child protection concerns, also impact these families disproportionately. Child protection systems, therefore, need to continue to examine policy and practice approaches for these families that ensure the safety of children while balancing the needs of families. This presentation provides data from several research studies that examined definitions of child maltreatment among a sample of racialized immigrant families, the experiences of a sample of immigrant families with the child protection system, the concerns of a sample of child protection workers in the provision of services to these families, and the struggles of families in the transition to their new country. These studies, along with others, provide insights into areas of consideration for practice that can contribute to safety for children while ensuring just and equitable responses that have greater potential for keeping families together, rather than premature apprehension and removal of children to state care.Keywords: child protection, child welfare services, immigrant families, racial and ethnic diversity
Procedia PDF Downloads 292
73 Potential Impacts of Climate Change on Hydrological Droughts in the Limpopo River Basin
Authors: Nokwethaba Makhanya, Babatunde J. Abiodun, Piotr Wolski
Abstract:
Climate change may intensify hydrological droughts and reduce water availability in river basins. Despite this, most research on climate change effects in southern Africa has focused exclusively on meteorological droughts. This study projects the potential impact of climate change on the future characteristics of hydrological droughts in the Limpopo River Basin (LRB). The study uses regional climate model (RCM) simulations (from the Coordinated Regional Climate Downscaling Experiment, CORDEX) combined with hydrological simulations (using the Soil and Water Assessment Tool Plus model, SWAT+) to project the impacts at four global warming levels (GWLs: 1.5℃, 2.0℃, 2.5℃, and 3.0℃) under the RCP8.5 future climate scenario. The SWAT+ model was calibrated and validated against a streamflow dataset observed over the basin, and the sensitivity of the model parameters was investigated. The performance of the SWAT+ LRB model was verified using the Nash-Sutcliffe efficiency (NSE), Percent Bias (PBIAS), Root Mean Square Error (RMSE), and coefficient of determination (R²). The Standardized Precipitation Evapotranspiration Index (SPEI) and the Standardized Precipitation Index (SPI) were used to detect meteorological droughts. The Standardized Soil Water Index (SSI) was used to define agricultural drought, while the Water Yield Drought Index (WYLDI), the Surface Run-off Index (SRI), and the Streamflow Index (SFI) were used to characterise hydrological drought. The performance of the SWAT+ model simulations over the LRB is sensitive to the parameters CN2 (initial SCS runoff curve number for moisture condition II) and ESCO (soil evaporation compensation factor). The best simulation generally performed better during the calibration period than the validation period. In the calibration and validation periods, NSE is ≤ 0.8, while PBIAS is ≥ −80.3%, RMSE ≥ 11.2 m³/s, and R² ≤ 0.9. The simulations project a future increase in temperature and potential evapotranspiration over the basin, but they do not project a significant future trend in precipitation or the hydrological variables. However, the spatial distribution of precipitation reveals a projected increase in precipitation in the southern part of the basin and a decline in the northern part, with the region of reduced precipitation projected to expand with increasing GWLs. A decrease in all hydrological variables is projected over most parts of the basin, especially the eastern part. The simulations predict that meteorological droughts (i.e., SPEI and SPI), agricultural droughts (i.e., SSI), and hydrological droughts (i.e., WYLDI and SRI) would become more intense and severe across the basin. SPEI-drought shows a greater magnitude of increase than SPI-drought, with agricultural and hydrological droughts increasing by magnitudes between the two. As a result, this research suggests that future hydrological droughts over the LRB could be more severe than the SPI-drought projection predicts but less severe than the SPEI-drought projection. This research can inform efforts to mitigate the effects of climate change on hydrological drought in the basin.Keywords: climate change, CORDEX, drought, hydrological modelling, Limpopo River Basin
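As a reference for the four skill metrics used to verify the SWAT+ model, here is a minimal Python sketch. The streamflow values are invented for illustration, and the PBIAS sign convention shown (positive values indicating model underestimation) is the common one but may differ from the study's.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect; < 0 is worse than the mean of obs."""
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    """Percent bias; positive = underestimation under the usual convention."""
    return 100 * np.sum(obs - sim) / np.sum(obs)

def rmse(obs, sim):
    return np.sqrt(np.mean((obs - sim) ** 2))

def r2(obs, sim):
    return np.corrcoef(obs, sim)[0, 1] ** 2

# Illustrative monthly streamflow (m³/s); real use would pass the observed
# series and the SWAT+ simulated series for the calibration/validation periods.
obs = np.array([12.0, 30.5, 55.1, 40.2, 18.7, 9.3])
sim = np.array([10.8, 28.0, 60.3, 35.9, 20.1, 8.7])
print(f"NSE={nse(obs, sim):.2f}, PBIAS={pbias(obs, sim):.1f}%, "
      f"RMSE={rmse(obs, sim):.2f} m3/s, R2={r2(obs, sim):.2f}")
```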
Procedia PDF Downloads 128
72 Effectiveness, Safety, and Tolerability Profile of Stribild® in HIV-1-infected Patients in the Clinical Setting
Authors: Heiko Jessen, Laura Tanus, Slobodan Ruzicic
Abstract:
Objectives: The efficacy of Stribild®, an integrase strand transfer inhibitor (INSTI)-based single-tablet regimen (STR), has been evaluated in randomized clinical trials, where it demonstrated a durable ability to achieve sustained suppression of HIV-1 RNA levels. However, differences in monitoring frequency, selection bias, and the profile of patients enrolled in the trials may all result in divergent efficacy of this regimen in routine clinical settings. The aim of this study was to assess the virologic outcomes, safety, and tolerability profile of Stribild® in a routine clinical setting. Methods: This was a retrospective monocentric analysis of HIV-1-infected patients who started on or were switched to Stribild®. Virological failure (VF) was defined as confirmed HIV-RNA > 50 copies/ml. The minimum time of follow-up was 24 weeks. The percentage of patients remaining free of therapeutic failure was estimated using the time-to-loss-of-virologic-response (TLOVR) algorithm, by intent-to-treat analysis. Results: We analyzed the data of 197 patients (56 ART-naïve and 141 treatment-experienced patients) who fulfilled the inclusion criteria. The majority (95.9%) of patients were male. The median duration of HIV infection at baseline was 2 months in treatment-naïve and 70 months in treatment-experienced patients. The median time under ART in treatment-experienced patients was 37 months. Among the treatment-experienced patients, 27.0% had already been treated with a regimen consisting of two NRTIs and one INSTI, and 18.4% of them had experienced VF. The median time [IQR] of virological suppression prior to therapy with Stribild® in the treatment-experienced patients was 10 months [0-27]. At the end of follow-up (median 33 months), 87.3% (95% CI, 83.5-91.2) of treatment-naïve and 80.3% (95% CI, 75.8-84.8) of treatment-experienced patients remained free of therapeutic failure. Considering only treatment-experienced patients with baseline VL < 50 copies/ml, 83.0% (95% CI, 78.5-87.5) remained free of therapeutic failure. A total of 17 patients stopped treatment with Stribild®: 5.4% (3/56) of the treatment-naïve and 9.9% (14/141) of the treatment-experienced patients. Stribild® therapy was discontinued because of VF in 2 patients (1.0%), loss to follow-up in 4 (2.0%), and drug-drug interactions in 2 (1.0%). Adverse events were the reason for switching from Stribild® in 7 patients (3.6%), and a further 2 patients (1.0%) decided to switch for personal reasons. The most frequently observed adverse events were gastrointestinal side effects (20.0%), headache (8%), rash events (7%), and dizziness (6%). In two patients, we observed the emergence of novel resistance mutations in the integrase gene: N155H evolved in one patient and resulted in VF, and S119R evolved in another patient either during or shortly after the switch from Stribild®. In one further patient with VF, two novel mutations in the RT gene (V106I/M and M184V) were observed compared with a historical genotypic test result, although it is not clear whether they evolved during or before the switch to Stribild®. Conclusions: The effectiveness of Stribild® for treatment-naïve patients was consistent with data obtained in clinical trials. The safety and tolerability profile, as well as the pattern of resistance development, confirmed the clinical efficacy of Stribild® in a daily practice setting.Keywords: ART, HIV, integrase inhibitor, stribild
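The TLOVR-based endpoint rests on the definition of confirmed virological failure, i.e., HIV-RNA above 50 copies/ml at two consecutive visits. The sketch below implements only that confirmation rule under assumed visit data; the full TLOVR algorithm, which also handles non-suppression, discontinuation, and re-suppression, is considerably more involved.

```python
from typing import List, Tuple

def confirmed_virologic_failure(viral_loads: List[Tuple[int, float]],
                                threshold: float = 50.0) -> bool:
    """Flag VF when two consecutive visits show HIV-RNA above threshold (copies/ml).

    viral_loads: (week, copies_per_ml) pairs. This sketch covers only the
    confirmation rule, not the complete TLOVR algorithm.
    """
    values = [v for _, v in sorted(viral_loads)]          # chronological order
    return any(a > threshold and b > threshold
               for a, b in zip(values, values[1:]))

# A transient blip at week 24 is not confirmed failure; two consecutive
# above-threshold visits are.
print(confirmed_virologic_failure([(4, 1200.0), (12, 40.0), (24, 75.0), (36, 30.0)]))  # False
print(confirmed_virologic_failure([(4, 40.0), (12, 80.0), (24, 120.0)]))               # True
```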
Procedia PDF Downloads 285
71 Identifying Biomarker Response Patterns to Vitamin D Supplementation in Type 2 Diabetes Using K-means Clustering: A Meta-Analytic Approach to Glycemic and Lipid Profile Modulation
Authors: Oluwafunmibi Omotayo Fasanya, Augustine Kena Adjei
Abstract:
Background and Aims: This meta-analysis aimed to evaluate the effect of vitamin D supplementation on key metabolic and cardiovascular parameters, such as glycated hemoglobin (HbA1C), fasting blood sugar (FBS), low-density lipoprotein (LDL), high-density lipoprotein (HDL), systolic blood pressure (SBP), and total vitamin D levels in patients with Type 2 diabetes mellitus (T2DM). Methods: A systematic search was performed across databases, including PubMed, Scopus, Embase, Web of Science, Cochrane Library, and ClinicalTrials.gov, from January 1990 to January 2024. A total of 4,177 relevant studies were initially identified. Using an unsupervised K-means clustering algorithm, publications were grouped based on common text features. Maximum entropy classification was then applied to filter studies that matched a pre-identified training set of 139 potentially relevant articles. These selected studies were manually screened for relevance. A parallel manual selection of all initially searched studies was conducted for validation. The final inclusion of studies was based on full-text evaluation, quality assessment, and meta-regression models using random effects. Sensitivity analysis and publication bias assessments were also performed to ensure robustness. Results: The unsupervised K-means clustering algorithm grouped the patients based on their responses to vitamin D supplementation, using key biomarkers such as HbA1C, FBS, LDL, HDL, SBP, and total vitamin D levels. Two primary clusters emerged: one representing patients who experienced significant improvements in these markers and another showing minimal or no change. Patients in the cluster associated with significant improvement exhibited lower HbA1C, FBS, and LDL levels after vitamin D supplementation, while HDL and total vitamin D levels increased. The analysis showed that vitamin D supplementation was particularly effective in reducing HbA1C, FBS, and LDL within this cluster. Furthermore, BMI, weight gain, and disease duration were identified as factors that influenced cluster assignment, with patients having lower BMI and shorter disease duration being more likely to belong to the improvement cluster. Conclusion: The findings of this machine learning-assisted meta-analysis confirm that vitamin D supplementation can significantly improve glycemic control and reduce the risk of cardiovascular complications in T2DM patients. The use of automated screening techniques streamlined the process, ensuring the comprehensive evaluation of a large body of evidence while maintaining the validity of traditional manual review processes.Keywords: HbA1C, T2DM, SBP, FBS
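A minimal sketch of the clustering step described above: standardized per-patient biomarker changes fed to K-means with k = 2, which separates an "improvement" cluster from a "minimal change" cluster. The synthetic deltas, the column order, and the scikit-learn workflow are illustrative assumptions; the study's actual features and preprocessing are not detailed in the abstract.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# Hypothetical per-patient changes after supplementation, column order assumed:
# [dHbA1C (%), dFBS (mg/dl), dLDL (mg/dl), dHDL (mg/dl), dSBP (mmHg), d25(OH)D (ng/ml)]
improvers = rng.normal([-0.8, -15.0, -12.0, 4.0, -5.0, 14.0], 2.0, size=(60, 6))
minimal_change = rng.normal([0.0, -1.0, -1.0, 0.5, 0.0, 12.0], 2.0, size=(60, 6))
X = np.vstack([improvers, minimal_change])

# Standardize so no biomarker dominates the Euclidean distance, then fit k=2.
Xz = StandardScaler().fit_transform(X)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Xz)

for k in (0, 1):
    print(f"cluster {k}: n={(labels == k).sum()}, "
          f"mean changes={np.round(X[labels == k].mean(axis=0), 1)}")
```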
Procedia PDF Downloads 11
70 Narcissism in the Life of Howard Hughes: A Psychobiographical Exploration
Authors: Alida Sandison, Louise A. Stroud
Abstract:
Narcissism is a personality configuration with both normal and pathological personality expressions. Narcissism is highly complex and is linked to a broad field of research. There are both dimensional and categorical conceptualisations of narcissism, and a variety of theoretical formulations have been put forward to understand the narcissistic personality configuration. Currently, Kernberg’s Object Relations theory is well supported for this purpose. The complexity and particular defense mechanisms at play in the narcissistic personality make it a difficult personality configuration to study, and one worth further research. Psychobiography as a methodology allows for the exploration of the lived life and is thus a useful methodology for surmounting these inherent challenges. Narcissism has long been a focus of academic interest, and although there is a great deal of research in this area, to the researchers' knowledge, narcissistic dynamics have never been explored within a psychobiographical format. Thus, the primary aim of the research was to explore and describe narcissism in the life of Howard Hughes, with the objective of gaining further insight into narcissism through this unconventional research approach. Hughes was chosen as the subject of the study because he is renowned as an eccentric billionaire who had a revolutionary effect on the world while concurrently being disturbed by his personal pathologies. Hughes was active in three different sectors, namely motion pictures, aviation, and gambling. He became more and more reclusive as he entered middle age. From his early fifties he was agoraphobic, and the social network of connectivity that could reasonably be expected of someone at the top of their field was notably distorted. Due to his strong narcissistic personality configuration and the interpersonal difficulties he experienced, Hughes represents an ideal figure through whom to explore narcissism. The study used a single case study design and purposive sampling to select Hughes. Qualitative data were sampled using secondary data sources. Given that Hughes was a famous figure, there is a plethora of information on his life, which is primarily biographical. This includes books written about his life and archival material in the form of newspaper articles, interviews, and movies. The gathered data were triangulated to avoid the effect of author bias and increase the credibility of the data used, and were collected using Yin’s guidelines for data collection. Data were analysed using Miles and Huberman’s strategy of data analysis, which consists of three steps, namely data reduction, data display, and conclusion drawing and verification. Patterns that emerged in the data highlighted the defense mechanisms used by Hughes, in particular splitting and projection, in defending his sense of self. These defense mechanisms help us to understand the high levels of entitlement and paranoia experienced by Hughes. Findings provide further insight into his sense of isolation and difference and the consequent difficulty he experienced in maintaining connections with others. Findings furthermore confirm the effectiveness of Kernberg’s theory in understanding narcissism when observing an individual life.Keywords: Howard Hughes, narcissism, narcissistic defenses, object relations
Procedia PDF Downloads 357
69 Statistical Comparison of Ensemble Based Storm Surge Forecasting Models
Authors: Amin Salighehdar, Ziwen Ye, Mingzhe Liu, Ionut Florescu, Alan F. Blumberg
Abstract:
Storm surge is an abnormal water level caused by a storm. Accurate prediction of storm surge is a challenging problem. Researchers have developed various ensemble modeling techniques to combine several individual forecasts to produce an overall, presumably better, forecast. Some simple ensemble modeling techniques exist in the literature. For instance, Model Output Statistics (MOS) and running mean-bias removal are widely used techniques in the storm surge prediction domain. However, these methods have some drawbacks. For instance, MOS is based on multiple linear regression and needs a long period of training data. To overcome the shortcomings of these simple methods, researchers have proposed more advanced methods. For instance, ENSURF (Ensemble SURge Forecast) is a multi-model application for sea level forecasting. This application creates a better forecast of sea level using a combination of several instances of Bayesian Model Averaging (BMA). An ensemble dressing method is based on identifying the best member forecast and using it for prediction. Our contribution in this paper can be summarized as follows. First, we investigate whether the ensemble models perform better than any single forecast. To do so, we need to identify the single best forecast, and we present a methodology based on a simple Bayesian selection method to select it. Second, we present several new and simple ways to construct ensemble models, using correlation and standard deviation as weights for combining different forecast models. Third, we use these ensembles, and compare them with several existing models in the literature, to forecast storm surge levels. We then investigate whether developing a complex ensemble model is indeed needed. To achieve this goal, we use a simple average (one of the simplest and most widely used ensemble models) as a benchmark. Predicting the peak surge level during a storm, as well as the precise time at which this peak occurs, is crucial; thus, we develop a statistical platform to compare the performance of the various ensemble methods. This statistical analysis is based on the root mean square error of the ensemble forecast during the testing period and on the magnitude and timing of the forecasted peak surge compared to the actual peak and its timing. In this work, we analyze four hurricanes: Hurricanes Irene and Lee in 2011, Hurricane Sandy in 2012, and Hurricane Joaquin in 2015. Since Hurricane Irene developed at the end of August 2011 and Hurricane Lee started just after Irene at the beginning of September 2011, in this study we consider them as a single contiguous hurricane event. The dataset used for this study was generated by the New York Harbor Observing and Prediction System (NYHOPS). We find that even the simplest possible way of creating an ensemble produces results superior to any single forecast. We also show that the ensemble models we propose generally perform better than the simple average ensemble technique.Keywords: Bayesian learning, ensemble model, statistical analysis, storm surge prediction
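The correlation- and standard-deviation-weighted ensembles can be sketched compactly. The toy example below, which assumes weights estimated on a training period and applied to a held-out test period, also includes the simple-average benchmark; the member series are synthetic, not NYHOPS data.

```python
import numpy as np

def weighted_ensembles(train, test, obs_train):
    """Combine member forecasts using training-period skill as weights.

    train, test: arrays of shape (n_members, n_times); obs_train aligns with train.
    Returns correlation-weighted, inverse-error-std-weighted, and simple-average
    ensembles for the test period.
    """
    corr = np.array([np.corrcoef(m, obs_train)[0, 1] for m in train]).clip(min=0)
    inv_sd = 1.0 / np.array([np.std(m - obs_train) for m in train])
    return {
        "corr-weighted": corr @ test / corr.sum(),
        "inv-std-weighted": inv_sd @ test / inv_sd.sum(),
        "simple-average": test.mean(axis=0),
    }

# Toy surge series (m): two skilful members and one noisy member.
rng = np.random.default_rng(1)
obs_tr, obs_te = 2 * rng.random(100), 2 * rng.random(50)
noise = [0.10, 0.15, 0.60]
train = np.array([obs_tr + rng.normal(0, s, 100) for s in noise])
test = np.array([obs_te + rng.normal(0, s, 50) for s in noise])

for name, ens in weighted_ensembles(train, test, obs_tr).items():
    print(name, "RMSE =", round(float(np.sqrt(np.mean((ens - obs_te) ** 2))), 3))
```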
Procedia PDF Downloads 309
68 Construal Level Perceptions of Environmental vs. Social Sustainability in Online Fashion Shopping Environments
Authors: Barbara Behre, Verolien Cauberghe, Dieneke Van de Sompel
Abstract:
Sustainable consumption is on the rise, yet it has still not entered the mainstream in several industries, such as the fashion industry. In online fashion contexts, sustainability cues have been used to signal the sustainable benefits of certain garments in order to promote sustainable consumption. These cues may focus on the ecological or the social dimension of sustainability. Since sustainability, in general, relates to distant, abstract benefits, the current study aims to examine if and how psychological distance may mediate the effects of exposure to different sustainability cues on consumption outcomes. Following the framework of Construal Level Theory of Psychological Distance, reduced psychological distance renders the construal level more concrete, which may influence attitudes and subsequent behavior in situations like fashion shopping. Most studies have investigated sustainability as a composite, failing to differentiate between the ecological and societal aspects of sustainability. The few studies examining sustainability in more detail found that environmental sustainability tends to be perceived at an abstract cognitive construal level, whereas social sustainability is linked to concrete construal. However, the construal level associated with each sustainability dimension is likely not universally applicable across domains and stages of consumption, which further suggests a need to clarify the relationships between the environmental and social sustainability dimensions and the construal level of psychological distance within fashion brand consumption. While psychological distance and construal level have been examined in the context of sustainability, these studies yielded mixed results. The inconsistent findings of past studies might be due to the context-dependence of psychological distance, which may induce construal differently in diverse situations. Especially in a hedonic consumption context like online fashion shopping, the visual processing of information could determine behavioural outcomes linked to situational construal. Given the influence of the mode of processing on psychological distance and construal level, the current study also examines the moderating role of verbal versus non-verbal presentation of the sustainability cues. In a 3 (environmental sustainability vs. social sustainability vs. control) x 2 (non-verbal message vs. verbal message) between-subjects experiment, the present study thus examines how consumers evaluate sustainable brands in online shopping contexts in terms of psychological distance and construal level, as well as the impact on brand attitudes and buying intentions. The results among 246 participants verify the differential impact of the sustainability dimensions on fashion brand purchase intent as mediated by construal level and perceived psychological distance. The ecological sustainability cue is perceived as more concrete, which might be explained by consumer bias induced by the predominance of pro-environmental sustainability messages. The verbal versus non-verbal presentation of the sustainability cue had no significant influence on distance perceptions and construal level, nor on buying intentions. This study offers valuable contributions to the sustainable consumption literature, as well as a theoretical basis for construal-level framing as applied in sustainable fashion branding.Keywords: construal level theory, environmental vs social sustainability, online fashion shopping, sustainable fashion
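The mediation claim (cue → perceived distance/construal → purchase intent) is commonly tested with a percentile-bootstrap indirect effect. The sketch below makes heavy simplifying assumptions: a single binary cue contrast, one mediator, synthetic data sized to the study's sample of 246, and ordinary-least-squares paths; the authors' actual models and measures are not specified in the abstract.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 246
cue = rng.integers(0, 2, n).astype(float)                 # 0 = control, 1 = sustainability cue
distance = 2.0 - 0.8 * cue + rng.normal(0, 1, n)          # cue lowers perceived distance
intent = 3.0 - 0.5 * distance + rng.normal(0, 1, n)       # lower distance raises intent

def indirect_effect(c, m, y):
    # Path a: cue -> mediator; path b: mediator -> outcome with cue partialled out.
    a = sm.OLS(m, sm.add_constant(c)).fit().params[1]
    b = sm.OLS(y, sm.add_constant(np.column_stack([m, c]))).fit().params[1]
    return a * b

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)                           # resample participants
    boot.append(indirect_effect(cue[idx], distance[idx], intent[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {np.mean(boot):.3f}, 95% bootstrap CI = ({lo:.3f}, {hi:.3f})")
```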
Procedia PDF Downloads 103
67 A Comparison of Methods for Estimating Dichotomous Treatment Effects: A Simulation Study
Authors: Jacqueline Y. Thompson, Sam Watson, Lee Middleton, Karla Hemming
Abstract:
Introduction: The odds ratio (estimated via logistic regression) is a well-established and common approach for estimating covariate-adjusted binary treatment effects when comparing a treatment and control group with dichotomous outcomes. Its popularity is primarily due to its stability and robustness to model misspecification. However, the situation is different for the relative risk and the risk difference, which are arguably easier to interpret and better suited to specific designs such as non-inferiority studies. So far, there is no equivalent, widely accepted approach for estimating an adjusted relative risk or risk difference in clinical trials. This is partly due to the lack of a comprehensive evaluation of the available candidate methods. Methods/Approach: A simulation study is designed to evaluate the performance of the relevant candidate methods for estimating relative risks, representing both conditional and marginal estimation approaches. We consider: the log-binomial generalised linear model (GLM) with iteratively weighted least squares (IWLS) and model-based standard errors (SEs); the log-binomial GLM with convex optimisation and model-based SEs; the log-binomial GLM with convex optimisation and permutation tests; the modified-Poisson GLM with IWLS and robust SEs; log-binomial generalised estimating equations (GEE) with robust SEs; marginal standardisation with delta-method SEs; and marginal standardisation with permutation-test SEs. Independent and identically distributed datasets are simulated from a randomised controlled trial to evaluate these candidate methods. Simulations are replicated 10,000 times for each scenario across all possible combinations of sample sizes (200, 1000, and 5000), outcome event rates (10%, 50%, and 80%), and covariate effects (ranging from -0.05 to 0.7) representing weak, moderate, or strong relationships. Treatment effects (0, -0.5, and 1 on the log scale) cover null (H0) and alternative (H1) hypotheses to evaluate coverage and power in realistic scenarios. Performance measures (bias, mean square error (MSE), relative efficiency, and convergence rates) are evaluated across scenarios covering a range of sample sizes, event rates, covariate prognostic strengths, and model misspecifications. Potential Results, Relevance & Impact: There are several methods for estimating unadjusted and adjusted relative risks. However, it is unclear which method(s) is the most efficient, preserves the type-I error rate, is robust to model misspecification, or is the most powerful when adjusting for non-prognostic and prognostic covariates. GEE estimates may be biased when the outcome distributions do not arise from marginal binary data. Also, it seems that marginal standardisation and convex optimisation may perform better than the IWLS log-binomial GLM.Keywords: binary outcomes, statistical methods, clinical trials, simulation study
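One listed candidate, the modified-Poisson estimator, is easy to illustrate: fit a Poisson GLM to the binary outcome and use robust (sandwich) standard errors, so the adjusted relative risk is the exponentiated treatment coefficient. The simulated trial below is a minimal sketch, not one of the paper's scenarios.

```python
import numpy as np
import statsmodels.api as sm

# Simulate one illustrative trial: binary outcome, true adjusted RR = exp(0.4).
rng = np.random.default_rng(0)
n = 1000
treat = rng.integers(0, 2, n)
x = rng.normal(0, 1, n)                          # one prognostic covariate
p = np.clip(0.2 * np.exp(0.4 * treat + 0.3 * x), 0, 1)
y = rng.binomial(1, p)

# Modified Poisson: Poisson GLM on the binary outcome + robust (sandwich) SEs.
X = sm.add_constant(np.column_stack([treat, x]))
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit(cov_type="HC0")
rr = np.exp(fit.params[1])                       # adjusted relative risk
lo, hi = np.exp(fit.conf_int()[1])               # 95% CI for the treatment effect
print(f"adjusted RR = {rr:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```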
Procedia PDF Downloads 114
66 Electrical Degradation of GaN-based p-channel HFETs Under Dynamic Electrical Stress
Authors: Xuerui Niu, Bolin Wang, Xinchuang Zhang, Xiaohua Ma, Bin Hou, Ling Yang
Abstract:
The application of discrete GaN-based power switches requires the collaboration of silicon-based peripheral circuit structures. However, the packaging and interconnections between the Si and GaN devices can introduce parasitic effects into the circuit, which have a great impact on GaN power transistors. GaN-based monolithic power integration technology is an emerging solution that can improve the stability of circuits and allow GaN-based devices to achieve more functions. Complementary logic circuits consisting of GaN-based E-mode p-channel heterostructure field-effect transistors (p-HFETs) and E-mode n-channel HEMTs can serve as gate drivers. E-mode p-HFETs with recessed gates have attracted increasing interest because of their low leakage current and large gate swing. However, they suffer from a poor interface between the gate dielectric and the polarized nitride layers. The reliability of p-HFETs is analyzed and discussed in this work. In circuit applications, the inverter is always operated with a dynamic gate voltage (VGS) rather than a constant VGS. Therefore, dynamic electrical stress was applied to resemble the operating conditions of E-mode p-HFETs. The dynamic electrical stress condition is as follows: VGS is a square waveform switching from -5 V to 0 V, VDS is fixed, and the source is grounded. The frequency of the square waveform is 100 kHz, with rise/fall times of 100 ns and a duty ratio of 50%. The effective stress time is 1000 s. A number of stress tests were carried out, with the stress briefly interrupted to measure the linear and saturation IDS-VGS characteristics. When VGS switches from -5 V to 0 V and VDS = 0 V, devices are under the negative-bias-instability (NBI) condition. Holes are trapped at the interface between the oxide layer and the GaN channel layer, which results in a reduction of VTH. The negative shift of VTH is pronounced during the first 10 s and then changes only slightly with further stress time. However, a different phenomenon is observed when VDS is reduced to -5 V. VTH shifts negatively during the stress, and the variation in VTH increases with time, which differs from the case when VDS is 0 V. Two mechanisms exist in this condition. On the one hand, the electric field in the gate region is influenced by the drain voltage, so the hole-trapping behavior in the gate region changes and the impact of the gate voltage is weakened. On the other hand, a large drain voltage can induce hot-hole generation and lead to serious hot-carrier stress (HCS) degradation over time. The poor-quality interface between the oxide layer and the GaN channel layer in the gate region contributes substantially to the high density of interface traps, which greatly influences device reliability. These results emphasize that improved etching and pretreatment processes need to be developed so that high-performance GaN complementary logic with enhanced stability can be achieved.Keywords: GaN-based E-mode p-HFETs, dynamic electric stress, threshold voltage, monolithic power integration technology
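For reference, the stress pattern described above can be generated directly. The sketch below produces the assumed VGS waveform (a -5 V to 0 V square wave at 100 kHz, 50% duty ratio, 100 ns linear edges); starting each period on the rising edge is an arbitrary assumption.

```python
import numpy as np

def vgs_stress_waveform(t, f=100e3, v_low=-5.0, v_high=0.0,
                        t_edge=100e-9, duty=0.5):
    """VGS vs. time for the assumed stress pattern: -5 V <-> 0 V square wave,
    100 kHz, 50% duty cycle, 100 ns linear rising/falling edges."""
    period = 1.0 / f
    tau = np.mod(t, period)                              # position within the period
    t_high = duty * period
    v = np.where(tau < t_high, v_high, v_low)            # flat portions
    v = np.where(tau < t_edge,                           # rising edge toward v_high
                 v_low + (v_high - v_low) * tau / t_edge, v)
    falling = (tau >= t_high) & (tau < t_high + t_edge)  # falling edge toward v_low
    v = np.where(falling,
                 v_high + (v_low - v_high) * (tau - t_high) / t_edge, v)
    return v

t = np.linspace(0, 2e-5, 2001)                           # two periods at 100 kHz
print(vgs_stress_waveform(t)[:5])
```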
Procedia PDF Downloads 92
65 Gender Gap in Returns to Social Entrepreneurship
Authors: Saul Estrin, Ute Stephan, Suncica Vujic
Abstract:
Background and research question: Gender differences in pay are present at all organisational levels, including at the very top. One possible way for women to circumvent organizational norms and discrimination is to engage in entrepreneurship because, as CEOs of their own organizations, entrepreneurs largely determine their own pay. While commercial entrepreneurship plays an important role in job creation and economic growth, social entrepreneurship has come to prominence because of its promise of addressing societal challenges such as poverty, social exclusion, or environmental degradation through market-based rather than state-sponsored activities. This raises the research question of whether social entrepreneurship might be a form of entrepreneurship in which the pay of men and women is the same, or at least more similar; that is to say, one in which there is little or no gender pay gap. If the gender pay gap persists at the top of social enterprises as well, what factors might explain these differences? Methodology: The Oaxaca-Blinder decomposition (OBD) is the standard approach to decomposing the gender pay gap based on the linear regression model. The OBD divides the gender pay gap into the ‘explained’ part, due to differences in labour market characteristics (education, work experience, tenure, etc.), and the ‘unexplained’ part, due to differences in the returns to those characteristics. The latter part is often interpreted as ‘discrimination’. There are two issues with this approach. (i) In many countries there is notable convergence in labour market characteristics across genders; hence the OBD method is no longer revealing, since the largest portion of the gap remains ‘unexplained’. (ii) Adding covariates to a base model sequentially, either to test a particular coefficient’s ‘robustness’ or to account for the ‘effects’ of the added covariates on this coefficient, might be problematic due to sequence-sensitivity when the added covariates are correlated. Gelbach’s decomposition (GD) addresses the latter by using the omitted-variables-bias formula, which constructs a conditional decomposition and thus accounts for sequence-sensitivity when added covariates are correlated. We use GD to decompose the gender differences in pay (annual and hourly salary), organisation size (revenues), effort (weekly hours of work), and sources of finance (fees and sales, grants and donations, microfinance and loans, and investors’ capital) between men and women leading social enterprises. Database: Our empirical work is made possible by our collection of a unique dataset using respondent-driven sampling (RDS) methods, which address the problem that there is as yet no information on the underlying population of social entrepreneurs. The countries we focus on are the United Kingdom, Spain, Romania, and Hungary. Findings and recommendations: We confirm the existence of a gender pay gap between men and women leading social enterprises. This gap can be explained by differences in the accumulation of human capital, psychological and social factors, as well as cross-country differences. The results of this study contribute to a more rounded perspective, highlighting that although social entrepreneurship may be a highly satisfying occupation, it also perpetuates gender pay inequalities.Keywords: Gelbach’s decomposition, gender gap, returns to social entrepreneurship, values and preferences
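For concreteness, the two-part OBD can be written as follows, here taking the male coefficient vector as the reference; other reference weightings (female or pooled coefficients) are equally common, and the abstract does not state which variant the authors use.

```latex
% Two-part Oaxaca-Blinder decomposition of the mean gender gap in log pay,
% with the male coefficients as the reference (one of several possible weightings):
\[
\underbrace{\bar{y}_m - \bar{y}_f}_{\text{raw gap}}
  \;=\;
\underbrace{(\bar{X}_m - \bar{X}_f)'\hat{\beta}_m}_{\text{explained: characteristics}}
  \;+\;
\underbrace{\bar{X}_f'\,(\hat{\beta}_m - \hat{\beta}_f)}_{\text{unexplained: returns}}
\]
```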
Procedia PDF Downloads 244
64 Use of PACER Application as Physical Activity Assessment Tool: Results of a Reliability and Validity Study
Authors: Carine Platat, Fatima Qshadi, Ghofran Kayed, Nour Hussein, Amjad Jarrar, Habiba Ali
Abstract:
Nowadays, smartphones are very popular. They offer a variety of free, easy-to-use applications, among which are step counters and fitness tests. The number of users is huge, making such applications a potentially efficient new strategy to encourage people to become more active. Nonetheless, data on their reliability and validity are very scarce and, when available, often negative and contradictory. Besides, weight status, which is likely to introduce a bias in the physical activity assessment, was not often considered. Hence, the use of these applications as motivational tools, as assessment tools, and in research is questionable. PACER is one of these free step-counter applications. Even though it is one of the best-rated free applications among users, it has never been tested for reliability and validity. Prior to any use of PACER, this remains to be investigated. The objective of this work is to investigate the reliability and validity of the smartphone application PACER in measuring step counts and in assessing cardiorespiratory fitness via the 6-minute walking test. Twenty overweight or obese students (10 male and 10 female), aged between 18 and 25 years, were recruited at United Arab Emirates University. Reliability and validity were tested in real-life conditions and in controlled conditions using a treadmill. Test-retest experiments were done with PACER on two days separated by a week in real-life conditions (24 hours each time) and in controlled conditions (30 minutes on a treadmill at 3 km/h). Validity was tested against the OMRON pedometer under the same conditions. During the treadmill test, video was recorded, and step counts were compared between PACER, the pedometer, and the video. The validity of PACER in estimating cardiorespiratory fitness (VO2max) from the 6-minute walking test (6MWT) was studied against the 20 m shuttle run test. Reliability was studied by calculating intraclass correlation coefficients (ICCs) with 95% confidence intervals (95% CIs) and by Bland-Altman plots. Validity was studied using Spearman's correlation coefficient (rho) and Bland-Altman plots. PACER reliability was good in both males and females in real-life conditions (p≤10⁻³) but only in females in controlled conditions (p=0.01). PACER was valid against the OMRON pedometer in males and females in real-life conditions (rho=0.94, p≤10⁻³ and rho=0.64, p=0.01, respectively). In controlled conditions, PACER was not valid against the pedometer, but it was valid against the video in females (rho=0.72, p≤10⁻³). PACER was valid against the shuttle run test in males and females (rho=0.66, p=0.01 and rho=0.51, p=0.04, respectively) for estimating VO2max. This study provides data on the reliability and validity of PACER in overweight or obese male and female young adults. Overall, PACER was shown to be reliable and valid in real-life conditions for counting steps and assessing fitness in overweight or obese males and females. This supports the use of PACER to assess and promote physical activity in clinical follow-up and community interventions.Keywords: smartphone application, pacer, reliability, validity, steps, fitness, physical activity
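A minimal sketch of the validity analysis, assuming paired daily step counts from the two instruments: Spearman's rho for the association and Bland-Altman bias with 95% limits of agreement. The counts are invented; the reliability analysis (ICCs) would additionally require the repeated test-retest measurements.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical daily step counts from PACER and the OMRON pedometer (same days).
pacer = np.array([8430, 10212, 6741, 12050, 9320, 7608, 11102, 8854])
omron = np.array([8605, 10050, 7002, 11871, 9540, 7480, 11340, 9015])

# Validity: monotonic association between the two instruments.
rho, p = spearmanr(pacer, omron)

# Bland-Altman agreement: mean bias and 95% limits of agreement.
diff = pacer - omron
bias = diff.mean()
loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))
print(f"rho={rho:.2f} (p={p:.3f}); bias={bias:.0f} steps; "
      f"LoA=({loa[0]:.0f}, {loa[1]:.0f})")
```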
Procedia PDF Downloads 452