Search results for: robust penalized regression
1138 Transition from Linear to Circular Business Models with Service Design Methodology
Authors: Minna-Maari Harmaala, Hanna Harilainen
Abstract:
Estimates of the economic value of transitioning to circular economy models vary, but the transition has been estimated to represent $1 trillion worth of new business for the global economy. In Europe alone, estimates suggest that adopting circular-economy principles could not only deliver environmental and social benefits but also generate a net economic benefit of €1.8 trillion by 2030. Proponents of a circular economy argue that it offers a major opportunity to increase resource productivity, decrease resource dependence and waste, and increase employment and growth. A circular system could improve competitiveness and unleash innovation. Yet most companies are not capturing these opportunities, and abundant circular opportunities remain uncaptured even though they would seem inherently profitable. Service design, in broad terms, refers to developing an existing or new service or service concept with emphasis on the customer experience from the onset of the development process. Service design may even mean starting from scratch and co-creating the service concept entirely through customer involvement. Service design methodologies provide a structured way of incorporating customer understanding and involvement into the process of designing better services that resonate more strongly with customer needs. A business model is a depiction of how a company creates, delivers, and captures value, i.e., how it organizes its business. The process of developing, adjusting, or modifying a business model is also called business model innovation, and innovating business models has become a part of business strategy. Our hypothesis is that, in addition to linear models still being easier to adopt and often having lower threshold costs, companies lack an understanding of how circular models can be adopted into their business and of how willing and ready customers will be to adopt new circular business models. In our research, we use a robust service design methodology to develop circular economy solutions with two case study companies. The aim of the process is not only to develop the service concepts and portfolio but also to demonstrate that the willingness to adopt circular solutions exists in the customer base. In addition to service design, we employ business model innovation methods to further develop, test, and validate the new circular business models. The results clearly indicate that, among the customer groups, there are specific customer personas that are willing to adopt circular solutions and in fact expect the companies to take a leading role in the transition towards a circular economy. At the same time, there is a group of indifferent customers to whom the idea of circularity provides no added value. In addition, the case studies clearly show what changes the adoption of circular economy principles brings to the existing business model and how these changes can be integrated.
Keywords: business model innovation, circular economy, circular economy business models, service design
Procedia PDF Downloads 135
1137 Development and Validation Method for Quantitative Determination of Rifampicin in Human Plasma and Its Application in Bioequivalence Test
Authors: Endang Lukitaningsih, Fathul Jannah, Arief R. Hakim, Ratna D. Puspita, Zullies Ikawati
Abstract:
Rifampicin is a semisynthetic antibiotic derivative of rifamycin B produced by Streptomyces mediterranei. Rifampicin (RIF) has been used worldwide as a first-line drug prescribed throughout tuberculosis therapy. This study aims to develop and validate an HPLC method coupled with UV detection for the determination of rifampicin in spiked human plasma and to apply it in a bioequivalence study. The chromatographic separation was achieved on an RP-C18 column (LaChrom Hitachi, 250 x 4.6 mm, 5 μm) using a mobile phase of phosphate buffer/acetonitrile (55:45, v/v, pH 6.8 ± 0.1) at a flow rate of 1.5 mL/min. Detection was carried out at 337 nm using a spectrophotometer. The developed method was statistically validated for linearity, accuracy, limit of detection, limit of quantitation, precision, and specificity. The specificity of the method was ascertained by comparing chromatograms of blank plasma and plasma containing rifampicin; the matrix and rifampicin were well separated. The limit of detection and limit of quantification were 0.7 µg/mL and 2.3 µg/mL, respectively. The standard regression curve was linear (r > 0.999) over a concentration range of 20.0–100.0 µg/mL. The mean recovery of the method was 96.68 ± 8.06%. Both intraday and interday precision data showed good reproducibility (R.S.D. 2.98% and 1.13%, respectively). Therefore, the method can be used for routine analysis of rifampicin in human plasma and in bioequivalence studies. The validated method was successfully applied in a pharmacokinetic and bioequivalence study of a rifampicin tablet in a limited number of subjects (under Ethical Clearance No. KE/FK/6201/EC/2015). The mean values of Cmax, Tmax, AUC(0-24) and AUC(0-∞) for the test formulation of rifampicin were 5.81 ± 0.88 µg/mL, 1.25 h, 29.16 ± 4.05 µg/mL·h and 29.41 ± 4.07 µg/mL·h, respectively. For the reference formulation, the corresponding values were 5.04 ± 0.54 µg/mL, 1.31 h, 27.20 ± 3.98 µg/mL·h and 27.49 ± 4.01 µg/mL·h. From the bioequivalence study, the 90% CIs for the test/reference ratio of the logarithmically transformed Cmax and AUC(0-24) were 97.96–129.48% and 99.13–120.02%, respectively. According to the bioequivalence test guidelines of the European Commission–European Medicines Agency, it can be concluded that the test formulation of rifampicin is bioequivalent to the reference formulation.
Keywords: validation, HPLC, plasma, bioequivalence
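As an illustration of the bioequivalence criterion mentioned above, the sketch below computes a 90% confidence interval for the test/reference geometric mean ratio from log-transformed Cmax values. It assumes simple paired subject-level data and skips the crossover ANOVA used in formal analyses; all numbers are hypothetical.

```python
# Minimal sketch: 90% CI for the test/reference geometric mean ratio of Cmax,
# assuming paired subject-level data (the full crossover ANOVA is omitted).
import numpy as np
from scipy import stats

cmax_test = np.array([5.1, 6.2, 5.8, 6.9, 5.0, 6.4, 5.7, 6.1])   # hypothetical values
cmax_ref  = np.array([4.8, 5.5, 5.2, 6.0, 4.6, 5.9, 5.1, 5.3])

d = np.log(cmax_test) - np.log(cmax_ref)          # within-subject log differences
n = d.size
se = d.std(ddof=1) / np.sqrt(n)
t90 = stats.t.ppf(0.95, df=n - 1)                 # two-sided 90% CI -> 5% in each tail
ci = np.exp([d.mean() - t90 * se, d.mean() + t90 * se]) * 100

print(f"Geometric mean ratio: {np.exp(d.mean())*100:.1f}%")
print(f"90% CI: {ci[0]:.1f}% - {ci[1]:.1f}%  (bioequivalence bounds: 80.00-125.00%)")
```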
Procedia PDF Downloads 291
1136 Considering Uncertainties of Input Parameters on Energy, Environmental Impacts and Life Cycle Costing by Monte Carlo Simulation in the Decision Making Process
Authors: Johannes Gantner, Michael Held, Matthias Fischer
Abstract:
The refurbishment of the building stock in terms of energy supply and efficiency is one of the major challenges of the German turnaround in energy policy. As the building sector accounts for 40% of Germany’s total energy demand, additional insulation is key for energy-efficient refurbished buildings. Nevertheless, despite the energetic benefits, the environmental and economic performance of insulation materials is often questioned. The methods of Life Cycle Assessment (LCA) and Life Cycle Costing (LCC) can form a standardized basis for addressing these doubts and are becoming increasingly important for material producers due to efforts such as the Product Environmental Footprint (PEF) or Environmental Product Declarations (EPD). With the increasing use of LCA and LCC information for decision support, the robustness and resilience of the results become crucial, especially when supporting decision and policy makers. LCA and LCC results are based on models that depend on technical parameters such as efficiencies, material and energy demand, and product output. Nevertheless, the influence of parameter uncertainties on life cycle results is usually not considered or is studied only superficially, even though the effect of parameter uncertainties cannot be neglected: based on the example of an exterior wall, the overall life cycle results vary by a factor of more than three. As a result, the simple best-case/worst-case analyses used in practice are not sufficient. They allow a first rough view of the results but do not take effects such as error propagation into account, and so LCA practitioners cannot provide further guidance for decision makers. Probabilistic analyses enable LCA practitioners to gain a deeper understanding of LCA and LCC results and to provide better decision support. Within this study, the environmental and economic impacts of an exterior wall system over its whole life cycle are illustrated, and the effect of different uncertainty analyses on the interpretation in terms of resilience and robustness is shown. The approaches of error propagation and Monte Carlo simulation are applied and combined with statistical methods to allow a deeper understanding and interpretation. Overall, this study emphasizes the need for a deeper and more detailed probabilistic evaluation based on statistical methods. Only in this way can misleading interpretations be avoided and the results be used for resilient and robust decisions.
Keywords: uncertainty, life cycle assessment, life cycle costing, Monte Carlo simulation
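A minimal sketch of the Monte Carlo step described above: uncertain input parameters are sampled and pushed through a toy exterior-wall life-cycle model to obtain result distributions. All parameter distributions, names, and conversion factors are hypothetical placeholders, not the study's model.

```python
# Minimal Monte Carlo sketch: propagate parameter uncertainty through a toy
# life-cycle indicator for an insulated exterior wall. All distributions and
# figures are hypothetical placeholders, not the study's data.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

conductivity = rng.normal(0.035, 0.004, n)            # W/(m.K), insulation lambda
thickness    = rng.normal(0.16, 0.01, n)              # m
energy_price = rng.lognormal(np.log(0.10), 0.2, n)    # EUR/kWh
gwp_material = rng.normal(55.0, 8.0, n)               # kg CO2e per m2 of wall
lifetime     = 50                                      # years
hdd_factor   = 70.0                                    # kKh/a, degree-hour proxy

u_value = conductivity / thickness                     # W/(m2.K)
heating_energy = u_value * hdd_factor * lifetime       # kWh/m2 over lifetime
lcc = heating_energy * energy_price                    # EUR/m2 (operation only)
gwp = gwp_material + heating_energy * 0.2              # kg CO2e/m2, 0.2 kg/kWh grid mix

for name, x in [("LCC [EUR/m2]", lcc), ("GWP [kg CO2e/m2]", gwp)]:
    p5, p50, p95 = np.percentile(x, [5, 50, 95])
    print(f"{name}: median {p50:.0f}, 90% interval [{p5:.0f}, {p95:.0f}]")
```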
Procedia PDF Downloads 286
1135 Private Coded Computation of Matrix Multiplication
Authors: Malihe Aliasgari, Yousef Nejatbakhsh
Abstract:
The era of Big Data and the immensity of real-life datasets compel computation tasks to be performed in a distributed fashion, where the data are dispersed among many servers that operate in parallel. However, massive parallelization leads to computational bottlenecks due to faulty servers and stragglers. Stragglers are a few slow or delay-prone processors that can bottleneck the entire computation because one has to wait for all the parallel nodes to finish. The problem of straggling processors has been well studied in the context of distributed computing. Recently, it has been pointed out that, for the important case of linear functions, it is possible to improve over repetition strategies in terms of the trade-off between performance and latency by carrying out linear precoding of the data prior to processing. The key idea is that, by employing suitable linear codes operating over fractions of the original data, a function may be completed as soon as a sufficient number of processors, determined by the minimum distance of the code, have completed their operations. Matrix-matrix multiplication over practically large data sets faces computational and memory-related difficulties, which makes it necessary to carry out such operations on distributed computing platforms. In this work, we study the problem of distributed matrix-matrix multiplication W = XY under storage constraints, i.e., when each server is allowed to store a fixed fraction of each of the matrices X and Y; this operation is a fundamental building block of many science and engineering fields such as machine learning, image and signal processing, wireless communication, and optimization. Both non-secure and secure matrix multiplication are studied. We consider the setup in which the identity of the matrix of interest should be kept private from the workers, and we derive the recovery threshold of the colluding model, that is, the number of workers that need to complete their task before the master server can recover the product W. We also study secure and private distributed matrix multiplication W = XY in which the matrix X is confidential, while matrix Y is selected privately from a library of public matrices. We present the best currently known trade-off between communication load and recovery threshold. In other words, we design an achievable PSGPD scheme for any arbitrary privacy level by concatenating a robust PIR scheme for arbitrary colluding workers and private databases with the proposed SGPD code, which provides a smaller computational complexity at the workers.
Keywords: coded distributed computation, private information retrieval, secret sharing, stragglers
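To illustrate the recovery-threshold idea, the sketch below implements a plain (non-secure, non-private) polynomial code for distributed matrix multiplication; it is a generic textbook-style construction, not the PSGPD/SGPD scheme proposed in the paper.

```python
# Sketch of a plain polynomial code for distributed W = X @ Y: with X split
# into p row blocks and Y into q column blocks, any p*q of the N workers
# suffice to recover W (recovery threshold p*q). Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
p, q, N = 2, 2, 6                      # block counts and number of workers
X = rng.standard_normal((4, 6))        # rows divisible by p
Y = rng.standard_normal((6, 4))        # cols divisible by q
Xb = np.split(X, p, axis=0)            # X_0..X_{p-1}
Yb = np.split(Y, q, axis=1)            # Y_0..Y_{q-1}

a = np.linspace(-1.0, 1.0, N)          # distinct evaluation points, one per worker

def worker_task(k):
    Xk = sum(Xb[i] * a[k] ** i for i in range(p))
    Yk = sum(Yb[j] * a[k] ** (p * j) for j in range(q))
    return Xk @ Yk                     # polynomial of degree p*q - 1 in a[k]

done = [1, 2, 4, 5]                    # any p*q = 4 finishers (workers 0 and 3 straggle)
P = np.stack([worker_task(k) for k in done])        # partial products from finishers
V = np.vander(a[done], p * q, increasing=True)      # Vandermonde in exponents 0..p*q-1
C = np.linalg.solve(V, P.reshape(len(done), -1))    # interpolate coefficients entrywise
C = C.reshape(p * q, *P.shape[1:])                  # C[i + p*j] equals X_i @ Y_j

W_hat = np.block([[C[i + p * j] for j in range(q)] for i in range(p)])
print("recovered:", np.allclose(W_hat, X @ Y))
```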
Procedia PDF Downloads 122
1134 The Moderating Roles of Bedtime Activities and Anxiety and Depression in the Relationship between Attention-Deficit/Hyperactivity Disorder and Sleep Problems in Children
Authors: Lian Tong, Yan Ye, Qiong Yan
Abstract:
Background: Children with attention-deficit/hyperactivity disorder (ADHD) often experience sleep problems, but the comorbidity mechanism has not been sufficiently studied. This study aimed to determine the comorbidity of ADHD and sleep problems as well as the moderating effects of bedtime activities and depression/anxiety symptoms on the relationship between ADHD and sleep problems. Methods: We recruited 934 primary students from third to fifth grade and their parents by stratified random sampling from three primary schools in Shanghai, China. This study used parent-reported versions of the ADHD Rating Scale-IV, Children’s Sleep Habits Questionnaire, and Achenbach Child Behavior Checklist. We used hierarchical linear regression analysis to clarify the moderating effects of bedtime activities and depression/anxiety symptoms. Results: We found that children with more ADHD symptoms had shorter sleep durations and more sleep problems on weekdays. Screen time before bedtime strengthened the relationship between ADHD and sleep-disordered breathing. Children with more screen time were more likely to have sleep onset delay, while those with less screen time had more sleep onset problems with increasing ADHD symptoms. The high bedtime eating group experienced more night waking with increasing ADHD symptoms compared with the low bedtime eating group. Anxiety/depression exacerbated total sleep problems and further interacted with ADHD symptoms to predict sleep length and sleep duration problems. Conclusions: Bedtime activities and emotional problems had important moderating effects on the relationship between ADHD and sleep problems. These findings indicate that appropriate bedtime management and emotional management may reduce sleep problems and improve sleep duration for children with ADHD symptoms.
Keywords: ADHD, sleep problems, anxiety/depression, bedtime activities, children
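A minimal sketch of the moderation analysis described above: hierarchical OLS steps in which a bedtime-activity variable moderates the ADHD-sleep association via an interaction term. The data and variable names are simulated placeholders, not the study's measures.

```python
# Minimal sketch of a moderation test: hierarchical OLS in which screen time
# moderates the ADHD -> sleep-problems association. Simulated placeholder data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 934
df = pd.DataFrame({
    "adhd": rng.normal(0, 1, n),
    "screen_time": rng.normal(0, 1, n),
    "anxiety_dep": rng.normal(0, 1, n),
})
df["sleep_problems"] = (0.4 * df.adhd + 0.2 * df.screen_time
                        + 0.3 * df.adhd * df.screen_time
                        + 0.25 * df.anxiety_dep + rng.normal(0, 1, n))

step1 = smf.ols("sleep_problems ~ adhd + screen_time + anxiety_dep", df).fit()
step2 = smf.ols("sleep_problems ~ adhd * screen_time + anxiety_dep", df).fit()

print(step2.params[["adhd", "screen_time", "adhd:screen_time"]])
print(f"Delta R2 from adding the interaction: {step2.rsquared - step1.rsquared:.4f}")
```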
Procedia PDF Downloads 204
1133 An Investigation of Item Bias in Free Boarding and Scholarship Examination in Turkey
Authors: Yeşim Özer Özkan, Fatma Büşra Fincan
Abstract:
Bias arises when an observation, the design process, or the specifications lead to a tendency towards one side, i.e., a departure from objectivity. Test items are expected to function equally for students of the same ability who come from different social groups, and the importance of this expectation increases during student selection and placement examinations. For example, no test item should benefit only a male or only a female group. The aim of this research is to investigate whether the items of the 2014 free boarding and scholarship examination exhibit bias with respect to gender. Data for 5th-, 6th-, and 7th-grade students were obtained from the General Directorate of Measurement, Evaluation and Examination Services in Turkey. A random sample of 20% (38,418 students) was drawn from 192,090 students, and their exam papers were examined for item bias. The Winsteps 3.8.1 package was used to analyse the data for bias according to the Rasch model with respect to gender. The mathematics items were examined for gender bias. First, confirmatory factor analysis was applied to the twenty-five mathematics questions; the NFI, TLI, CFI, IFI, RFI, GFI, RMSEA, and SRMR values were then examined to assess validity and goodness of fit. Modification indices from the confirmatory factor analysis were examined, and some items were omitted because of conceptual problems and poor model fit. The analysis shows that the 2014 free boarding and scholarship examination does not contain gender-biased items, i.e., no item favours or disadvantages groups of students on the basis of gender.
Keywords: gender, item bias, placement test, Rasch model
Procedia PDF Downloads 230
1132 Disadvantaged Adolescents and Educational Delay in South Africa: Impacts of Personal, Family, and School Characteristics
Authors: Rocio Herrero Romero, Lucie Cluver, James Hall, Janina Steinert
Abstract:
Educational delay and non-completion are major policy concerns in South Africa. However, little research has focused on predictors of educational delay amongst adolescents in disadvantaged areas. This study has two aims: first, to use data integration approaches to compare the educational delay of 599 adolescents aged 16 to 18 from disadvantaged communities to national and provincial representative estimates in South Africa. Second, the paper also explores predictors of educational delay by comparing adolescents out of school (n=64) and at least one year behind (n=380) with adolescents in the age-appropriate grade or higher (n=155). Multinomial logistic regression models using self-report and administrative data were applied to look for significant associations of risk and protective factors. Significant risk factors for being behind (rather than in the age-appropriate grade) were: male gender, past grade repetition, rural location, and larger school size. Risk factors for being out of school (rather than in the age-appropriate grade) were: past grade repetition, having experienced problems concentrating at school, household poverty, and food insecurity. Significant protective factors for being in the age-appropriate grade (rather than out of school) were: living with biological parents or grandparents and access to school counselling. Attending school in wealthier communities was a significant protective factor for being in the age-appropriate grade (rather than behind). Our results suggest that both personal and contextual factors, family and school, predicted educational delay. This study provides new evidence of the significant effects of personal, family, and school characteristics on the educational outcomes of adolescents from disadvantaged communities in South Africa. This is the first longitudinal and quantitative study to systematically investigate risk and protective factors for post-compulsory educational outcomes amongst South African adolescents living in disadvantaged communities.
Keywords: disadvantaged communities, quantitative analysis, school delay, South Africa
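A minimal sketch of the multinomial logistic model described above, with a three-category schooling outcome predicted from a few simulated risk and protective factors; variable names, coefficients, and data are placeholders rather than the study's data.

```python
# Minimal sketch of a multinomial logistic model: outcome with three categories
# (age-appropriate grade, behind, out of school). Simulated placeholder data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 599
df = pd.DataFrame({
    "male": rng.integers(0, 2, n),
    "repeated_grade": rng.integers(0, 2, n),
    "household_poverty": rng.normal(0, 1, n),
    "school_counselling": rng.integers(0, 2, n),
})
# 0 = age-appropriate grade, 1 = behind, 2 = out of school
logits = np.column_stack([
    np.zeros(n),
    -0.5 + 0.6 * df.male + 1.0 * df.repeated_grade,
    -1.5 + 1.2 * df.repeated_grade + 0.8 * df.household_poverty - 0.7 * df.school_counselling,
])
prob = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
df["status"] = [rng.choice(3, p=pi) for pi in prob]

model = smf.mnlogit("status ~ male + repeated_grade + household_poverty + school_counselling",
                    df).fit(disp=0)
print(np.exp(model.params))   # relative risk ratios vs. the age-appropriate baseline
```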
Procedia PDF Downloads 348
1131 Advancing Urban Sustainability through Data-Driven Machine Learning Solutions
Authors: Nasim Eslamirad, Mahdi Rasoulinezhad, Francesco De Luca, Sadok Ben Yahia, Kimmo Sakari Lylykangas, Francesco Pilla
Abstract:
With ongoing urbanization, cities face increasing environmental challenges that impact human well-being. To tackle these issues, data-driven approaches in urban analysis have gained prominence, leveraging urban data to promote sustainability. Integrating machine learning techniques enables researchers to analyze and predict complex environmental phenomena such as Urban Heat Island (UHI) occurrences in urban areas. This paper demonstrates the implementation of a data-driven approach and interpretable machine learning algorithms, combined with interpretability techniques, to conduct comprehensive data analyses for sustainable urban design. The developed framework and algorithms are demonstrated for Tallinn, Estonia, to develop sustainable urban strategies that mitigate urban heat waves. Geospatial data, preprocessed and labeled with UHI levels, are used to train various ML models, with logistic regression emerging as the best-performing model based on evaluation metrics. From this model, a mathematical equation is derived that separates areas with and without UHI effects, providing insights into UHI occurrences based on buildings and urban features. The derived formula highlights the importance of building volume, height, area, and shape length in creating an urban environment with UHI impact. The data-driven approach and derived equation inform mitigation strategies and sustainable urban development in Tallinn and offer valuable guidance for other locations with varying climates.
Keywords: data-driven approach, machine learning transparent models, interpretable machine learning models, urban heat island effect
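A minimal sketch of the equation-extraction step described above: a logistic classifier is fitted on simulated building features and its linear decision function is printed. Feature names and data are placeholders, not the Tallinn geospatial data.

```python
# Minimal sketch: fit a logistic classifier on simulated building features and
# read off the linear equation of its decision boundary, in the spirit of the
# UHI / no-UHI formula described above. Features and data are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
n = 2000
volume, height = rng.lognormal(8, 0.5, n), rng.normal(12, 4, n)
area, shape_len = rng.lognormal(6, 0.4, n), rng.normal(80, 20, n)
X = np.column_stack([volume, height, area, shape_len])
uhi = (0.002 * height + 0.0004 * shape_len + 0.00001 * area
       + rng.normal(0, 0.02, n)) > 0.06                      # synthetic UHI label

clf = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, uhi)
names = ["volume", "height", "area", "shape_length"]
lr = clf.named_steps["logisticregression"]
terms = " + ".join(f"{c:+.3f}*z({f})" for c, f in zip(lr.coef_[0], names))
print(f"log-odds(UHI) = {lr.intercept_[0]:+.3f} {terms}   (z = standardised feature)")
```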
Procedia PDF Downloads 38
1130 Evaluation of Social Studies Curriculum Implementation of Bachelor of Education Degree in Colleges of Education in Southwestern Nigeria
Authors: F. A. Adesoji, A. A. Ayandele
Abstract:
There has been concern over the non-responsiveness of educational programmes in Nigeria’s higher institutions to social needs. The study therefore investigated the effectiveness of the basic elements of the Social Studies curriculum and the contributions of teacher-related variables (TRV) such as qualification, area of specialization, teaching experience, teaching methods, and gender, together with teaching facilities, to the implementation of the curriculum (IOC) in the Colleges of Education (COEs). The study adopted the descriptive survey design. Four COEs in Oyo, Osun, Ondo and Lagos States were purposively selected. A stratified sampling technique was used to select 455 Social Studies students and 47 Social Studies lecturers. The Stakeholders’ Perception of Social Studies Curriculum scale (r = 0.86), Social Studies Curriculum Resources scale (r = 0.78) and Social Studies Basic Concepts Test (r = 0.78) were used for data collection. Data were analysed using descriptive statistics, multiple regression, and the t-test at the 0.05 level of significance. COE teachers and students rated the elements of the curriculum as effective, with mean scores of x̄ = 3.02 and x̄ = 2.80 respectively (x̄ = 5.00 and x̄ = 2.50 being the maximum and minimum mean scores). The findings showed an average level of availability (x̄ = 1.60), adequacy (x̄ = 1.55) and utilization (x̄ = 1.64) of teaching materials, with x̄ = 3.00 and x̄ = 1.50 being the maximum and minimum mean scores respectively. Students’ academic performance was average, with a mean score of x̄ = 51.4775 out of a maximum of x̄ = 100. The TRV and teaching facilities made a significant composite contribution to IOC (F(6,45) = 3.92; R² = 0.26), with a 39% contribution to the variance of IOC. Area of specialization (β = 29, t = 2.05) and teaching facilities (β = -25, t = 1.181) contributed significantly. The implementation of the bachelor’s degree in Social Studies curriculum was effective in the colleges of education. There is a need to beef up the provision of facilities to improve the implementation of the curriculum.
Keywords: bachelor degree in social studies, colleges of education in southwestern Nigeria, curriculum implementation, social studies curriculum
Procedia PDF Downloads 389
1129 Determinants of Rural Household Effective Demand for Biogas Technology in Southern Ethiopia
Authors: Mesfin Nigussie
Abstract:
The objectives of the study were to identify factors affecting rural households’ willingness to install a biogas plant and the amount they are willing to pay, in order to examine the determinants of effective demand for biogas technology. A multistage sampling technique was employed to select 120 respondents for the study. A binary probit regression model was employed to identify factors affecting rural households’ decision to install biogas technology. The probit model results revealed that household size, total household income, access to extension services related to biogas, access to credit services, proximity to water sources, households’ perception of the quality of biogas, a perception index of the attributes of biogas, households’ perception of the installation cost of biogas, and the availability of other energy sources were statistically significant in determining a household’s decision to install biogas. A Tobit model was employed to examine the determinants of the amount rural households are willing to pay. Based on the model results, the age of the household head, total annual household income, access to extension services, and the availability of other energy sources were significant variables influencing willingness to pay. Giving due consideration to extension services, making credit or subsidies available, improving the quality of biogas technology design, and minimizing installation costs by using locally available materials are the main suggestions of this research for creating effective demand for biogas technology.
Keywords: biogas technology, effective demand, probit model, tobit model, willingness to pay
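A minimal sketch of the probit step described above: the probability that a household decides to install biogas is modeled from a few simulated covariates (the Tobit model for the censored willingness-to-pay amount is not sketched). Variable names and data are placeholders.

```python
# Minimal sketch of the probit step: probability that a household decides to
# install biogas as a function of simulated covariates. Placeholder data only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 120
df = pd.DataFrame({
    "household_size": rng.poisson(5, n),
    "income": rng.lognormal(9, 0.5, n),
    "extension_access": rng.integers(0, 2, n),
    "credit_access": rng.integers(0, 2, n),
})
latent = (-2.0 + 0.15 * df.household_size + 0.00005 * df.income
          + 0.8 * df.extension_access + 0.6 * df.credit_access
          + rng.normal(0, 1, n))
df["install"] = (latent > 0).astype(int)

probit = smf.probit("install ~ household_size + income + extension_access + credit_access",
                    df).fit(disp=0)
print(probit.summary().tables[1])          # coefficients and z-statistics
print(probit.get_margeff().summary())      # average marginal effects
```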
Procedia PDF Downloads 140
1128 Self-reported Acute Pesticide Intoxication in Ethiopia
Authors: Amare Nigatu, Magne Bratveit, Bente E. Moen
Abstract:
Background: Pesticide exposure is an important public health concern in Ethiopia, but there is limited information on pesticide intoxications. Residents may have an increased risk of pesticide exposure through the proximity of their homes to farms using pesticides; exposure might also be related to employment at these farms. This study investigated the prevalence of acute pesticide intoxication (API) by residence proximity to a nearby flower farm and assessed whether intoxications are related to working there. Methods: A cross-sectional survey involving 516 persons was conducted. Participants were grouped according to the proximity of their residence to a large flower farm: living within 5 kilometers of the farm or 5-12 kilometers away. In a structured interview, participants were asked whether they had experienced health symptoms within 48 hours of pesticide exposure in the past year. Those who had, and who reported two or more typical pesticide intoxication symptoms, were considered to have had API. Chi-square and independent t-tests were used to compare categorical and continuous variables, respectively. Confounding variables were adjusted for using a binomial regression model. Results: The prevalence of API in the past year among residents in the study area was 26%, and it was higher in the population living close to the flower farm (42%) compared to those living far away (11%) (prevalence ratio (PR) = 3.2, 95% CI: 2.2-4.8, adjusted for age, gender, and education). A subgroup living close to the farm and working there had significantly more API (56%) than those living close but not working there (16%) (adjusted PR = 3.0, 95% CI: 1.8-4.9). Flower farm workers reported more API (56%) than those not working there (13%) (adjusted PR = 4.0, 95% CI: 2.9-5.6). Conclusion: Residents living closer than 5 kilometers to the flower farm reported a significantly higher prevalence of API than those living 5-12 kilometers away, and this increased risk of API was associated with work at the flower farm.
Keywords: acute pesticide intoxications, self-reported symptoms, flower farm workers, living proximity
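A minimal sketch of an adjusted prevalence ratio. The study reports a binomial regression model; the sketch below uses a modified Poisson approach (Poisson GLM with robust standard errors) as a common stand-in, on simulated data with placeholder variable names.

```python
# Minimal sketch of an adjusted prevalence ratio via "modified Poisson"
# (Poisson GLM with robust standard errors), a common stand-in for the
# binomial/log-binomial model. Simulated placeholder data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 516
df = pd.DataFrame({
    "near_farm": rng.integers(0, 2, n),
    "age": rng.normal(35, 10, n),
    "male": rng.integers(0, 2, n),
    "education": rng.integers(0, 3, n),
})
prob = np.clip(0.10 * np.exp(1.1 * df.near_farm + 0.005 * (df.age - 35)), 0, 1)
df["api"] = rng.binomial(1, prob)

fit = smf.glm("api ~ near_farm + age + male + education", df,
              family=sm.families.Poisson()).fit(cov_type="HC0")
pr = np.exp(fit.params["near_farm"])
lo, hi = np.exp(fit.conf_int().loc["near_farm"])
print(f"Adjusted PR for living near the farm: {pr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```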
Procedia PDF Downloads 292
1127 Exploring Legal Liabilities of Mining Companies for Human Rights Abuses: Case Study of Mongolian Mine
Authors: Azzaya Enkhjargal
Abstract:
Context: The mining industry has a long history of human rights abuses, including forced labor, environmental pollution, and displacement of communities. In recent years, there has been growing international pressure to hold mining companies accountable for these abuses. Research Aim: This study explores the legal liabilities of mining companies for human rights abuses. The study specifically examines the case of Erdenet Mining Corporation (EMC), a large mining company in Mongolia that has been accused of human rights abuses. Methodology: The study used a mixed-methods approach, which included a review of legal literature, interviews with community members and NGOs, and a case study of EMC. Findings: The study found that mining companies can be held liable for human rights abuses under a variety of regulatory frameworks, including soft law and self-regulatory instruments in the mining industry, international law, national law, and corporate law. The study also found that there are a number of challenges to holding mining companies accountable for human rights abuses, including the lack of effective enforcement mechanisms and the difficulty of proving causation. Theoretical Importance: The study contributes to the growing body of literature on the legal liabilities of mining companies for human rights abuses. The study also provides insights into the challenges of holding mining companies accountable for human rights abuses. Data Collection: The data for the study was collected through a variety of methods, including a review of legal literature, interviews with community members and NGOs, and a case study of EMC. Analysis Procedures: The data was analyzed using a variety of methods, including content analysis, thematic analysis, and case study analysis. Conclusion: The study concludes that mining companies can be held liable for human rights abuses under a variety of legal and regulatory frameworks. There are positive developments in ensuring greater accountability and protection of affected communities and the environment in countries with a strong economy. Regrettably, access to avenues of redress is reasonably low in less developed countries, where the governments have not implemented a robust mechanism to enforce liability requirements in the mining industry. The study recommends that governments and mining companies take more ambitious steps to enhance corporate accountability.Keywords: human rights, human rights abuses, ESG, litigation, Erdenet Mining Corporation, corporate social responsibility, soft law, self-regulation, mining industry, parent company liability, sustainability, environment, UN
Procedia PDF Downloads 80
1126 History of Recurrent Mucosal Infections and Immune System Disorders Is Related to Complications of Non-infectious Anterior Uveitis
Authors: Barbara Torres Rives
Abstract:
Non-infectious anterior uveitis is a polygenic inflammatory eye disease, and immune-mediated processes (autoimmune or not) are the main mechanisms proposed in its pathogenesis. A relationship between infectious processes, digestive disorders, and dysbiosis of the microbiome was recently described. In addition, alterations in the immune response associated with the initiation and progression of the disease have been described. Objective: The aim of this study was to identify immune-system-related factors associated with complicated non-infectious anterior uveitis. Methods: A cross-sectional observational analytical study was carried out. The study population consisted of all patients attending the ocular inflammation service of the Cuban Institute of Ophthalmology Ramón Pando Ferrer. The sample consisted of 213 patients diagnosed with non-infectious anterior uveitis. Results: Of the 213 patients with non-infectious anterior uveitis, the development of ophthalmologic complications predominated (56.3%, p=0.0094). In patients with complications, the presence of the human leukocyte antigen-B27 allele (49.2%, p<0.0001), decreased immunoglobulin G (24.2%, p=0.0124), increased immunoglobulin A (14.2%, p=0.0024), a history of recurrent sepsis (59.2%, p=0.0018), recurrent respiratory infections (44.2%, p=0.0003), digestive alterations (40%, p=0.0013), and spondyloarthropathies (30%, p=0.0314) were more frequent. Logistic regression showed that, for each completed year, the risk of developing complicated non-infectious anterior uveitis was elevated in human leukocyte antigen-B27 allele-positive patients (OR: 4.22, p=0.000). Conclusions: The control of recurrent sepsis at the mucosal level and immunomodulation could prevent complications in non-infectious anterior uveitis. The microbiome therefore becomes a target for treatment and for the prevention of complications in non-infectious anterior uveitis.
Keywords: non-infectious anterior uveitis, immune system disorders, recurrent mucosal infections, microbiome
Procedia PDF Downloads 90
1125 Evaluation of Effectiveness of Three Common Equine Thrush Treatments
Authors: A. S. Strait, J. A. Bryk-Lucy, L. M. Ritchie
Abstract:
Thrush is a common disease of ungulates, primarily affecting the frog and sulci, caused by the anaerobic bacterium Fusobacterium necrophorum. Thrush accounts for approximately 45.0% of hoof disorders in horses. Prevention and treatment of thrush are essential to keep horses from developing severe infections and becoming lame. Proper knowledge of hoof care and thrush treatments is crucial to avoid financial costs, unsoundness, and lost training time. Research on the effectiveness of the numerous commercial and homemade thrush treatments is limited in the equine industry. The objective of this study was to compare the effectiveness of three common thrush treatments for horses: weekly application of Thrush Buster, a daily dilute bleach solution spray, or Metronidazole paste every other day. Cases of thrush diagnosed by a veterinarian or veterinarian-trained researcher were given a score from 0 to 4, based on the severity of the thrush in each hoof (n=59), and randomly assigned a treatment. Cases were rescored each week of the three-week treatment, and the final and initial scores were compared to determine effectiveness. The thrush treatments were compared with Thrush Buster as the reference at a significance level of α=.05. Binomial logistic regression modeling was performed, finding that the odds of a hoof treated with Metronidazole being thrush-free were 6.1 times greater than those of a hoof treated with Thrush Buster (p=0.001), while the odds of a hoof treated with bleach being thrush-free were only 0.97 times those of a hoof treated with Thrush Buster (p=0.970), after adjustment for treatment week. Of the three treatments utilized in this study, Metronidazole paste applied to the affected areas every other day was the most effective treatment for thrush in horses. There are many other thrush remedies available, and further research is warranted to determine the efficacy of additional treatment options.
Keywords: fusobacterium necrophorum, thrush, equine, horse, lameness
Procedia PDF Downloads 156
1124 An Improved Adaptive Dot-Shape Beamforming Algorithm Research on Frequency Diverse Array
Authors: Yanping Liao, Zenan Wu, Ruigang Zhao
Abstract:
Frequency diverse array (FDA) beamforming is a technology developed in recent years, and its antenna pattern has a unique angle-distance-dependent characteristic. However, the beam is always required to have strong concentration, high resolution, and a low sidelobe level in order to form point-to-point interference in the concentrated set. To eliminate the angle-distance coupling of the traditional FDA and to make the beam energy more concentrated, this paper adopts a multi-carrier FDA structure based on a proposed power-exponential frequency offset, improving the array structure and frequency offset of the traditional FDA. The simulation results show that the beam pattern of the array can form a dot-shaped beam with more concentrated energy, and its resolution and sidelobe level performance are improved. However, in traditional adaptive beamforming the signal covariance matrix is estimated from finite snapshot data. When the number of snapshots is limited, the algorithm suffers from an underestimation problem: the estimation error of the covariance matrix causes beam distortion, so that the output pattern cannot form a dot-shaped beam, and main-lobe deviation and high sidelobe levels also occur. To address these problems, an adaptive beamforming technique based on exponential correction for the multi-carrier FDA is proposed to improve beamforming robustness. The steps are as follows: first, the beamforming of the multi-carrier FDA is formed under the linearly constrained minimum variance (LCMV) criterion. Then, eigenvalue decomposition of the covariance matrix is performed to obtain the interference subspace, the noise subspace, and the corresponding eigenvalues. Finally, a correction index is introduced to exponentially correct the small eigenvalues of the noise subspace, reducing their spread and improving beamforming performance. Theoretical analysis and simulation results show that the proposed algorithm enables the multi-carrier FDA to form a dot-shaped beam with limited snapshots, reduces the sidelobe level, improves the robustness of beamforming, and achieves better overall performance.
Keywords: adaptive beamforming, correction index, limited snapshot, multi-carrier frequency diverse array, robust
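A simplified sketch of the eigenvalue-correction idea: a standard narrowband uniform linear array with MVDR weights stands in for the multi-carrier FDA/LCMV design, and the exponential compression of noise-subspace eigenvalues is one plausible reading of the correction index described above; all parameters are illustrative.

```python
# Simplified sketch of eigenvalue-corrected adaptive beamforming. A narrowband
# ULA with MVDR weights stands in for the multi-carrier FDA/LCMV design; the
# exponential correction of noise-subspace eigenvalues is one plausible reading
# of the "correction index" described above.
import numpy as np

rng = np.random.default_rng(6)
M, K, gamma = 10, 20, 0.5            # sensors, snapshots (limited), correction index

def steer(theta_deg):
    return np.exp(-1j * np.pi * np.arange(M) * np.sin(np.deg2rad(theta_deg)))

a_sig, a_int = steer(0.0), steer(40.0)
snapshots = (np.sqrt(10.0) * rng.standard_normal(K) * a_int[:, None]   # interferer
             + (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2))

R = snapshots @ snapshots.conj().T / K               # sample covariance
lam, U = np.linalg.eigh(R)                           # ascending eigenvalues
noise = slice(0, M - 1)                              # smallest M-1 assumed noise subspace
lam_c = lam.copy()
lam_c[noise] = lam[noise].mean() * (lam[noise] / lam[noise].mean()) ** gamma
R_c = (U * lam_c) @ U.conj().T                       # corrected covariance

for name, Rm in [("sample R", R), ("corrected R", R_c)]:
    w = np.linalg.solve(Rm, a_sig)
    w = w / (a_sig.conj() @ w)                       # MVDR: unit gain toward the source
    level = 20 * np.log10(abs(w.conj() @ a_int) + 1e-12)
    print(f"{name}: response toward interferer = {level:.1f} dB")
```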
Procedia PDF Downloads 130
1123 Social Media Advertising and Acceptability of Fast Moving Consumer Goods in Nigeria’s Manufacturing Industry
Authors: John Akinwumi Makinde
Abstract:
The Nigerian manufacturing industry, particularly firms producing fast-moving consumer goods (FMCG), plays a vital role in the Nigerian economy. However, the sector’s product acceptability has received very little attention, and the social media advertising that communicates product information to audiences across the globe needs to be documented. Procter and Gamble Plc operates in Nigeria with an appreciable number of fast-moving consumer goods that serve the Nigerian economy, and the company’s social media advertising disposition and product acceptability deserve elucidation. This study therefore examined the impact of social media advertising on product acceptability of FMCG in the Nigerian manufacturing industry, using Procter and Gamble Plc as a case study. The study employed the case study type of descriptive survey research design. The population consisted of 235 customers of P&G Plc, who were selected through a random sampling method. A total of 235 copies of a questionnaire titled 'Social Media Advertising and Product Acceptability (SMA-PA) Questionnaire' were administered and retrieved. The data generated were analysed using frequency distribution and regression analysis at the 0.05 level. It was found that social media advertising positively and significantly motivated customers to buy P&G Plc products (r = .147**, N = 235, p(.000) < .01). Findings also showed that social media advertising has a significant impact on product acceptability of FMCG in P&G Plc (F(2,61) = 22.250; R² = .629; p(.000) < .05). The study concluded that social media advertising is a determinant of consumers’ decisions to accept fast-moving consumer goods in the Nigerian manufacturing industry. Given the growing FMCG market, it is recommended that the market be educated on social media about products’ unique features, standards, and quality. Finally, fast-moving consumer goods firms should deploy an excellent marketing mix on social media.
Keywords: advertising, fast moving consumer goods, manufacturing industry, product acceptability, social media
Procedia PDF Downloads 314
1122 A Framework for Auditing Multilevel Models Using Explainability Methods
Authors: Debarati Bhaumik, Diptish Dey
Abstract:
Multilevel models, increasingly deployed in industries such as insurance, food production, and entertainment within functions such as marketing and supply chain management, need to be transparent and ethical. Applications usually result in binary classification within groups or hierarchies based on a set of input features. Using open-source datasets, we demonstrate that popular explainability methods, such as SHAP and LIME, consistently underperform in accuracy when interpreting these models: they fail to predict the order of feature importance, the magnitudes, and occasionally even the nature of the feature contribution (negative versus positive contribution to the outcome). Besides accuracy, the computational intractability of SHAP for binomial classification is a cause for concern. For transparent and ethical applications of these hierarchical statistical models, sound audit frameworks need to be developed. In this paper, we propose an audit framework for the technical assessment of multilevel regression models focusing on three aspects: (i) model assumptions and statistical properties, (ii) model transparency using different explainability methods, and (iii) discrimination assessment. To this end, we undertake a quantitative approach and compare intrinsic model methods with SHAP and LIME. The framework comprises a shortlist of KPIs, such as PoCE (Percentage of Correct Explanations) and MDG (Mean Discriminatory Gap) per feature, for each of these three aspects. A traffic-light risk assessment method is furthermore coupled to these KPIs. The audit framework will assist regulatory bodies in performing conformity assessments of AI systems that use multilevel binomial classification models at businesses. It will also benefit businesses deploying multilevel models to be future-proof and aligned with the European Commission’s proposed Regulation on Artificial Intelligence.
Keywords: audit, multilevel model, model transparency, model explainability, discrimination, ethics
Procedia PDF Downloads 95
1121 An Examination of Earnings Management by Publicly Listed Targets Ahead of Mergers and Acquisitions
Authors: T. Elrazaz
Abstract:
This paper examines accrual and real earnings management by publicly listed targets around mergers and acquisitions. Prior literature shows that earnings management around mergers and acquisitions can have a significant economic impact because of the associated wealth transfers among stakeholders. More importantly, acting on behalf of their shareholders or pursuing their self-interests, managers of both targets and acquirers may be equally motivated to manipulate earnings prior to an acquisition to generate higher gains for their shareholders or themselves. Building on the grounds of information asymmetry, agency conflicts, stewardship theory, and the revelation principle, this study addresses the question of whether takeover targets employ accrual and real earnings management in the periods prior to the announcement of Mergers and Acquisitions (M&A). Additionally, this study examines whether acquirers are able to detect targets’ earnings management and, in response, adjust the acquisition premium paid in order not to face the risk of overpayment. This study uses an aggregate accruals approach to estimate accrual earnings management, proxied by estimated abnormal accruals. Real earnings management is proxied for by employing models widely used in the accounting and finance literature. The results of this study indicate that takeover targets manipulate their earnings using accruals in the second year with an earnings release prior to the announcement of the M&A. Moreover, when the sample of targets is partitioned according to the method of payment used in the deal, the results are restricted to targets of stock-financed deals. These results are consistent with the argument that targets of cash-only or mixed-payment deals do not have the same strong motivations to manage their earnings as their stock-financed counterparts do, additionally supporting the findings of prior studies that the method of payment in takeovers is value relevant. The findings of this study also indicate that takeover targets manipulate earnings upwards by cutting discretionary expenses in the year prior to the acquisition, while they do not do so by manipulating sales or production costs. Moreover, when the sample of targets is partitioned according to the method of payment used in the deal, the results are again restricted to targets of stock-financed deals, providing further robustness to the results derived under the accrual-based models. Finally, this study finds evidence suggesting that acquirers are fully aware of the accrual-based techniques employed by takeover targets and can unveil such manipulation practices. These results are robust to alternative accrual and real earnings management proxies, as well as to controlling for the method of payment in the deal.
Keywords: accrual earnings management, acquisition premium, real earnings management, takeover targets
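A minimal sketch of the aggregate-accruals approach: a Jones-type model is estimated by OLS and its residuals are taken as abnormal (discretionary) accruals. The data, scaling, and coefficients are simulated placeholders, not the study's sample.

```python
# Minimal sketch of the aggregate-accruals approach: estimate a Jones-type
# model by OLS and treat the residuals as abnormal (discretionary) accruals.
# Data, scaling, and coefficients are simulated placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 300
assets = rng.lognormal(6, 1, n)                       # lagged total assets
d_rev  = rng.normal(0.05, 0.10, n) * assets           # change in revenues
d_rec  = rng.normal(0.01, 0.05, n) * assets           # change in receivables
ppe    = rng.uniform(0.2, 0.8, n) * assets            # gross PPE
total_accruals = (0.02 * assets + 0.10 * (d_rev - d_rec) - 0.05 * ppe
                  + rng.normal(0, 0.03, n) * assets)

# Modified Jones model, all terms scaled by lagged assets
y = total_accruals / assets
X = sm.add_constant(np.column_stack([1.0 / assets, (d_rev - d_rec) / assets, ppe / assets]))
fit = sm.OLS(y, X).fit()
discretionary_accruals = fit.resid                    # the earnings-management proxy
print(fit.params)
print("Mean |discretionary accruals|:", np.abs(discretionary_accruals).mean().round(4))
```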
Procedia PDF Downloads 115
1120 A Retrospective Cross-Sectional Study on the Prevalence and Factors Associated with Virological Non-Suppression among HIV-Positive Adult Patients on Antiretroviral Therapy in Woliso Town, Oromia, Ethiopia
Authors: Teka Haile, Behailu Hawulte, Solomon Alemayehu
Abstract:
Background: HIV virological failure remains a problem in HIV/AIDS treatment and care. This study aimed to describe the prevalence of viral non-suppression and identify its associated factors among HIV-positive adult patients on antiretroviral therapy in Woliso Town, Oromia, Ethiopia. Methods: A retrospective cross-sectional study was conducted among 424 HIV-positive patients attending antiretroviral therapy (ART) in Woliso Town during the period from August 25, 2020 to August 30, 2020. Data collected from patient medical records were entered into Epi Info version 2.3.2.1 and exported to SPSS version 21.0 for analysis. Logistic regression analysis was done to identify factors associated with viral load non-suppression, and the statistical significance of odds ratios was declared using 95% confidence intervals and a p-value < 0.05. Results: A total of 424 patients were included in this study. The mean age (± SD) of the study participants was 39.88 (± 9.995) years. The prevalence of HIV viral load non-suppression was 13.0% (55 patients; 95% CI: 9.9-16.5). A second-line ART treatment regimen (Adjusted Odds Ratio (AOR) = 8.98, 95% Confidence Interval (CI): 2.64, 30.58) and routine viral load testing (AOR = 0.01, 95% CI: 0.001, 0.02) were significantly associated with virological non-suppression. Conclusion: Virological non-suppression was high, which hinders the achievement of the third global 95 target. The second-line regimen and routine viral load testing were significantly associated with virological non-suppression. This suggests the need to assess the effectiveness of antiretroviral drugs for epidemic control and clearly shows the need to decentralize third-line ART treatment for those patients in need.
Keywords: virological non-suppression, HIV-positive, ART, Woliso town, Ethiopia
Procedia PDF Downloads 150
1119 Urine Neutrophil Gelatinase-Associated Lipocalin as an Early Marker of Acute Kidney Injury in Hematopoietic Stem Cell Transplantation Patients
Authors: Sara Ataei, Maryam Taghizadeh-Ghehi, Amir Sarayani, Asieh Ashouri, Amirhossein Moslehi, Molouk Hadjibabaie, Kheirollah Gholami
Abstract:
Background: Acute kidney injury (AKI) is common in hematopoietic stem cell transplantation (HSCT) patients, with an incidence of 21–73%. Prevention and early diagnosis reduce the frequency and severity of this complication, so predictive biomarkers are of major importance for timely diagnosis. Neutrophil gelatinase-associated lipocalin (NGAL) is a widely investigated novel biomarker for early diagnosis of AKI. However, no study has assessed NGAL for AKI diagnosis in HSCT patients. Methods: We performed further analyses on data gathered from our recent trial to evaluate the performance of urine NGAL (uNGAL) as an indicator of AKI in 72 allogeneic HSCT patients. AKI diagnosis and severity were assessed using the Risk–Injury–Failure–Loss–End-stage renal disease and AKI Network criteria. We assessed uNGAL on days -6, -3, +3, +9 and +15. Results: Time-dependent Cox regression analysis revealed a statistically significant relationship between uNGAL and AKI occurrence (HR = 1.04 (1.008-1.07), P = 0.01). There was a relationship between the day +9 to baseline uNGAL ratio and the incidence of AKI (unadjusted HR = 1.047 (1.012-1.083), P < 0.01). The area under the receiver-operating characteristic curve for the day +9 to baseline ratio was 0.86 (0.74-0.99, P < 0.01), and a cut-off value of 2.62 was 85% sensitive and 83% specific in predicting AKI. Conclusions: Our results indicated that an increase in uNGAL augmented the risk of AKI and that changes in day +9 uNGAL concentrations from baseline could be of value for predicting AKI in HSCT patients. Additionally, uNGAL changes preceded serum creatinine rises by nearly 2 days.
Keywords: acute kidney injury, hematopoietic stem cell transplantation, neutrophil gelatinase-associated lipocalin, receiver-operating characteristic curve
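A minimal sketch of the biomarker-evaluation step: an ROC curve for a day +9 to baseline uNGAL ratio predicting AKI, with a Youden-index cutoff and its sensitivity and specificity. The data are simulated placeholders, not the trial's measurements.

```python
# Minimal sketch: ROC curve for a day +9 / baseline uNGAL ratio predicting AKI,
# with a Youden-index cutoff. Simulated placeholder data.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(8)
n_aki, n_no = 25, 47
ratio = np.concatenate([rng.lognormal(1.2, 0.5, n_aki),    # AKI: higher ratios
                        rng.lognormal(0.4, 0.5, n_no)])    # no AKI
aki = np.concatenate([np.ones(n_aki), np.zeros(n_no)])

fpr, tpr, thresholds = roc_curve(aki, ratio)
j = np.argmax(tpr - fpr)                                   # Youden's J statistic
print(f"AUC = {roc_auc_score(aki, ratio):.2f}")
print(f"Cutoff {thresholds[j]:.2f}: sensitivity {tpr[j]:.0%}, specificity {1 - fpr[j]:.0%}")
```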
Procedia PDF Downloads 409
1118 Examination of the South African Fire Legislative Framework
Authors: Mokgadi Julia Ngoepe-Ntsoane
Abstract:
This article aims to make a case for a legislative framework for the fire sector in South Africa. A robust legislative framework is essential for empowering those with an obligatory mandate within the sector. The article contributes to the body of knowledge in the field of policy reviews, particularly with regard to the legal framework; scholarly contributions in this field have been limited over time. Document analysis was the methodology selected for investigating the various legal frameworks existing in the country. It was established that national legislation for the fire industry does not exist in South Africa. The documents analysed revealed that the sector is dominated by cartels who exploit new entrants to the market, particularly SMEs. These cartels monopolise the system, having operated in it for a long time and turned it into self-owned entities. Commitment to addressing the challenges faced by fire services, and to creating a framework for the evolving role that fire brigade services are expected to play in building safer and more sustainable communities, is vital. Legislation for the fire sector ought to be concluded with immediate effect. The outdated national fire legislation has enabled the monopolisation and manipulation of the system by dominant organisations, which causes painful discrimination against and exploitation of smaller service providers seeking to enter the market. These barriers to entry have long-term negative effects on national priority areas such as employment creation and poverty reduction. The monopolisation and marginalisation practices of cartels in the sector call for urgent government attention because, if left unattended, they will leave many people, particularly women and youth, disadvantaged and frustrated. This practice of keeping newcomers down has wreaked havoc within the fire sector and is devastating: it is driven by cartels that have long been in the sector and know the strengths and weaknesses of its processes, the shortcuts, and the advantages and consequences of various actions. They take advantage of new entrants, who in turn find it difficult to manoeuvre, find the market unreceptive, and end up giving up their good ideas and intentions. There are many industry-specific pieces of legislation, such as for housing, forestry, agriculture, health, security, and the environment, which are used to regulate systems within the institutions involved. Other regulations exist as by-laws guiding management within the municipalities.
Keywords: sustainable job creation, growth and development, transformation, risk management
Procedia PDF Downloads 175
1117 Effects of Audit Quality and Corporate Governance on Earnings Management of Quoted Deposit Money Banks in Nigeria
Authors: Joel S. Akintayo, Ramat T. Salman
Abstract:
Stakeholders’ pressure on corporate managers to maintain firms’ profitability has created economic incentives for management to engage in earnings management practices. Therefore, this study examines the effects of audit quality and corporate governance on earnings management of quoted deposit money banks (DMBs) in Nigeria. This study specifically investigates the influence of audit tenure, audit fee, board independence, and board size on earnings management of DMBs. An explanatory research design was employed, and secondary data were sourced from the annual reports and accounts of all 15 DMBs quoted on the Nigerian Stock Exchange as at December 31, 2015, for a period of 10 years covering 2006 to 2015. The data obtained for the study were analyzed using a panel regression approach. The findings reveal that board independence has a significant negative effect on earnings management at the 5% level of significance (p=0.002), while audit fee has a significant positive effect on earnings management at the 5% level of significance (p=0.013) and audit tenure has a significant negative effect on earnings management of DMBs at the 5% level of significance (p=0.003). Surprisingly, board size was not statistically significant at the 5% level of significance (p=0.086). The study concludes that high audit quality and sound corporate governance could improve the earnings quality of DMBs. Hence, the study recommends that the authorities responsible for banking supervision in Nigeria, such as the Securities and Exchange Commission (SEC) and the CBN, advise the National Assembly to pass into law the three-year professional requirement for audit tenure.
Keywords: audit quality, audit tenure, audit fee, board independence, corporate governance, earnings management
Procedia PDF Downloads 198
1116 Multidisciplinarity, Interdisciplinarity and Transdisciplinarity in Peace Education and Peace Studies: A Content Analysis
Authors: Frances Bernard Kominkiewicz
Abstract:
Demonstrating the ability to build social justice and peace is integral in undergraduate and graduate education. Many disciplines are involved in peace education and peace studies, and the collaboration of those disciplines are examined in this paper. To the author’s best knowledge, no content analysis research previously existed regarding peace studies and peace education from a multidisciplinarity, interdisciplinarity, and transdisciplinarity perspective. Peacebuilding is taught through these approaches, which adds to the depth, breadth, and richness of peace education and peace studies. This paper presents a content analysis of academic peace studies programs and course descriptions. Variables studied include contributions and foci of disciplines in peace studies programs and students’ engagement in community peacebuilding. The social work discipline, for example, focuses on social and economic justice as one of the nine competencies that undergraduate and graduate students must attain before earning a Bachelor of Social Work degree or a Master of Social Work degree and becoming social work practitioners. Demonstrating the ability to build social justice and peace is integral in social work education. Peacebuilding is taught through such social work courses as conflict resolution, and social work practice with communities and organizations, and these courses are examined in this research through multidisciplinarity, interdisciplinarity, and transdisciplinarity approach. Peace and social justice are linked terms in various fields, including social work. Social justice is of paramount importance in social work programs, and social workers are trained to advocate for human rights and social, economic, and environmental justice. Social workers use knowledge of oppression, globally as well as nationally, in the practice of peace education and peace studies. Social work is at the forefront in advocating for social justice as a discipline and joins with other educators in strengthening the peacebuilding opportunities for students. The content analysis, conducted through a random sample of peace studies and peace education university and college programs in the United States, found that although courses teach the concepts of peace education and peace studies, courses often are not given these titles in the social work discipline. Therefore, this analysis also includes a discussion of the multidisciplinarity, interdisciplinarity, and transdisciplinarity approach to peace education, peace studies, and peacebuilding and the importance of these approaches in educating students about peace. The content analysis further found great variability in the number of disciplines involved in peace studies programs, the focus of those disciplines in peace education, the placement of peace studies and peace education within the university or college, and the number of courses and concentrations available in peace studies and peace education. In conclusion, the research points toward very robust and diverse approaches to peace education with opportunities for further research and discussion.Keywords: content analysis, interdisciplinarity, multidisciplinarity, peace education programs
Procedia PDF Downloads 155
1115 Tasting Terroir: A Gourmet Adventure in Food and Wine Tourism
Authors: Sunita Boro, Saurabh Kumar Dixit
Abstract:
Terroir, an intricate fusion of geography, climate, soil, and human expertise, has long been acknowledged as a defining factor in the character of wines and foods. This research embarks on an exploration of terroir's profound influence on gastronomic tourism, shedding light on the intricate interplay between the physical environment and culinary artistry. Delving into the intricate science of terroir, we scrutinize its role in shaping the sensory profiles of wines and foods, emphasizing the profound impact of specific regions on flavor, aroma, and texture. We deploy a multifaceted methodology, amalgamating sensory analysis, chemical profiling, geographical information systems, and qualitative interviews to unearth the nuanced nuances of terroir expression. Through an exhaustive review of the literature, we elucidate the historical roots of terroir, unveil the intricate cultural dimensions shaping it, and provide a comprehensive examination of prior studies in the field. Our findings underscore the pivotal role of terroir in promoting regional identities, enhancing the economic viability of locales, and attracting gastronomic tourists. The paper also dissects the marketing strategies employed to promote terroir-driven food and wine experiences. We elucidate the utilization of storytelling, branding, and collaborative endeavors in fostering a robust terroir-based tourism industry. This elucidates both the potential for innovation and the challenges posed by oversimplification or misrepresentation of terroir. Our research spotlights the intersection of terroir and sustainability, emphasizing the significance of environmentally conscious practices in terroir-driven productions. We discern the harmonious relationship between sustainable agriculture, terroir preservation, and responsible tourism, encapsulating the essence of ecological integrity in gastronomic tourism. Incorporating compelling case studies of regions and businesses excelling in the terroir-based tourism realm, we offer in-depth insights into successful models and strategies, with an emphasis on their replicability and adaptability to various contexts. Ultimately, this paper not only contributes to the scholarly understanding of terroir's role in the world of food and wine tourism but also provides actionable recommendations for stakeholders to leverage terroir's allure, preserve its authenticity, and foster sustainable and enriching culinary tourism experiences.Keywords: terroir, food tourism, wine tourism, sustainability
Procedia PDF Downloads 60
1114 Network Analysis to Reveal Microbial Community Dynamics in the Coral Reef Ocean
Authors: Keigo Ide, Toru Maruyama, Michihiro Ito, Hiroyuki Fujimura, Yoshikatu Nakano, Shoichiro Suda, Sachiyo Aburatani, Haruko Takeyama
Abstract:
Understanding environmental systems is an important task. In recent years, the conservation of coral environments has received attention because of biodiversity concerns. Damage to coral reefs under environmental impacts has been observed worldwide. However, the causal relationship between coral damage and environmental impacts is not clearly understood. On the other hand, the structure and diversity of the marine bacterial community may remain relatively robust under a certain strength of environmental impact. To evaluate coral environment conditions, it is necessary to investigate the relationship between marine bacterial composition in the coral reef and environmental factors. In this study, a Time Scale Network Analysis was developed and applied to marine environmental data to investigate the relationships among coral, bacterial community compositions, and environmental factors. Seawater samples were collected fifteen times from November 2014 to May 2016 at two locations, Ishikawabaru and South of Sesoko, in Sesoko Island, Okinawa. Physicochemical factors such as temperature, photosynthetically active radiation, dissolved oxygen, turbidity, pH, salinity, chlorophyll, dissolved organic matter, and depth were measured in the coral reef area. Metagenomes and metatranscriptomes in coral reef seawater were analyzed as biological factors. Metagenome data were used to clarify the marine bacterial community composition, and the functional gene composition was estimated from the metatranscriptome. To infer relationships between physicochemical and biological factors, cross-correlation analysis was applied to the time-scale data. Although cross-correlation coefficients capture time-precedence information, they also reflect indirect interactions between variables. To elucidate direct regulation between factors, partial correlation coefficients were therefore combined with cross-correlation. This analysis was performed across all parameters: the bacterial composition, the functional gene composition, and the physicochemical factors. As a result, the time-scale network analysis revealed direct regulation of seawater temperature by photosynthetically active radiation. In addition, the concentration of dissolved oxygen regulated the chlorophyll value. Such reasonable regulatory relationships between environmental factors indicate part of the mechanisms at work in the coral reef area.Keywords: coral environment, marine microbiology, network analysis, omics data analysis
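For readers who want to see the shape of such an analysis, the following is a minimal sketch, assuming generic numpy/pandas tooling rather than the authors' actual pipeline, of how lagged cross-correlation can be combined with partial correlation to separate direct from indirect associations in short environmental time series; all variable names and the toy data are illustrative only.

```python
import numpy as np
import pandas as pd

# A minimal sketch (not the authors' pipeline) of combining lagged cross-correlation
# with partial correlation for short environmental time series. The toy data below
# stand in for the fifteen sampling time points; variable names are illustrative.

def lagged_cross_correlation(x, y, max_lag=3):
    """Pearson correlation of x(t) with y(t + lag) over a range of lags."""
    x, y = pd.Series(x), pd.Series(y)
    return {lag: x.corr(y.shift(-lag)) for lag in range(-max_lag, max_lag + 1)}

def partial_correlation(x, y, control):
    """Correlation of x and y after regressing out a control variable."""
    z = np.column_stack([np.ones_like(control), control])
    rx = x - z @ np.linalg.lstsq(z, x, rcond=None)[0]
    ry = y - z @ np.linalg.lstsq(z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(0)
par = rng.normal(size=15)                           # photosynthetically active radiation
temp = 0.8 * par + rng.normal(scale=0.3, size=15)   # temperature partly driven by PAR
chlorophyll = rng.normal(size=15)                   # chlorophyll, independent in this toy

print(lagged_cross_correlation(par, temp))          # lag with the largest coefficient suggests precedence
print(partial_correlation(temp, chlorophyll, par))  # direct association after controlling for PAR
```

In this toy setup, the large lag-zero cross-correlation between PAR and temperature would survive the partial-correlation step, while any apparent link between temperature and chlorophyll that is only mediated by PAR would shrink toward zero, which is the rationale for combining the two statistics.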
Procedia PDF Downloads 254
1113 Diabetes Prevalence and Quality of Life of Female Nursing Students in Riyadh
Authors: Alyaa Farouk AbdelFattah Ibrahim, Agnes Monica, Dolores I. Cabansag
Abstract:
The prevalence of diabetes mellitus is reaching epidemic proportions in many parts of the world, causing increasing public health concern. Cases of Type 2 diabetes are rapidly increasing in the Middle East region, and without changes in lifestyle, a substantial share of the Middle East's population will be affected by 2035. Sociocultural factors have created unhealthy lifestyles that have become part of the social norms within Saudi society, thereby increasing the prevalence of sedentary lifestyles and obesity among women living in Saudi Arabia. This study therefore aimed to assess the impact of diabetes mellitus on the quality of life of female nursing students at King Saud bin Abdulaziz University for Health Sciences, Riyadh. In a cross-sectional design, 151 nursing students at King Saud bin Abdulaziz University for Health Sciences in Riyadh were included in the study. A biosociodemographic questionnaire and the Arabic version of the Short-Form 36 (SF-36) Health-Related Quality of Life Survey were used for data collection, and all included students were screened for random blood glucose level. Among the 151 subjects included in the study, 17 (11.3%) had diagnosed medical problems, and 29.4% of those participants with medical problems were diabetic. A univariate regression model for the relation between diabetes mellitus and the overall percent score of the SF-36 health survey domains showed no statistically significant difference between diabetic and non-diabetic subjects (0.990, 0.931–1.053). In conclusion, although diabetes prevalence was high among the study subjects, it did not affect their quality of life, possibly owing to the age of the study population.Keywords: diabetes mellitus, diabetes prevalence, quality of life, university students' health
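As an illustration of the univariate setup described above (not the study's actual analysis), a minimal statsmodels sketch might look as follows; the simulated data, variable names, and the OLS specification are assumptions for demonstration only, since the paper's reported estimate (0.990, 0.931–1.053) appears to be a ratio-type effect.

```python
import numpy as np
import statsmodels.api as sm

# Illustrative sketch only: a univariate regression of the overall SF-36 percent score
# on diabetes status, in the spirit of the analysis described above. The simulated data
# and variable names are placeholders, not the study data.
rng = np.random.default_rng(7)
n = 151                                   # sample size reported in the abstract
diabetic = rng.integers(0, 2, size=n)     # 1 = diabetic, 0 = non-diabetic
sf36_score = rng.normal(70, 15, size=n)   # overall SF-36 percent score

X = sm.add_constant(diabetic)             # intercept + diabetes indicator
fit = sm.OLS(sf36_score, X).fit()
print(fit.params)                         # coefficient for the diabetes indicator
print(fit.conf_int())                     # 95% confidence intervals
```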
Procedia PDF Downloads 181
1112 Normalizing Flow to Augmented Posterior: Conditional Density Estimation with Interpretable Dimension Reduction for High Dimensional Data
Authors: Cheng Zeng, George Michailidis, Hitoshi Iyatomi, Leo L. Duan
Abstract:
The conditional density characterizes the distribution of a response variable y given a predictor x and plays a key role in many statistical tasks, including classification and outlier detection. Although there has been abundant work on the problem of Conditional Density Estimation (CDE) for a low-dimensional response in the presence of a high-dimensional predictor, little work has been done for a high-dimensional response such as images. The promising performance of normalizing flow (NF) neural networks in unconditional density estimation acts as a motivating starting point. In this work, the authors extend NF neural networks to the setting where an external predictor x is present. Specifically, they use the NF to parameterize a one-to-one transform between a high-dimensional y and a latent z that comprises two components [zₚ, zₙ]. The zₚ component is a low-dimensional subvector obtained from the posterior distribution of an elementary predictive model for x, such as logistic/linear regression. The zₙ component is a high-dimensional independent Gaussian vector, which explains the variations in y that are unrelated or only weakly related to x. Unlike existing CDE methods, the proposed approach, coined Augmented Posterior CDE (AP-CDE), only requires a simple modification of the common normalizing flow framework while significantly improving the interpretation of the latent component, since zₚ represents a supervised dimension reduction. In image analytics applications, AP-CDE shows good separation of x-related variations due to factors such as lighting condition and subject id from the other random variations. Further, the experiments show that an unconditional NF neural network based on an unsupervised model of z, such as a Gaussian mixture, fails to generate interpretable results.Keywords: conditional density estimation, image generation, normalizing flow, supervised dimension reduction
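For concreteness, a plausible form of the resulting conditional likelihood is sketched below in our own notation, assuming both latent blocks are modeled as Gaussians, writing f for the invertible NF map with [zₚ, zₙ] = f(y), and letting μ(x) denote a posterior summary (e.g., the posterior mean) from the elementary predictive model for x; this is the generic change-of-variables identity consistent with the description above, not necessarily the paper's exact objective.

```latex
% Change-of-variables form of the conditional density (our notation, a sketch):
\log p(y \mid x)
  = \log \mathcal{N}\!\bigl(z_p;\, \mu(x),\, I\bigr)
  + \log \mathcal{N}\!\bigl(z_n;\, 0,\, I\bigr)
  + \log \left| \det \frac{\partial f(y)}{\partial y} \right|,
\qquad [z_p,\, z_n] = f(y).
```

Maximizing this quantity over the flow parameters ties the zₚ block to the supervised signal μ(x) while the Jacobian term keeps the transform a valid density model for y.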
Procedia PDF Downloads 96
1111 Interaction of Racial and Gender Disparities in Salivary Gland Cancer Survival in the United States: A Surveillance Epidemiology and End Results Study
Authors: Sarpong Boateng, Rohit Balasundaram, Akua Afrah Amoah
Abstract:
Introduction: Racial and gender disparities have been found to be independently associated with survival in Salivary Gland Cancers (SGCs); however, to the best of our knowledge, there are no previous studies on the interplay of these social determinants in the prognosis of SGCs. The objective of this study was to examine the joint effect of race and gender on the survival of SGCs. Methods: We analyzed survival outcomes of 13,547 histologically confirmed cases of SGCs using the Surveillance, Epidemiology, and End Results (SEER) database (2004 to 2015). Multivariable Cox regression analysis and Kaplan-Meier curves were used to estimate hazard ratios (HRs) after controlling for age, tumor characteristics, treatment type, and year of diagnosis. Results: 73.5% of the participants were White, 8.5% were Black, 10.1% were Hispanic, and 58.5% were male. Overall, males had poorer survival than females (HR = 1.16, p = 0.003). In the adjusted multivariable model, there were no significant differences in survival by race. However, the interaction of gender and race was statistically significant (p = 0.01) for Hispanic males. Thus, compared to White females (reference), Hispanic females had significantly better survival (HR = 0.53), while Hispanic males had worse survival outcomes (HR = 1.82) for SGCs. Conclusions: Our results show significant interactions between race and gender, with racial disparities varying across genders for SGC survival. This study indicates that racial and gender differences are crucial factors to be considered in the prognostic counseling and management of patients with SGCs. Biologic factors, tumor genetic characteristics, chemotherapy, lifestyle, environmental exposures, and socioeconomic and dietary factors are potential yet unproven reasons that could account for racial and gender differences in the survival of SGCs.Keywords: salivary, cancer, survival, disparity, race, gender, SEER
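To illustrate how such a race-by-gender interaction enters a Cox model, the following is a hedged sketch (not the authors' code) using the lifelines library on simulated data; the column names, covariate set, and data are assumptions for demonstration only.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical sketch of a Cox model with a race-by-gender interaction, mirroring the
# SEER-style analysis described above. The simulated data and column names are
# illustrative only, not the authors' dataset or exact covariate set.
rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "time": rng.exponential(60, n),      # months of follow-up
    "event": rng.integers(0, 2, n),      # 1 = death observed, 0 = censored
    "age": rng.normal(60, 12, n),
    "male": rng.integers(0, 2, n),
    "hispanic": rng.integers(0, 2, n),
})
df["male_x_hispanic"] = df["male"] * df["hispanic"]  # interaction term

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")  # remaining columns enter as covariates
print(cph.summary)

# The hazard ratio for Hispanic males versus the reference group (non-Hispanic females)
# combines the two main effects and the interaction coefficient:
b = cph.params_
print(np.exp(b["male"] + b["hispanic"] + b["male_x_hispanic"]))
```

The last two lines show why an interaction model is needed to report group-specific hazard ratios: the effect for Hispanic males is not a single coefficient but the sum of both main effects and the interaction term on the log-hazard scale.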
Procedia PDF Downloads 201
1110 Weighing the Economic Cost of Illness Due to Dysentery and Cholera Triggered by Poor Sanitation in Rural Faisalabad, Pakistan
Authors: Syed Asif Ali Naqvi, Muhammad Azeem Tufail
Abstract:
Inadequate sanitation causes direct costs of treating illnesses and loss of income through reduced productivity. This study estimated the economic cost of health (ECH) due to poor sanitation and the factors determining lack of access to latrines in the rural, backward hamlets and slums of district Faisalabad, Pakistan. Cross-sectional data were collected and analyzed. Because the population under study was homogeneous, a simple random sampling technique was used for data collection. Data from 440 households across four tehsils were gathered. The ordinary least squares (OLS) model was used for the health cost analysis, and a probit regression model was employed to determine the factors responsible for lack of access to toilets. The results showed that the condition of toilets, the state of the sewerage system, access to adequate sanitation, cholera, diarrhea and dysentery, Water and Sanitation Agency (WASA) maintenance, and the source of medical treatment plausibly have a significant association with the dependent variable. Outcomes of the second model showed that education, family system, age, and type of dwelling are positively and significantly associated with the dependent variable, although the age variable showed an insignificant association with access to toilets. The monetary expenses variable negatively influences the dependent variable. The findings reveal that health risks are often exacerbated by inadequate sanitation and that, ultimately, the cost of ill health also rises. Public and community toilets for youths and social campaigning are suggested for public policy.Keywords: sanitation, toilet, economic cost of health, water, Punjab
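A minimal sketch of this two-model setup, assuming statsmodels and simulated placeholder data rather than the survey's actual variables, is given below: an OLS regression for household health cost and a probit regression for latrine access.

```python
import numpy as np
import statsmodels.api as sm

# Illustrative sketch of the two-model setup described above: an OLS model for household
# health cost and a probit model for latrine access. The simulated data and variable
# names are placeholders, not the survey's actual variables.
rng = np.random.default_rng(42)
n = 440                                        # households surveyed in the study
education = rng.integers(0, 16, size=n)        # years of schooling
age = rng.integers(18, 70, size=n)
toilet_condition = rng.integers(0, 3, size=n)  # ordinal: 0 = poor, 1 = fair, 2 = good
health_cost = 800 - 150 * toilet_condition + rng.normal(0, 100, size=n)  # cost falls as condition improves (toy)
latrine_access = (0.2 * education - 0.03 * age + rng.normal(size=n) > 0).astype(int)

# OLS: economic cost of health regressed on sanitation-related covariates
X_cost = sm.add_constant(np.column_stack([toilet_condition, education]))
print(sm.OLS(health_cost, X_cost).fit().summary())

# Probit: probability of having access to a latrine
X_access = sm.add_constant(np.column_stack([education, age]))
print(sm.Probit(latrine_access, X_access).fit().summary())
```

The probit coefficients are on the latent-index scale, so in practice marginal effects (rather than raw coefficients) would be reported to describe how education or age shifts the probability of latrine access.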
Procedia PDF Downloads 120
1109 Advanced Techniques in Semiconductor Defect Detection: An Overview of Current Technologies and Future Trends
Authors: Zheng Yuxun
Abstract:
This review critically assesses the advancements and prospective developments in defect detection methodologies within the semiconductor industry, an essential domain that significantly affects the operational efficiency and reliability of electronic components. As semiconductor devices continue to decrease in size and increase in complexity, the precision and efficacy of defect detection strategies become increasingly critical. The paper traces the evolution from traditional manual inspections to advanced technologies employing automated vision systems, artificial intelligence (AI), and machine learning (ML). It highlights the significance of precise defect detection in semiconductor manufacturing by discussing various defect types, such as crystallographic errors, surface anomalies, and chemical impurities, which profoundly influence the functionality and durability of semiconductor devices and underscore the necessity for their precise identification. The narrative then turns to the technological evolution in defect detection, depicting a shift from rudimentary methods like optical microscopy and basic electronic tests to more sophisticated techniques, including electron microscopy, X-ray imaging, and infrared spectroscopy. The incorporation of AI and ML marks a pivotal advancement towards more adaptive, accurate, and expedited defect detection mechanisms. The paper addresses current challenges, particularly the constraints imposed by the diminutive scale of contemporary semiconductor devices, the elevated costs associated with advanced imaging technologies, and the demand for rapid processing that aligns with mass-production standards. A critical gap is identified between the capabilities of existing technologies and the industry's requirements, especially concerning scalability and processing speed. Future research directions are proposed to bridge these gaps, suggesting enhancements in the computational efficiency of AI algorithms, the development of novel materials to improve imaging contrast in defect detection, and the seamless integration of these systems into semiconductor production lines. By offering a synthesis of existing technologies and forecasting upcoming trends, this review aims to foster the dialogue and development of more effective defect detection methods, thereby facilitating the production of more dependable and robust semiconductor devices. This thorough analysis not only elucidates the current technological landscape but also paves the way for forthcoming innovations in semiconductor defect detection.Keywords: semiconductor defect detection, artificial intelligence in semiconductor manufacturing, machine learning applications, technological evolution in defect analysis
Procedia PDF Downloads 52