Search results for: PieceWise Affine Auto Regression with eXogenous input
232 Contributory Antioxidant Role of Testosterone and Oxidative Stress Biomarkers in Males Exposed to Mixed Chemicals in an Automobile Repair Community
Authors: Saheed A. Adekola, Mabel A. Charles-Davies, Ridwan A. Adekola
Abstract:
Background: Testosterone is a known androgenic and anabolic steroid, primarily secreted by the testes. It plays an important role in the development of the testes and prostate and has a range of biological actions. There is evidence that exposure to mixed chemicals in the workplace leads to the generation of free radicals and inadequate antioxidants, resulting in oxidative stress, which may serve as an early indicator of a pathophysiologic state. Based on previous findings, testosterone shows direct antioxidant effects by increasing the activities of antioxidant enzymes such as glutathione peroxidase, thus indirectly contributing to antioxidant capacity. Objective: To evaluate the antioxidant role of testosterone as well as the relationship between testosterone and oxidative stress biomarkers in males exposed to mixed chemicals in an automobile repair community. Methods: The study included 43 participants aged 22-60 years exposed to mixed chemicals (EMC) from the automobile repair community. Forty (40) apparently healthy, unexposed, age-matched controls were recruited after informed consent. Demographic, sexual, and anthropometric characteristics were obtained from pre-tested structured questionnaires using standard methods. Blood samples (10 ml) were collected from each subject into plain bottles, and the sera obtained were used for biochemical analyses. Serum levels of testosterone and luteinizing hormone (LH) were determined by enzyme immunoassay (EIA; Immunometrics UK Ltd). Levels of total antioxidant capacity (TAC), total plasma peroxide (TPP), malondialdehyde (MDA), hydrogen peroxide (H2O2), glutathione peroxidase (GPX), superoxide dismutase (SOD), glutathione-S-transferase (GST), and reduced glutathione (GSH) were determined using spectrophotometric methods. Results were analyzed using Student's t-test for quantitative variables and the Chi-square test for qualitative variables. Multiple regression was used to find associations and relationships between the variables. Results: Significantly higher concentrations of TPP, MDA, oxidative stress index (OSI), H2O2, and GST were observed in EMC compared with controls (p < 0.001). Within EMC, significantly higher levels of testosterone, LH, and TAC were observed in eugonadal compared with hypogonadal participants (p < 0.001). Diastolic blood pressure, waist circumference, waist-to-height ratio, and waist-to-hip ratio were significantly higher in EMC participants compared with the controls. Sexual history and dietary intake showed that the controls had normal erection during sex and took more vegetables in their diet, which may therefore be beneficial. Conclusion: The significantly increased levels of total antioxidant capacity in males exposed to mixed chemicals, despite their exposure, may reflect the contributory antioxidant role of testosterone in preventing oxidative stress.
Keywords: mixed chemicals, oxidative stress, antioxidant, hypogonadism, testosterone
Procedia PDF Downloads 145
231 Partisan Agenda Setting in Digital Media World
Authors: Hai L. Tran
Abstract:
Previous research on agenda setting effects has often focused on the top-down influence of the media at the aggregate level, while overlooking the capacity of audience members to select media and content to fit their individual dispositions. The decentralized characteristics of online communication and digital news create more choices and greater user control, thereby enabling each audience member to seek out a unique blend of media sources, issues, and elements of messages and to mix them into a coherent individual picture of the world. This study examines how audiences use media differently depending on their prior dispositions, thereby making sense of the world in ways that are congruent with their preferences and cognitions. The current undertaking is informed by theoretical frameworks from two distinct lines of scholarship. According to the ideological migration hypothesis, individuals choose to live in communities with ideologies like their own to satisfy their need to belong. One tends to move away from Zip codes that are incongruent and toward those that are more aligned with one's ideological orientation. This geographical division along ideological lines has been documented in social psychology research. As an extension of agenda setting, the agendamelding hypothesis argues that audiences seek out information in attractive media and blend it into a coherent narrative that fits a common agenda shared by others who think as they do and communicate with them about issues of public concern. In other words, individuals, through their media use, identify themselves with a group or community that they want to join. Accordingly, the present study hypothesizes that because ideology plays a role in pushing people toward a physical community that fits their need to belong, it also leads individuals to receive an idiosyncratic blend of media and to be influenced by such selective exposure in deciding what issues are more relevant. Consequently, the individualized focus of media choices shapes how audiences perceive political news coverage and what they know about political issues. The research project utilizes recent data from The American Trends Panel survey conducted by Pew Research Center to explore the nuanced nature of agenda setting at the individual level and amid heightened polarization. Hypothesis testing is performed with both nonparametric and parametric procedures, including regression and path analysis. This research attempts to explore the media-public relationship from a bottom-up approach, considering the ability of active audience members to select among media in a larger process that entails agenda setting. It helps encourage agenda-setting scholars to further examine effects at the individual, rather than aggregate, level. In addition to theoretical contributions, the study's findings are useful for media professionals in building and maintaining relationships with the audience, considering changes in market share due to the spread of digital and social media.
Keywords: agenda setting, agendamelding, audience fragmentation, ideological migration, partisanship, polarization
Procedia PDF Downloads 60
230 Catalytic Ammonia Decomposition: Cobalt-Molybdenum Molar Ratio Effect on Hydrogen Production
Authors: Elvis Medina, Alejandro Karelovic, Romel Jiménez
Abstract:
Catalytic ammonia decomposition represents an attractive alternative due to its high H₂ content (17.8% w/w) and a product stream free of COₓ, among other advantages; however, challenges need to be addressed for its consolidation as an H₂ chemical storage technology, especially those focused on the synthesis of efficient bimetallic catalytic systems as an alternative to the price and scarcity of ruthenium, the most active catalyst reported. In this sense, from the perspective of rational catalyst design, adjusting the main catalytic activity descriptor, a screening of supported catalysts with different compositional settings of cobalt-molybdenum metals is presented to evaluate their effect on the catalytic decomposition rate of ammonia. Subsequently, a kinetic study on the supported monometallic Co and Mo catalysts, as well as on the bimetallic CoMo catalyst with the highest activity, is shown. The synthesis of catalysts supported on γ-alumina was carried out using the Charge Enhanced Dry Impregnation (CEDI) method, all with a 5% w/w metal loading. Seeking to maintain uniform dispersion, the catalysts were oxidized and activated (in-situ activation) using flows of anhydrous air and hydrogen, respectively, under the same conditions: 40 ml min⁻¹ and 5 °C min⁻¹ from room temperature to 600 °C. Catalytic tests were carried out in a fixed-bed reactor, confirming the absence of transport limitations as well as a negligible approach to equilibrium (< 1 x 10⁻⁴). The reaction rate on all catalysts was measured between 400 and 500 °C at 53.09 kPa NH₃. The synergy reported theoretically (DFT) for bimetallic catalysts was confirmed experimentally. Specifically, the catalyst composed of 75 mol% cobalt proved to be the most active in the experiments, followed by the monometallic cobalt and molybdenum catalysts, in this order of activity, as referred to in the literature. A kinetic study was performed at 10.13-101.32 kPa NH₃ and at four equidistant temperatures between 437 and 475 °C; the data were fitted to an LHHW-type model, which considered the desorption of nitrogen atoms from the active phase surface as the rate-determining step (RDS). The regression analyses were carried out under an integral regime, using a minimization algorithm based on SLSQP. The physical meaning of the parameters adjusted in the kinetic model, such as the RDS rate constant (k₅) and the lumped adsorption constant of the quasi-equilibrated steps (α), was confirmed through their Arrhenius and Van't Hoff-type behavior (R² > 0.98), respectively. From an energetic perspective, the activation energies for cobalt, cobalt-molybdenum, and molybdenum were 115.2, 106.8, and 177.5 kJ mol⁻¹, respectively. With this evidence, and considering the volcano shape described by the ammonia decomposition rate in relation to the metal composition ratio, the synergistic behavior of the system is clearly observed. However, since characterizations by XRD and TEM were inconclusive, the formation of intermetallic compounds should still be verified using HRTEM-EDS. From this point onwards, our objective is to incorporate parameters into the kinetic expressions that consider both compositional and structural elements and to explore how these can maximize or influence H₂ production.
Keywords: CEDI, hydrogen carrier, LHHW, RDS
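As an illustration of the regression step described above, the following minimal sketch fits a simplified LHHW-type rate expression to rate data with SciPy's SLSQP minimizer; the rate form, parameter values, and data are hypothetical stand-ins, not the authors' exact model:

    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical rate data: NH3 partial pressure (kPa) vs. measured rate (mol g^-1 s^-1)
    p_nh3 = np.array([10.13, 30.40, 60.79, 101.32])
    r_obs = np.array([2.1e-6, 4.8e-6, 7.2e-6, 9.0e-6])

    def lhhw_rate(params, p):
        # Simplified LHHW-type expression with nitrogen desorption as the RDS:
        # r = k5 * theta_N, theta_N = (alpha*p)^x / (1 + (alpha*p)^x)  (illustrative form only)
        k5, alpha, x = params
        ap = (alpha * p) ** x
        return k5 * ap / (1.0 + ap)

    def sse(params):
        # Objective: sum of squared errors between the model and the observed rates
        return np.sum((lhhw_rate(params, p_nh3) - r_obs) ** 2)

    res = minimize(sse, x0=[1e-5, 0.1, 1.0], method="SLSQP",
                   bounds=[(1e-12, None), (1e-6, None), (0.1, 3.0)])
    k5_fit, alpha_fit, x_fit = res.x
    print(f"k5 = {k5_fit:.3e}, alpha = {alpha_fit:.3e}, exponent = {x_fit:.2f}")

The Arrhenius and Van't Hoff checks reported in the abstract would then amount to repeating this fit at each temperature and regressing ln(k5) and ln(alpha) against 1/T.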
Procedia PDF Downloads 58
229 Application of 2D Electrical Resistivity Tomographic Imaging Technique to Study Climate Induced Landslide and Slope Stability through the Analysis of Factor of Safety: A Case Study in Ooty Area, Tamil Nadu, India
Authors: S. Maniruzzaman, N. Ramanujam, Qazi Akhter Rasool, Swapan Kumar Biswas, P. Prasad, Chandrakanta Ojha
Abstract:
Landslides are one of the major natural disasters in South Asian countries. By applying 2D electrical resistivity tomographic imaging, estimates of the geometry, thickness, and depth of the failure zone of a landslide can be made. Landslides are a pertinent problem in the Nilgiris plateau, next only to the Himalaya. The Nilgiris range consists of hard Archean metamorphic rocks. Intense weathering that prevailed during Precambrian time deformed the rocks up to a depth of 45 m. Landslides are dominant in the southern and eastern parts of the plateau, where the drainage basins are comparatively smaller than the northern ones; their low drainage density and coarse texture permit more infiltration of rainwater, whereas the northern part of the plateau, with its high drainage density and fine texture, has less infiltration than runoff and is less susceptible to landslides. To obtain comprehensive information about the landslide zone, a 2D electrical resistivity tomographic imaging study with a CRM 500 resistivity meter was carried out in the Coonoor-Mettupalayam sector of the Nilgiris plateau. To calculate the factor of safety, the infinite slope model of Brunsden and Prior is used. The factor of safety (FS) can be expressed as the ratio of resisting forces to disturbing forces; if FS < 1, disturbing forces are larger than resisting forces and failure may occur. The geotechnical parameters of the soil samples are calculated on the basis of the apparent resistivity values of the litho-units measured from the 2D ERT image of the landslide zone. Relationships between friction angle and various soil properties are established by simple regression analysis of the apparent resistivity data. An increase in water content in the slide zone reduces the effectiveness of the shearing resistance and increases the sliding movement. Time-lapse resistivity changes related to slope failure are determined through a geophysical factor of safety, which depends on resistivity and site topography. The ERT technique infers soil properties at variable depths over wider areas, and this approach to retrieving soil properties overcomes the limitation of the point information provided by rain gauges and porous probes. Monitoring slope stability through the ERT technique is non-invasive, low-cost, and does not alter the soil structure. In landslide-prone areas, an automated electrical resistivity tomographic imaging system with permanent electrode networks should be installed to monitor the hydraulic precursors of landslide movement.
Keywords: 2D ERT, landslide, safety factor, slope stability
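As an illustration of the infinite slope approach mentioned above, a minimal sketch of the standard effective-stress factor-of-safety calculation is given below; the input values are hypothetical:

    import math

    def infinite_slope_fs(c_eff, phi_deg, gamma, z, beta_deg, u):
        """Factor of safety for an infinite slope (standard effective-stress form).
        c_eff: effective cohesion (kPa), phi_deg: friction angle (deg),
        gamma: unit weight of soil (kN/m^3), z: depth of slip surface (m),
        beta_deg: slope angle (deg), u: pore-water pressure at the slip surface (kPa)."""
        beta = math.radians(beta_deg)
        phi = math.radians(phi_deg)
        resisting = c_eff + (gamma * z * math.cos(beta) ** 2 - u) * math.tan(phi)
        disturbing = gamma * z * math.sin(beta) * math.cos(beta)
        return resisting / disturbing

    # Hypothetical values for a weathered slope; FS < 1 would indicate possible failure,
    # and raising u (wetter slide zone) lowers FS, as discussed in the abstract.
    print(infinite_slope_fs(c_eff=5.0, phi_deg=28.0, gamma=19.0, z=3.0, beta_deg=35.0, u=12.0))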
Procedia PDF Downloads 320
228 Validating Chronic Kidney Disease-Specific Risk Factors for Cardiovascular Events Using National Data: A Retrospective Cohort Study of the Nationwide Inpatient Sample
Authors: Fidelis E. Uwumiro, Chimaobi O. Nwevo, Favour O. Osemwota, Victory O. Okpujie, Emeka S. Obi, Omamuyovbi F. Nwoagbe, Ejiroghene Tejere, Joycelyn Adjei-Mensah, Christopher N. Ekeh, Charles T. Ogbodo
Abstract:
Several risk factors associated with cardiovascular events have been identified as specific to chronic kidney disease (CKD). This study endeavors to validate these CKD-specific risk factors using up-to-date national-level data, thereby highlighting the crucial significance of confirming the validity and generalizability of findings obtained from previous studies conducted on smaller patient populations. The study utilized the Nationwide Inpatient Sample database to identify adult hospitalizations for CKD from 2016 to 2020, employing validated ICD-10-CM/PCS codes. A comprehensive literature review was conducted to identify both traditional and CKD-specific risk factors associated with cardiovascular events. Risk factors and cardiovascular events were defined using a combination of ICD-10-CM/PCS codes and statistical commands. Only risk factors with specific ICD-10 codes and hospitalizations with complete data were included in the study. Cardiovascular events of interest included cardiac arrhythmias, sudden cardiac death, acute heart failure, and acute coronary syndromes. Univariate and multivariate regression models were employed to evaluate the association between CKD-specific risk factors and cardiovascular events while adjusting for the impact of traditional cardiovascular risk factors such as old age, hypertension, diabetes, hypercholesterolemia, inactivity, and smoking. A total of 690,375 hospitalizations for CKD were included in the analysis. The study population was predominantly male (375,564; 54.4%) and primarily received care at urban teaching hospitals (512,258; 74.2%). The mean age of the study population was 61 years (SD 0.1), and 86.7% (598,555) had a Charlson Comorbidity Index (CCI) of 3 or more. At least one traditional risk factor for CV events was present in 84.1% of all hospitalizations (580,605), while 65.4% (451,505) included at least one CKD-specific risk factor for CV events. The incidence of CV events in the study was as follows: acute coronary syndromes (41,422; 6%), sudden cardiac death (13,807; 2%), heart failure (404,560; 58.6%), and cardiac arrhythmias (124,267; 18%). Atrial fibrillation accounted for 91.7% (113,912) of all cardiac arrhythmias. Significant odds of cardiovascular events on multivariate analyses included: malnutrition (aOR: 1.09; 95% CI: 1.06-1.13; p<0.001), post-dialytic hypotension (aOR: 1.34; 95% CI: 1.26-1.42; p<0.001), thrombophilia (aOR: 1.46; 95% CI: 1.29-1.65; p<0.001), sleep disorder (aOR: 1.17; 95% CI: 1.09-1.25; p<0.001), and post-renal transplant immunosuppressive therapy (aOR: 1.39; 95% CI: 1.26-1.53; p<0.001). The study validated malnutrition, post-dialytic hypotension, thrombophilia, sleep disorders, and post-renal transplant immunosuppressive therapy, highlighting their association with increased risk for cardiovascular events in CKD patients. No significant association was observed between uremic syndrome, hyperhomocysteinemia, hyperuricemia, hypertriglyceridemia, leptin levels, carnitine deficiency, or anemia and the odds of experiencing cardiovascular events.
Keywords: cardiovascular events, cardiovascular risk factors in CKD, chronic kidney disease, nationwide inpatient sample
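As an illustration of the multivariate adjustment step described above, the following minimal sketch fits a logistic regression on synthetic data and reports adjusted odds ratios with 95% confidence intervals; the variables and effect sizes are hypothetical stand-ins, not the NIS data:

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Synthetic stand-in data (the study used NIS hospitalizations; values here are illustrative)
    rng = np.random.default_rng(0)
    n = 5000
    df = pd.DataFrame({
        "malnutrition": rng.integers(0, 2, n),
        "post_dialytic_hypotension": rng.integers(0, 2, n),
        "age_over_65": rng.integers(0, 2, n),
        "diabetes": rng.integers(0, 2, n),
    })
    logit = (0.09 * df["malnutrition"] + 0.30 * df["post_dialytic_hypotension"]
             + 0.50 * df["age_over_65"] + 0.40 * df["diabetes"] - 1.0)
    df["cv_event"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    # Multivariate logistic regression: CKD-specific factors adjusted for traditional risk factors
    X = sm.add_constant(df[["malnutrition", "post_dialytic_hypotension", "age_over_65", "diabetes"]])
    fit = sm.Logit(df["cv_event"], X).fit(disp=0)

    # Adjusted odds ratios (exponentiated coefficients) with 95% confidence intervals
    aor = pd.concat([np.exp(fit.params), np.exp(fit.conf_int())], axis=1)
    aor.columns = ["aOR", "2.5%", "97.5%"]
    print(aor)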
Procedia PDF Downloads 82
227 Perceived Procedural Justice and Organizational Citizenship Behavior: Evidence from a Security Organization
Authors: Noa Nelson, Orit Appel, Rachel Ben-ari
Abstract:
Organizational Citizenship Behavior (OCB) is voluntary employee behavior that contributes to the organization beyond formal job requirements. It can take different forms, such as helping teammates (OCB toward individuals; hence, OCB-I) or staying after hours to attend a task force (OCB toward the organization; hence, OCB-O). Generally, OCB contributes substantially to organizational climate, goals, productivity, and resilience, so organizations need to understand what encourages it. This is particularly challenging in security organizations. Security work is characterized by high levels of stress and burnout, which are detrimental to OCB, and security organizational design emphasizes formal rules and clear hierarchies, leaving employees with less freedom for voluntary behavior. The current research explored the role of Perceived Procedural Justice (PPJ) in enhancing OCB in a security organization. PPJ refers to how fair decision-making processes are perceived to be. It involves the sense that decision makers are objective, attentive to everyone's interests, respectful in their communications, and participatory - allowing individuals a voice in decision processes. Justice perceptions affect motivation, and it has specifically been suggested that PPJ creates an attachment to one's organization and a personal interest in its success. Accordingly, PPJ has been associated with OCB, but hardly any research has tested this association in security organizations. The current research was conducted among prison guards in the Israel Prison Service, to test a correlational and a causal association between PPJ and OCB. It differentiated between perceptions of direct commander procedural justice (CPJ) and perceptions of organization procedural justice (OPJ), hypothesizing that CPJ would relate to OCB-I, while OPJ would relate to OCB-O. In the first study, 336 prison guards (305 male) from 10 different prisons responded to questionnaires measuring their own CPJ, OPJ, OCB-I, and OCB-O. Hierarchical linear regression analyses indicated the significance of commander procedural justice (CPJ): it was associated with OCB-I and also with OPJ, which, in turn, was associated with OCB-O. The second study tested CPJ's causal effects on prison guards' OCB-I and OCB-O; 311 prison guards (275 male) from 14 different prisons read scenarios that described either high or low CPJ and then evaluated the likelihood of that commander's prison guards performing OCB-I and OCB-O. In this study, CPJ enhanced OCB-O directly. It also contributed to OCB-I indirectly: CPJ enhanced the motivation for collaboration with the commander, which respondents also evaluated after reading the scenarios. Collaboration, in turn, was associated with OCB-I. The studies demonstrate that procedural justice, especially the commander's PJ, promotes OCB in security work environments. This is important because extraordinary teamwork and motivation are needed to deal with emergency situations and with delicate security challenges. Following the studies, the Israel Prison Service implemented personal procedural justice training for commanders and unit-level programs for procedurally just decision processes. From a theoretical perspective, the studies extend the knowledge on PPJ and OCB to security work environments and contribute evidence on PPJ's causal effects. They also call for further research to understand the mechanisms through which different types of PPJ affect different types of OCB.
Keywords: organizational citizenship behavior, perceived procedural justice, prison guards, security organizations
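For readers unfamiliar with the hierarchical (blockwise) regression step mentioned above, a minimal sketch on synthetic data is given below; variable names and effect sizes are hypothetical, and the actual study included additional controls and outcomes:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    # Synthetic stand-in for the prison-guard survey (illustrative values only)
    rng = np.random.default_rng(1)
    n = 336
    cpj = rng.normal(0, 1, n)                  # commander procedural justice
    opj = 0.5 * cpj + rng.normal(0, 1, n)      # organizational procedural justice
    ocb_i = 0.4 * cpj + 0.1 * opj + rng.normal(0, 1, n)
    df = pd.DataFrame({"cpj": cpj, "opj": opj, "ocb_i": ocb_i})

    # Block 1: organization-level justice only; Block 2: add commander-level justice
    step1 = smf.ols("ocb_i ~ opj", data=df).fit()
    step2 = smf.ols("ocb_i ~ opj + cpj", data=df).fit()

    # F-test on the R-squared change tells us whether CPJ adds explanatory power beyond OPJ
    print(anova_lm(step1, step2))
    print("R2 change:", step2.rsquared - step1.rsquared)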
Procedia PDF Downloads 221
226 Foodborne Outbreak Calendar: Application of Time Series Analysis
Authors: Ryan B. Simpson, Margaret A. Waskow, Aishwarya Venkat, Elena N. Naumova
Abstract:
The Centers for Disease Control and Prevention (CDC) estimate that 31 known foodborne pathogens cause 9.4 million cases of foodborne illness annually in the US. Over 90% of these illnesses are associated with exposure to Campylobacter, Cryptosporidium, Cyclospora, Listeria, Salmonella, Shigella, Shiga toxin-producing E. coli (STEC), Vibrio, and Yersinia. Contaminated products contain pathogens typically causing an intestinal illness manifested by diarrhea, stomach cramping, nausea, weight loss, and fatigue, and may result in death in fragile populations. Since 1998, the National Outbreak Reporting System (NORS) has allowed for routine collection of suspected and laboratory-confirmed cases of food poisoning. While retrospective analyses have revealed common pathogen-specific seasonal patterns, little is known concerning the stability of those patterns over time and whether they can be used for preventive forecasting. The objective of this study is to construct a calendar of foodborne outbreaks of nine infections based on the peak timing of outbreak incidence in the US from 1996 to 2017. Reported cases were abstracted from FoodNet for Salmonella (135,115), Campylobacter (121,099), Shigella (48,520), Cryptosporidium (21,701), STEC (18,022), Yersinia (3,602), Vibrio (3,000), Listeria (2,543), and Cyclospora (758). Monthly counts were compiled for each agent, seasonal peak timing and peak intensity were estimated, and the stability of seasonal peaks and the synchronization of infections were examined. Negative binomial harmonic regression models with the delta method were applied to derive confidence intervals for the peak timing for each year and for the overall study period. Preliminary results indicate that five infections continue to lead as major causes of outbreaks, exhibiting steady upward trends with annual increases in cases ranging from 2.71% (95%CI: [2.38, 3.05]) in Campylobacter, 4.78% (95%CI: [4.14, 5.41]) in Salmonella, 7.09% (95%CI: [6.38, 7.82]) in E. coli, and 7.71% (95%CI: [6.94, 8.49]) in Cryptosporidium, to 8.67% (95%CI: [7.55, 9.80]) in Vibrio. Strong synchronization of summer outbreaks was observed, caused by Campylobacter, Vibrio, E. coli, and Salmonella, peaking at 7.57 ± 0.33, 7.84 ± 0.47, 7.85 ± 0.37, and 7.82 ± 0.14 calendar months, respectively, with serial cross-correlations ranging 0.81-0.88 (p < 0.001). Over 21 years, Listeria and Cryptosporidium peaks (8.43 ± 0.77 and 8.52 ± 0.45 months, respectively) have tended to arrive 1-2 weeks earlier, while Vibrio peaks (7.8 ± 0.47) are delayed by 2-3 weeks. These findings will be incorporated in forecast models to predict common paths of spread, long-term trends, and the synchronization of outbreaks across etiological agents. The predictive modeling of foodborne outbreaks should consider long-term changes in seasonal timing, spatiotemporal trends, and sources of contamination.
Keywords: foodborne outbreak, national outbreak reporting system, predictive modeling, seasonality
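A minimal sketch of the seasonal-peak estimation described above, using a negative binomial harmonic (cosinor) regression on synthetic monthly counts; the dispersion, trend, and peak values are hypothetical, and the delta-method confidence intervals used in the study are not reproduced here:

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Synthetic monthly case counts with a summer peak (stand-in for FoodNet counts)
    rng = np.random.default_rng(2)
    months = np.arange(1, 12 * 21 + 1)                     # 21 years of monthly data
    seasonal = 1.2 * np.cos(2 * np.pi * months / 12 - 2 * np.pi * 7.8 / 12)
    mu = np.exp(3.0 + 0.002 * months + seasonal)           # trend + seasonality
    cases = rng.negative_binomial(n=10, p=10 / (10 + mu))  # overdispersed counts with mean mu

    # Harmonic (cosinor) terms capture seasonality; the month index captures the trend
    X = sm.add_constant(pd.DataFrame({
        "trend": months,
        "sin": np.sin(2 * np.pi * months / 12),
        "cos": np.cos(2 * np.pi * months / 12),
    }))
    fit = sm.GLM(cases, X, family=sm.families.NegativeBinomial()).fit()

    # Peak timing (in calendar months) recovered from the fitted harmonic coefficients
    phase = np.arctan2(fit.params["sin"], fit.params["cos"])
    peak_month = (phase % (2 * np.pi)) * 12 / (2 * np.pi)
    print(f"estimated peak timing: {peak_month:.2f} calendar months")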
Procedia PDF Downloads 130
225 Effect of Energy Management Practices on Sustaining Competitive Advantage among Manufacturing Firms: A Case of Selected Manufacturers in Nairobi, Kenya
Authors: Henry Kiptum Yatich, Ronald Chepkilot, Aquilars Mutuku Kalio
Abstract:
Studies on energy management have focused on environmental conservation and reduction in production and operation expenses. However, transferring the gains of energy management practices to competitive advantage is important to manufacturers in Kenya. Success in managing competitive advantage arises out of a firm's ability to identify and implement actions that can give the company an edge over its rivals. Manufacturing firms in Kenya are the highest consumers of both electricity and petroleum products. In this regard, the study posits that transfer of the gains of energy management practices to competitive advantage is imperative. The study was carried out in Nairobi and its environs, which host the largest number of manufacturers. The study objectives were: to determine the level of implementing energy management regulations on sustaining competitive advantage, to determine the level of implementing company energy management policy on competitive advantage, to examine the level of implementing energy-efficient technology on sustaining competitive advantage, and to assess the percentage energy expenditure on sustaining competitive advantage among manufacturing firms. The study adopted a survey research design, with a study population of 145,987. A sample of 384 respondents was selected randomly from 21 proportionately selected firms. Structured questionnaires were used to collect data. Data analysis was done using descriptive statistics (means and standard deviations) and inferential statistics (correlation, regression, and t-test). Data are presented using tables and diagrams. The study found that Energy Management Regulations, Company Energy Management Policies, and Energy Expenses are significant predictors of Competitive Advantage (CA). However, Energy Efficient Technology, as a component of Energy Management Practices, did not have a significant relationship with Competitive Advantage. The study revealed that the level of awareness in the sector stood at 49.3%. Energy Expenses in the sector stood at an average of 10.53% of a firm's total revenue. The study showed that gains from energy efficiency practices can be transferred to competitive strategies so as to improve firm competitiveness. The study recommends that manufacturing firms consider energy management practices as part of their strategic agenda, assessing and reviewing their energy management practices as possible strategies for sustaining competitiveness. Government agencies such as the Energy Regulatory Commission, the Ministry of Energy and Petroleum, and the Kenya Association of Manufacturers should enforce the Energy Management Regulations 2012, with enhanced stakeholder involvement and sensitization, so as to promote the sustenance of firm competitiveness. Government support in providing incentives and rebates for the acquisition of energy-efficient technologies should be pursued. Given the study's limitations, future experimental and longitudinal studies need to be carried out. It should be noted that energy management practices yield enormous benefits to all stakeholders and that the practice should not be considered a competitive tool but rather a universal practice.
Keywords: energy, efficiency, management, guidelines, policy, technology, competitive advantage
Procedia PDF Downloads 384
224 Numerical Modeling of Timber Structures under Varying Humidity Conditions
Authors: Sabina Huč, Staffan Svensson, Tomaž Hozjan
Abstract:
Timber structures may be exposed to various environmental conditions during their service life. Often, the structures have to resist extreme changes in the relative humidity of the surrounding air while simultaneously carrying loads. The wood material response to this load case is seen as increasing deformation of the timber structure. Relative humidity variations cause moisture changes in timber and, consequently, shrinkage and swelling of the material. Moisture changes and loads acting together result in mechano-sorptive creep, while sustained load gives viscoelastic creep. In some cases, the magnitude of the mechano-sorptive strain can be about five times the elastic strain, already at low stress levels. Therefore, analyzing mechano-sorptive creep and its influence on the long-term behavior of timber structures is of high importance. Relatively many one-dimensional rheological models for the behavior of wood can be found in the literature, while the number of models coupling the creep response in each material direction is limited. In this study, the mathematical formulation of a coupled two-dimensional mechano-sorptive model and its application to experimental results are presented. The mechano-sorptive model consists of a moisture transport model and a mechanical model. Variation of the moisture content in wood is modelled by a multi-Fickian moisture transport model. The model accounts for the processes of bound-water and water-vapor diffusion in wood, which are coupled through sorption hysteresis. Sorption defines a nonlinear relation between moisture content and relative humidity. The multi-Fickian moisture transport model is able to accurately predict the unique, non-uniform moisture content field within the timber member over time. The calculated moisture content in timber members is used as an input to the mechanical analysis. In the mechanical analysis, the total strain is assumed to be a sum of the elastic strain, viscoelastic strain, mechano-sorptive strain, and strain due to shrinkage and swelling. The mechano-sorptive response is modelled by a so-called spring-dashpot type of model, which has proved suitable for describing the creep of wood. The mechano-sorptive strain depends on the change of moisture content. The model includes mechano-sorptive material parameters that have to be calibrated against experimental results. The calibration is made against experiments carried out on wooden blocks subjected to uniaxial compressive load in the tangential direction under varying humidity conditions. The moisture and mechanical models are implemented in a finite element software. The calibration procedure gives the required, distinctive set of mechano-sorptive material parameters. The analysis shows that mechano-sorptive strain in the transverse direction is present, though its magnitude and variation are substantially lower than those of the mechano-sorptive strain in the direction of loading. The presented mechano-sorptive model enables observing the real temporal and spatial distribution of the moisture-induced strains and stresses in timber members. Since the model's suitability for predicting mechano-sorptive strains is shown and the required material parameters are obtained, a comprehensive advanced analysis of the stress-strain state in timber structures, including connections, subjected to constant load and varying humidity is possible.
Keywords: mechanical analysis, mechano-sorptive creep, moisture transport model, timber
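A minimal one-dimensional sketch of the strain decomposition described above (elastic + viscoelastic Kelvin element + mechano-sorptive + shrinkage/swelling), integrated explicitly over a cyclic moisture history; all material parameters and the moisture history are hypothetical, and the actual model is two-dimensional and finite-element based:

    import numpy as np

    E = 500.0               # MPa, modulus in the tangential direction (illustrative)
    Ev, tau = 1500.0, 20.0  # Kelvin spring stiffness (MPa) and retardation time (days)
    m_ms = 0.8e-3           # mechano-sorptive compliance per unit moisture change (1/MPa per % mc)
    alpha_sw = 0.3e-2       # shrinkage/swelling coefficient (strain per % moisture content)

    sigma = -1.0                                      # constant compressive stress, MPa
    days = np.arange(0, 200)
    mc = 12.0 + 3.0 * np.sin(2 * np.pi * days / 50)   # cyclic moisture content, %

    eps_ve, eps_ms, eps_sw = 0.0, 0.0, 0.0
    dt = 1.0
    total = []
    for i in range(1, len(days)):
        dmc = mc[i] - mc[i - 1]
        eps_ve += dt / tau * (sigma / Ev - eps_ve)    # Kelvin (spring-dashpot) viscoelastic creep
        eps_ms += m_ms * abs(dmc) * sigma             # mechano-sorptive creep driven by |d(mc)|
        eps_sw += alpha_sw * dmc                      # free shrinkage/swelling
        total.append(sigma / E + eps_ve + eps_ms + eps_sw)

    print(f"total strain after {len(days) - 1} days: {total[-1]:.4e}")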
Procedia PDF Downloads 246
223 Television Sports Exposure and Rape Myth Acceptance: The Mediating Role of Sexual Objectification of Women
Authors: Sofia Mariani, Irene Leo
Abstract:
The objective of the present study is to define the mediating role of attitudes that objectify and devalue women (hostile sexism, benevolent sexism, and sexual objectification of women) in the indirect correlation between exposure to televised sports and acceptance of rape myths. A second goal is to contribute to research on the topic by defining the role of the mediators in exposure to different types of sports, following the traditional gender classification of sports. Data collection was carried out by means of an online questionnaire measuring television sports exposure, sport type, hostile sexism, benevolent sexism, and sexual objectification of women. Data analysis was carried out using IBM SPSS software. The model was estimated using Ordinary Least Squares (OLS) regression path analysis. The predictor variable in the model was television sports exposure, the outcome was rape myth acceptance, and the mediators were (1) hostile sexism, (2) benevolent sexism, and (3) sexual objectification of women. Correlation analyses were carried out dividing by sport type and controlling for the participants' gender. As seen in the existing literature, television sports exposure was found to be indirectly and positively related to rape myth acceptance through the mediating roles of (1) hostile sexism, (2) benevolent sexism, and (3) sexual objectification of women. The type of sport watched influenced the role of the mediators: hostile sexism was found to be the common mediator for all sport types, while exposure to sports traditionally considered feminine or neutral showed the additional mediation effect of sexual objectification of women. In line with the existing literature, controlling for gender showed that the only significant mediators were hostile sexism for male participants and benevolent sexism for female participants. Given the prevalence of men among the viewers of sports traditionally considered masculine, the correlation between television sports exposure and rape myth acceptance through the mediation of hostile sexism is likely due to the gender of the participants. However, this does not apply to the viewers of sports traditionally considered feminine and neutral, as this group is balanced in terms of gender and shows a unique mediation: the correlation between television sports exposure and rape myth acceptance is mediated by both hostile sexism and sexual objectification. Given that hostile sexism is defined as hostility towards women who oppose or fail to conform to traditional gender roles, these findings confirm that sport is perceived as a non-traditional activity for women. Additionally, these results imply that the portrayal of women in sports traditionally considered feminine and neutral - which are defined as such because of their aesthetic characteristics - may have a strong component of sexual objectification of women. The present research contributes to defining the association between sports exposure and rape myth acceptance through the mediation effects of sexist attitudes and sexual objectification of women. The results of this study have practical implications, such as supporting women's sports teams that ask for more practical and less revealing uniforms, more similar to those of their male colleagues and therefore less objectifying.
Keywords: television exposure, sport, rape myths, objectification, sexism
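A minimal sketch of the mediation logic described above, estimating a single indirect effect (exposure -> hostile sexism -> rape myth acceptance) with OLS paths and a percentile bootstrap on synthetic data; the study itself used SPSS with three parallel mediators, so this is only an illustration:

    import numpy as np
    import statsmodels.api as sm

    # Synthetic data with a built-in indirect effect (illustrative values only)
    rng = np.random.default_rng(3)
    n = 400
    exposure = rng.normal(0, 1, n)
    hostile_sexism = 0.35 * exposure + rng.normal(0, 1, n)
    rma = 0.40 * hostile_sexism + 0.05 * exposure + rng.normal(0, 1, n)

    def indirect_effect(x, m, y):
        a = sm.OLS(m, sm.add_constant(x)).fit().params[1]                          # path a: x -> m
        b = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit().params[1]    # path b: m -> y | x
        return a * b

    # Percentile bootstrap confidence interval for the indirect effect a*b
    boot = []
    for _ in range(2000):
        idx = rng.integers(0, n, n)
        boot.append(indirect_effect(exposure[idx], hostile_sexism[idx], rma[idx]))
    est = indirect_effect(exposure, hostile_sexism, rma)
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"indirect effect = {est:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")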
Procedia PDF Downloads 102
222 A Comparison of Biosorption of Radionuclides Tl-201 on Different Biosorbents and Their Empirical Modelling
Authors: Sinan Yapici, Hayrettin Eroglu
Abstract:
The discharge of aqueous radionuclide wastes used for the diagnosis of diseases and the treatment of patients in nuclear medicine can cause fatal health problems when the radionuclides and their stable daughter components mix with underground water. Tl-201, one of the radionuclides commonly used in nuclear medicine, is a toxic substance and is converted to its stable daughter component Hg-201, which is also a poisonous heavy metal: Tl-201 → Hg-201 + gamma ray [135-167 keV (12%)] + X-ray [69-83 keV (88%)]; t1/2 = 73.1 h. The purpose of the present work was to remove Tl-201 radionuclides from aqueous solution by biosorption on solid bio-wastes of the food and cosmetic industries (prina from an olive oil plant, rose residue from a rose oil plant, and tea residue from a tea plant) as biosorbents, and to compare the biosorption efficiencies. The effects of the biosorption temperature, initial pH of the aqueous solution, biosorbent dose, particle size, and stirring speed on the biosorption yield were investigated in a batch process. It was observed that the biosorption is a rapid process, with an equilibrium time of less than 10 minutes for all the biosorbents. The efficiencies were found to be close to each other, with measured maximum efficiencies of 93.30 percent for rose residue, 94.1 for prina, and 98.4 for tea residue. In the temperature range of 283 to 313 K, the adsorption decreased with increasing temperature in a similar way for all biosorbents. In the pH range of 2-10, increasing pH enhanced the biosorption efficiency up to pH = 7, after which the efficiency remained constant, following a similar path for all the biosorbents. Increasing the stirring speed from 360 to 720 rpm slightly enhanced the biosorption efficiency, at almost the same ratio for all biosorbents. Increasing particle size decreased the efficiency for all biosorbents; the most negatively affected biosorbent was prina, with a decrease in biosorption efficiency from about 84 percent to 40 as the nominal particle size increased from 0.181 mm to 1.05 mm, while the least affected one, tea residue, went down from about 97 percent to 87.5. The biosorption efficiencies of all the biosorbents increased with increasing biosorbent dose in the range of 1.5 to 15.0 g/L in a similar manner. The fit of the experimental results to the adsorption isotherms showed that the biosorption process for all the biosorbents is best represented by the Freundlich model. The kinetic analysis showed that all the processes fit very well to a pseudo-second-order rate model. The thermodynamic calculations gave ∆G values between -8636 and -5378 J mol⁻¹ for tea residue, between -5313 and -3343 for rose residue, and between -5701 and -3642 for prina, with ∆H values of -39516, -23660, and -26190 J mol⁻¹ and ∆S values of -108.8, -64.0, and -72.0 J mol⁻¹ K⁻¹, respectively, showing the spontaneous and exothermic character of the processes.
An empirical biosorption model was derived for each biosorbent as a function of the operating parameters and time, taking into account the form of the kinetic model, with regression coefficients over 0.9990, where At is the biosorption efficiency at any time, Ae is the equilibrium efficiency, t is the adsorption period in s, k₀ a constant, pH the initial acidity of the biosorption medium, w the stirring speed in s⁻¹, S the biosorbent dose in g L⁻¹, D the particle size in m, a, b, c, and e the powers of the respective parameters, E a constant containing the activation energy, and T the temperature in K.
Keywords: radiation, biosorption, thallium, empirical modelling
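Since the published equation itself is not reproduced in this abstract, the sketch below only assumes one plausible form consistent with the variable list (integrated pseudo-second-order kinetics with a power-law/Arrhenius-type lumped rate constant) and shows how such a model could be fitted; the functional form, exponents, and all numerical values are hypothetical:

    import numpy as np
    from scipy.optimize import curve_fit

    # Assumed functional form (a hedged reconstruction, not the published equation):
    # A_t = Ae^2 * k * t / (1 + Ae * k * t), with k = k0 * pH^a * w^b * S^c * D^e * exp(-E/T)
    def empirical_model(X, k0, a, b, c, e, E, Ae):
        t, pH, w, S, D, T = X
        k = k0 * pH**a * w**b * S**c * D**e * np.exp(-E / T)
        return Ae**2 * k * t / (1.0 + Ae * k * t)

    # Synthetic "measurements" generated from the same form, only to illustrate the fitting step
    rng = np.random.default_rng(4)
    n = 120
    t = rng.uniform(30, 600, n)           # adsorption period, s
    pH = rng.uniform(2, 10, n)            # initial acidity
    w = rng.uniform(6, 12, n)             # stirring speed, s^-1
    S = rng.uniform(1.5, 15, n)           # biosorbent dose, g/L
    D = rng.uniform(0.18e-3, 1.05e-3, n)  # particle size, m
    T = rng.uniform(283, 313, n)          # temperature, K
    true_params = (1e-3, 0.5, 0.2, 0.3, -0.4, 800.0, 0.95)
    A_obs = empirical_model((t, pH, w, S, D, T), *true_params) + rng.normal(0, 0.01, n)

    popt, _ = curve_fit(empirical_model, (t, pH, w, S, D, T), A_obs,
                        p0=[5e-4, 0.4, 0.1, 0.2, -0.3, 500.0, 0.9], maxfev=20000)
    print("fitted parameters:", np.round(popt, 4))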
Procedia PDF Downloads 265
221 Single Pass Design of Genetic Circuits Using Absolute Binding Free Energy Measurements and Dimensionless Analysis
Authors: Iman Farasat, Howard M. Salis
Abstract:
Engineered genetic circuits reprogram cellular behavior to act as living computers, with applications in detecting cancer, creating self-controlling artificial tissues, and dynamically regulating metabolic pathways. Phenomenological models are often used to simulate and design genetic circuit behavior towards a desired behavior. While such models assume that each circuit component's function is modular and independent, even small changes in a circuit (e.g., a new promoter, a change in transcription factor expression level, or even a new medium) can have significant effects on the circuit's function. Here, we use statistical thermodynamics to account for the several factors that control transcriptional regulation in bacteria and experimentally demonstrate the model's accuracy across 825 measurements in several genetic contexts and hosts. We then employ our first-principles model to design, experimentally construct, and characterize a family of signal-amplifying genetic circuits (genetic OpAmps) that expand the dynamic range of cell sensors. To develop these models, we needed a new approach to measuring the in vivo binding free energies of transcription factors (TFs), a key ingredient of statistical thermodynamic models of gene regulation. We developed a new high-throughput assay to measure RNA polymerase and TF binding free energies, requiring the construction and characterization of only a few constructs and data analysis (Figure 1A). We experimentally verified the assay on 6 TetR-homolog repressors and a CRISPR/dCas9 guide RNA. We found that our binding free energy measurements quantitatively explain why changing TF expression levels alters circuit function. Altogether, by combining these measurements with our biophysical model of translation (the RBS Calculator) as well as other measurements (Figure 1B), our model can account for changes in TF binding sites, TF expression levels, circuit copy number, host genome size, and host growth rate (Figure 1C). Model predictions correctly accounted for how these 8 factors control a promoter's transcription rate (Figure 1D). Using the model, we developed a design framework for engineering multi-promoter genetic circuits that greatly reduces the number of degrees of freedom (8 factors per promoter) to a single dimensionless unit. We propose the Ptashne (Pt) number to encapsulate the 8 co-dependent factors that control transcriptional regulation into a single number. Therefore, a single number controls a promoter's output rather than these 8 co-dependent factors, and designing a genetic circuit with N promoters requires specification of only N Pt numbers. We demonstrate how to design genetic circuits in Pt number space by constructing and characterizing 15 two-repressor OpAmp circuits that act as signal amplifiers when within an optimal Pt region. We experimentally show that OpAmp circuits using different TFs and TF expression levels will only amplify the dynamic range of input signals when their corresponding Pt numbers are within the optimal region. Thus, the use of the Pt number greatly simplifies genetic circuit design, which is particularly important as circuits employ more TFs to perform increasingly complex functions.
Keywords: transcription factor, synthetic biology, genetic circuit, biophysical model, binding energy measurement
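A minimal, generic statistical-thermodynamic sketch of how a binding free energy and a TF copy number translate into promoter activity - an equilibrium-occupancy calculation in the spirit of the approach described, not the authors' full 8-factor model; all free energies and copy numbers are hypothetical:

    import numpy as np

    kT = 0.593  # kcal/mol at ~298 K

    def promoter_activity(dG_tf, n_tf, dG_rnap=-3.0, n_rnap=1500, n_ns=4.6e6):
        """Relative transcription rate of a promoter with one repressor operator.
        dG_tf, dG_rnap: binding free energies (kcal/mol) relative to nonspecific DNA;
        n_tf, n_rnap: repressor and RNA polymerase copy numbers; n_ns: nonspecific genomic sites."""
        w_rnap = (n_rnap / n_ns) * np.exp(-dG_rnap / kT)  # Boltzmann weight, RNAP-bound state
        w_tf = (n_tf / n_ns) * np.exp(-dG_tf / kT)        # Boltzmann weight, repressor-bound state
        return w_rnap / (1.0 + w_rnap + w_tf)             # probability the promoter is transcribing

    # Increasing repressor expression (copy number) lowers promoter activity,
    # which is why TF expression level enters the model alongside binding free energy.
    for n in [0, 50, 500, 5000]:
        print(n, round(promoter_activity(dG_tf=-6.0, n_tf=n), 4))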
Procedia PDF Downloads 473
220 Rapid Sexual and Reproductive Health Pathways for Women Accessing Drug and Alcohol Treatment
Authors: Molly Parker
Abstract:
Unintended pregnancy rates in Australia are amongst the highest in the developed world. Women with Substance Use Disorder often have riskier sexual behavior with nil contraceptive use and face disproportionately higher rates of unintended pregnancies and Sexually Transmitted Infections (STIs), alongside Substance Use in Pregnancy (SUP) climbing at an alarming rate. In an inner-city Drug and Alcohol (D&A) service, significant barriers to sexual and reproductive health services have been identified, aligning with the research. Rapid pathways were created for women seeking D&A treatment to be referred to sexual and reproductive health services for the administration of long-acting reversible contraception (LARC) and sexual health screening. For clients attending a D&A service, this is an opportunistic time to offer sexual and reproductive health services. Collaboration and multidisciplinary team input between D&A and sexual and reproductive health services are paramount, with rapid referral pathways identified as the main strategy to improve access to sexual and reproductive health support for this population. With this evidence, a rapid referral pathway was created for women using the D&A service to access LARC, particularly in view of fertility often returning once clients are stable on D&A treatment. A closed-ended survey was used for D&A staff to identify gaps in reproductive health knowledge and views of referral accessibility. Results demonstrated a lack of knowledge of contraception and appropriate referral processes. A closed-ended survey for clients was created to establish the need for and access to services and to quantify data. A follow-up data collection will be reviewed to assess uptake of and satisfaction with the intervention from clients. Sexual health screening access was also identified as a deficit, which is particularly concerning given the higher rates of STIs in this cohort. A rapid referral pathway will be undergoing implementation, reducing the risks of untreated STIs both pre- and post-conception. Similarly, pre- and post-intervention structured surveys will be used to identify client satisfaction with the pathway. Although currently in progress, the research and pathway aim to be completed by December 2023. This research and the implementation of sexual and reproductive health pathways from the D&A service have significant health and well-being benefits for clients and the wider community, including possible fetal/infancy outcomes. Women now have rapid access to sexual and reproductive health services, with the aim of reducing unplanned pregnancies, poor outcomes associated with SUP, client/staff trauma from termination of pregnancy, client/staff trauma following the assumption of care of the child due to substance use, the financial cost of out-of-home care as required, the poor outcomes of untreated STIs for the fetus in pregnancy, and the spread of STIs in the wider community. As the evidence suggests, a streamlined referral process is required between D&A and sexual and reproductive health services, and it has received positive feedback from both clinicians and clients in improving care.
Keywords: substance use in pregnancy, drug and alcohol, substance use disorder, sexual health, reproductive health, contraception, long-acting reversible contraception, neonatal abstinence syndrome, FASD, sexually transmitted infections, sexually transmitted infections pregnancy
Procedia PDF Downloads 65
219 Predictors of Motor and Cognitive Domains of Functional Performance after Rehabilitation of Individuals with Acute Stroke
Authors: A. F. Jaber, E. Dean, M. Liu, J. He, D. Sabata, J. Radel
Abstract:
Background: Stroke is a serious health care concern and a major cause of disability in the United States. This condition impacts the individual's functional ability to perform daily activities. Predicting the functional performance of people with stroke assists health care professionals in optimizing the delivery of health services to the affected individuals. The purpose of this study was to identify significant predictors of Motor FIM and Cognitive FIM subscores among individuals with stroke after discharge from inpatient rehabilitation (typically 4-6 weeks after stroke onset). A second purpose was to explore the relationship among personal characteristics, health status, and functional performance of daily activities within 2 weeks of stroke onset. Methods: This study used a retrospective chart review to conduct a secondary analysis of data obtained from the Healthcare Enterprise Repository for Ontological Narration (HERON) database. The HERON database integrates de-identified clinical data from seven different regional sources, including the hospital electronic medical record systems of the University of Kansas Health System. The initial HERON data extract encompassed 1,192 records, and the final sample consisted of 207 participants who were mostly white (74%) males (55%) with a diagnosis of ischemic stroke (77%). The outcome measures collected from HERON included performance scores on the National Institutes of Health Stroke Scale (NIHSS), the Glasgow Coma Scale (GCS), and the Functional Independence Measure (FIM). The data analysis plan included descriptive statistics, Pearson correlation analysis, and stepwise regression analysis. Results: Significant predictors of discharge Motor FIM subscores included age, baseline Motor FIM subscores, discharge NIHSS scores, and comorbid electrolyte disorder (R² = 0.57, p < 0.026). Significant predictors of discharge Cognitive FIM subscores were age, baseline Cognitive FIM subscores, client cooperative behavior, comorbid obesity, and the total number of comorbidities (R² = 0.67, p < 0.020). Functional performance on admission was significantly associated with age (p < 0.01), stroke severity (p < 0.01), and length of hospital stay (p < 0.05). Conclusions: Our findings show that younger age, good motor and cognitive abilities on admission, mild stroke severity, fewer comorbidities, and a positive client attitude all predict favorable functional outcomes after inpatient stroke rehabilitation. This study provides health care professionals with evidence to evaluate predictors of favorable functional outcomes early in stroke rehabilitation, to tailor individualized interventions based on their client's anticipated prognosis, and to educate clients about the benefits of making lifestyle changes to improve their anticipated rate of functional recovery.
Keywords: functional performance, predictors, stroke, recovery
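A minimal sketch of a forward stepwise selection of predictors, as a stand-in for the stepwise regression step mentioned above; the data are synthetic and the variable names are only illustrative:

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Synthetic stand-in data (the study used HERON/FIM records; values here are illustrative)
    rng = np.random.default_rng(5)
    n = 207
    df = pd.DataFrame({
        "age": rng.normal(65, 12, n),
        "baseline_motor_fim": rng.normal(40, 10, n),
        "discharge_nihss": rng.normal(6, 3, n),
        "num_comorbidities": rng.poisson(3, n),
    })
    df["discharge_motor_fim"] = (20 + 0.9 * df["baseline_motor_fim"] - 0.2 * df["age"]
                                 - 1.5 * df["discharge_nihss"] + rng.normal(0, 5, n))

    def forward_stepwise(data, outcome, candidates, alpha=0.05):
        # Add the most significant remaining predictor until none passes the entry threshold
        selected = []
        while True:
            pvals = {}
            for c in [c for c in candidates if c not in selected]:
                X = sm.add_constant(data[selected + [c]])
                pvals[c] = sm.OLS(data[outcome], X).fit().pvalues[c]
            if not pvals or min(pvals.values()) > alpha:
                return selected
            selected.append(min(pvals, key=pvals.get))

    predictors = forward_stepwise(df, "discharge_motor_fim",
                                  ["age", "baseline_motor_fim", "discharge_nihss", "num_comorbidities"])
    print("selected predictors:", predictors)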
Procedia PDF Downloads 145
218 Feasibility of Applying a Hydrodynamic Cavitation Generator as a Method for Intensification of Methane Fermentation Process of Virginia Fanpetals (Sida hermaphrodita) Biomass
Authors: Marcin Zieliński, Marcin Dębowski, Mirosław Krzemieniewski
Abstract:
The anaerobic degradation of substrates is limited especially by the rate and effectiveness of the first (hydrolytic) stage of fermentation. This stage may be intensified through pre-treatment of the substrate aimed at disintegration of the solid phase and destruction of substrate tissues and cells. The most frequently applied criterion for evaluating disintegration outcomes is the increase in biogas recovery, owing to the possibility of its use for energy purposes and, simultaneously, the recovery of the input energy consumed for the pre-treatment of the substrate before fermentation. Hydrodynamic cavitation is one of the methods for organic substrate disintegration that has a high implementation potential. Cavitation is the phenomenon of the formation of discontinuity cavities filled with vapor or gas in a liquid, induced by a pressure drop to a critical value. It is induced by a varying pressure field: a void occurs in the flow where the pressure first drops to a value close to the saturated vapor pressure and then increases. The process of cavitation conducted under controlled conditions was found to significantly improve the effectiveness of anaerobic conversion of organic substrates with various characteristics. This phenomenon allows effective damage and disintegration of cellular and tissue structures. Disintegration of structures and the release of organic compounds to the dissolved phase have a direct effect on the intensification of biogas production in anaerobic fermentation, on the reduced dry matter content of the post-fermentation sludge, as well as on a high degree of its hygienization and its increased susceptibility to dewatering. A device whose efficiency has been confirmed both under laboratory conditions and in systems operating at technical scale is the hydrodynamic cavitation generator. Cavitators, agitators, and emulsifiers constructed and tested worldwide so far have been characterized by low efficiency and high energy demand. Many of them proved effective under laboratory conditions but failed under industrial ones. The only task successfully realized by these appliances and utilized on a wider scale is the heating of liquids. For this reason, their usability has been limited to the function of heating installations. The design of the presented cavitation generator allows satisfactory energy efficiency to be achieved and enables its use under industrial conditions in depolymerization processes of biomass with various characteristics. Investigations conducted at the laboratory and industrial scale confirmed the effectiveness of applying cavitation in the process of biomass destruction. The use of the cavitation generator in laboratory studies for the disintegration of sewage sludge allowed biogas production to be increased by ca. 30% and the treatment process to be shortened by ca. 20-25%. The shortening of the technological process and the increase in wastewater treatment plant effectiveness may delay investments aimed at increasing system output. The use of a mechanical cavitator and the application of a repeated cavitation process (4-6 times) enable significant acceleration of the biogas production process. In addition, mechanical cavitation accelerates increases in COD and VFA levels.
Keywords: hydrodynamic cavitation, pretreatment, biomass, methane fermentation, Virginia fanpetals
Procedia PDF Downloads 435
217 The Interactive Wearable Toy "+Me" for the Therapy of Children with Autism Spectrum Disorders: Preliminary Results
Authors: Beste Ozcan, Valerio Sperati, Laura Romano, Tania Moretta, Simone Scaffaro, Noemi Faedda, Federica Giovannone, Carla Sogos, Vincenzo Guidetti, Gianluca Baldassarre
Abstract:
+me is an experimental interactive toy with the appearance of a soft, pillow-like panda. Its shape and consistency are designed to arouse emotional attachment in young children: a child can wear it around his/her neck and treat it as a companion (i.e., a transitional object). When caressed on its paws or head, the panda emits appealing, interesting outputs such as colored lights or amusing sounds, thanks to embedded electronics. Such sensory patterns can be modified through a wirelessly connected tablet: by this means, an adult caregiver can adapt +me's responses to a child's reactions or requests, for example, changing the light hue or the type of sound. The toy's control is therefore shared, as it depends on both the child (who handles the panda) and the adult (who manages the tablet and mediates the sensory input-output contingencies). These features make +me a potential tool for therapy with children with Neurodevelopmental Disorders (ND) characterized by impairments in the social area, like Autism Spectrum Disorders (ASD) and Language Disorders (LD): as a proposal, the toy could be used together with a therapist in rehabilitative play activities aimed at encouraging simple social interactions and reinforcing basic relational and communication skills. +me was tested in two pilot experiments, the first involving 15 Typically Developed (TD) children aged 8-34 months, the second involving 7 children with ASD and 7 with LD, aged 30-48 months. In both studies, during a one-to-one, ten-minute activity, a researcher/caregiver plays with the panda and encourages the child to do the same. The purpose of both studies was to ascertain the general acceptability of the device as an interesting toy, that is, an object able to capture the child's attention and to maintain a high motivation to interact with it and with the adult. Behavioral indexes for estimating the interplay between the child, +me, and the caregiver were rated from the video recordings of the experimental sessions. Preliminary results show that, on average, participants from the three groups exhibit good engagement: they touch, caress, and explore the panda and show enjoyment when they manage to trigger the luminous and sound responses. During the experiments, children tend to imitate the caregiver's actions on +me, often looking (and smiling) at him/her. Interesting behavioral differences between the TD, ASD, and LD groups were scored: for example, ASD participants produce fewer smiles, both at the panda and at the caregiver, than the TD group, while LD scores stand between those of the ASD and TD subjects. These preliminary observations suggest that the interactive toy +me is able to raise and maintain the interest of toddlers and can therefore reasonably be used as a supporting tool during therapy, to stimulate pivotal social skills such as imitation, turn-taking, eye contact, and social smiles. Interestingly, the young age of the participants, along with the behavioral differences between groups, seems to suggest a further potential use of the device: a tool for early differential diagnosis (the average age of a child
Keywords: autism spectrum disorders, interactive toy, social interaction, therapy, transitional wearable companion
Procedia PDF Downloads 124
216 A Generalized Framework for Adaptive Machine Learning Deployments in Algorithmic Trading
Authors: Robert Caulk
Abstract:
A generalized framework for adaptive machine learning deployments in algorithmic trading is introduced, tested, and released as open-source code. The presented software aims to test the hypothesis that recent data contain enough information to form a probabilistically favorable short-term price prediction. Further, the framework contains various adaptive machine learning techniques that are geared toward generating profit during strong trends and minimizing losses during trend changes. Results demonstrate that this adaptive machine learning approach is capable of capturing trends and generating profit. The presentation also discusses the importance of defining the parameter space associated with the dynamic training data set and using the parameter space to identify and remove outliers from prediction data points. Meanwhile, the generalized architecture enables common users to exploit the powerful machinery while focusing on high-level feature engineering and model testing. The presentation also highlights common strengths and weaknesses associated with the presented technique and presents a broad range of well-tested starting points for feature set construction, target setting, and statistical methods for enforcing risk management and maintaining probabilistically favorable entry and exit points. The presentation also describes the end-to-end data processing tools associated with FreqAI, including automatic data fetching, data aggregation, feature engineering, safe and robust data pre-processing, outlier detection, custom machine learning and statistical tools, data post-processing, adaptive-training backtest emulation, and deployment of adaptive training in live environments. Finally, the generalized user interface is also discussed in the presentation. Feature engineering is simplified so that users can seed their feature sets with common indicator libraries (e.g., TA-Lib, pandas-ta). The user also feeds data expansion parameters to fill out a large feature set for the model, which can contain as many as 10,000+ features. The presentation describes the various object-oriented programming techniques employed to make FreqAI agnostic to third-party libraries and external data sources. In other words, the back-end is constructed in such a way that users can leverage a broad range of common regression libraries (CatBoost, LightGBM, scikit-learn, etc.) as well as common neural network libraries (TensorFlow, PyTorch) without worrying about the logistical complexities associated with data handling and API interactions. The presentation finishes by drawing conclusions about the most important parameters associated with a live deployment of the adaptive learning framework and provides the road map for future development in FreqAI.
Keywords: machine learning, market trend detection, open-source, adaptive learning, parameter space exploration
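A minimal sketch of the adaptive idea described above - a sliding training window, periodic retraining, and crude outlier handling in feature space - on a synthetic price series; this is not the FreqAI API, and the features, window lengths, and model choice are arbitrary illustrations:

    import numpy as np
    import pandas as pd
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.preprocessing import StandardScaler

    # Synthetic price series and simple hand-rolled features (stand-ins for indicator-library features)
    rng = np.random.default_rng(6)
    price = pd.Series(100 + np.cumsum(rng.normal(0, 1, 3000)))
    feats = pd.DataFrame({
        "ret_1": price.pct_change(),
        "ret_12": price.pct_change(12),
        "vol_24": price.pct_change().rolling(24).std(),
    })
    target = price.pct_change().shift(-1)                 # next-period return to predict
    data = pd.concat([feats, target.rename("y")], axis=1).dropna()

    window, retrain_every = 1000, 100
    preds = []
    for start in range(0, len(data) - window - retrain_every, retrain_every):
        train = data.iloc[start:start + window]
        test = data.iloc[start + window:start + window + retrain_every]
        scaler = StandardScaler().fit(train[feats.columns])
        Xtr = scaler.transform(train[feats.columns])
        Xte = np.clip(scaler.transform(test[feats.columns]), -3, 3)  # clip points outside the training parameter space
        model = GradientBoostingRegressor(n_estimators=100, max_depth=3)
        model.fit(Xtr, train["y"])                        # periodic retraining on the most recent window
        preds.append(pd.Series(model.predict(Xte), index=test.index))

    predictions = pd.concat(preds)
    print(predictions.describe())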
Procedia PDF Downloads 89215 Satisfaction Among Preclinical Medical Students with Low-Fidelity Simulation-Based Learning
Authors: Shilpa Murthy, Hazlina Binti Abu Bakar, Juliet Mathew, Chandrashekhar Thummala Hlly Sreerama Reddy, Pathiyil Ravi Shankar
Abstract:
Simulation is defined as a technique that replaces or expands real experiences with guided experiences that interactively imitate real-world processes or systems. Simulation enables learners to train in a safe and non-threatening environment. For decades, simulation has been considered an integral part of clinical teaching and learning strategy in medical education. Several types of simulation are used in medical education and the clinical environment, including full-body mannequins, task trainers, standardized simulated patients, virtual or computer-generated simulation, and hybrid simulation, all of which can be used to facilitate learning. Simulation allows healthcare practitioners to acquire skills and experience while safeguarding patient safety. The recent COVID pandemic has also led to an increase in simulation use, as there were limitations on medical student placements in hospitals and clinics. The learning is tailored to the educational needs of students to make the learning experience more valuable. Simulation in the pre-clinical years faces challenges with resource constraints, effective curricular integration, student engagement and motivation, and evidence of educational impact, to mention a few. As instructors, we may rely more on simulation for pre-clinical students, while the students' confidence levels and perceived competence remain to be evaluated. Our research question was whether the implementation of simulation-based learning positively influences preclinical medical students' confidence levels and perceived competence. This study was done to align the teaching activities with the students' learning experience, to introduce more low-fidelity simulation-based teaching sessions in the pre-clinical years, and to obtain students' input into curriculum development as part of inclusivity. The study was carried out at International Medical University, involving pre-clinical year (Medical) students who started with low-fidelity simulation-based medical education from their first semester and were gradually introduced to medium fidelity as well. The Student Satisfaction and Self-Confidence in Learning Scale questionnaire from the National League for Nursing was employed to collect the responses. The internal consistency reliability of the survey items was tested with Cronbach's alpha using an Excel file. IBM SPSS for Windows version 28.0 was used to analyze the data. Spearman's rank correlation was used to analyze the correlation between students' satisfaction and self-confidence in learning. The significance level was set at a p value of less than 0.05. The results from this study have prompted the researchers to undertake a larger-scale evaluation, which is currently underway. The current results show that 70% of students agreed that the teaching methods used in the simulation were helpful and effective. The sessions depend on the learning materials provided and on how the facilitators engage the students and make the session more enjoyable. The feedback highlighted the following areas to focus on while designing simulations for pre-clinical students: quality learning materials, an interactive environment, motivating content, the skills and knowledge of the facilitator, and effective feedback.Keywords: low-fidelity simulation, pre-clinical simulation, students satisfaction, self-confidence
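Editor's note: the two statistics named above are standard and easy to reproduce. The short Python sketch below shows Cronbach's alpha for internal consistency and Spearman's rank correlation between the satisfaction and self-confidence sub-scales; the column layout is assumed, and the original analysis was done in Excel and SPSS rather than Python.

```python
# Hedged sketch: Cronbach's alpha and Spearman's rho for Likert-scale survey
# data. The data layout is illustrative; the study itself used Excel and SPSS.
import pandas as pd
from scipy.stats import spearmanr

def cronbach_alpha(items: pd.DataFrame) -> float:
    """items: one column per questionnaire item, one row per respondent."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Assumed usage with two sub-scales of the NLN questionnaire:
# alpha_sat = cronbach_alpha(satisfaction_items)
# rho, p_value = spearmanr(satisfaction_items.mean(axis=1),
#                          confidence_items.mean(axis=1))
# print(f"alpha={alpha_sat:.2f}, rho={rho:.2f}, p={p_value:.3f}")
```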
Procedia PDF Downloads 78214 Lean Comic GAN (LC-GAN): a Light-Weight GAN Architecture Leveraging Factorized Convolution and Teacher Forcing Distillation Style Loss Aimed to Capture Two Dimensional Animated Filtered Still Shots Using Mobile Phone Camera and Edge Devices
Authors: Kaustav Mukherjee
Abstract:
In this paper we propose a Neural Style Transfer solution for which we have created a lightweight separable-convolution-kernel-based GAN architecture (SC-GAN) that will be very useful for designing filters for mobile phone cameras and edge devices, converting any image into the 2D animated comic style of movies like HEMAN, SUPERMAN, and JUNGLE-BOOK. This will help 2D animation artists create new characters from images of real people without spending endless hours manually drawing each and every pose of a cartoon. It can even be used to create scenes from real-life images. This will greatly reduce the turnaround time for making 2D animated movies and decrease cost in terms of manpower and time. In addition, being extremely lightweight, it can be used as a camera filter capable of taking comic-style shots with a mobile phone camera or edge-device cameras such as the Raspberry Pi 4 or NVIDIA Jetson NANO. Existing methods like CartoonGAN, with a model size close to 170 MB, are too heavyweight for mobile phones and edge devices due to their scarcity of resources. Compared to the current state of the art, our proposed method has a total model size of 31 MB, which clearly makes it ideal and ultra-efficient for designing camera filters on low-resource devices like mobile phones, tablets, and edge devices running an OS or RTOS. Owing to the use of high-resolution input and a bigger convolution kernel size, it produces richer-resolution comic-style pictures with six times fewer parameters and with just 25 extra epochs, trained on a dataset of fewer than 1,000 images, which breaks the myth that all GANs need a mammoth amount of data. Our network reduces the density of the GAN architecture by using depthwise separable convolution, which performs the convolution operation on each of the RGB channels separately; we then use a pointwise convolution to bring the network back to the required channel number using a 1-by-1 kernel. This reduces the number of parameters substantially and makes it extremely lightweight and suitable for mobile phones and edge devices. The architecture presented in this paper makes use of parameterised batch normalization (Goodfellow et al., Deep Learning, "Optimization for Training Deep Models", p. 320), which lets the network exploit the advantages of batch norm for easier training while maintaining non-linear feature capture through the learnable parameters.Keywords: comic stylisation from camera image using GAN, creating 2D animated movie style custom stickers from images, depth-wise separable convolutional neural network for light-weight GAN architecture for EDGE devices, GAN architecture for 2D animated cartoonizing neural style, neural style transfer for edge, model distillation, perceptual loss
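Editor's note: the paper's exact generator is not reproduced here; the sketch below shows a generic depthwise-separable convolution block in PyTorch of the kind described above (a per-channel depthwise convolution followed by a 1×1 pointwise convolution), which is the main source of the parameter savings claimed. Channel sizes and normalization choices are placeholders, not the authors' layer configuration.

```python
# Hedged sketch of the depthwise separable convolution described above:
# a grouped (per-channel) convolution followed by a 1x1 pointwise convolution.
# Layer sizes are placeholders, not the paper's exact generator.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3, stride: int = 1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size, stride,
                                   padding=kernel_size // 2, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.norm = nn.BatchNorm2d(out_ch)   # batch norm with learnable scale/shift parameters
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.norm(self.pointwise(self.depthwise(x))))

def n_params(m: nn.Module) -> int:
    return sum(p.numel() for p in m.parameters())

# Parameter comparison against a standard convolution of the same shape:
standard = nn.Conv2d(128, 128, 3, padding=1, bias=False)
separable = DepthwiseSeparableConv(128, 128, 3)
print(n_params(standard), n_params(separable))   # the separable block is roughly 8x smaller
```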
Procedia PDF Downloads 133213 Application of Harris Hawks Optimization Metaheuristic Algorithm and Random Forest Machine Learning Method for Long-Term Production Scheduling Problem under Uncertainty in Open-Pit Mines
Authors: Kamyar Tolouei, Ehsan Moosavi
Abstract:
In open-pit mines, the long-term production scheduling optimization problem (LTPSOP) is a complicated problem that involves constraints, large datasets, and uncertainties. Uncertainty in the output is caused by several geological, economic, or technical factors. Due to its dimensions and NP-hard nature, it is usually difficult to find an ideal solution to the LTPSOP. The optimal schedule generally constrains the ore, metal, and waste tonnages, average grades, and cash flows of each period. Past decades have witnessed important advances in long-term production scheduling and optimization algorithms as researchers have become highly cognizant of the issue. Even so, the LTPSOP cannot be considered a well-solved problem. Traditional production scheduling methods in open-pit mines apply an estimated orebody model to produce optimal schedules. The smoothing effect of some geostatistical estimation procedures causes most of the mine schedules and production predictions to be unrealistic and imperfect. With the expansion of simulation procedures, the risks from grade uncertainty in ore reserves can be evaluated and organized through a set of equally probable orebody realizations. In this paper, to incorporate grade uncertainty into the strategic mine schedule, a stochastic integer programming framework is presented for the LTPSOP. The objective function of the model is to maximize the net present value and simultaneously minimize the risk of deviation from the production targets under grade uncertainty, while satisfying all technical constraints and operational requirements. Instead of applying one estimated orebody model as input to optimize the production schedule, a set of equally probable orebody realizations is applied to capture grade uncertainty in the strategic mine schedule and to produce a more profitable and risk-based production schedule. A mixture of metaheuristic procedures and mathematical methods paves the way to an appropriate solution. This paper introduces a hybrid model combining the augmented Lagrangian relaxation (ALR) method with a metaheuristic algorithm, the Harris Hawks optimization (HHO), to solve the LTPSOP under grade uncertainty. In this study, the HHO is employed to update the Lagrange coefficients. In addition, a machine learning method called Random Forest is applied to estimate gold grade in a mineral deposit. The Monte Carlo method is used as the simulation method, with 20 realizations. The results indicate that the proposed versions are considerably improved in comparison with the traditional methods. The outcomes were also compared with the ALR-genetic algorithm and ALR-sub-gradient methods. To demonstrate the applicability of the model, a case study on an open-pit gold mining operation is implemented. The framework displays the capability to minimize risk and to improve the expected net present value and financial profitability for the LTPSOP. Within the hybrid model framework, geological risk under grade uncertainty could be controlled more effectively than with the traditional procedure.Keywords: grade uncertainty, metaheuristic algorithms, open-pit mine, production scheduling optimization
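Editor's note: two ingredients of the workflow above lend themselves to a compact sketch: Random Forest grade estimation from drillhole samples and a basic augmented-Lagrangian multiplier update of the kind the HHO algorithm is used to tune. Everything below (data, penalty parameters, the toy block values, the candidate schedule) is an illustrative assumption, not the paper's formulation.

```python
# Hedged sketch of two components described above: Random Forest grade
# estimation and a single augmented-Lagrangian multiplier update. All data,
# penalty parameters, and the scheduling objective are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# --- 1. Estimate gold grade on a block model from sparse drillhole samples ---
drill_xyz = rng.uniform(0, 500, size=(300, 3))           # sampled locations (m)
drill_grade = np.exp(rng.normal(-1.0, 0.6, size=300))     # synthetic grades (g/t)
grade_model = RandomForestRegressor(n_estimators=200, random_state=0)
grade_model.fit(drill_xyz, drill_grade)

block_xyz = rng.uniform(0, 500, size=(5000, 3))           # block-model centroids
block_grade = grade_model.predict(block_xyz)

# --- 2. One augmented-Lagrangian step for a relaxed capacity constraint ------
def augmented_objective(schedule, lam, rho, capacity, tonnage, value):
    violation = max(0.0, tonnage @ schedule - capacity)
    return value @ schedule - lam * violation - 0.5 * rho * violation ** 2

lam, rho, capacity = 0.0, 1e-3, 1.5e6
tonnage = rng.uniform(1e3, 5e3, size=5000)
value = block_grade * 40.0 - 15.0                          # toy per-block NPV

schedule = (value > 0).astype(float)                       # candidate mined/not-mined vector
violation = max(0.0, tonnage @ schedule - capacity)
lam += rho * violation                                     # multiplier update driven by violation
print(f"objective={augmented_objective(schedule, lam, rho, capacity, tonnage, value):.3e}")
```

In the paper's hybrid scheme, the search over candidate schedules is what the HHO metaheuristic performs between multiplier updates; the single hand-picked `schedule` above only stands in for that search.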
Procedia PDF Downloads 106212 Ambient Factors in the Perception of Crowding in Public Transport
Authors: John Zacharias, Bin Wang
Abstract:
Travel comfort is increasingly seen as crucial to effecting the switch from private motorized modes to public transit. Surveys suggest that travel comfort is closely related to perceived crowding, that may involve lack of available seating, difficulty entering and exiting, jostling and other physical contacts with strangers. As found in studies on environmental stress, other factors may moderate perceptions of crowding–in this case, we hypothesize that the ambient environment may play a significant role. Travel comfort was measured by applying a structured survey to randomly selected passengers (n=369) on 3 lines of the Beijing metro on workdays. Respondents were standing with all seats occupied and with car occupancy at 14 levels. A second research assistant filmed the metro car while passengers were interviewed, to obtain the total number of passengers. Metro lines 4, 6 and 10 were selected that travel through the central city north-south, east-west and circumferentially. Respondents evaluated the following factors: crowding, noise, smell, air quality, temperature, illumination, vibration and perceived safety as they experienced them at the time of interview, and then were asked to rank these 8 factors according to their importance for their travel comfort. Evaluations were semantic differentials on a 7-point scale from highly unsatisfactory (-3) to highly satisfactory (+3). The control variables included age, sex, annual income and trip purpose. Crowding was assessed most negatively, with 41% of the scores between -3 and -2. Noise and air quality were also assessed negatively, with two-thirds of the evaluations below 0. Illumination was assessed most positively, followed by crime, vibration and temperature, all scoring at indifference (0) or slightly positive. Perception of crowding was linearly and positively related to the number of passengers in the car. Linear regression tested the impact of ambient environmental factors on perception of crowding. Noise intensity accounted for more than the actual number of individuals in the car in the perception of crowding, with smell also contributing. Other variables do not interact with the crowding variable although the evaluations are distinct. In all, only one-third of the perception of crowding (R2=.154) is explained by the number of people, with the other ambient environmental variables accounting for two-thirds of the variance (R2=.316). However, when ranking the factors by their importance to travel comfort, perceived crowding made up 69% of the first rank, followed by noise at 11%. At rank 2, smell dominates (25%), followed by noise and air quality (17%). Commuting to work induces significantly lower evaluations of travel comfort with shopping the most positive. Clearly, travel comfort is particularly important to commuters. Moreover, their perception of crowding while travelling on metro is highly conditioned by the ambient environment in the metro car. Focussing attention on the ambient environmental conditions of the metro is an effective way to address the primary concerns of travellers with overcrowding. In general, the strongly held opinions on travel comfort require more attention in the effort to induce ridership in public transit.Keywords: ambient environment, mass rail transit, public transit, travel comfort
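Editor's note: the hierarchical regression logic reported above (passenger count alone, then passenger count plus ambient ratings, compared by R²) can be written in a few lines. The sketch below uses statsmodels, and the column names are assumed from the survey description rather than taken from the authors' dataset.

```python
# Hedged sketch of the hierarchical regression described above:
# crowding ~ passenger count, then crowding ~ passenger count + ambient ratings.
# Column names are assumed, not the authors' variable names.
import statsmodels.formula.api as smf

def r2_increment(df):
    """df: one row per respondent, with the perceived-crowding score and ratings."""
    base = smf.ols("crowding ~ passengers", data=df).fit()
    full = smf.ols("crowding ~ passengers + noise + smell + air_quality + "
                   "temperature + illumination + vibration + safety", data=df).fit()
    return base.rsquared, full.rsquared, full.rsquared - base.rsquared
```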
Procedia PDF Downloads 264211 Seismic Response of Reinforced Concrete Buildings: Field Challenges and Simplified Code Formulas
Authors: Michel Soto Chalhoub
Abstract:
Building code-related literature provides recommendations on normalizing approaches to the calculation of the dynamic properties of structures. Most building codes make a distinction among types of structural systems, construction material, and configuration through a numerical coefficient in the expression for the fundamental period. The period is then used in normalized response spectra to compute base shear. The typical parameter used in simplified code formulas for the fundamental period is overall building height raised to a power determined from analytical and experimental results. However, reinforced concrete buildings which constitute the majority of built space in less developed countries pose additional challenges to the ones built with homogeneous material such as steel, or with concrete under stricter quality control. In the present paper, the particularities of reinforced concrete buildings are explored and related to current methods of equivalent static analysis. A comparative study is presented between the Uniform Building Code, commonly used for buildings within and outside the USA, and data from the Middle East used to model 151 reinforced concrete buildings of varying number of bays, number of floors, overall building height, and individual story height. The fundamental period was calculated using eigenvalue matrix computation. The results were also used in a separate regression analysis where the computed period serves as dependent variable, while five building properties serve as independent variables. The statistical analysis shed light on important parameters that simplified code formulas need to account for including individual story height, overall building height, floor plan, number of bays, and concrete properties. Such inclusions are important for reinforced concrete buildings of special conditions due to the level of concrete damage, aging, or materials quality control during construction. Overall results of the present analysis show that simplified code formulas for fundamental period and base shear may be applied but they require revisions to account for multiple parameters. The conclusion above is confirmed by the analytical model where fundamental periods were computed using numerical techniques and eigenvalue solutions. This recommendation is particularly relevant to code upgrades in less developed countries where it is customary to adopt, and mildly adapt international codes. We also note the necessity of further research using empirical data from buildings in Lebanon that were subjected to severe damage due to impulse loading or accelerated aging. However, we excluded this study from the present paper and left it for future research as it has its own peculiarities and requires a different type of analysis.Keywords: seismic behaviour, reinforced concrete, simplified code formulas, equivalent static analysis, base shear, response spectra
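Editor's note: the contrast discussed above is between a simplified code expression of the form T = Ct·H^x and a fundamental period obtained from an eigenvalue solution. The sketch below illustrates both for a lumped-mass shear-building model; the story mass, story stiffness, and the Ct/x coefficients are placeholder values, not the study's 151-building dataset.

```python
# Hedged sketch: fundamental period from (a) a simplified code-type formula and
# (b) a generalized eigenvalue solution of a lumped-mass shear building.
# Story stiffness, mass, and the Ct/x coefficients are placeholders only.
import numpy as np
from scipy.linalg import eigh

n_floors, story_height = 10, 3.0            # stories, m
H = n_floors * story_height                 # overall building height, m
m = np.full(n_floors, 250e3)                 # floor mass, kg
k = np.full(n_floors, 180e6)                 # story stiffness, N/m

# (a) simplified code-type expression, T = Ct * H^x (coefficients are placeholders)
T_code = 0.049 * H ** 0.75

# (b) eigenvalue solution of the shear-building model: K*phi = w^2 * M*phi
M = np.diag(m)
K = np.zeros((n_floors, n_floors))
for i in range(n_floors):
    K[i, i] = k[i] + (k[i + 1] if i + 1 < n_floors else 0.0)
    if i + 1 < n_floors:
        K[i, i + 1] = K[i + 1, i] = -k[i + 1]
w2, _ = eigh(K, M)                           # generalized eigenvalues, ascending
T_eigen = 2 * np.pi / np.sqrt(w2[0])         # first-mode period

print(f"T_code = {T_code:.2f} s, T_eigen = {T_eigen:.2f} s")
```

The gap between the two values for a given parameter set is exactly the kind of discrepancy the abstract argues simplified formulas should be revised to capture, by bringing in story height, plan configuration, number of bays, and concrete condition.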
Procedia PDF Downloads 232210 Using the Theory of Reasoned Action and Parental Mediation Theory to Examine Cyberbullying Perpetration among Children and Adolescents
Authors: Shirley S. Ho
Abstract:
The advancement and development of social media have inadvertently brought about a new form of bullying – cyberbullying – that transcends physical boundaries of space. Although extensive research has been conducted in the field of cyberbullying, most of these studies have taken an overwhelmingly empirical angle. Theories guiding cyberbullying research are few. Furthermore, very few studies have explored the association between parental mediation and cyberbullying, with the majority of existing studies focusing on cyberbullying victimization rather than perpetration. Therefore, the present study investigates cyberbullying perpetration from a theoretical angle, with a focus on the Theory of Reasoned Action and the Parental Mediation Theory. More specifically, this study examines the direct effects of attitude, subjective norms, descriptive norms, injunctive norms, active mediation, and restrictive mediation on cyberbullying perpetration on social media among children and adolescents in Singapore. Furthermore, the moderating role of age on the relationship between parental mediation and cyberbullying perpetration on social media is examined. A self-administered paper-and-pencil, nationally representative survey was conducted. Multi-stage cluster random sampling was used to ensure that schools from all four regions of Singapore (North, South, East, and West) were equally represented in the sample used for the survey. In all, 607 upper primary school children (i.e., Primary 4 to 6 students) and 782 secondary school adolescents participated in our survey. The total average response rate for student participation was 69.6%. An ordinary least squares hierarchical regression analysis was conducted to test the hypotheses and research questions. The results revealed that attitude and subjective norms were positively associated with cyberbullying perpetration on social media. Descriptive norms and injunctive norms were not found to be significantly associated with cyberbullying perpetration. The results also showed that both parental mediation strategies were negatively associated with cyberbullying perpetration on social media. Age was a significant moderator of the relationship between both parental mediation strategies and cyberbullying perpetration. The negative relationship between active mediation and cyberbullying perpetration was found to be greater for children than for adolescents. Children who received high restrictive parental mediation were less likely to perform cyberbullying behaviors, while adolescents who received high restrictive parental mediation were more likely to engage in cyberbullying perpetration. The study reveals that parents should apply active mediation and restrictive mediation in different ways for children and adolescents when trying to prevent cyberbullying perpetration. The effectiveness of active parental mediation for reducing cyberbullying perpetration was greater for children than for adolescents. Younger children were found to respond more positively to restrictive parental mediation strategies, but in the case of adolescents, overly restrictive control was found to increase cyberbullying perpetration. Adolescents exhibited fewer cyberbullying behaviors under low restrictive strategies. The findings highlight that the Theory of Reasoned Action and the Parental Mediation Theory are promising frameworks to apply in the examination of cyberbullying perpetration. 
The findings that different parental mediation strategies had differing effectiveness, based on the children’s age, bring about several practical implications that may benefit educators and parents when addressing their children’s online risk.Keywords: cyberbullying perpetration, theory of reasoned action, parental mediation, social media, Singapore
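Editor's note: the moderation effects reported above amount to mediation-strategy × age interaction terms in an OLS model. A short hedged sketch using statsmodels' formula interface is given below; the variable names are assumed from the survey description.

```python
# Hedged sketch of testing age as a moderator of parental mediation effects,
# i.e., mediation-strategy x age interaction terms in an OLS regression.
# Variable names are assumed, not the authors' coding scheme.
import statsmodels.formula.api as smf

def test_moderation(survey_df):
    """survey_df: one row per respondent, columns named as assumed below."""
    model = smf.ols(
        "cyberbullying ~ attitude + subjective_norms + descriptive_norms + "
        "injunctive_norms + active_mediation * age_group + restrictive_mediation * age_group",
        data=survey_df,
    ).fit()
    return model.params, model.pvalues   # significant interaction terms indicate moderation
```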
Procedia PDF Downloads 254209 The Language of COVID-19: Psychological Effects of the Label 'Essential Worker' on Spanish-Speaking Adults
Authors: Natalia Alvarado, Myldred Hernandez-Gonzalez, Mary Laird, Madeline Phillips, Elizabeth Miller, Luis Mendez, Teresa Satterfield Linares
Abstract:
Objectives: Focusing on the reported levels of depressive symptoms among Hispanic individuals in the U.S. during the ongoing COVID-19 pandemic, we analyze the psychological effects of being labeled an ‘essential worker/trabajador(a) esencial.’ We situate this attribute within the complex context of how an individual’s mental health is linked to work status and his/her community’s attitude toward that status. Method: 336 Spanish-speaking adults (Mage = 34.90; SD = 11.00; 46% female) living in the U.S. participated in a mixed-method study. Participants completed a self-report Spanish-language survey consisting of COVID-19 prompts (e.g., Soy un trabajador esencial durante la pandemia. I am an ‘essential worker’ during the pandemic), civic engagement scale (CES) attitudes (e.g., Me siento responsable de mi comunidad. I feel responsible for my community) and behaviors (e.g., Ayudo a los miembros de mi comunidad. I help members of my community), and the Center for Epidemiological Studies Depression Scale (e.g., Me sentía deprimido/a. I felt depressed). The survey was conducted several months into the pandemic and before vaccine distribution. Results: Regression analyses show that being labeled an essential worker was correlated with CES attitudes (b = .28, p < .001) and higher CES behaviors (b = .32, p < .001). Essential worker status was also associated with higher levels of depressive symptoms (b = .17, p < .05). In addition, we found that CES attitudes and CES behaviors were related to higher levels of depressive symptoms (b = .11, p < .05; b = .22, p < .001, respectively). These findings suggest that those who are on the frontlines during the COVID-19 pandemic suffer higher levels of depressive symptoms, despite their affirming community attitudes and behaviors. Discussion: Hispanics/Latinxs make up 53% of the high-proximity employees who must work in person and in close contact with others; this is the highest rate of any racial or ethnic category. Moreover, 31% of Hispanics are classified as essential workers. Our outcomes show that those labeled as trabajadores esenciales convey attitudes of remaining strong and resilient for COVID-19 victims. They also express community attitudes and behaviors reflecting a sense of responsibility to continue working to help others during these unprecedented times. However, we also find that the pressure of maintaining basic needs for others exacerbates mental health challenges and stressors, as many essential workers are anxious and stressed about their physical and economic security. As a result, community attitudes do not protect against depressive symptoms, as Hispanic essential workers fail to balance everyone’s needs, including their own (e.g., physical exhaustion and psychological distress). We conclude with a discussion of alternatives to the phrase ‘essential worker’ and of incremental steps that can be taken to address pandemic-related mental health issues targeting US Hispanic workers.Keywords: COVID-19, essential worker, mental health, race and ethnicity
Procedia PDF Downloads 129208 Optimization of Biomass Production and Lipid Formation from Chlorococcum sp. Cultivation on Dairy and Paper-Pulp Wastewater
Authors: Emmanuel C. Ngerem
Abstract:
The ever-increasing depletion of the dominant global form of energy (fossil fuels) calls for the development of sustainable and green alternative energy sources such as bioethanol, biohydrogen, and biodiesel. The production of the major biofuels relies on biomass feedstocks that are mainly derived from edible food crops and some inedible plants. One suitable feedstock with great potential as raw material for biofuel production is microalgal biomass. Despite the tremendous attributes of microalgae as a source of biofuel, their cultivation requires huge volumes of freshwater, thus posing a serious threat to commercial-scale production and utilization of algal biomass. In this study, a multi-media wastewater mixture for microalgae growth was formulated and optimized. Moreover, the obtained microalgae biomass was pre-treated for reducing-sugar recovery and was compared with previous studies on microalgae biomass pre-treatment. The formulated and optimized mixed wastewater medium for biomass and lipid accumulation was established using a simplex lattice mixture design. Based on the superposition approach of the potential results, numerical optimization was conducted, followed by the analysis of biomass concentration and lipid accumulation. Coefficients of determination (R²) of 0.91 and 0.98 were obtained for the biomass concentration and lipid accumulation models, respectively. The developed optimization model predicted optimal biomass concentration and lipid accumulation of 1.17 g/L and 0.39 g/g, respectively. It suggested a mixture of 64.69% dairy wastewater (DWW) and 35.31% paper and pulp wastewater (PWW) for biomass concentration, and 34.21% DWW and 65.79% PWW for lipid accumulation. Experimental validation generated 0.94 g/L and 0.39 g/g of biomass concentration and lipid accumulation, respectively. The obtained microalgae biomass was pre-treated, enzymatically hydrolysed, and subsequently assessed for reducing sugars. The optimization of microwave pre-treatment of Chlorococcum sp. was achieved using response surface methodology (RSM). Microwave power (100 – 700 W), pre-treatment time (1 – 7 min), and acid-liquid ratio (1 – 5%) were selected as independent variables for RSM optimization. The optimum conditions were achieved at microwave power, pre-treatment time, and acid-liquid ratio of 700 W, 7 min, and 32.33:1, respectively. These conditions provided the highest amount of reducing sugars at 10.73 g/L. Process optimization predicted reducing sugar yields of 11.14 g/L with microwave-assisted pre-treatment using 2.52% HCl for 4.06 min at 700 W. Experimental validation yielded reducing sugars of 15.67 g/L. These findings demonstrate that dairy wastewater and paper and pulp wastewater, which could otherwise pose a serious environmental nuisance, can be blended to form a suitable microalgae growth medium, consolidating the potency of microalgae as a viable feedstock for fermentable sugars. Also, the outcome of this study supports the microalgal wastewater biorefinery concept, where wastewater remediation is coupled with bioenergy production.Keywords: wastewater cultivation, mixture design, lipid, biomass, nutrient removal, microwave, Chlorococcum, raceway pond, fermentable sugar, modelling, optimization
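Editor's note: the response-surface step described above boils down to fitting a quadratic model of yield over the pre-treatment factors and locating its optimum within the design bounds. The sketch below shows that logic in Python; the function, the bounds layout, and any data fed to it are illustrative assumptions, not the study's design matrix.

```python
# Hedged sketch of the response-surface step described above: fit a quadratic
# model of reducing-sugar yield over (power, time, acid ratio) and locate the
# optimum within the design bounds. Data passed in would be experimental runs.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from scipy.optimize import minimize

def fit_rsm_and_optimize(X, y, bounds=((100, 700), (1, 7), (1, 5))):
    """X: (n, 3) array of [power W, time min, acid %]; y: reducing sugar, g/L."""
    poly = PolynomialFeatures(degree=2, include_bias=False)
    model = LinearRegression().fit(poly.fit_transform(X), y)

    def neg_yield(x):
        return -model.predict(poly.transform(x.reshape(1, -1)))[0]

    x0 = np.array([(lo + hi) / 2 for lo, hi in bounds])      # start at the design center
    res = minimize(neg_yield, x0, bounds=bounds, method="L-BFGS-B")
    return res.x, -res.fun                                    # optimal settings, predicted yield
```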
Procedia PDF Downloads 43207 Contribution of Word Decoding and Reading Fluency on Reading Comprehension in Young Typical Readers of Kannada Language
Authors: Vangmayee V. Subban, Suzan Deelan. Pinto, Somashekara Haralakatta Shivananjappa, Shwetha Prabhu, Jayashree S. Bhat
Abstract:
Introduction and Need: During the early years of schooling, instruction in schools mainly focuses on children's word decoding abilities. However, skilled readers must master all the components of reading, such as word decoding, reading fluency, and comprehension. Nevertheless, the relationship between these components during the process of learning to read is less clear. Studies conducted in alphabetic languages offer mixed findings on the relative contribution of word decoding and reading fluency to reading comprehension. However, the scenarios in alphasyllabary languages are unexplored. Aim and Objectives: The aim of the study was to explore the role of word decoding and reading fluency in reading comprehension abilities in children learning to read Kannada between the ages of 5.6 and 8.6 years. Method: In this cross-sectional study, a total of 60 typically developing children, 20 each from Grade I, Grade II, and Grade III, maintaining an equal gender ratio, in the age ranges of 5.6 to 6.6 years, 6.7 to 7.6 years, and 7.7 to 8.6 years respectively, were selected from Kannada-medium schools. The reading fluency and reading comprehension abilities of the children were assessed using grade-level passages selected from the Kannada textbooks of the children's core curriculum. Each passage carries five questions to assess reading comprehension. Pseudoword decoding skills were assessed using 40 pseudowords varying in syllable length and Akshara composition. Pseudowords were formed by interchanging the syllables within a meaningful word while maintaining the phonotactic constraints of the Kannada language. The assessment material was subjected to content validation and reliability measures before data collection on the study samples. The data were collected individually, and reading fluency was assessed as words correctly read per minute. Pseudoword decoding was scored for reading accuracy. Results: The descriptive statistics indicated that mean pseudoword reading, reading comprehension, and words accurately read per minute increased with grade. The performance of Grade III children was found to be higher, Grade I lower, and Grade II intermediate between Grade III and Grade I. The trend indicated that reading skills gradually improve across the grades. Pearson's correlation coefficient showed a moderate and highly significant (p=0.00) positive correlation between the variables, indicating the interdependency of all three components required for reading. The hierarchical regression analysis revealed that 37% of the variance in reading comprehension was explained by pseudoword decoding and was highly significant. On subsequent entry of the reading fluency measure, there was no significant change in R-square, with a change of only 3%. Therefore, pseudoword decoding emerged as the single most significant predictor of reading comprehension during the early grades of reading acquisition. Conclusion: The present study concludes that pseudoword decoding skills contribute more significantly to reading comprehension than reading fluency during the initial years of schooling in children learning to read the Kannada language.Keywords: alphasyllabary, pseudo-word decoding, reading comprehension, reading fluency
Procedia PDF Downloads 263206 Role of ASHA in Utilizing Maternal Health Care Services India, Evidences from National Rural Health Mission (NRHM)
Authors: Dolly Kumari, H. Lhungdim
Abstract:
Maternal health is one of the crucial health indicators for any country. The 5th Millennium Development Goal also emphasises the improvement of maternal health. Soon after Independence, the government of India, realizing the importance of maternal and child health care services, took steps to strengthen them in the 1st and 2nd five-year plans. In the past decade, another health indicator, life expectancy at birth, has shown remarkable improvement. However, maternal mortality remains high in India, and in some states it is observed to be much higher than the national average. The government of India invested substantial funds and initiated the National Rural Health Mission (NRHM) in 2005 to improve maternal health in the country by providing affordable and accessible health care services. The Accredited Social Health Activist (ASHA) is one of the key components of the NRHM. ASHAs are mainly women aged 25-45 years selected from the village itself and accountable for monitoring maternal health care in that village. ASHAs are trained to work as an interface between the community and the public health system. This study tries to assess the role of ASHAs in the utilization of maternal health care services and to examine the level of awareness about benefits given under the JSY scheme and the utilization of those benefits by eligible women. The study uses concurrent evaluation data from the National Rural Health Mission (NRHM), initiated by the government of India in 2005. This study is based on 78,205 currently married women from 70 different districts of India. Descriptive statistics, the chi-square test, and binary logistic regression have been used for analysis. The probability of institutional delivery increases by 2.03 times (p<0.001), while if the ASHA arranged or helped in arranging a transport facility, the probability of institutional delivery increases by 1.67 times (p<0.01) compared with when she did not arrange transport. Further, if the ASHA facilitated getting a JSY card for the pregnant woman, the probability of going for full ANC increases by 1.36 times (p<0.05) over the reference. Moreover, if the ASHA discussed institutional delivery and approached the woman to get registered, the probability of getting a TT injection is 1.88 and 1.64 times (p<0.01) higher, respectively, than if she did not. Further, the probability of benefiting from JSY schemes is 1.25 times (p<0.001) higher among women who married after 18 years of age than among those who married before 18 years; it is also 1.28 times (p<0.001) and 1.32 times (p<0.001) higher among women with 1 to 8 years of schooling and with 9 or more years of schooling, respectively, than among women who never attended school. Working women have a 1.13 times (p<0.001) higher probability of getting benefits from the JSY scheme than non-working women. Surprisingly, women belonging to the wealthiest quintile are 0.53 times (p<0.001) less likely to be aware of the JSY scheme. The results conclude that the work done by ASHAs has a great influence on maternal health care utilization in India. But the results also show that a substantial share of the population in need is still far from utilizing these services. Place of delivery is significantly influenced by the referral and transport facilities arranged by the ASHA.Keywords: institutional delivery, JSY beneficiaries, referral facility, public health
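Editor's note: the figures quoted above are odds-ratio-style multipliers from a binary logistic regression. The sketch below shows how such ratios are obtained by exponentiating the fitted coefficients; the outcome and predictor names are assumed from the survey description, not the NRHM dataset's coding.

```python
# Hedged sketch of the binary logistic regression behind the multipliers quoted
# above: exponentiated coefficients give the change in odds of institutional
# delivery. Variable names are assumed, not the NRHM concurrent-evaluation codes.
import numpy as np
import statsmodels.formula.api as smf

def asha_odds_ratios(df):
    """df: one row per woman; binary outcome and binary ASHA-contact predictors."""
    model = smf.logit(
        "institutional_delivery ~ asha_discussed_delivery + asha_arranged_transport + "
        "asha_facilitated_jsy_card + married_after_18 + years_of_schooling + working",
        data=df,
    ).fit(disp=False)
    return np.exp(model.params)   # odds ratios analogous to the 2.03 / 1.67 figures above
```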
Procedia PDF Downloads 331205 Efficacy of Deep Learning for Below-Canopy Reconstruction of Satellite and Aerial Sensing Point Clouds through Fractal Tree Symmetry
Authors: Dhanuj M. Gandikota
Abstract:
Sensor-derived three-dimensional (3D) point clouds of trees are invaluable in remote sensing analysis for the accurate measurement of key structural metrics, bio-inventory values, spatial planning/visualization, and ecological modeling. Machine learning (ML) holds the potential in addressing the restrictive tradeoffs in cost, spatial coverage, resolution, and information gain that exist in current point cloud sensing methods. Terrestrial laser scanning (TLS) remains the highest fidelity source of both canopy and below-canopy structural features, but usage is limited in both coverage and cost, requiring manual deployment to map out large, forested areas. While aerial laser scanning (ALS) remains a reliable avenue of LIDAR active remote sensing, ALS is also cost-restrictive in deployment methods. Space-borne photogrammetry from high-resolution satellite constellations is an avenue of passive remote sensing with promising viability in research for the accurate construction of vegetation 3-D point clouds. It provides both the lowest comparative cost and the largest spatial coverage across remote sensing methods. However, both space-borne photogrammetry and ALS demonstrate technical limitations in the capture of valuable below-canopy point cloud data. Looking to minimize these tradeoffs, we explored a class of powerful ML algorithms called Deep Learning (DL) that show promise in recent research on 3-D point cloud reconstruction and interpolation. Our research details the efficacy of applying these DL techniques to reconstruct accurate below-canopy point clouds from space-borne and aerial remote sensing through learned patterns of tree species fractal symmetry properties and the supplementation of locally sourced bio-inventory metrics. From our dataset, consisting of tree point clouds obtained from TLS, we deconstructed the point clouds of each tree into those that would be obtained through ALS and satellite photogrammetry of varying resolutions. We fed this ALS/satellite point cloud dataset, along with the simulated local bio-inventory metrics, into the DL point cloud reconstruction architectures to generate the full 3-D tree point clouds (the truth values are denoted by the full TLS tree point clouds containing the below-canopy information). Point cloud reconstruction accuracy was validated both through the measurement of error from the original TLS point clouds as well as the error of extraction of key structural metrics, such as crown base height, diameter above root crown, and leaf/wood volume. The results of this research additionally demonstrate the supplemental performance gain of using minimum locally sourced bio-inventory metric information as an input in ML systems to reach specified accuracy thresholds of tree point cloud reconstruction. This research provides insight into methods for the rapid, cost-effective, and accurate construction of below-canopy tree 3-D point clouds, as well as the supported potential of ML and DL to learn complex, unmodeled patterns of fractal tree growth symmetry.Keywords: deep learning, machine learning, satellite, photogrammetry, aerial laser scanning, terrestrial laser scanning, point cloud, fractal symmetry
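Editor's note: the abstract does not specify its reconstruction architecture. The sketch below shows a minimal PointNet-style encoder-decoder of the general kind used for point-cloud completion, with the locally sourced bio-inventory metrics concatenated to the global feature before decoding; all layer sizes, the metric vector, and the output point count are placeholders.

```python
# Hedged sketch of a minimal PointNet-style completion network of the general
# kind described above: encode the sparse ALS/satellite cloud into a global
# feature, concatenate bio-inventory metrics, decode a dense below-canopy cloud.
# Layer sizes and the metric vector are placeholders, not the authors' model.
import torch
import torch.nn as nn

class TreeCompletionNet(nn.Module):
    def __init__(self, n_out_points: int = 4096, n_metrics: int = 4):
        super().__init__()
        self.encoder = nn.Sequential(                  # shared per-point MLP
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 256), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(256 + n_metrics, 512), nn.ReLU(),
            nn.Linear(512, 1024), nn.ReLU(),
            nn.Linear(1024, n_out_points * 3),
        )
        self.n_out_points = n_out_points

    def forward(self, sparse_xyz: torch.Tensor, metrics: torch.Tensor) -> torch.Tensor:
        # sparse_xyz: (B, N, 3) ALS/satellite points; metrics: (B, n_metrics), e.g. DBH, height
        per_point = self.encoder(sparse_xyz)           # (B, N, 256) per-point features
        global_feat = per_point.max(dim=1).values      # order-invariant pooling over points
        dense = self.decoder(torch.cat([global_feat, metrics], dim=1))
        return dense.view(-1, self.n_out_points, 3)    # reconstructed full tree cloud

# Training would minimize a set-to-set loss (e.g. Chamfer distance) against the
# full TLS reference cloud, which serves as the truth value in the study above.
```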
Procedia PDF Downloads 104204 Challenging Role of Talent Management, Career Development and Compensation Management toward Employee Retention and Organizational Performance with Mediating Effect of Employee Motivation in Service Sector of Pakistan
Authors: Muhammad Younas, Sidra Sawati, M. Razzaq Athar
Abstract:
Organizational development history reveals that it has always been a challenge to identify and fathom the role of talent management, career development, and compensation management in employee retention and organizational performance. Organizations strive hard to measure the impact of all those factors which affect employee retention and organizational performance. Researchers have worked a great deal to understand the relationship of the independent variables, i.e., talent management, career development, and compensation management, with the dependent variables, i.e., employee retention and organizational performance. Employees equipped with the latest skills and long-lasting loyalty play a significant role in the successful achievement of the short-term as well as long-term goals of organizations. Retention of valuable and resourceful employees for a longer time is equally essential for meeting the set goals. Organizations which spend a reasonable share of their resources on measures that help retain their employees through talent management and satisfactory career development always enjoy a competitive edge over their competitors. Human resources are regarded as among the most precious and most difficult resources to manage. They have their own needs and requirements, and become easy prey to monotony when career development is lacking. The wants and aspirations of this resource are seldom met completely but can be managed through career development and compensation management. In this era of competition, organizations have to take viable steps to manage their resources, especially human resources. Top management and managers keep working toward an amenable solution to address the challenges relating to career development and compensation management, as their ultimate goal is to ensure organizational performance at an optimum level. The current study was conducted to examine the impact of talent management, career development, and compensation management on employee retention and organizational performance, with the mediating effect of employee motivation, in the service sector of Pakistan. The current study is based on the Resource Based View (RBV) and Ability Motivation Opportunity (AMO) theories. It explains that by increasing internal resources we can manage employee talent, career development through compensation management, and employee motivation more effectively. This results in the effective execution of HRM practices for employee retention, enabling an organization to achieve and sustain competitive advantage through optimal performance. Data collection was made through a structured questionnaire based upon adopted instruments after testing for reliability and validity. A total of 300 employees of 30 firms in the service sector of Pakistan were sampled through a non-probability sampling technique. Regression analysis revealed that talent management, career development, and compensation management have a significant positive impact on employee retention and perceived organizational performance. The results further showed that employee motivation has a significant mediating effect on employee retention and organizational performance. The interpretation of the findings and limitations, as well as theoretical and managerial implications, are also discussed.Keywords: career development, compensation management, employee retention, organizational performance, talent management
Procedia PDF Downloads 320203 Practice Based Approach to the Development of Family Medicine Residents’ Educational Environment
Authors: Lazzat M. Zhamaliyeva, Nurgul A. Abenova, Gauhar S. Dilmagambetova, Ziyash Zh. Tanbetova, Moldir B. Ahmetzhanova, Tatyana P. Ostretcova, Aliya A. Yegemberdiyeva
Abstract:
Introduction: There are many reasons for the weak training of family doctors in Kazakhstan: the unified national educational program is not focused on competencies, the role of a general practitioner (GP) is not clear, poor funding for the health care and education system, outdated teaching and assessment methods, inefficient management. We highlight two issues in particular. Firstly, academic teachers of family medicine (FM) in Kazakhstan do not practice as family doctors; most of them are narrow specialists (pediatricians, therapists, surgeons, etc.); they usually hold one-time consultations; clinical mentors from practical healthcare (non-academic teachers) do not have the teaching competences, and the vast majority of them are also narrow specialists. Secondly, clinical sites (polyclinics) are unprepared for general practice and do not follow the principles of family medicine; residents do not like to be in primary health care (PHC) settings due to the chaos that is happening there, as well as due to the lack of the necessary equipment for mastering and consolidating practical skills. Aim: We present the concept of the family physicians’ training office (FPTO), which is being created as a friendly learning environment for young general practitioners and for the involvement of academic teachers of family medicine in the practical work and innovative development of PHC. Methodology: In developing the conceptual framework and identifying practical activities, we drew on literature and expert input, and interviews. Results: The goal of the FPTO is to create a favorable educational and clinical environment for the development of the FM residents’ competencies, in which the residents with academic teachers and clinical mentors could understand and accept the principles of family medicine, improve clinical knowledge and skills, and gain experience in improving the quality of their practice in scientific basis. Three main areas of office activity are providing primary care to the patients, improving educational services for FM residents and other medical workers, and promoting research in PHC and innovations. The office arranges for residents to see outpatients at least 50% of the time, and teachers of FM departments at least 1/4 of their working time conduct general medical appointments next to residents. Taking into account the educational and scientific workload, the number of attached population for one GP does not exceed 500 persons. The equipment of the office allows FPTO workers to perform invasive and other manipulations without being sent to other clinics. In the office, training for residents is focused on their needs and aimed at achieving the required level of competence. International methodologies and assessment tools are adapted to local conditions and evaluated for their effectiveness and acceptability. Residents and their faculty actively conduct research in the field of family medicine. Conclusions: We propose to change the learning environment in order to create teams of like-minded people, to unite residents and teachers even more for the development of family medicine. The offices will also invest resources in developing and maintaining young doctors' interest in family medicine.Keywords: educational environment, family medicine residents, family physicians’ training office, primary care research
Procedia PDF Downloads 134