Search results for: algorithms decision tree
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6146

386 Rigorous Photogrammetric Push-Broom Sensor Modeling for Lunar and Planetary Image Processing

Authors: Ahmed Elaksher, Islam Omar

Abstract:

Accurate geometric relation algorithms are imperative in Earth and planetary satellite and aerial image processing, particularly for high-resolution images that are used for topographic mapping. Most of these satellites carry push-broom sensors. These sensors are optical scanners equipped with linear arrays of CCDs and have been deployed on most Earth observation satellites (EOSs). In addition, the Lunar Reconnaissance Orbiter Camera (LROC) is equipped with two push-broom Narrow Angle Cameras (NACs) that provide 0.5 meter-scale panchromatic images over a 5 km swath of the Moon. The HiRISE camera carried by the MRO and the HRSC carried by MEX are examples of push-broom sensors that produce images of the surface of Mars. Sensor models developed in photogrammetry relate image space coordinates in two or more images with the 3D coordinates of ground features. Rigorous sensor models use the actual interior and exterior orientation parameters of the camera, unlike approximate models. In this research, we generate a generic push-broom sensor model to process imagery acquired by linear array cameras and investigate its performance, advantages, and disadvantages in generating topographic models for the Earth, Mars, and the Moon. We also compare and contrast the utilization, effectiveness, and applicability of available photogrammetric techniques and softcopy workstations with the developed model. We start by defining an image reference coordinate system to unify image coordinates from all three arrays. The transformation from an image coordinate system to a reference coordinate system involves a translation and three rotations. For any image point within the linear array, its image reference coordinates, the coordinates of the exposure center of the array in the ground coordinate system at the imaging epoch (t), and the corresponding ground point coordinates are related through the collinearity condition, which states that all three points must lie on the same line.
The rotation angles for each CCD array at the epoch t are defined and included in the transformation model. The exterior orientation parameters of an image line, i.e., the coordinates of the exposure station and the rotation angles, are computed by a polynomial interpolation function in time (t), where (t) is the time at a certain epoch from a certain orbit position. Depending on the types of observations, coordinates and parameters may be treated as knowns or unknowns in various situations. The unknown coefficients are determined in a bundle adjustment. The orientation process starts by extracting the sensor position, orientation, and raw images from the PDS. The parameters of each image line are then estimated and imported into the push-broom sensor model. We also define tie points between image pairs to aid the bundle adjustment model, determine the refined camera parameters, and generate highly accurate topographic maps. The model was tested on different satellite images such as IKONOS, QuickBird, WorldView-2, and HiRISE. It was found that the accuracy of our model is comparable to those of commercial and open-source software, the computational efficiency of the developed model is high, the model can be used in different environments with various sensors, and the implementation process is far less cost- and effort-consuming.
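The collinearity relation described above can be sketched in code. The following Python fragment is an illustrative sketch, not the authors' implementation; the function and parameter names are assumptions. It projects a 3D ground point into line-sensor image coordinates, with the six exterior orientation parameters interpolated as polynomials in the epoch t:

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Rotation from ground to sensor frame via omega-phi-kappa angles (radians)."""
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(omega), -np.sin(omega)],
                   [0, np.sin(omega), np.cos(omega)]])
    Ry = np.array([[np.cos(phi), 0, np.sin(phi)],
                   [0, 1, 0],
                   [-np.sin(phi), 0, np.cos(phi)]])
    Rz = np.array([[np.cos(kappa), -np.sin(kappa), 0],
                   [np.sin(kappa), np.cos(kappa), 0],
                   [0, 0, 1]])
    return Rz @ Ry @ Rx

def eo_at_epoch(t, coeffs):
    """Interpolate the six exterior-orientation parameters as polynomials in t.
    coeffs: sequence of 6 coefficient lists (Xc, Yc, Zc, omega, phi, kappa)."""
    return np.array([np.polyval(c, t) for c in coeffs])

def collinearity(ground_pt, t, coeffs, focal_length):
    """Project a ground point into image coordinates at epoch t via the
    collinearity condition: exposure center, image point, and ground point
    lie on one line."""
    Xc, Yc, Zc, omega, phi, kappa = eo_at_epoch(t, coeffs)
    R = rotation_matrix(omega, phi, kappa)
    d = R.T @ (np.asarray(ground_pt) - np.array([Xc, Yc, Zc]))
    x = -focal_length * d[0] / d[2]
    y = -focal_length * d[1] / d[2]
    return x, y
```

For the image line actually exposing the point, the along-track coordinate y is driven toward zero, which is how the imaging epoch t is recovered in practice for a push-broom geometry.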

Keywords: photogrammetry, push-broom sensors, IKONOS, HiRISE, collinearity condition

385 Enhanced Multi-Scale Feature Extraction Using a DCNN by Proposing Dynamic Soft Margin SoftMax for Face Emotion Detection

Authors: Armin Nabaei, M. Omair Ahmad, M. N. S. Swamy

Abstract:

Many facial expression and emotion recognition methods based on traditional approaches such as LDA, PCA, and EBGM have been proposed. In recent years, deep learning models have provided a unique platform for detecting facial expressions and emotions by automatically extracting features. However, deep networks require large training datasets to extract features effectively. In this work, we propose an efficient emotion detection algorithm using face images when only small datasets are available for training. We design a deep network whose feature extraction capability is enhanced by utilizing several parallel modules between the input and output of the network, each focusing on the extraction of different types of coarse features with fine-grained details to break the symmetry of the produced information. In effect, we leverage long-range dependencies, the lack of which is one of the main drawbacks of CNNs. We extend this work by introducing a Dynamic Soft-Margin SoftMax. The conventional SoftMax suffers from converging on the gold labels too soon, which drives the model toward over-fitting, because it cannot determine adequately discriminant feature vectors for some variant class labels. We reduce the risk of over-fitting by using a dynamic rather than static input tensor shape in the SoftMax layer, together with a specified soft margin. The margin acts as a controller of how hard the model must work to push dissimilar embedding vectors apart. The proposed categorical loss aims to compact same-class labels and separate different-class labels in the normalized log domain. We penalize predictions with high divergence from the ground-truth labels: we shorten correct feature vectors and enlarge false prediction tensors, that is, we assign more weight to classes that lie close to one another (namely, labels that are hard to learn).
By doing so, we constrain the model to generate more discriminant feature vectors for variant class labels. Finally, for the proposed optimizer, our focus is on solving the weak convergence of the Adam optimizer for a non-convex problem. Our optimizer works by an alternative gradient-updating procedure with an exponentially weighted moving average function for faster convergence, and it exploits a weight decay method that drastically reduces the learning rate near optima to reach the dominant local minimum. We demonstrate the superiority of the proposed work by surpassing the first rank on three widely used facial expression recognition datasets: 93.30% on FER-2013, a 16% improvement over the first-rank result of the past 10 years; 90.73% on RAF-DB; and 100% k-fold average accuracy on the CK+ dataset. The model matches the top performance of other networks that require much larger training datasets.
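The margin idea can be illustrated with a minimal static additive-margin softmax in NumPy. This is a simplified sketch under assumptions: the paper's dynamic tensor-shape mechanism is not reproduced, and the function and parameter names here are invented for illustration. Subtracting a margin from the gold-class logit means the network must separate embeddings by at least that margin before the loss saturates:

```python
import numpy as np

def soft_margin_softmax_loss(logits, labels, margin=0.35):
    """Cross-entropy with an additive margin subtracted from the true-class
    logit, delaying saturation and pushing dissimilar embeddings apart.
    logits: (N, C) float array; labels: (N,) int array of class indices."""
    z = logits.astype(float).copy()
    z[np.arange(len(labels)), labels] -= margin   # handicap the gold class
    z -= z.max(axis=1, keepdims=True)             # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()
```

With margin = 0 this reduces to the ordinary softmax cross-entropy; any positive margin strictly raises the loss for correctly classified samples, which is what keeps the model working on separation instead of over-fitting early.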

Keywords: computer vision, facial expression recognition, machine learning, algorithms, deep learning, neural networks

384 Applications of Multi-Path Futures Analyses for Homeland Security Assessments

Authors: John Hardy

Abstract:

A range of future-oriented intelligence techniques is commonly used by states to assess their national security and develop strategies to detect and manage threats, to develop and sustain capabilities, and to recover from attacks and disasters. Although homeland security organizations use futures intelligence tools to generate scenarios and simulations which inform their planning, there have been relatively few studies of the methods available or their applications for homeland security purposes. This study presents an assessment of one category of strategic intelligence techniques, termed Multi-Path Futures Analyses (MPFA), and how it can be applied to three distinct tasks for the purpose of analyzing homeland security issues. Within this study, MPFA is categorized as a suite of analytic techniques which can include effects-based operations principles, general morphological analysis, multi-path mapping, and multi-criteria decision analysis techniques. These techniques generate multiple pathways to potential futures and thereby generate insight into the relative influence of individual drivers of change, the desirability of particular combinations of pathways, and the kinds of capabilities which may be required to influence or mitigate certain outcomes. The study assessed eighteen uses of MPFA for homeland security purposes and found five key applications which add significant value to analysis. The first application is generating measures of success and associated progress indicators for strategic planning. The second is identifying homeland security vulnerabilities and relationships between individual drivers of vulnerability which may amplify or dampen their effects. The third is selecting appropriate resources and methods of action to influence individual drivers. The fourth is prioritizing and optimizing path selection preferences and decisions.
The fifth application is informing capability development and procurement decisions to build and sustain homeland security organizations. Each of these applications provides a unique perspective of a homeland security issue by comparing a range of potential future outcomes at a set number of intervals and by contrasting the relative resource requirements, opportunity costs, and effectiveness measures of alternative courses of action. These findings indicate that MPFA enhances analysts’ ability to generate tangible measures of success, identify vulnerabilities, select effective courses of action, prioritize future pathway preferences, and contribute to ongoing capability development in homeland security assessments.

Keywords: homeland security, intelligence, national security, operational design, strategic intelligence, strategic planning

383 The Development of Traffic Devices Using Natural Rubber in Thailand

Authors: Weeradej Cheewapattananuwong, Keeree Srivichian, Godchamon Somchai, Wasin Phusanong, Nontawat Yoddamnern

Abstract:

Natural rubber used for traffic devices in Thailand has been developed and researched for several years. Compared with Dry Rubber Content (DRC) rubber, the quality of Rib Smoked Sheet (RSS) rubber is better; however, the cost of admixtures, especially CaCO₃ and sulphur, is higher than the cost of the RSS itself. In this research, flexible guideposts and Rubber Fender Barriers (RFB) are taken into consideration. For flexible guideposts, both RSS and DRC60% are used, but for RFB only RSS is used, owing to the controlled performance tests. The objective of both devices is to decrease the number of accidents, fatalities, and serious injuries. They are designed to protect road users and vehicles by absorbing impact forces, thereby reducing the severity of road accidents; this leads to mitigation methods that reduce motorists' injuries from severe to moderate. The solution is to find the best practice for traffic devices using natural rubber under engineering concepts. In addition, material performance measures, such as tensile strength and durability, are determined together with the modulus of elasticity and related properties. In the laboratory, crash simulation, finite element analysis of materials, LRFD, and concrete technology methods are taken into account. After calculation, trial compositions of the materials are mixed and tested in the laboratory. The tensile, compressive, and weathering (durability) tests follow ASTM standards. Furthermore, a cycle-repetition test of the flexible guideposts will be taken into consideration. The final step is to fabricate all materials and build a real test section in the field. For RFB, there will be 13 crash tests: 7 pickup truck tests and 6 motorcycle tests.
Full-scale vehicular crash testing is taking place for the first time in Thailand, applying trial-and-error methods; for example, the road crash test standard was changed from NCHRP TL-3 (100 kph) to MASH 2016, owing to the fact that MASH 2016 is more demanding than NCHRP in terms of the speed, types, and weight of vehicles and the angle of crash. In the MASH procedure, the pickup-truck test, composed of a 2,270 kg pickup truck at 100 kph and a 25-degree crash angle, is selected. The final real crash test will be performed, and the whole system will be evaluated again, in Korea. The researchers hope that the number of road accidents will decrease and that Thailand will leave the top ten ranking of road accidents in the world.
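For intuition about the test conditions above, the severity of the pickup-truck impact can be estimated from the velocity component normal to the barrier, a metric commonly used in roadside safety evaluation. This is an illustrative sketch only, not part of the authors' test protocol, and the function name is an assumption:

```python
import math

def impact_severity(mass_kg, speed_kph, angle_deg):
    """Lateral impact severity (J): the kinetic energy carried by the
    velocity component normal to the barrier face."""
    v = speed_kph / 3.6                                  # km/h -> m/s
    v_normal = v * math.sin(math.radians(angle_deg))     # component into barrier
    return 0.5 * mass_kg * v_normal ** 2
```

At 2,270 kg, 100 kph, and a 25-degree angle, the lateral impact severity comes to roughly 156 kJ, out of a total kinetic energy of about 876 kJ, which is the order of energy an RFB must absorb or redirect.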

Keywords: LRFD, Load and Resistance Factor Design, ASTM, American Society for Testing and Materials, NCHRP, National Cooperative Highway Research Program, MASH, Manual for Assessing Safety Hardware

382 Positive-Negative Asymmetry in the Evaluations of Political Candidates: The Mediating Role of Affect in the Relationship between Cognitive Evaluation and Voting Intention

Authors: Magdalena Jablonska, Andrzej Falkowski

Abstract:

The negativity effect is one of the most intriguing and well-studied psychological phenomena that can be observed in many areas of human life. The aim of the following study is to investigate how valence framing and positive and negative information about political candidates affect judgments about similarity to an ideal and a bad politician. Based on the theoretical framework of features of similarity, it is hypothesized that negative features have a stronger effect on similarity judgments than positive features of comparable value. Furthermore, the mediating role of affect is tested. Method: One hundred sixty-one people took part in an experimental study. Participants were divided into 6 research conditions that differed in the reference point (positive vs negative framing) and the number of favourable and unfavourable information items about political candidates (a positive, neutral, and negative candidate profile). In the positive framing condition, the concept of an ideal politician was primed; in the negative condition, participants were asked to think about a bad politician. The effect of the independent variables on similarity judgments, affective evaluation, and voting intention was tested. Results: In the positive condition, the analysis showed that the negative effect of additional unfavourable features was greater than the positive effect of additional favourable features in judgments about similarity to the ideal candidate. In the negative framing condition, the ANOVA was non-significant, showing that neither additional positive features nor additional negative information had a significant impact on similarity to a bad political candidate. To explain this asymmetry, two mediation analyses were conducted that tested the mediating role of affect in the relationship between similarity judgments and voting intention. In both situations the mediating effect was significant, but a comparison of the two models showed that the mediation was stronger under negative framing.
Discussion: The research supports the negativity effect and attempts to explain the psychological mechanism behind the positive-negative asymmetry. The results of the mediation analyses point to a stronger mediating role of affect in the relationship between cognitive evaluation and voting intention under negative framing. This result suggests that negative comparisons, which activate negative features, give rise to stronger emotions than positive features of comparable strength. The findings are in line with the positive-negative asymmetry; moreover, by adopting Tversky's framework of features of similarity, the study integrates the cognitive mechanism of the negativity effect delineated in the contrast model of similarity with its emotional component, which results from the asymmetrical effect of positive and negative emotions on decision-making.
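The feature-based account above follows Tversky's contrast model, in which similarity grows with shared features and shrinks with each object's distinctive features. The following minimal sketch (feature names and weights are invented for illustration) shows how weighting a candidate's distinctive unfavourable features more heavily than shared favourable ones (alpha > theta) reproduces the asymmetry reported in the positive framing condition:

```python
def tversky_similarity(a, b, theta=1.0, alpha=1.0, beta=1.0):
    """Tversky's contrast model: S(a,b) = theta*|A∩B| - alpha*|A-B| - beta*|B-A|.
    a: the candidate's features; b: the reference (e.g., ideal politician)."""
    a, b = set(a), set(b)
    return theta * len(a & b) - alpha * len(a - b) - beta * len(b - a)

ideal = {"honest", "competent", "experienced"}
base = {"honest", "competent"}

# Adding a favourable feature grows the common set (+theta);
# adding an unfavourable feature grows the candidate's distinctive set (-alpha).
s_base = tversky_similarity(base, ideal, alpha=3.0)
s_plus = tversky_similarity(base | {"experienced"}, ideal, alpha=3.0)
s_minus = tversky_similarity(base | {"corrupt"}, ideal, alpha=3.0)
```

With alpha = 3 and theta = 1, the drop caused by one unfavourable feature exceeds the gain from one favourable feature, mirroring the finding that negative information moved similarity-to-ideal judgments more than positive information of comparable value.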

Keywords: affect, framing, negativity effect, positive-negative asymmetry, similarity judgements

381 Technology Changing Senior Care

Authors: John Kosmeh

Abstract:

Introduction – For years, senior health care and skilled nursing facilities have been plagued by the dilemma of not having the necessary tools and equipment to adequately care for senior residents in their communities. This has led to high transport rates to emergency departments and high 30-day readmission rates, costing billions of unnecessary dollars each year, as well as quality assurance issues. Our senior care telemedicine program is designed to solve this issue. Methods – We conducted a 1-year pilot program using our technology, coupled with our 24/7 telemedicine program, with skilled nursing facilities in different parts of the United States. We then compared transport rates and 30-day readmission rates with those of previous years before the use of our program, as well as with the transport rates of other communities of similar size not using our program. These data gave us a clear and concise look at the program's success in reducing unnecessary transports and readmissions, as well as its cost savings. Results – A 94% reduction nationally in unnecessary out-of-facility transports and, to date, complete elimination of 30-day readmissions. Our virtual platform allowed us to instruct facility staff in the use of our tools and system and to deliver treatment by our ER-trained providers. Delays waiting for PCP callbacks were eliminated. We were able to obtain lung, heart, and abdominal ultrasound imaging, 12-lead EKG, and blood labs, auscultate lung and heart sounds, and collect other diagnostic tests at the bedside within minutes, providing immediate care and allowing us to treat residents within the SNF. Our virtual capabilities allowed loved ones, family members, and others who held medical power of attorney to connect with us virtually at the time of the visit and speak directly with the medical provider, providing increased confidence in the decision to treat the resident in-house.
The decline in transports and readmissions will greatly reduce governmental cost burdens as well as the fines imposed on SNFs for high 30-day readmission rates, reduce the cost of Medicare A readmissions, and significantly lower the number of patients visiting overcrowded ERs. Discussion – By utilizing our program, SNFs can effectively reduce the number of unnecessary resident transports and create significant savings from lost day rates, transportation costs, and high CMS fines. The cost savings run to thousands of dollars monthly, but more importantly, these facilities can provide a higher quality of life and medical care for residents by delivering definitive care instantly with ER-trained personnel.

Keywords: senior care, long term care, telemedicine, technology, senior care communities

380 Using Statistical Significance and Prediction to Test Long/Short Term Public Services and Patients' Cohorts: A Case Study in Scotland

Authors: Raptis Sotirios

Abstract:

Health and social care (HSc) services planning and scheduling are facing unprecedented challenges due to pandemic pressure, and they also suffer from unplanned spending negatively impacted by the global financial crisis. Data-driven approaches can help improve policies and plan and design service provision schedules, using algorithms that assist healthcare managers in facing unexpected demands with fewer resources. The paper discusses service packing using statistical significance tests and machine learning (ML) to evaluate demand similarity and coupling. This is achieved by predicting the range of the demand (class) using ML methods such as CART, random forests (RF), and logistic regression (LGR). The chi-squared and Student's t significance tests are used on data over a 39-year span for which HSc data exist for services delivered in Scotland. The demands are probabilistically associated through statistical hypotheses that assume, as the null hypothesis, that the target service's demands are statistically dependent on other demands; this linkage can be confirmed or rejected by the data. Complementarily, ML methods are used to linearly predict the target demands from the statistically found associations and to extend the linear dependence of the target's demand to independent demands, thus forming groups of services. Statistical tests confirm the ML couplings, making the predictions statistically meaningful as well, and prove that a target service can be matched reliably to other services; ML shows that these indicated relationships can also be linear. Zero padding was used for missing-year records and better illustrated such relationships, both for limited years and over the entire span, offering long-term data visualizations, while limited-year groups explained how well patient numbers can be related over short periods or can change over time, as opposed to behaviors across more years.
The prediction performance of the associations is measured using Receiver Operating Characteristic (ROC) AUC and ACC metrics as well as the chi-squared and Student's t statistical tests. Co-plots and comparison tables for RF, CART, and LGR, as well as p-values and Information Exchange (IE), are provided, showing the specific behavior of the ML methods and of the statistical tests, and their behavior at different learning ratios. The impact of k-NN, cross-correlation, and C-Means first groupings is also studied over limited years and over the entire span. It was found that CART was generally behind RF and LGR, but in some interesting cases LGR reached an AUC of 0, falling below CART, while the ACC was as high as 0.912, showing that ML methods can be confused by padding, data irregularities, or outliers. On average, 3 linear predictors were sufficient; LGR was found to compete well with RF, and CART followed with the same performance at higher learning ratios. Services were packed only when the significance level (p-value) of their association coefficient was more than 0.05. Social-factor relationships were observed between home care services and the treatment of old people, birth weights, alcoholism, drug abuse, and emergency admissions. The work found that different HSc services can be packed well into plans of limited years, across various service sectors and learning configurations, as confirmed using statistical hypotheses.
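The chi-squared linkage test used above can be sketched in plain Python. This is an illustrative fragment, not the authors' pipeline; the 2x2 table and names are assumptions. The statistic compares observed co-occurrence counts of two demand classes against the counts expected under independence:

```python
def chi_squared_statistic(table):
    """Pearson chi-squared statistic for a contingency table given as a
    list of rows, e.g., counts of low/high demand for two services."""
    row_totals = [sum(r) for r in table]
    col_totals = [sum(c) for c in zip(*table)]
    grand_total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand_total
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical counts: rows = service A demand (low/high), cols = service B.
table = [[30, 10],
         [10, 30]]
stat = chi_squared_statistic(table)
```

For a 2x2 table (one degree of freedom), a statistic above the 3.841 critical value corresponds to p < 0.05 and rejects independence, marking the two demand series as statistically coupled.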

Keywords: class, cohorts, data frames, grouping, prediction, probability, services

379 Supply Chain Analysis with Product Returns: Pricing and Quality Decisions

Authors: Mingming Leng

Abstract:

Wal-Mart has allocated considerable human resources for its quality assurance program, in which the largest retailer serves its supply chains as a quality gatekeeper. Asda Stores Ltd., the second largest supermarket chain in Britain, is now investing £27m in significantly increasing the frequency of quality control checks in its supply chains and thus enhancing quality across its fresh food business. Moreover, Tesco, the largest British supermarket chain, already constructed a quality assessment center to carry out its gatekeeping responsibility. Motivated by the above practices, we consider a supply chain in which a retailer plays the gatekeeping role in quality assurance by identifying defects among a manufacturer's products prior to selling them to consumers. The impact of a retailer's gatekeeping activity on pricing and quality assurance in a supply chain has not been investigated in the operations management area. We draw a number of managerial insights that are expected to help practitioners judiciously consider the quality gatekeeping effort at the retail level. As in practice, when the retailer identifies a defective product, she immediately returns it to the manufacturer, who then replaces the defect with a good quality product and pays a penalty to the retailer. If the retailer does not recognize a defect but sells it to a consumer, then the consumer will identify the defect and return it to the retailer, who then passes the returned 'unidentified' defect to the manufacturer. The manufacturer also incurs a penalty cost. Accordingly, we analyze a two-stage pricing and quality decision problem, in which the manufacturer and the retailer bargain over the manufacturer's average defective rate and wholesale price at the first stage, and the retailer decides on her optimal retail price and gatekeeping intensity at the second stage. We also compare the results when the retailer performs quality gatekeeping with those when the retailer does not. 
Our supply chain analysis exposes some important managerial insights. For example, the retailer's quality gatekeeping can effectively reduce the channel-wide defective rate, if her penalty charge for each identified defect is larger than or equal to the market penalty for each unidentified defect. When the retailer implements quality gatekeeping, the change in the negotiated wholesale price only depends on the manufacturer's 'individual' benefit, and the change in the retailer's optimal retail price is only related to the channel-wide benefit. The retailer is willing to take on the quality gatekeeping responsibility, when the impact of quality relative to retail price on demand is high and/or the retailer has a strong bargaining power. We conclude that the retailer's quality gatekeeping can help reduce the defective rate for consumers, which becomes more significant when the retailer's bargaining position in her supply chain is stronger. Retailers with stronger bargaining powers can benefit more from their quality gatekeeping in supply chains.
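The bookkeeping behind the gatekeeping insight can be illustrated numerically. This is a toy sketch under assumed symbols (defect rate d, gatekeeping intensity g, per-defect penalties), not the paper's game-theoretic model:

```python
def consumer_defect_rate(defect_rate, gatekeeping_intensity):
    """Share of units reaching consumers defective, when the retailer
    catches and returns a fraction g of defects before sale."""
    return defect_rate * (1.0 - gatekeeping_intensity)

def manufacturer_penalty_bill(defect_rate, g, p_identified, p_unidentified, volume):
    """Expected penalties the manufacturer pays: p_identified per defect the
    retailer catches, p_unidentified per defect a consumer returns."""
    caught = volume * defect_rate * g
    missed = volume * defect_rate * (1.0 - g)
    return caught * p_identified + missed * p_unidentified
```

Raising g directly lowers the defective rate seen by consumers, and it shifts the manufacturer's penalty bill from consumer returns to retailer-identified defects; when the identified-defect charge is at least as large as the market penalty, the manufacturer keeps an incentive to cut the defective rate itself, consistent with the insight stated above.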

Keywords: bargaining, game theory, pricing, quality, supply chain

378 Knowledge Transfer through Entrepreneurship: From Research at the University to the Consolidation of a Spin-off Company

Authors: Milica Lilic, Marina Rosales Martínez

Abstract:

Academic research cannot be oblivious to social problems and needs, so projects that have the capacity for transformation and impact should have the opportunity to go beyond University circles and bring benefit to society. Apart from patents and R&D research contracts, this opportunity can be achieved through entrepreneurship, one of the most direct tools to turn knowledge into a tangible product. Thus, as an example of good practice, we analyze the case of an institutional entrepreneurship program carried out at the University of Seville, aimed at researchers interested in assessing the business opportunity of their research and expanding their knowledge of procedures for commercializing technologies used in academic projects. The program is based on three pillars: training, teamwork sessions, and networking. The training includes aspects such as product-client fit, technical-scientific and economic-financial feasibility of a spin-off, institutional organization and decision making, public and private fundraising, and making the spin-off visible in the business world (social networks, key contacts, corporate image, and ethical principles). The teamwork sessions are guided by a mentor and aimed at identifying research results with potential, clarifying financial needs, and establishing procedures to obtain the necessary resources for the consolidation of the spin-off. This part of the program is considered crucial for the participants to convert their academic findings into a business model. Finally, the networking part is oriented to workshops on the digital transformation of a project, the accurate communication of the product or service a spin-off offers to society, and the development of transferable skills necessary for managing a business.
This blended program culminates in a final stage where each team, in an elevator pitch format, presents its research turned into a business model to an experienced jury. The awarded teams receive starting capital for their enterprise and the opportunity to formally consolidate their spin-off company at the University. The results of the program show that many researchers have little or no knowledge of entrepreneurship skills or of the ways to turn their research results into a business model with a direct impact on society. Therefore, the described program has been used as an example to highlight the importance of knowledge transfer at the University and the role that this institution should play in providing the tools to promote entrepreneurship within it. Keeping in mind that the University is defined by three main activities (teaching, research, and knowledge transfer), it is safe to conclude that the latter, with entrepreneurship as one expression of it, is crucial for the other two to fulfill their purpose.

Keywords: good practice, knowledge transfer, spin-off company, university

377 A Study on the Relationship between Firm Managers' Environmental Attitudes and Environment-Friendly Practices for Textile Firms in India

Authors: Anupriya Sharma, Sapna Narula

Abstract:

Over the past decade, sustainability has gone mainstream as more people are worried about environment-related issues than ever before. These issues are of even greater concern for industries that leave a significant impact on the environment. In light of these ecological issues, corporates are beginning to comprehend the impact on their business, and many initiatives have been taken to address these emerging issues in the consumer-driven textile industry. Demand from customers, local communities, government regulations, etc., are considered some of the major factors affecting environmental decision-making. Research also shows that motivations to go green are inevitably shaped by the way top managers perceive environmental issues, as managers' personal values and ethical commitment act as a motivating factor towards corporate social responsibility. Little empirical research has examined the relationship between top managers' personal environmental attitudes and corporate environmental behaviors in the textile industry in the Indian context. The primary purpose of this study is to determine the current state of environmental management in the textile industry and whether the attitudes of textile firms' top managers are significantly related to firms' responses to environmental issues and their perceived benefits of environmental management. To achieve these objectives, the authors used a structured questionnaire based on a literature review. The questionnaire consisted of six sections with a total length of eight pages. The first section collected background information on the position of the respondents in the organization, annual turnover, year of the firm's establishment, and so on. The other five sections covered drivers, attitude and awareness, sustainable business practices, barriers to implementation, and benefits achieved.
To test the questionnaire, a pretest was conducted with professionals who work in corporate sustainability and have knowledge of the textile industry; the questionnaire was then mailed to various stakeholders involved in textile production, covering firms' top manufacturing officers, EHS managers, textile engineers, HR personnel, and R&D managers. The results of the study showed that most textile firms were implementing some type of environmental management practice, even though the magnitude of firms' involvement in environmental management practices varied. The results also show that textile firms with a higher level of involvement in environmental management were more engaged in process-driven technical environmental practices. The study also identified that firms' top managers' environmental attitudes were correlated with the perceived advantages of environmental management, as textile firms' top managers are the ones who possess the managerial discretion to formulate and decide business policies such as environmental initiatives.

Keywords: attitude and awareness, environmental management, sustainability, textile industry

Procedia PDF Downloads 212
376 The Effectiveness of an Educational Program on Awareness of Cancer Signs, Symptoms, and Risk Factors among School Students in Oman

Authors: Khadija Al-Hosni, Moon Fai Chan, Mohammed Al-Azri

Abstract:

Background: Several studies suggest that most school-age adolescents are poorly informed about cancer warning signs and risk factors. Providing adolescents with sufficient knowledge would increase their awareness in adulthood and improve help-seeking behaviors later. Significance: The results will provide a clear vision to assist key decision-makers in formulating policies on student awareness programs towards cancer, increasing the likelihood of avoiding cancer in the future or promoting early diagnosis. Objectives: To evaluate the effectiveness of an education program designed to increase awareness of cancer signs, symptoms, and risk factors, to improve help-seeking behavior among school students in Oman, and to address the barriers to obtaining medical help. Methods: A randomized controlled trial with two groups was conducted in Oman with a total of 1716 students (n=886 control, n=830 education), aged 15-17 years, in 10th and 11th grade, from 12 governmental schools in 3 governorates, from 20-February-2022 to 12-May-2022. Basic demographic data were collected, and the Cancer Awareness Measure (CAM) was used as the primary outcome. Data were collected at baseline (T0) and 4 weeks after (T1). The intervention group received an education program about the causes of cancer and its signs and symptoms, while the control group did not receive any education related to this issue during the study period. Non-parametric tests were used to compare the outcomes between groups. Results: At T0, an unexplained lump was the most recognized cancer warning sign in both the control (55.0%) and intervention (55.2%) groups. However, there were no significant changes at T1 for any sign in the control group. In contrast, all sign outcomes improved significantly (p<0.001) in the intervention group; the highest response was unexplained pain (93.3%). Smoking was the most recognized risk factor in both groups (82.8% control; 84.1% intervention) at T0.
However, there was no significant change at T1 for the control group, but there was for the intervention group (p<0.001); the most identified risk factor was smoking cigarettes (96.5%). Being too scared was the largest barrier to seeking medical help among students in the control group at both T0 (63.0%) and T1 (62.8%); there were no significant changes in any barriers in this group. In the intervention group, being too embarrassed (60.2%) was the largest barrier at T0 and being too scared (58.6%) at T1. Although there were reductions in all barriers, significant differences were found in only six of the ten (p<0.001). Conclusion: The intervention was effective in improving students' awareness of cancer symptoms and warning signs (p<0.001) and risk factors (p<0.001), and it reduced the most frequently cited barriers to seeking medical help (p<0.001) in comparison to the control group. The Ministry of Education in Oman could integrate cancer awareness within the curriculum, and more interventions are needed on the sociological side to overcome the barriers that interfere with seeking medical help.

Keywords: adolescents, awareness, cancer, education, intervention, student

Procedia PDF Downloads 58
375 Evaluation of Prehabilitation Prior to Surgery for an Orthopaedic Pathway

Authors: Stephen McCarthy, Joanne Gray, Esther Carr, Gerard Danjoux, Paul Baker, Rhiannon Hackett

Abstract:

Background: The Go Well Health (GWH) platform is a web-based programme that gives patients access to personalised care plans and resources aimed at prehabilitation prior to surgery. The online digital platform (ODP) delivers essential patient education and support for patients prior to undergoing total hip replacements (THR) and total knee replacements (TKR). This study evaluated the impact of the ODP in terms of functional health outcomes, health-related quality of life, and hospital length of stay (LOS) following surgery. Methods: A retrospective cohort study compared a cohort of patients who used the ODP for patient education and support prior to undergoing THR and TKR surgery with a cohort of patients who did not access the ODP and received usual care. Routinely collected Patient Reported Outcome Measures (PROMs) data were obtained on 2,406 patients who underwent a knee replacement (n=1,160) or a hip replacement (n=1,246) between 2018 and 2019 in a single surgical centre in the United Kingdom. The Oxford Hip and Knee Scores and the European Quality of Life Five-Dimensional tool (EQ-5D-5L) were obtained both pre- and post-surgery (at 6 months), along with hospital LOS. Linear regression was used to estimate the impact of GWH on both health outcomes, and negative binomial regression was used to estimate its impact on LOS. All analyses adjusted for age, sex, Charlson Comorbidity Score, and either pre-operative Oxford Hip/Knee scores or pre-operative EQ-5D scores. Fractional polynomials were used to represent potential non-linear relationships between the factors included in the regression model. Findings: For patients who underwent a knee replacement, GWH had a statistically significant impact on Oxford Knee Scores and EQ-5D-5L utility post-surgery (p=0.039 and p=0.002, respectively). GWH did not have a statistically significant impact on hospital length of stay.
For patients who underwent a hip replacement, GWH had a statistically significant impact on Oxford Hip Scores and EQ-5D-5L utility post-surgery (p<0.001 and p=0.009, respectively). GWH was also associated with a statistically significant reduction in hospital length of stay (p<0.001). Conclusion: Health outcomes were higher for patients who used the GWH platform and underwent THR or TKR relative to those who received usual care prior to surgery. Patients who underwent a hip replacement and used GWH also had a reduced hospital LOS. These findings are important for health policy and decision makers, as they suggest that prehabilitation via an ODP can maximise health outcomes for patients following surgery while potentially making efficiency savings through reductions in LOS.

Keywords: digital prehabilitation, online digital platform, orthopaedics, surgery

Procedia PDF Downloads 168
374 Spatio-Temporal Dynamic of Woody Vegetation Assessment Using Oblique Landscape Photographs

Authors: V. V. Fomin, A. P. Mikhailovich, E. M. Agapitov, V. E. Rogachev, E. A. Kostousova, E. S. Perekhodova

Abstract:

Ground-level landscape photos can be used as a source of objective data on woody vegetation and its dynamics. We propose a method for processing, analyzing, and presenting ground photographs with the following components: 1) the researcher forms a holistic representation of the study area as a set of overlapping ground-level landscape photographs; 2) characteristics of the landscape, objects, and phenomena present in the photographs are defined or obtained; 3) new textual descriptions and annotations for the photographs are created, or existing ones are supplemented; 4) single or multiple photographs are used to develop specialized geoinformation layers, schematic maps, or thematic maps; 5) quantitative data describing both the images as a whole and the displayed objects and phenomena are determined using algorithms for automated image analysis. Each photo is matched with a polygonal geoinformation layer: a sector consisting of areas corresponding to the parts of the landscape visible in the photo. Visibility areas are calculated within a sector in a geoinformation system, using a digital model of the study area's relief and visibility analysis functions. Superimposing the visibility sectors corresponding to various camera viewpoints allows landscape photos to be matched with each other to create a complete and coherent representation of the space in question. User-defined objects or phenomena marked on the images can then be superimposed over the visibility sector as map symbols. The spatial superposition of geoinformation layers over the visibility sector creates opportunities for image geotagging using quantitative data obtained from raster or vector layers within the sector, with the ability to generate annotations in natural language.
The proposed method has proven itself well for relatively open and clearly visible areas with well-defined relief, for example, in mountainous areas in the treeline ecotone. When the polygonal visibility-sector layers for a large number of camera viewpoints are topologically superimposed, a layer is formed showing which sections of the entire study area are displayed in the photographs. As a result of this overlapping of sectors, areas that do not appear in any photo are assessed as gaps. From this procedure it becomes possible to determine which photos display a specific area and from which camera viewpoints it is visible; this information may be obtained either as a query on the map or as a query on the layer's attribute table. The method was tested using repeated photos taken from forty camera viewpoints on the Ray-Iz mountain massif (Polar Urals, Russia) from 1960 to 2023. It has been successfully used in combination with other ground-based and remote sensing methods for studying the climate-driven dynamics of woody vegetation in the Polar Urals. Acknowledgment: This research was collaboratively funded by the Russian Ministry for Science and Education project No. FEUG-2023-0002 (image representation) and Russian Science Foundation project No. 24-24-00235 (automated textual description).
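The superposition step described above can be illustrated with a minimal sketch. All names and the circular-sector parameterization here are illustrative assumptions, not the authors' implementation (which works on polygonal viewshed layers in a GIS): each viewpoint's visibility area is approximated as a sector (origin, heading, half-angle, radius), and every ground point is tested against every sector; points matched by no sector are the coverage gaps.

```python
import math

def in_sector(point, origin, heading_deg, half_angle_deg, radius):
    """Is a ground point inside one camera viewpoint's visibility sector?"""
    dx, dy = point[0] - origin[0], point[1] - origin[1]
    dist = math.hypot(dx, dy)
    if dist > radius:
        return False
    if dist == 0:
        return True  # the viewpoint itself
    bearing = math.degrees(math.atan2(dy, dx))
    # signed angular difference, wrapped to (-180, 180]
    diff = (bearing - heading_deg + 180.0) % 360.0 - 180.0
    return abs(diff) <= half_angle_deg

def superimpose(points, sectors):
    """Superimpose all sectors: map each ground point to the viewpoint
    indices that see it; an empty list marks a coverage gap."""
    return {p: [i for i, s in enumerate(sectors) if in_sector(p, *s)]
            for p in points}

# Two viewpoints: (5, 0) is seen by sector 0 only, (0, 5) by neither (a gap).
sectors = [((0, 0), 0.0, 45.0, 10.0), ((20, 0), 180.0, 30.0, 8.0)]
cov = superimpose([(5, 0), (0, 5), (14, 0)], sectors)
```

A real implementation would replace the sector test with a terrain-aware viewshed query, but the superposition and gap logic is the same.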

Keywords: woody vegetation, repeated photographs

Procedia PDF Downloads 27
373 Technology, Ethics and Experience: Understanding Interactions as Ethical Practice

Authors: Joan Casas-Roma

Abstract:

Technology has become one of the main channels through which people engage in most of their everyday activities; from working to learning, or even when socializing, technology often acts as both an enabler and a mediator of those activities. Moreover, the affordances and interactions created by technological tools determine the way users interact with one another, as well as how they relate to the relevant environment, thus favoring certain kinds of actions and behaviors while discouraging others. In this regard, virtue ethics theories place a strong focus on a person's daily practice (understood as their decisions, actions, and behaviors) as the means to develop and enhance their habits and ethical competences, such as their awareness of and sensitivity towards certain ethically desirable principles. Under this understanding of ethics, this set of technologically enabled affordances and interactions can be seen as the possibility space where the daily practice of their users takes place, across a wide plethora of contexts and situations. At this point, a question arises: could these affordances and interactions be shaped in a way that would promote behaviors and habits based on ethically desirable principles in their users? In the field of game design, the MDA framework (which stands for Mechanics, Dynamics, Aesthetics) explores how the interactions enabled within the possibility space of a game can create certain experiences and provoke specific reactions in the players. In this sense, these interactions can be shaped in ways that create experiences to raise the players' awareness of and sensitivity towards certain topics or principles.
This research brings together the notion of technological affordances, the notions of practice and practical wisdom from virtue ethics, and the MDA framework from game design in order to explore how the possibility space created by technological interactions can be shaped in ways that enable and promote actions and behaviors supporting certain ethically desirable principles. When shaped accordingly, interactions supporting such principles could allow their users to carry out the kind of practice that, according to virtue ethics theories, provides the grounds to develop and enhance their awareness, sensitivity, and ethical reasoning capabilities. Moreover, because ethical practice can happen collaterally in almost every context, decision, and action, this additional layer could potentially be applied in a wide variety of technological tools, contexts, and functionalities. This work explores the theoretical background, as well as the initial considerations and steps needed to harness the potential ethically desirable benefits that technology can bring once it is understood as the space where most of its users' daily practice takes place.

Keywords: ethics, design methodology, human-computer interaction, philosophy of technology

Procedia PDF Downloads 130
372 The Management of Company Directors' Conflicts of Interest in Large Corporations and the Issue of Public Interest

Authors: Opemiposi Adegbulu

Abstract:

This research investigates the existence of a public interest consideration or rationale for the management of directors' conflicts of interest within large public corporations. It does so through an extensive review of the literature and theories on the definition of conflicts of interest, the firm, and the purposes of the fiduciary duty of loyalty, on which the management of these conflicts of interest is founded. Conflicts of interest are an elusive, diverse, and engaging subject, a cross-cutting problem of governance that involves all levels of governance, ranging from local to global and from public to corporate or financial sectors. It is a common issue that affects corporate governance and corporate culture, having a negative impact on the reputation of corporations and their trustworthiness. Addressing this issue is clearly imperative for the good governance of corporations, as they are increasingly becoming powerful global economic actors with significant power and influence in society. The bargaining power of these corporations has been recognised by international organisations such as the UN and the OECD, as evidenced by the increasing calls for greater responsibility of corporations for the environmental and social disasters caused by their activities and their impact in various parts of the world. Equally, the Sarbanes-Oxley Act in the US, like other legislation and regulatory efforts made in many countries to manage conflicts of interest linked to corporate governance, indicates that there is a (global) public interest in the maintenance of the orderly functioning of commerce. Consequently, the governance of these corporations is tremendously important to society, as it touches upon a key aspect of its good functioning: corporations, particularly large international corporations, can be said to be the plumbing of the global economy.
This study will employ theoretical, doctrinal, and comparative methods. The research will largely make use of a theory-guided methodology and theoretical framework: theories of the firm, public interest, regulation, conflicts of interest in general, directors' conflicts of interest, and corporate governance. Although the research is narrowed down to conflicts of interest in corporate governance, the directors' duty of loyalty, and the management of conflicts of interest, an examination of the history, origin, and typology of conflicts of interest in general will be carried out in order to identify some specific challenges to understanding and identifying them: their origin, diverging theories, psychological barriers to definition, and similarities with public-sector conflicts of interest arising from the notions of corrosion of trust, the effect on decision-making and judgment, "being in a particular kind of situation", etc. The result of this research will be useful and relevant in identifying the rationale for the management of directors' conflicts of interest, contributing to the understanding of conflicts of interest in the private sector and the significance of public interest in the corporate governance of large corporations.

Keywords: conflicts of interest, corporate governance, corporate law, directors' duty of loyalty, public interest

Procedia PDF Downloads 332
371 Developing Methodology of Constructing the Unified Action Plan for External and Internal Risks in University

Authors: Keiko Tamura, Munenari Inoguchi, Michiyo Tsuji

Abstract:

When disasters occur, delegation of authority is commonly carried out in order to speed up decision-making and response. This tendency is particularly evident when a department or branch of the organization is separated from the main body by physical distance; however, there are some issues to consider. If the department or branch is too dependent on the head office under normal conditions, it might feel lost in disaster response operations when facing the situation alone. To avoid this problem, an organization should decide, before a disaster, how to delegate authority and who accepts responsibility for what. This paper discusses a method that presents an approach for executing the delegation-of-authority process, implementing authorities, management by objectives, and preparedness plans and agreements. The paper introduces the efforts of three research centers of Niigata University, Japan, to arrange organizations capable of taking the necessary actions for disaster response. Each center has a character all its own. One is a center carrying out research to conserve the crested ibis (Toki in Japanese), an endangered species. Another is a marine biological laboratory. The third is unique because of the old-growth forests maintained as its experimental field. These research centers are on Sado Island, located off the coast of Niigata Prefecture; Japan's second-largest island after Okinawa, it is known for possessing a rich history and culture. It takes a 65-minute jetfoil (high-speed ferry) ride to reach Sado Island from the mainland, so the three centers can easily become isolated at the time of a disaster. This sense of urgency encourages the three centers in the process of organizational restructuring for enhancing resilience.
The research team from the risk management headquarters offers the following procedure. Step 1: Offer a hazard scenario based on scientific evidence. Step 2: Design a risk management organization for the disaster response function. Step 3: Conduct a participatory approach to build consensus on the overarching objectives. Step 4: Construct a unified operational action plan for the three centers. Step 5: Simulate how to respond in each phase, based on an understanding of the various phases of the disaster timeline. Step 6: Document results to measure performance and facilitate corrective action. This paper shows the results of verifying the output and effects.

Keywords: delegation of authority, disaster response, risk management, unified command

Procedia PDF Downloads 102
370 Towards Automatic Calibration of In-Line Machine Processes

Authors: David F. Nettleton, Elodie Bugnicourt, Christian Wasiak, Alejandro Rosales

Abstract:

In this presentation, preliminary results are given for the modeling and calibration of two different industrial winding MIMO (Multiple Input Multiple Output) processes using machine learning techniques. In contrast to previous approaches, which have typically used 'black-box' linear statistical methods together with a definition of the mechanical behavior of the process, we use non-linear machine learning algorithms together with a 'white-box' rule induction technique to create a supervised model of the fitting error between the expected and real force measures. The final objective is to build a precise model of the winding process in order to control the tension of the material being wound in the first case, and the friction of the material passing through the die in the second case. Case 1, tension control of a winding process: a plastic web is unwound from a first reel, passes over a traction reel, and is rewound on a third reel. The objectives are (i) to train a model to predict the web tension and (ii) to calibrate, i.e., find the input values which result in a given tension. Case 2, friction force control of a micro-pullwinding process: a core plus resin passes through a first die, two winding units wind an outer layer around the core, and the material makes a final pass through a second die. The objectives are (i) to train a model to predict the friction on die 2 and (ii) to calibrate, i.e., find the input values which result in a given friction on die 2. Different machine learning approaches are tested to build the models: Kernel Ridge Regression, Support Vector Regression (with a Radial Basis Function kernel), and MPART (rule induction with a continuous value as output). As a preliminary step, the MPART rule induction algorithm was used to build an explicative model of the error (the difference between the expected and real friction on die 2). Modeling the error behavior with explicative rules helps improve the overall process model.
Once the models are built, the inputs are calibrated by generating Gaussian random numbers for each input (taking into account its mean and standard deviation) and comparing the output to a target (desired) output until the closest fit is found. Empirical testing shows that high precision is obtained for both the trained models and the calibration process. The learning step is the slowest part of the process (at most 5 minutes for this data), but it can be done offline just once. The calibration step is much faster and, in under one minute, obtained a precision error of less than 1e-3 for both outputs. To summarize, in the present work two processes have been modeled and calibrated. Fast processing times and high precision have been achieved, and these can be further improved by using heuristics to guide the Gaussian calibration. Error behavior has been modeled to help improve overall process understanding. This is relevant for the quick, optimal set-up of the many industrial processes that use a pull-winding type process to manufacture fibre-reinforced plastic parts. Acknowledgements to the Openmind project, which is funded by the European Union's Horizon 2020 Research & Innovation programme, Grant Agreement number 680820.
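The Gaussian calibration step described above can be sketched as a random search. The function names and the stand-in model below are illustrative assumptions; the real system would query the trained KRR/SVR/MPART model instead of the lambda shown here.

```python
import random

def calibrate(model, input_stats, target, n_samples=5000, seed=1):
    """Random-search calibration: draw each input from its Gaussian
    (mean, std dev) and keep the candidate whose predicted output lies
    closest to the desired target output."""
    rng = random.Random(seed)
    best_x, best_err = None, float("inf")
    for _ in range(n_samples):
        x = [rng.gauss(mu, sigma) for mu, sigma in input_stats]
        err = abs(model(x) - target)
        if err < best_err:
            best_x, best_err = x, err
    return best_x, best_err

# Stand-in for a trained process model (illustrative only): two inputs,
# each described by its observed (mean, std dev).
model = lambda x: 2.0 * x[0] + 0.5 * x[1]
inputs, err = calibrate(model, [(1.0, 0.5), (3.0, 1.0)], target=4.0)
```

With a few thousand samples this typically lands very close to the target; the heuristics mentioned above (e.g. narrowing the Gaussians around the best candidate) would speed convergence further.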

Keywords: data model, machine learning, industrial winding, calibration

Procedia PDF Downloads 214
369 Secondhand Clothing and the Future of Fashion

Authors: Marike Venter de Villiers, Jessica Ramoshaba

Abstract:

In recent years, the fashion industry has been associated with the exploitation of both people and resources. This is largely due to the emergence of the fast fashion concept, which entails rapid and continual style changes in which clothes quickly lose their appeal, go out of fashion, and are then disposed of. This cycle often entails appalling working conditions in sweatshops, with low wages and child labor, and a significant amount of textile waste that ends up in landfills. Although awareness of the negative implications of mindless fashion production and consumption is growing, fast fashion remains a popular choice among the youth. This is especially prevalent in South Africa, a poverty-stricken country where a vast number of young adults are unemployed and living in poverty. Despite this, the celebrity-conscious culture and the fashion and luxury products frequently portrayed on South Africa's growing, intrusive social media platforms pressure consumers to purchase such products, and young adults are therefore more vulnerable to the temptation of fast fashion. A possible solution to the detrimental effects of the fast fashion industry on the environment is the revival of the secondhand clothing trend. Although secondhand clothing has gained momentum among selected consumer segments, its adoption rate remains slow. The main purpose of this study was to explore consumers' perceptions of the secondhand clothing trend and to gain insight into the factors that inhibit its adoption. This study also aimed to investigate whether consumers are aware of the negative implications of the fast fashion industry and how likely they are to shift their clothing purchases to secondhand clothing. In a quantitative study, fifty young females were asked to complete a semi-structured questionnaire.
The researcher approached females between the ages of 18 and 35 in a face-to-face setting. The results indicated that although respondents were aware of the negative consequences of fast fashion, they lacked detailed insight into its pertinent effects on the environment. Further, a number of factors inhibit their decision to buy from secondhand stores: firstly, the latest trends are not always available in secondhand stores; secondly, the convenience of shopping at a chain store outweighs the inconvenience of searching for and finding a secondhand store; and lastly, they perceived secondhand clothing to pose a hygiene risk. The findings of this study provide fashion marketers and secondhand clothing stores with insight into how they can incorporate the secondhand clothing trend into their strategies and marketing campaigns in an attempt to make the fashion industry more sustainable.

Keywords: eco-friendly fashion, fast fashion, secondhand clothing

Procedia PDF Downloads 109
368 Semi-Supervised Learning for Spanish Speech Recognition Using Deep Neural Networks

Authors: B. R. Campomanes-Alvarez, P. Quiros, B. Fernandez

Abstract:

Automatic Speech Recognition (ASR) is a machine-based process of decoding and transcribing oral speech. A typical ASR system receives acoustic input from a speaker or an audio file, analyzes it using algorithms, and produces output in the form of text. Some speech recognition systems use Hidden Markov Models (HMMs) to deal with the temporal variability of speech and Gaussian Mixture Models (GMMs) to determine how well each state of each HMM fits a short window of frames of coefficients representing the acoustic input. Another way to evaluate the fit is to use a feed-forward neural network that takes several frames of coefficients as input and produces posterior probabilities over HMM states as output. Deep neural networks (DNNs) with many hidden layers, trained using new methods, have been shown to outperform GMMs in a variety of speech recognition systems. Acoustic models for state-of-the-art ASR systems are usually trained on massive amounts of data. However, audio files with their corresponding transcriptions can be difficult to obtain, especially in the Spanish language. In such low-resource scenarios, building an ASR model is a complex task due to the lack of labeled data, resulting in an under-trained system. Semi-supervised learning approaches arise as a necessity given the high cost of transcribing audio data. The main goal of this proposal is to develop a procedure based on acoustic semi-supervised learning for Spanish ASR systems using DNNs. The semi-supervised learning approach consists of: (a) training a seed ASR model with a DNN using a set of audios and their respective transcriptions. The DNN was initialized with one hidden layer, and the number of hidden layers was increased during training to five. A refinement of the weight matrices plus bias terms and Stochastic Gradient Descent (SGD) training were also performed, with the cross-entropy criterion as the objective function.
(b) Decoding/testing a set of unlabeled data with the obtained seed model. (c) Selecting a suitable subset of the validated data to retrain the seed model, thereby improving its performance on the target test set. To choose the most precise transcriptions, three confidence scores based on the lattice concept (the graph cost, the acoustic cost, and a combination of both) were used as the selection technique. The performance of the ASR system is measured by the Word Error Rate (WER). The test dataset was renewed in order to exclude the new transcriptions added to the training dataset. Several experiments were carried out in order to select the best ASR results, and a comparison between a GMM-based model without retraining and the proposed DNN system was made under the same conditions. Results showed that the semi-supervised DNN-based ASR model outperformed the GMM model, in terms of WER, in all tested cases; the best result was a 6% relative WER improvement. These promising results suggest that the proposed technique could be suitable for building ASR models in low-resource environments.
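Steps (a)-(c) form a self-training loop, which can be sketched in a model-agnostic way. The toy one-dimensional "model" below is purely illustrative (the paper's system uses a DNN acoustic model and lattice-based confidence scores); what the sketch shows is the loop itself: pseudo-label the unlabeled pool, keep only high-confidence hypotheses, retrain, and repeat.

```python
def self_train(labeled, unlabeled, fit, decode, threshold, rounds=3):
    """Self-training: fit a seed model, pseudo-label the unlabeled pool,
    keep hypotheses whose confidence clears the threshold, and retrain."""
    model = fit(labeled)
    pool = list(unlabeled)
    for _ in range(rounds):
        scored = [(x,) + decode(model, x) for x in pool]   # (x, hyp, conf)
        confident = [(x, hyp) for x, hyp, conf in scored if conf >= threshold]
        if not confident:
            break
        labeled = labeled + confident
        pool = [x for x, hyp, conf in scored if conf < threshold]
        model = fit(labeled)
    return model, labeled

def fit(data):
    """Toy stand-in for acoustic model training: per-class feature mean."""
    sums, counts = {}, {}
    for x, y in data:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def decode(model, x):
    """Toy decoder: hypothesis = nearest class mean; confidence = margin."""
    (d1, y1), (d2, _) = sorted((abs(x - m), y) for y, m in model.items())[:2]
    return y1, (d2 - d1) / (d1 + d2 + 1e-9)

seed = [(0.0, "a"), (1.0, "a"), (9.0, "b"), (10.0, "b")]
model, data = self_train(seed, [0.4, 9.6, 5.2], fit, decode, threshold=0.5)
```

The ambiguous sample (5.2) never clears the confidence threshold and is never added to the training set, mirroring how low-confidence lattice hypotheses are discarded in step (c).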

Keywords: automatic speech recognition, deep neural networks, machine learning, semi-supervised learning

Procedia PDF Downloads 320
367 The Impact of Strategy Formulation and Implementation on an Organization's Financial Consequences in a Malaysian Private Hospital

Authors: Naser Zouri

Abstract:

Purpose: This study measures strategy formulation and implementation in terms of market-based strategic management categories such as courtesy, competence, and compliance, with the aim of reaching high loyalty in the financial ecosystem and addressing marketplace errors in relation to fair trade organization. Finding: The findings show the ability of executive-level management to motivate and to make better decisions to solve problems in a business organization, and they suggest an ideal level for each intervention policy for a hypothetical household. Methodology/design: A questionnaire was used for data collection in both the pilot test and the main study, and it was divided and distributed among finance employees. Respondents were selected through non-probability sampling (convenience sampling). Administering the questionnaire was feasible in terms of cost, but collecting data from the hospital was very difficult. The survey items covered implementation strategy, environment, supply chain, and employees (the impact of implementation strategy on better financial consequences), as well as strategy formulation, comprehensiveness of strategic design, and organizational performance (the impact of strategy formulation on financial consequences). Practical implication: The dynamic capability approach to formulating and implementing strategy focuses on the firm-specific processes through which firms integrate, build, or reconfigure valuable resources, making a theoretical contribution. Originality/value of the research: Going beyond the current discussion, we show that case studies have the potential to extend and refine theory.
We shed new light on how dynamic capabilities can benefit from case study research by discovering the qualifications that shape the development of capabilities and determining the boundary conditions of the dynamic capabilities approach. Limitation of the study: The present study relies on a survey methodology for data collection, and obtaining responses from financial employees was difficult because of workplace limitations.

Keywords: financial ecosystem, loyalty, Malaysian market error, dynamic capability approach, rate-market, optimization intelligence strategy, courtesy, competence, compliance

Procedia PDF Downloads 272
366 Protocol for Dynamic Load Distributed Low Latency Web-Based Augmented Reality and Virtual Reality

Authors: Rohit T. P., Sahil Athrij, Sasi Gopalan

Abstract:

Currently, the content entertainment industry is dominated by mobile devices. As trends slowly shift towards Augmented/Virtual Reality applications, the computational demands on these devices are increasing exponentially, and we are already reaching the limits of hardware optimizations. This paper proposes a software solution to this problem. By leveraging the capabilities of cloud computing, we can offload work from mobile devices to dedicated rendering servers that are far more powerful, but this introduces the problem of latency. This paper introduces a protocol that can achieve a high-performance, low-latency Augmented/Virtual Reality experience. There are two parts to the protocol. 1) In-flight compression: The main cause of latency in the system is the time required to transmit the camera frame from client to server. The round-trip time is directly proportional to the amount of data transmitted, so it can be reduced by compressing frames before sending. Standard compression algorithms like JPEG yield only minor size reductions here. Since the images to be compressed are consecutive camera frames, there will be few changes between two consecutive images, so inter-frame compression is preferred. Inter-frame compression can be implemented efficiently using WebGL, but most devices limit WebGL floating-point precision to 16 bits. This introduces noise into the image through rounding errors, which accumulate over time. The problem can be solved with an improved inter-frame compression algorithm: it detects changes between frames and reuses unchanged pixels from the previous frame, eliminating the need for floating-point subtraction and thereby cutting down on noise. Change detection is also improved drastically by taking the weighted average difference of pixels instead of the absolute difference.
The kernel weights for this comparison can be fine-tuned to match the type of image to be compressed. 2) Dynamic load distribution: Conventional cloud computing architectures offload as much work as possible to the servers, but this approach strains bandwidth and increases server costs. The optimal solution is obtained when the device utilizes 100% of its own resources and the server handles the rest. The protocol balances the load between server and client by performing a fraction of the computation on the device, depending on the device's power and network conditions. The protocol is responsible for dynamically partitioning the tasks. Special flags communicate the workload fraction between client and server and are updated at a constant interval of time (or frames). The whole protocol is designed to be client-agnostic. Flags are available to the client for resetting the frame, indicating latency, switching modes, etc. The server can react to client-side changes on the fly and adapt accordingly by switching to different pipelines. The server is designed to spread the load effectively and thereby scale horizontally; this is achieved by isolating client connections into different processes.
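The improved inter-frame scheme described above can be sketched in a few lines. This is a minimal NumPy illustration, not the authors' WebGL implementation: the kernel, threshold, and function names are assumptions, and integer arithmetic stands in for the fixed-point comparison. The point is that only pixels flagged as changed are transmitted, while the receiver reuses the rest from the previous frame.

```python
import numpy as np

def changed_mask(prev, curr, kernel, threshold=8):
    """Flag pixels whose weighted neighbourhood difference from the
    previous frame exceeds a threshold (weighted average difference
    rather than per-pixel absolute difference, as the abstract describes).
    Kernel and threshold values here are illustrative."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    kh, kw = kernel.shape
    padded = np.pad(diff, ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode="edge")
    avg = np.zeros(diff.shape, dtype=float)
    for i in range(kh):
        for j in range(kw):
            avg += kernel[i, j] * padded[i:i + diff.shape[0], j:j + diff.shape[1]]
    return avg / kernel.sum() > threshold

def compress(prev, curr, kernel):
    """Return the change mask and only the changed pixel values."""
    mask = changed_mask(prev, curr, kernel)
    return mask, curr[mask]

def decompress(prev, mask, values):
    """Receiver side: reuse unchanged pixels from the previous frame."""
    out = prev.copy()
    out[mask] = values
    return out
```

Tuning the kernel weights then amounts to changing the `kernel` array passed to `compress`, exactly as the abstract suggests for different image types.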

Keywords: 2D kernelling, augmented reality, cloud computing, dynamic load distribution, immersive experience, mobile computing, motion tracking, protocols, real-time systems, web-based augmented reality application

Procedia PDF Downloads 51
365 Improving Student Learning in a Math Bridge Course through Computer Algebra Systems

Authors: Alejandro Adorjan

Abstract:

Universities are motivated to understand the factors contributing to low retention of engineering undergraduates. While the number of precollege students interested in engineering increases, the number of engineering graduates continues to decrease, and attrition rates for engineering undergraduates remain high. Calculus 1 (C1) is the entry point of most undergraduate Engineering Science programs and often a prerequisite for Computing Curricula courses. Mathematics continues to be a major hurdle for engineering students, and many students who drop out of engineering specifically cite Calculus as one of the most influential factors in that decision. In this context, creating course activities that increase retention and motivate students to obtain better final results is a challenge. Calculus 1 at Universidad ORT Uruguay focuses on developing several competencies in our Software Engineering students, such as the capacity for synthesis, abstraction, and problem solving (based on the ACM/AIS/IEEE curricula). Every semester we reflect on our practice and try to answer the following research question: What kind of teaching approach in Calculus 1 can we design to retain students and obtain better results? Since 2010, Universidad ORT Uruguay has offered a six-week, noncompulsory summer bridge course of preparatory math (to bridge the math gap between high school and university). Last semester was the first time the Department of Mathematics offered the course while students were enrolled in C1. Traditional lectures in this bridge course led students merely to transcribe notes from the blackboard. Last semester we proposed a hands-on lab course using Geogebra (interactive geometry and Computer Algebra System (CAS) software) as a math-driven development tool. Students worked in a computer laboratory class and developed most of the tasks and topics in Geogebra. As a result of this approach, several pros and cons were found.
The weekly hours of mathematics were excessive for students, and, as the course was noncompulsory, attendance decreased over time. Nevertheless, the activity succeeded in improving final test results, and most students expressed pleasure in working with this methodology. This technology-oriented teaching approach strengthens the math competencies needed for Calculus 1 and improves student performance, engagement, and self-confidence. It is important as teachers to reflect on our practice, including innovative proposals with the objective of engaging students, increasing retention, and obtaining better results. The high degree of motivation and engagement of participants with this methodology exceeded our initial expectations, so we plan to experiment with more groups during the summer to validate these preliminary results.

Keywords: calculus, engineering education, PreCalculus, Summer Program

Procedia PDF Downloads 257
364 Smart Mobility Planning Applications in Meeting the Needs of the Urbanization Growth

Authors: Caroline Atef Shoukry Tadros

Abstract:

Massive urbanization growth threatens the sustainability of cities and the quality of city life. This raises the need for an alternative model of sustainability: we need to plan future cities in a smarter way, with smarter mobility. Smart mobility planning applications are solutions that use digital technologies and infrastructure advances to improve the efficiency, sustainability, and inclusiveness of urban transportation systems. They can contribute to meeting the needs of urbanization growth by addressing the challenges of traffic congestion, pollution, accessibility, and safety in cities. Some examples of smart mobility planning applications are the following. Mobility-as-a-Service: a service that integrates different transport modes, such as public transport, shared mobility, and active mobility, into a single platform that allows users to plan, book, and pay for their trips. This can reduce reliance on private cars, optimize the use of existing infrastructure, and provide more choices and convenience for travelers. MaaS Global is a company that offers mobility-as-a-service solutions in several cities around the world. Traffic flow optimization: a solution that uses data analytics, artificial intelligence, and sensors to monitor and manage traffic conditions in real time. This can reduce congestion, emissions, and travel time, as well as improve road safety and user satisfaction. Waycare is a platform that leverages data from various sources, such as connected vehicles, mobile applications, and road cameras, to provide traffic management agencies with insights and recommendations to optimize traffic flow. Logistics optimization: a solution that uses smart algorithms, blockchain, and IoT to improve the efficiency and transparency of the delivery of goods and services in urban areas. This can reduce the costs, emissions, and delays associated with logistics, as well as enhance customer experience and trust.
ShipChain is a blockchain-based platform that connects shippers, carriers, and customers and provides end-to-end visibility and traceability of shipments. Autonomous vehicles: a solution that uses advanced sensors, software, and communication systems to enable vehicles to operate without human intervention. This can improve the safety, accessibility, and productivity of transportation, as well as reduce the need for parking space and infrastructure maintenance. Waymo is a company that develops and operates autonomous vehicles for various purposes, such as ride-hailing, delivery, and trucking. These are some of the ways that smart mobility planning applications can contribute to meeting the needs of urbanization growth. However, there are also various opportunities and challenges related to the implementation and adoption of these solutions, including regulatory, ethical, social, and technical aspects. Therefore, it is important to consider the specific context and needs of each city and its stakeholders when designing and deploying smart mobility planning applications.

Keywords: smart mobility planning, smart mobility applications, smart mobility techniques, smart mobility tools, smart transportation, smart cities, urbanization growth, future smart cities, intelligent cities, ICT information and communications technologies, IoT internet of things, sensors, lidar, digital twin, ai artificial intelligence, AR augmented reality, VR virtual reality, robotics, cps cyber physical systems, citizens design science

Procedia PDF Downloads 46
363 A Comparison of the Microbiology Profile for Periprosthetic Joint Infection (PJI) of Knee Arthroplasty and Lower Limb Endoprostheses in Tumour Surgery

Authors: Amirul Adlan, Robert A McCulloch, Neil Jenkins, Michael Parry, Jonathan Stevenson, Lee Jeys

Abstract:

Background and Objectives: Current antibiotic prophylaxis for oncological patients is based upon evidence from primary arthroplasty, despite significant differences in both patient group and procedure. The aim of this study was to compare the microbiological organisms responsible for PJI in patients who underwent two-stage revision for infected primary knee replacement with those of infected oncological endoprostheses of the lower limb in a single institution. This will subsequently guide decision-making regarding antibiotic prophylaxis at primary implantation for oncological procedures and empirical antibiotics for infected revision procedures (where the infecting organism(s) are unknown). Patients and Methods: A total of 118 patients were treated with two-stage revision surgery for infected knee arthroplasty and lower limb endoprostheses between 1999 and 2019: 74 patients had two-stage revision for PJI of knee arthroplasty, and 44 had two-stage revision of lower limb endoprostheses. There were 68 males and 50 females. The mean ages of the knee arthroplasty and lower limb endoprostheses cohorts were 70.2 years (50-89) and 36.1 years (12-78), respectively (p<0.01). Patient host and extremity criteria were categorised according to the MSIS Host and Extremity Staging System. Microbiological cultures and the incidences of polymicrobial infection and multi-drug resistance (MDR) were analysed and recorded. Results: Polymicrobial infection was reported in 16% (12 patients) of knee arthroplasty PJI and 14.5% (8 patients) of endoprostheses PJI (p=0.783). There was a significantly higher incidence of MDR in endoprostheses PJI, isolated in 36.4% of cultures, compared to knee arthroplasty PJI (17.2%) (p=0.01). Gram-positive organisms were isolated in more than 80% of cultures from both cohorts. Coagulase-negative Staphylococcus (CoNS) was the commonest Gram-positive organism, and Escherichia coli was the commonest Gram-negative organism in both groups.
According to the MSIS staging system, the host and extremity grades of the knee arthroplasty PJI cohort were significantly better than those of the endoprostheses PJI cohort (p<0.05). Conclusion: Empirical antibiotic management of PJI in orthopaedic oncology is based upon PJI in arthroplasty, despite differences in both host and microbiology. Our results show a significant increase in MDR pathogens within the oncological group, despite CoNS being the most common infective organism in both groups. Endoprosthetic patients presented with poorer host and extremity criteria. These factors should be considered when managing this complex patient group, emphasising the importance of broad-spectrum antibiotic prophylaxis and preoperative sampling to ensure appropriate perioperative antibiotic cover.

Keywords: microbiology, periprosthetic Joint infection, knee arthroplasty, endoprostheses

Procedia PDF Downloads 91
362 Analysis of Distance Travelled by Plastic Consumables Used in the First 24 Hours of an Intensive Care Admission: Impacts and Methods of Mitigation

Authors: Aidan N. Smallwood, Celestine R. Weegenaar, Jack N. Evans

Abstract:

The intensive care unit (ICU) is a particularly resource-heavy environment in terms of the staff, drugs, and equipment required. Whilst many areas of the hospital are attempting to cut down on plastic use and minimise their impact on the environment, this has proven challenging within the confines of intensive care. Concurrently, as globalisation has progressed over recent decades, there has been a tendency towards centralised manufacturing with international distribution networks for products, often covering large distances. In this study, we have modelled the standard consumption of plastic single-use items over the course of the first 24 hours of an average individual patient's stay in a 12-bed ICU in the United Kingdom (UK). We have identified the country of manufacture and calculated the minimum possible distance travelled by each item from factory to patient. We have assumed direct transport via the shortest possible straight line from country of origin to the UK and have not accounted for transport within either country. Assuming an intubated patient with invasive haemodynamic monitoring and central venous access, there are a total of 52 distinct, largely plastic, disposable products which would reasonably be required in the first 24 hours after admission. Each product type has been counted only once, to account for multiple items being shipped as one package. Travel distances from origin were summed to give the total combined distance for all 52 products. The minimum possible total distance travelled from country of origin to the UK for all product types was 273,353 km, equivalent to 6.82 circumnavigations of the globe, or 71% of the way to the moon. The mean distance travelled was 5,256 km, approximately the distance from London to Mecca. With individual packaging for each item, the total weight of consumed products was 4.121 kg.
The CO2 produced by shipping these items by air freight would equate to 30.1 kg, whereas doing the same by sea would produce 0.2 kg of CO2. Extrapolating these results to the 211,932 annual UK ICU admissions (2018-2019), even with our assumptions underestimating distance and weight, air freight would account for 6,586 tons of CO2 emitted annually, approximately 130 times that of sea freight. Given the drive towards cost saving within the UK health service and the decline of the local manufacturing industry, buying from intercontinental manufacturers is inevitable. However, transporting all consumables by sea where feasible would be environmentally beneficial, as well as less costly than air freight. At present, the NHS supply chain purchases from medical device companies, and there is no freely available information as to the transport mode used to deliver products to the UK. This information must be made available to purchasers in order to give a fuller picture of life-cycle impact and allow for informed decision-making in this regard.
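The distance and emissions arithmetic above lends itself to a short sketch. The great-circle ("shortest straight line") distance formula is standard; the per-tonne-km emission factors below are round illustrative assumptions, not the factors used in the study:

```python
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two points, i.e. the shortest
    possible straight-line path from country of origin to the UK."""
    earth_radius_km = 6371.0
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * earth_radius_km * asin(sqrt(a))

# Illustrative emission factors in kg CO2 per tonne-km (assumed values;
# published freight factors vary by aircraft/vessel type and load).
AIR_KG_PER_TONNE_KM = 0.60
SEA_KG_PER_TONNE_KM = 0.015

def freight_co2_kg(distance_km, weight_kg, factor_kg_per_tonne_km):
    """CO2 in kg for moving weight_kg over distance_km at the given factor."""
    return distance_km * (weight_kg / 1000.0) * factor_kg_per_tonne_km
```

Summing `haversine_km` over each product's origin and comparing `freight_co2_kg` under the air and sea factors reproduces the shape of the air-versus-sea comparison made in the abstract.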

Keywords: CO2, intensive care, plastic, transport

Procedia PDF Downloads 142
361 Sizing Residential Solar Power Systems Based on Site-Specific Energy Statistics

Authors: Maria Arechavaleta, Mark Halpin

Abstract:

In the United States, costs of solar energy systems have declined to the point that they are viable options for most consumers. However, there are no consistent procedures for specifying sufficient systems. The factors that must be considered are energy consumption, potential solar energy production, and cost. The traditional method of specifying solar energy systems is based on assumed daily levels of available solar energy and average amounts of daily energy consumption. The mismatches between energy production and consumption are usually mitigated using battery energy storage systems, and energy use is curtailed when necessary. The main consumer decision that drives the total system cost is how much unserved (or curtailed) energy is acceptable. Of course, additional solar conversion equipment can be installed to provide greater peak energy production, and extra energy storage capability can be added to mitigate longer-lasting periods of low solar energy production. Each option increases total cost and provides a benefit that is difficult to quantify accurately. This paper presents an approach to quantifying the cost-benefit of adding additional resources, either production or storage or both, based on the statistical concepts of loss-of-energy probability and expected unserved energy. Relatively simple calculations, based on site-specific energy availability and consumption data, can be used to show the value of each additional increment of production or storage. With this incremental benefit-cost information, consumers can select the best overall performance combination for their application at a cost they are comfortable paying. The approach is based on a statistical analysis of energy consumption and production characteristics over time. The characteristics take the form of curves, with each point representing an energy consumption or production value over a period of time; a one-minute period is used for the work in this paper.
These curves are measured at the consumer location under the conditions that exist at the site and the duration of the measurements is a minimum of one week. While greater accuracy could be obtained with longer recording periods, the examples in this paper are based on a single week for demonstration purposes. The weekly consumption and production curves are overlaid on each other and the mismatches are used to size the battery energy storage system. Loss-of-energy probability and expected unserved energy indices are calculated in addition to the total system cost. These indices allow the consumer to recognize and quantify the benefit (probably a reduction in energy consumption curtailment) available for a given increase in cost. Consumers can then make informed decisions that are accurate for their location and conditions and which are consistent with their available funds.
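The loss-of-energy probability and expected unserved energy indices can be sketched with a minute-by-minute battery simulation over the measured curves. This is one plausible reading of the calculation, assuming perfect round-trip efficiency and a battery that starts full; names and units are illustrative, not the authors' definitions:

```python
def unserved_energy_stats(consumption_wh, production_wh, battery_capacity_wh):
    """Walk the overlaid consumption and production curves minute by
    minute, tracking battery state of charge. Returns
    (loss_of_energy_probability, expected_unserved_wh)."""
    soc = battery_capacity_wh   # assume the battery starts full
    shortfall_minutes = 0
    unserved_total = 0.0
    n = 0
    for load, gen in zip(consumption_wh, production_wh):
        n += 1
        soc += gen - load                # surplus charges, deficit discharges
        if soc > battery_capacity_wh:    # battery full: surplus is curtailed
            soc = battery_capacity_wh
        if soc < 0:                      # battery empty: load goes unserved
            shortfall_minutes += 1
            unserved_total += -soc
            soc = 0.0
    return shortfall_minutes / n, unserved_total
```

Rerunning this with incrementally larger `battery_capacity_wh` or scaled `production_wh` gives exactly the kind of incremental benefit curve the paper uses to weigh each added resource against its cost.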

Keywords: battery energy storage systems, loss of load probability, residential renewable energy, solar energy systems

Procedia PDF Downloads 212
360 The Moderation Effect of Financial Distress on the Relationship Between Market Power and Earnings Management of Firms

Authors: Shazia Ali, Yves Mard, Éric Severin

Abstract:

To the best of our knowledge, this is the first study to analyze the impact of (a) firm-specific product market power and (b) industry competition on the earnings management behavior of European firms in distress versus healthy years while controlling for firm-level characteristics. We predicted a significant relationship between firms' product market power and earnings management tools and their trade-off under the moderating effect of financial distress. We found that firm-level market power (hereinafter MP, proxied by the industry-adjusted Lerner Index) is positively associated with both real and accrual earnings management. However, MP is associated with a higher level of real earnings management than accrual earnings management in distress years compared to healthy years. Industry product market power (representing low competition and proxied by the inverse of the total number of firms in an industry, hereinafter NUMB) and firm product market power (proxied by firm market share, hereinafter MS) are associated with lower inflationary accruals and higher deflationary accruals, respectively, but both are linked with higher real earnings management in distress versus healthy years. When we divided the sample into small and big firms based on their respective industry-year median total assets, we found that all three measures of industry competition (the Industry Median Lerner Index, hereinafter IMLI; NUMB; and the Herfindahl-Hirschman Index, hereinafter HHI) indicate that small firms in low-competition industries in financial distress are more likely to inflate their earnings through discretionary accruals.
Big firms in this situation are more likely to lower the use of both inflationary and deflationary discretionary accruals, as indicated by IMLI and HHI, and to trade off accruals earnings management for real earnings management, as indicated by NUMB. Moreover, IMLI and HHI did not show any notable results when we divided the sample based on the firm Lerner Index/market power. However, distressed firms with high market power (MP above the industry median) are found to engage in income-decreasing discretionary accruals in low-competition industries (high NUMB), whereas firms with low market power in the same industries use downward discretionary accruals but inflate income using real activities (abnCFO). Our findings are robust across alternate measures of discretionary accruals and financial distress, such as the Altman Z-Score. The findings of the study are valuable for accounting standard setters, competition authorities, policymakers, and investors alike in informed decision-making.
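For readers unfamiliar with the competition proxies used here, a compact sketch may help. The mean-subtraction in `industry_adjusted_lerner` is an assumption for illustration; the paper's exact adjustment may differ:

```python
def herfindahl_hirschman(market_shares):
    """HHI: sum of squared market shares (shares as fractions in [0, 1]).
    Values near 1 indicate a concentrated, low-competition industry."""
    return sum(s ** 2 for s in market_shares)

def lerner_index(price, marginal_cost):
    """Firm-level market power as the price-cost margin: (P - MC) / P."""
    return (price - marginal_cost) / price

def industry_adjusted_lerner(firm_lerner, industry_lerners):
    """Assumed adjustment: firm Lerner minus the industry mean, so the
    proxy measures market power relative to industry peers."""
    return firm_lerner - sum(industry_lerners) / len(industry_lerners)
```

NUMB, the third proxy, is simply the reciprocal of the number of firms in the industry, so no helper is needed.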

Keywords: financial distress, earnings management, market competition

Procedia PDF Downloads 92
359 Bivariate Analyses of Factors That May Influence HIV Testing among Women Living in the Democratic Republic of the Congo

Authors: Danielle A. Walker, Kyle L. Johnson, Patrick J. Fox, Jacen S. Moore

Abstract:

The HIV Continuum of Care has become a universal model providing context for the process of HIV testing, linkage to care, treatment, and viral suppression. HIV testing is the first step in moving toward community viral suppression. Countries with lower socioeconomic status experience the lowest rates of testing and access to care. The Democratic Republic of the Congo is located in the heart of sub-Saharan Africa, where testing and access to care are low and women experience higher HIV prevalence compared to men. In the Democratic Republic of the Congo, there is only a 21.6% HIV testing rate among women. Because a critical gap exists between a woman's risk of contracting HIV and the decision to be tested, this study was conducted to better understand the relationships between factors that could influence HIV testing among women. The datasets analyzed were from the 2013-14 Democratic Republic of the Congo Demographic and Health Survey Program. The data were subset to women aged 18-49 years, all missing cases were removed, and one variable was recoded. The total sample size analyzed was 14,982 women. The results showed no apparent difference in HIV testing by mean age. Out of 11 religious categories (Catholic, Protestant, Armee de salut, Kimbanguiste, Other Christians, Muslim, Bundu dia kongo, Vuvamu, Animist, no religion, and other), those who identified as Other Christians had the highest testing rate (25.9%) and those who identified as Vuvamu had a 0% testing rate (p<0.001); there was a significant difference in testing by religion. Only 0.7% of women surveyed identified as having no religious affiliation. This suggests partnerships with key community and religious leaders could be a tool to increase testing. Over 60% of women who had never been tested for HIV did not know where to be tested, which highlights the need to educate communities on where testing facilities are located.
Almost 80% of women who believed HIV could be transmitted by supernatural means and/or witchcraft had never been tested (p=0.08). Cultural beliefs could influence risk perception and testing decisions; consequently, misconceptions need to be considered when implementing HIV testing and prevention programs. Location by province, years of education, and wealth index were also analyzed to control for socioeconomic status. Kinshasa had the highest testing rate, with 54.2% of women living there having been tested, while both Equateur and Kasai-Occidental had testing rates below 10% (p<0.001). As education increased up to 12 years, testing increased (p<0.001). Women within the highest quintile of the wealth index had a 56.1% testing rate, and women within the lowest quintile had a 6.5% testing rate (p<0.001). This study concludes that further research is needed to identify culturally competent methods to increase HIV education programs, build partnerships with key community leaders, and improve knowledge of access to care.
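The bivariate comparisons reported here (testing by religion, province, education, and wealth) are contingency-table tests. The abstract does not name the exact procedure, so as an illustration of the kind of test that produces such p-values, here is a minimal Pearson chi-square statistic for a 2x2 table:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 contingency table
    [[a, b], [c, d]], e.g. tested/not-tested by belief group.
    Expected counts come from the row and column margins."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    expected = [row1 * col1 / n, row1 * col2 / n,
                row2 * col1 / n, row2 * col2 / n]
    observed = [a, b, c, d]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))
```

The statistic is then compared against the chi-square distribution with one degree of freedom to obtain the p-value; categories with more than two levels (the 11 religions, the provinces) generalize the same expected-count calculation to larger tables.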

Keywords: Democratic Republic of the Congo, cultural beliefs, education, HIV testing

Procedia PDF Downloads 266
358 A Study to Explore the Effectiveness of an Educational Program on Awareness of Cancer Signs, Symptoms, and Risk Factors Among School Students in Oman

Authors: Khadija Al-Hosni, Moon Fai Chan, Mohammed Al-Azri

Abstract:

Background: Several studies suggest that most school-age adolescents are poorly informed about cancer warning signs and risk factors. Providing adolescents with sufficient knowledge would increase their awareness in adulthood and improve help-seeking behaviors later. Significance: The results will provide a clear vision to assist key decision-makers in formulating policies on student awareness programs towards cancer, increasing the likelihood of avoiding cancer in the future or promoting early diagnosis. Objectives: To evaluate the effectiveness of an education program designed to increase awareness of cancer signs, symptoms, and risk factors, improve help-seeking behavior among school students in Oman, and address the barriers to obtaining medical help. Methods: A randomized controlled trial with two groups was conducted in Oman. A total of 1716 students (n=886 control, n=830 education), aged 15-17 years, in 10th and 11th grade, were recruited from 12 governmental schools in 3 governorates from 20-February-2022 to 12-May-2022. Basic demographic data were collected, and the Cancer Awareness Measure (CAM) was used as the primary outcome. Data were collected at baseline (T0) and 4 weeks after (T1). The intervention group received an education program about the causes, signs, and symptoms of cancer, while the control group did not receive any education related to this issue during the study period. Non-parametric tests were used to compare the outcomes between groups. Results: At T0, an unexplained lump was the most recognized cancer warning sign in the control (55.0%) and intervention (55.2%) groups. There were no significant changes at T1 for any signs in the control group. In contrast, all sign outcomes improved significantly (p<0.001) in the intervention group, where the highest response was unexplained pain (93.3%). Smoking was the most recognized risk factor in both groups (82.8% for control; 84.1% for intervention) at T0.
However, there was no significant change at T1 for the control group, but there was for the intervention group (p<0.001), in which the most identified risk factor was smoking cigarettes (96.5%). Being too scared was the largest barrier to seeking medical help among students in the control group at T0 (63.0%) and T1 (62.8%), with no significant changes in any barriers in this group. In the intervention group, being too embarrassed (60.2%) was the largest barrier at T0 and being too scared (58.6%) at T1. Although there were reductions in all barriers, significant differences were found in only six of ten (p<0.001). Conclusion: Compared to the control group, the intervention was effective in improving students' awareness of cancer symptoms and warning signs (p<0.001) and risk factors (p<0.001), and it reduced the most frequently cited barriers to seeking medical help (p<0.001). The Ministry of Education in Oman could integrate cancer awareness within the curriculum, and more interventions are needed on the sociological side to overcome the barriers that interfere with seeking medical help.

Keywords: adolescents, awareness, cancer, education, intervention, student

Procedia PDF Downloads 78
357 Biomechanical Evaluation for Minimally Invasive Lumbar Decompression: Unilateral Versus Bilateral Approaches

Authors: Yi-Hung Ho, Chih-Wei Wang, Chih-Hsien Chen, Chih-Han Chang

Abstract:

Numerous studies have reported that unilateral laminotomy and bilateral laminotomies are successful decompression methods for managing spinal stenosis. However, unilateral laminotomy is rated as technically much more demanding than bilateral laminotomies, whereas bilateral laminotomies are associated with a reduction in complications, including incidental durotomy, increased radicular deficit, and epidural hematoma. Nevertheless, no comparative biomechanical analysis has evaluated spinal instability after treatment with unilateral versus bilateral laminotomies. Therefore, the purpose of this study was to compare the outcomes of these decompression methods through experiments and finite element analysis. Three porcine lumbar spines were biomechanically evaluated for their range of motion (ROM), and the results were compared following unilateral or bilateral laminotomies. The experimental protocol included flexion and extension in the following procedures: intact, unilateral, and bilateral laminotomies (L2-L5). The specimens were tested under pure moments in flexion (8 Nm) and extension (6 Nm). Spinal segment kinematic data were captured using a motion tracking system. A 3D finite element lumbar spine model (L1-S1) containing the vertebral bodies, discs, and ligaments was constructed and used to simulate unilateral and bilateral laminotomies at L3-L4 and L4-L5. The bottom surface of the S1 vertebral body was fully geometrically constrained, and a 10 Nm pure moment was applied to the top surface of the L1 vertebral body to drive the lumbar spine in flexion and extension. The experimental results showed that in flexion, the ROMs (±standard deviation) of L3-L4 were 1.35±0.23, 1.34±0.67, and 1.66±0.07 degrees for the intact, unilateral, and bilateral laminotomy conditions, respectively. The ROMs of L4-L5 were 4.35±0.29, 4.06±0.87, and 4.2±0.32 degrees, respectively.
No statistical significance was observed among these three groups (P>0.05). In extension, the ROMs of L3-L4 were 0.89±0.16, 1.69±0.08, and 1.73±0.13 degrees, respectively; at L4-L5, the ROMs were 1.4±0.12, 2.44±0.26, and 2.5±0.29 degrees, respectively. Significant differences were observed among all trials, except between the unilateral and bilateral laminotomy groups. The simulation results were similar to the experiments: no significant differences were found at L4-L5 in either flexion or extension in each group, and only 0.02 and 0.04 degrees of variation were observed during flexion and extension, respectively, between the unilateral and bilateral laminotomy groups. In conclusion, the present finite element and experimental results reveal no significant differences between unilateral and bilateral laminotomies during flexion and extension in short-term follow-up. From a biomechanical point of view, bilateral laminotomies seem to exhibit stability similar to unilateral laminotomy. In clinical practice, bilateral laminotomies are likely to reduce technical difficulties and prevent perioperative complications; this study demonstrates this benefit through biomechanical analysis. These results may provide recommendations for surgeons making the final decision.

Keywords: unilateral laminotomy, bilateral laminotomies, spinal stenosis, finite element analysis

Procedia PDF Downloads 377