Search results for: marketing performance output factors
1075 The Increasing Trend in Research Among Orthopedic Residency Applicants is Significant to Matching: A Retrospective Analysis
Authors: Nickolas A. Stewart, Donald C. Hefelfinger, Garrett V. Brittain, Timothy C. Frommeyer, Adrienne Stolfi
Abstract:
Orthopedic surgery is currently considered one of the most competitive specialties to which medical students can apply for residency training. As evidenced by increasing United States Medical Licensing Examination (USMLE) scores, overall grades, and numbers of publications, presentations, and abstracts, this specialty is becoming increasingly competitive. The recent change of USMLE Step 1 scores to pass/fail has created additional challenges for medical students planning to apply for orthopedic residency. Until now, these scores have been a tool used by residency programs to screen applicants as an initial factor in determining the strength of their application. With USMLE Step 1 converting to pass/fail grading, the question remains as to what will take its place on the ERAS application. The primary objective of this study is to determine the trends in the number of research projects, abstracts, presentations, and publications among orthopedic residency applicants. Secondly, this study seeks to determine whether there is a relationship between these numbers and match rates. The researchers utilized the National Resident Matching Program's Charting Outcomes in the Match between 2007 and 2022 to identify the mean numbers of publications and research projects of allopathic and osteopathic US orthopedic surgery senior applicants. A paired t-test was performed between the mean numbers of publications and research projects of matched and unmatched applicants. Additionally, simple linear regressions within matched and unmatched applicants were used to determine the association between year and the number of abstracts, presentations, publications, and research projects.
To determine whether the increase in the number of abstracts, presentations, publications, and research projects differs significantly between matched and unmatched applicants, an analysis of covariance was used with an interaction term added to the model, which represents the test for the difference between the slopes of the two groups. The data show that from 2007 to 2022, the average number of research publications increased from 3 to 16.5 for matched orthopedic surgery applicants. The paired t-test yielded a significant p-value of 0.006 for the number of research publications between matched and unmatched applicants. In conclusion, the average number of publications for orthopedic surgery applicants increased significantly for both matched and unmatched applicants from 2007 to 2022. Moreover, this increase has accelerated in recent years, as evidenced by an increase of only 1.5 publications from 2007 to 2011 versus 5.0 publications from 2018 to 2022. The number of abstracts, presentations, and publications is a significant factor in an applicant's likelihood of successfully matching into an orthopedic residency program. With USMLE Step 1 converted to pass/fail, the researchers expect students and program directors to place increased importance on additional factors that can help applicants stand out. This study demonstrates that research will be a primary component in stratifying future orthopedic surgery applicants and suggests that the average number of research publications will continue to accelerate. Further study is required to determine whether this growth is sustainable.
Keywords: publications, orthopedic surgery, research, residency applications
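The statistical pipeline described in this abstract (a paired t-test on the two groups' yearly means, plus an ANCOVA interaction term testing whether the two trend slopes differ) can be sketched as follows. The data here are illustrative values loosely shaped like the reported trend, not the study's actual counts:

```python
import numpy as np
from scipy import stats

# Illustrative yearly mean publication counts (2007-2022), loosely shaped
# like the reported trend (3 -> 16.5 for matched applicants); not the
# study's actual data.
years = np.arange(2007, 2023)
rng = np.random.default_rng(0)
matched = np.linspace(3.0, 16.5, years.size) + rng.normal(0, 0.3, years.size)
unmatched = np.linspace(2.0, 10.0, years.size) + rng.normal(0, 0.3, years.size)

# Paired t-test between the two groups' yearly means.
t_stat, p_val = stats.ttest_rel(matched, unmatched)

# ANCOVA-style slope-difference test: regress counts on year, group, and
# their interaction; the interaction coefficient equals the difference
# between the two groups' slopes.
y = np.concatenate([matched, unmatched])
year = np.concatenate([years, years]).astype(float)
group = np.repeat([1.0, 0.0], years.size)
X = np.column_stack([np.ones_like(year), year, group, year * group])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
interaction_slope = beta[3]  # matched slope minus unmatched slope
```

A significant, positive interaction coefficient corresponds to the abstract's finding that the matched applicants' publication trend rises faster than the unmatched applicants'.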
Procedia PDF Downloads 131
1074 A Protocol Study of Accessibility: Physician’s Perspective Regarding Disability and Continuum of Care
Authors: Sidra Jawed
Abstract:
The accessibility constructs and the body-privilege discourse have been major problems in dealing with health inequities and inaccessibility. The inherent flaw in this arbitrary view of disability is the assumption that disability can never be a productive way of living. For the past thirty years, disability activists have worked to differentiate ‘impairment’ from ‘disability’ and to probe the limitations imposed by society; this notion is ultimately known as the Social Model of Disability. The disability community remains a marginalized, vulnerable population, relentlessly fighting to highlight the importance of social factors. These factors comprise not only physical architectural barriers and the famous blue symbol of access to healthcare, but also invisible, intangible barriers such as attitudes and behaviours. Conventionally, the idea of ‘disability’ has been laden with prejudiced perceptions amalgamated with biased attitudes. Equity in the contemporary setup necessitates the restructuring of organizational structures. Though apparently simple, the complex interplay of disability and the contemporary healthcare setup often ends up negotiating vital components of basic healthcare needs. The role of society is indispensable when it comes to people with disability (PWD): everything from access to healthcare to timely interventions is strongly related to the setup in place and the attitudes of healthcare providers. It is vital to understand the association between assumptions and the quality of healthcare PWD receive in our global healthcare setup. Most of the time, the crucial physician-patient relationship with PWD is governed by the negative assumptions of the physicians. This multifaceted, troubled patient-physician relationship has been neglected in the past. To compound it, insufficient work has been done to explore physicians’ perspectives on disability and the access to healthcare PWD currently have.
This research project is directed towards physicians’ perspectives on the intersection of health and access to healthcare for PWD. The principal aim of the study is to explore the perception of disability among family medicine physicians, highlighting the underpinnings of the medical perspective in healthcare institutions. In the quest to remove barriers, the first step must be to identify them and formulate a plan for future policies involving all the stakeholders. Semi-structured interviews will explore themes such as accessibility, medical training, the constructs of the social and medical models of disability, time limitations, and financial constraints. The main research interest is to identify the obstacles to inclusion and the marginalization that extends from basic living necessities to wide health inequity in present society. Physicians’ point of view is largely missing from the research landscape and the current forum of knowledge. This research will provide policy makers with a starting point and comprehensive background knowledge that can be a stepping stone for future research and can further the knowledge translation process to strengthen healthcare. Additionally, it would facilitate the much-needed knowledge translation between the medical and disability communities.
Keywords: disability, physicians, social model, accessibility
Procedia PDF Downloads 224
1073 Invasive Asian Carp Fish Species: A Natural and Sustainable Source of Methionine for Organic Poultry Production
Authors: Komala Arsi, Ann M. Donoghue, Dan J. Donoghue
Abstract:
Methionine is an essential dietary amino acid necessary to promote growth and health in poultry. Synthetic methionine is commonly used as a supplement in conventional poultry diets and is temporarily allowed in organic poultry feed for lack of natural, organically approved sources of methionine. Finding a natural, sustainable, and cost-effective source of methionine has been a challenge, which reiterates the pressing need to explore potential alternatives for organic poultry production. Fish have high concentrations of methionine, but wild-caught fish are expensive and their harvest adversely impacts wild fish populations. Asian carp (AC) is an invasive species, and its utilization has the potential to provide a natural methionine source. However, to the best of our knowledge, there is no proven technology for utilizing this fish as a methionine source. In this study, we co-extruded Asian carp and soybean meal to form a dry-extruded, methionine-rich AC meal. In order to formulate rations with the novel extruded carp meal, the product was tested on cecectomized roosters for its amino acid digestibility and true metabolizable energy (TMEn). Excreta were collected, and their gross energy and protein content were determined to calculate the TMEn. The methionine content, digestibility, and TMEn values were greater for the extruded AC meal than for control diets. Carp meal was subsequently tested as a methionine source in feeds formulated for broilers, and production performance (body weight gain and feed conversion ratio) was assessed in comparison with broilers fed standard commercial diets supplemented with synthetic methionine. In this study, broiler chickens were fed either a control diet with synthetic methionine or a treatment diet with extruded AC meal (8 replicates/treatment; n=30 birds/replicate) from day 1 to 42 days of age.
At the end of the trial, data on body weights, feed intake, and feed conversion ratio (FCR) were analyzed using one-way ANOVA with Fisher's LSD test for multiple comparisons. Results revealed that birds on the AC diet had body weight gains and feed intake comparable to those on diets containing synthetic methionine (P > 0.05). Results from the study suggest that invasive AC-derived fish meal could potentially be an effective and inexpensive source of sustainable natural methionine for organic poultry farmers.
Keywords: Asian carp, methionine, organic, poultry
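The diet comparison described here (one-way ANOVA on production performance) can be sketched as follows, with purely hypothetical weight gains standing in for the trial data. With only two treatment groups, the ANOVA is equivalent to a t-test, so Fisher's LSD step is not needed:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical 42-day body weight gains (kg) for the 8 replicates per diet;
# illustrative values only, not the trial's measurements.
control_diet = rng.normal(2.80, 0.10, 8)  # synthetic methionine diet
ac_meal_diet = rng.normal(2.80, 0.10, 8)  # extruded Asian carp meal diet

# One-way ANOVA across the diet groups; P > 0.05 would indicate
# comparable performance, matching the abstract's conclusion.
f_stat, p_val = stats.f_oneway(control_diet, ac_meal_diet)
```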
Procedia PDF Downloads 158
1072 Bidirectional Pendulum Vibration Absorbers with Homogeneous Variable Tangential Friction: Modelling and Design
Authors: Emiliano Matta
Abstract:
Passive resonant vibration absorbers are among the most widely used dynamic control systems in civil engineering. They typically consist of a single-degree-of-freedom mechanical appendage of the main structure, tuned to one structural target mode through frequency and damping optimization. One classical scheme is the pendulum absorber, whose mass is constrained to move along a curved trajectory and is damped by viscous dashpots. Even though the principle is well known, the search for improved arrangements is still under way. In recent years this investigation inspired a type of bidirectional pendulum absorber (BPA), consisting of a mass constrained to move along an optimal three-dimensional (3D) concave surface. For such a BPA, the surface principal curvatures are designed to ensure a bidirectional tuning of the absorber to both principal modes of the main structure, while damping is produced either by horizontal viscous dashpots or by vertical friction dashpots connecting the BPA to the main structure. In this paper, a variant of the BPA is proposed, in which damping originates from the variable tangential friction force that develops between the pendulum mass and the 3D surface as a result of a spatially varying friction coefficient pattern. Namely, a friction coefficient is proposed that varies along the pendulum surface in proportion to the modulus of the 3D surface gradient. Under this assumption, the dissipative model of the absorber can be proven to be nonlinear homogeneous in the small-displacement domain. The resulting homogeneous BPA (HBPA) has a fundamental advantage over conventional friction-type absorbers, because its equivalent damping ratio is independent of the amplitude of oscillations, and therefore its optimal performance does not depend on the excitation level. On the other hand, the HBPA is more compact than viscously damped BPAs because it does not require the installation of dampers.
This paper presents the analytical model of the HBPA and an optimal methodology for its design. Numerical simulations of single- and multi-story building structures under wind and earthquake loads are presented to compare the HBPA with classical viscously damped BPAs. It is shown that the HBPA is a promising alternative to existing BPA types and that homogeneous tangential friction is an effective means to realize systems provided with amplitude-independent damping.
Keywords: amplitude-independent damping, homogeneous friction, pendulum nonlinear dynamics, structural control, vibration resonant absorbers
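The homogeneity property behind the HBPA's amplitude-independent damping can be illustrated with a minimal numeric sketch. Assuming a paraboloidal surface and an illustrative proportionality constant k (neither taken from the paper), a friction coefficient proportional to the surface-gradient modulus makes the ratio of the friction force to the gravity restoring force constant at every amplitude:

```python
import numpy as np

def surface_gradient(x, y, R1=2.0, R2=3.0):
    # Paraboloidal pendulum surface z = x^2/(2*R1) + y^2/(2*R2)
    # (illustrative curvature radii, not the paper's design values).
    return np.array([x / R1, y / R2])

def friction_coefficient(x, y, k=0.15):
    # Proposed law: mu varies in proportion to |grad z|.
    return k * np.linalg.norm(surface_gradient(x, y))

# Homogeneity check: scaling the displacement scales both the friction
# force (mu * N, with N approximately m*g for small displacements) and the
# linear restoring force (m*g*|grad z|) by the same factor, so their
# ratio, i.e. the equivalent damping, is amplitude-independent.
g, m = 9.81, 100.0
ratios = []
for scale in (1.0, 2.0, 5.0):
    x, y = 0.01 * scale, 0.02 * scale
    friction = friction_coefficient(x, y) * m * g
    restoring = m * g * np.linalg.norm(surface_gradient(x, y))
    ratios.append(friction / restoring)  # constant (= k) for every amplitude
```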
Procedia PDF Downloads 149
1071 Using Differentiated Instruction Applying Cognitive Approaches and Strategies for Teaching Diverse Learners
Authors: Jolanta Jonak, Sylvia Tolczyk
Abstract:
Educational systems are tasked with preparing students for future success in academic or work environments. Schools strive to achieve this goal, but it is often challenging, as conventional teaching approaches are frequently ineffective in increasingly diverse educational systems. In today's ever more global society, educational systems are becoming increasingly diverse in terms of cultural and linguistic differences, learning preferences and styles, and ability and disability. Through increased understanding of disabilities and improved identification processes, students with some form of disability tend to be identified earlier than in the past, meaning that more students with identified disabilities are being supported in our classrooms. Also, a large majority of students with disabilities are educated in general education environments. Due to their cognitive makeup and life experiences, students have varying learning styles and preferences that impact how they receive and express what they are learning. Many students come from bi- or multilingual households with varying proficiencies in the English language, further impacting their learning. All these factors need to be seriously considered when developing learning opportunities for students. Educators try to adjust their teaching practices as they discover that conventional methods are often ineffective in reaching each student's potential. Many teachers do not have the educational background or training needed to teach students whose learning needs are more unique and may vary from the norm. This is further complicated by the fact that many classrooms lack consistent access to interventionists or coaches adequately trained in evidence-based approaches to meet the needs of all students, regardless of what their academic needs may be.
One evidence-based way to provide successful education for all students is to incorporate cognitive approaches and strategies that tap into the affective, recognition, and strategic networks in the student's brain. This can be done through Differentiated Instruction (DI). Differentiated Instruction is an increasingly recognized model established on the basic principles of Universal Design for Learning. This form of support ensures that regardless of the students' learning preferences and cognitive learning profiles, they have opportunities to learn through approaches that are suitable to their needs. This approach improves the educational outcomes of students with special needs, and it benefits other students as well, since it accommodates learning styles and the scope of unique learning needs evident in the typical classroom setting. Differentiated Instruction is also recognized as an evidence-based best practice in education and is highly effective when implemented within the tiered system of the Response to Intervention (RTI) model. Recognition of DI is becoming more common; however, there is still limited understanding of the effective implementation and use of strategies that can create unique learning environments for each student within the same setting. By employing knowledge of a variety of instructional strategies, general and special education teachers can facilitate optimal learning for all students, with and without a disability. A desired byproduct of DI is that it can eliminate inaccurate perceptions about students' learning abilities, unnecessary referrals for special education evaluations, and inaccurate decisions about the presence of a disability.
Keywords: differentiated instruction, universal design for learning, special education, diversity
Procedia PDF Downloads 222
1070 3D-Mesh Robust Watermarking Technique for Ownership Protection and Authentication
Authors: Farhan A. Alenizi
Abstract:
Digital watermarking has evolved in recent years into an important means of data authentication and ownership protection. Image and video watermarking are well established in the field of multimedia processing; 3D-object watermarking techniques have since emerged for the same purposes, as 3D mesh models are in increasing use in scientific, industrial, and medical applications. Like image watermarking techniques, 3D watermarking can take place in either the spatial or the transform domain. Unlike images and video, where the frames have regular structures in both the spatial and temporal domains, 3D objects are represented as meshes that are basically irregular samplings of surfaces; moreover, meshes can undergo a large variety of alterations that may be hard to tackle. This makes the watermarking process more challenging. While transform-domain watermarking is preferable for images and videos, it is still difficult to implement for 3D meshes due to the huge number of vertices involved and the complicated topology and geometry, and hence the difficulty of performing the spectral decomposition, even though significant work has been done in the field. Spatial-domain watermarking has attracted significant attention in the past years; such methods can act either on the topology or on the geometry of the model. Exploiting the statistical characteristics of 3D mesh models, from both geometrical and topological aspects, has proven useful for hiding data. However, doing so with minimal surface distortion to the mesh has attracted significant research in the field. A blind 3D mesh watermarking technique is proposed in this research. The watermarking method depends on modifying the vertices' positions with respect to the center of the object.
An optimization method is developed to reduce the errors, minimizing the distortions that the 3D object may experience due to the watermarking process and reducing the computational complexity due to the iterations and other factors. The technique relies on displacing the vertices' locations by modifying the variances of the vertices' norms. Statistical analyses were performed to establish the distributions that best fit each mesh, and hence to establish the bin sizes. Several optimizing approaches were introduced in the realms of mesh local roughness, the statistical distributions of the norms, and the displacements of the mesh centers. To evaluate the algorithm's robustness against common geometry and connectivity attacks, the watermarked objects were subjected to uniform noise, Laplacian smoothing, vertex quantization, simplification, and cropping. Experimental results showed that the approach is robust in terms of both perceptual and quantitative qualities, and against both geometry and connectivity attacks. Moreover, the probability of true-positive detection was evaluated against the probability of false-positive detection. To validate the accuracy of the test cases, receiver operating characteristic (ROC) curves were drawn, and they showed robustness in this respect as well. 3D watermarking is still a new field, but a promising one.
Keywords: watermarking, mesh objects, local roughness, Laplacian smoothing
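A schematic of the kind of norm-based embedding described in this abstract is sketched below. It is a deliberately simplified stand-in, not the paper's algorithm: a single bit is embedded by rescaling the variance of the vertex norms about the mesh center, whereas the actual method works bin-wise with statistically fitted distributions and roughness-aware optimization:

```python
import numpy as np

rng = np.random.default_rng(7)
# Toy "mesh": random vertices standing in for a real 3D model.
vertices = rng.normal(size=(500, 3))

center = vertices.mean(axis=0)
norms = np.linalg.norm(vertices - center, axis=1)

def embed_bit(norms, bit, strength=0.05):
    # Nudge the variance of the vertex norms up (bit=1) or down (bit=0)
    # while preserving their mean, so the object's overall size is kept.
    mean = norms.mean()
    factor = 1 + strength if bit else 1 - strength
    return mean + (norms - mean) * factor

watermarked = embed_bit(norms, bit=1)
# Blind-style detection against the (assumed known) reference variance.
detected_bit = int(watermarked.var() > norms.var())
```

Scaling deviations about the mean multiplies the variance by factor squared, which is what the detector reads back; the real scheme additionally maps the modified norms to displaced vertex positions.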
Procedia PDF Downloads 161
1069 A Methodology to Virtualize Technical Engineering Laboratories: MastrLAB-VR
Authors: Ivana Scidà, Francesco Alotto, Anna Osello
Abstract:
Due to the importance given today to innovation, the education sector is evolving thanks to digital technologies. Virtual Reality (VR) can be a powerful teaching tool offering many advantages in the field of training and education, as it allows learners to acquire theoretical knowledge and practical skills through an immersive experience in less time than the traditional educational process. These assumptions lay the foundations for a new educational environment that is engaging and stimulating for students. Starting from the objective of strengthening the innovative teaching offer and the learning processes, the case study of this research concerns the digitalization of MastrLAB, a High Quality Laboratory (HQL) belonging to the Department of Structural, Building and Geotechnical Engineering (DISEG) of the Polytechnic of Turin, a center specialized in experimental mechanical tests on traditional and innovative building materials and on the structures made with them. MastrLAB-VR has been developed, an innovative training tool designed with the aim of educating the class, in total safety, on the techniques of using machinery, thus reducing the dangers arising from the performance of potentially dangerous activities. The virtual laboratory, dedicated to the students of the Building and Civil Engineering Courses of the Polytechnic of Turin, has been designed to simulate in a realistic way the experimental approach to the structural tests foreseen in their courses of study: from tensile tests to relaxation tests, from steel qualification tests to resilience tests on elements at ambient conditions or at characterizing temperatures. The research work proposes a methodology for the virtualization of technical laboratories through the application of Building Information Modelling (BIM), starting from the creation of a digital model.
The process includes the creation of a standalone application which, with Oculus Rift technology, allows the user to explore the environment and interact with objects through the use of joypads. The application has been tested as a prototype on volunteers; the acquisition of the educational notions presented in the experience was assessed through a multiple-choice virtual quiz, producing an overall evaluation report. The results have shown that MastrLAB-VR is suitable for both beginners and experts and will be adopted experimentally for other laboratories of the University departments.
Keywords: building information modelling, digital learning, education, virtual laboratory, virtual reality
Procedia PDF Downloads 131
1068 Nanoporous Activated Carbons for Fuel Cells and Supercapacitors
Authors: A. Volperts, G. Dobele, A. Zhurinsh, I. Kruusenberg, A. Plavniece, J. Locs
Abstract:
Nowadays, energy consumption is constantly increasing, and the development of effective and cheap electrochemical power sources, such as fuel cells and electrochemical capacitors, is topical. Due to their high specific power, fast charge and discharge rates, and long working lifetime, supercapacitor-based energy accumulation systems are more and more extensively used in mobile and stationary devices. Lignocellulosic materials are widely used as precursors and account for around 45% of the total raw materials used for the manufacture of activated carbon, which is the most suitable material for supercapacitors. The first part of our research is devoted to studying how the parameters of the main stages of wood thermochemical activation influence the formation of the porous structure of activated carbons. It was found that the main factors governing the properties of carbon materials are specific surface area, pore volume and pore size distribution, particle dispersity, ash content, and the content of oxygen-containing groups. The influence of the activated carbons' attributes on the capacitance and working properties of supercapacitors is demonstrated. The correlation between the porous structure indices of the activated carbons and the electrochemical specifications of supercapacitors with electrodes made from these materials has been determined. It is shown that when the synthesized activated carbons are used in supercapacitors, high specific capacitances can be reached: more than 380 F/g in 4.9 M sulfuric acid based electrolytes and more than 170 F/g in 1 M tetraethylammonium tetrafluoroborate in acetonitrile electrolyte. The power specifications and minimal price of H₂-O₂ fuel cells are limited by the expensive platinum-based catalysts. The main direction in the development of non-platinum catalysts for oxygen reduction is the study of cheap porous carbonaceous materials, which can be obtained by the pyrolysis of polymers, including renewable biomass.
It is known that nitrogen atoms in carbon materials to a high degree determine the properties of doped activated carbons, such as high electrochemical stability, hardness, and electric resistance. The lack of sufficient knowledge on the doping of carbon materials calls for ongoing research into the properties and structure of the modified carbon matrix. In the second part of this study, highly porous activated carbons were synthesized by alkali thermochemical activation from wood, cellulose, and cellulose production residues: kraft lignin and sewage sludge. Activated carbon samples were doped with dicyandiamide and melamine for application as fuel cell cathodes. The conditions of nitrogen introduction (solvent, treatment temperature) and its content in the carbonaceous material, as well as porous structure characteristics such as specific surface area and pore size distribution, were studied. It was found that the efficiency of the doping reaction depends on the elemental oxygen content of the activated carbon. Relationships between nitrogen content, porous structure characteristics, and the electrochemical properties of the electrodes are demonstrated.
Keywords: activated carbons, low-temperature fuel cells, nitrogen doping, porous structure, supercapacitors
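Specific capacitances such as the 380 F/g reported above are conventionally estimated from galvanostatic charge-discharge curves via C = I·Δt/(m·ΔV). A minimal sketch with illustrative numbers, not measurements from this work:

```python
def specific_capacitance(current_a, discharge_time_s, mass_g, delta_v):
    # Galvanostatic estimate C = I * dt / (m * dV) in F/g: discharge
    # current (A), discharge duration (s), active electrode mass (g),
    # and the voltage window swept (V).
    return current_a * discharge_time_s / (mass_g * delta_v)

# Illustrative numbers: a 1 g electrode discharged at 1 A over 380 s
# across a 1.0 V window.
c_spec = specific_capacitance(1.0, 380.0, 1.0, 1.0)  # -> 380.0 F/g
```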
Procedia PDF Downloads 121
1067 Private Coded Computation of Matrix Multiplication
Authors: Malihe Aliasgari, Yousef Nejatbakhsh
Abstract:
The era of Big Data and the immensity of real-life datasets compel computation tasks to be performed in a distributed fashion, where the data are dispersed among many servers that operate in parallel. However, massive parallelization leads to computational bottlenecks due to faulty servers and stragglers. Stragglers are a few slow or delay-prone processors that can bottleneck the entire computation, because one has to wait for all the parallel nodes to finish. The problem of straggling processors has been well studied in the context of distributed computing. Recently, it has been pointed out that, for the important case of linear functions, it is possible to improve over repetition strategies in terms of the tradeoff between performance and latency by carrying out linear precoding of the data prior to processing. The key idea is that, by employing suitable linear codes operating over fractions of the original data, a function may be completed as soon as a sufficient number of processors, depending on the minimum distance of the code, have completed their operations. Matrix-matrix multiplication on practically large data sets faces computational and memory-related difficulties, which is why such operations are carried out on distributed computing platforms. In this work, we study the problem of distributed matrix-matrix multiplication W = XY under storage constraints, i.e., when each server is allowed to store a fixed fraction of each of the matrices X and Y. This operation is a fundamental building block of many fields of science and engineering, such as machine learning, image and signal processing, wireless communication, and optimization. Both non-secure and secure matrix multiplication are studied.
We study the setup in which the identity of the matrix of interest should be kept private from the workers, and we obtain the recovery threshold of the colluding model, that is, the number of workers that need to complete their task before the master server can recover the product W. We also consider the problem of secure and private distributed matrix multiplication W = XY, in which the matrix X is confidential, while matrix Y is selected in a private manner from a library of public matrices. We present the best currently known trade-off between communication load and recovery threshold. In other words, we design an achievable PSGPD scheme for any arbitrary privacy level by concatenating a robust PIR scheme for arbitrary colluding workers and private databases with the proposed SGPD code, which provides a smaller computational complexity at the workers.
Keywords: coded distributed computation, private information retrieval, secret sharing, stragglers
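The straggler-tolerant linear precoding idea described above can be illustrated with a toy MDS-style code. This is a generic sketch of coded matrix multiplication, not the paper's PSGPD/SGPD construction: X is split into k = 2 row blocks, encoded into n = 4 coded blocks via Vandermonde combinations, and the master recovers W = XY from any k completed workers:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))
Y = rng.normal(size=(3, 2))

# Split X into k=2 row blocks and encode them into n=4 coded blocks with
# a Vandermonde (MDS-style) code: worker i stores X1 + a_i * X2.
k, n = 2, 4
X1, X2 = X[:2], X[2:]
evals = np.arange(1, n + 1, dtype=float)
coded = [X1 + a * X2 for a in evals]

# Each worker computes its coded block times Y. Suppose only workers 0
# and 2 finish (the rest straggle); any k=2 results suffice to decode.
results = {0: coded[0] @ Y, 2: coded[2] @ Y}
A = np.array([[1.0, evals[i]] for i in results])  # decoding system rows
stacked = np.stack(list(results.values()))
coeffs = np.linalg.inv(A)
W1 = coeffs[0, 0] * stacked[0] + coeffs[0, 1] * stacked[1]  # = X1 @ Y
W2 = coeffs[1, 0] * stacked[0] + coeffs[1, 1] * stacked[1]  # = X2 @ Y
W = np.vstack([W1, W2])  # recovered product X @ Y
```

The recovery threshold here is k = 2: the minimum distance of the Vandermonde code guarantees that any two of the four coded results determine X1·Y and X2·Y.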
Procedia PDF Downloads 125
1066 Governance Models of Higher Education Institutions
Authors: Zoran Barac, Maja Martinovic
Abstract:
Higher Education Institutions (HEIs) are a special kind of organization, with a unique purpose and combination of actors. From the societal point of view, they are central institutions involved in the activities of education, research, and innovation. At the same time, their societal function gives rise to complex relationships among the involved actors, ranging from students, faculty, and administration to the business community and corporate partners, government agencies, and the general public. HEIs are also particularly interesting as objects of governance research because of their unique public purpose and combination of stakeholders. Furthermore, they are a special type of institution from an organizational viewpoint: HEIs are often described as “loosely coupled systems” or “organized anarchies”, which implies the challenging nature of their governance models. Governance models of HEIs describe the roles, constellations, and modes of interaction of the involved actors in the process of strategic direction and holistic control of institutions, taking each particular context into account. Many governance models of HEIs are primarily based on the balance of power among the involved actors. Besides the actors’ power and influence, leadership style and environmental contingency can shape the governance model of an HEI. Analyzed through the frameworks of institutional and contingency theories, HEI governance models originate as outcomes of institutional and contingency adaptation. HEIs tend to fit the institutional context comprised of formal and informal institutional rules. By fitting the institutional context, HEIs converge toward each other in terms of their structures, policies, and practices. On the other hand, the contingency framework implies that there is no governance model that is suitable for all situations.
Consequently, the contingency approach begins with identifying the contingency variables that might impact a particular governance model. In order to be effective, the governance model should fit these contingency variables. While the institutional context creates converging forces on HEI governance actors and approaches, contingency variables are the causes of divergence in actors’ behavior and governance models. Finally, an HEI governance model is a balanced adaptation of the HEI to the institutional context and contingency variables. It also encompasses the roles, constellations, and modes of interaction of the involved actors as influenced by institutional and contingency pressures. The actors’ adaptation to the institutional context brings the benefits of legitimacy and resources, whereas their adaptation to the contingency variables brings high performance and effectiveness. The HEI governance models outlined and analyzed in this paper are the collegial, bureaucratic, entrepreneurial, network, professional, political, anarchical, cybernetic, trustee, stakeholder, and amalgam models.
Keywords: governance, governance models, higher education institutions, institutional context, situational context
Procedia PDF Downloads 337
1065 A New Method Separating Relevant Features from Irrelevant Ones Using Fuzzy and OWA Operator Techniques
Authors: Imed Feki, Faouzi Msahli
Abstract:
Selection of relevant parameters from a high-dimensional process operation setting space is a problem frequently encountered in industrial process modelling. This paper presents a method for selecting the most relevant fabric physical parameters for each sensory quality feature. The proposed relevancy criterion has been developed using two approaches. The first uses a fuzzy sensitivity criterion that exploits, from experimental data, the relationship between the physical parameters and all the sensory quality features for each evaluator. An OWA aggregation procedure is then applied to aggregate the ranking lists provided by the different evaluators. In the second approach, another panel of experts provides ranking lists of physical features according to their professional knowledge. Again applying OWA and a fuzzy aggregation model, the data-sensitivity-based ranking list and the knowledge-based ranking list are combined using our proposed percolation technique to determine the final ranking list. The key idea of the percolation technique is to filter the relevant features automatically and objectively by creating a gap between the scores of relevant and irrelevant parameters. It generates thresholds automatically, which effectively reduces the human subjectivity and arbitrariness involved in choosing thresholds manually. For a specific sensory descriptor, the threshold is defined systematically by iteratively aggregating (n times) the ranking lists generated by the OWA and fuzzy models, according to a specific algorithm. Having applied the percolation technique to a real example, stonewashed denim, a well-known finished textile product whose sensory quality is usually considered among the most important criteria in jeans' evaluation, we separate the relevant physical features from irrelevant ones for each sensory descriptor.
The originality and performance of the proposed relevant-feature selection method are shown by the variability in the number of physical features in the selected sets. Instead of selecting an identical number of features with a predefined threshold, the method adapts to the specific nature of the complex relations between sensory descriptors and physical features, proposing relevant-feature lists of different sizes for different descriptors. To obtain more reliable results in selecting relevant physical features, the percolation technique was applied to combine the fuzzy global relevancy and OWA global relevancy criteria, so as to clearly distinguish the scores of relevant physical features from those of irrelevant ones.
Keywords: data sensitivity, feature selection, fuzzy logic, OWA operators, percolation technique
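The OWA aggregation step described in this abstract can be sketched in a few lines of Python. This is a minimal illustration only: the scores, the weight vector, and the function name are invented for the example and are not taken from the paper, whose actual weight generation and iterative percolation procedure are more involved.

```python
# Sketch of an Ordered Weighted Averaging (OWA) aggregation step, as used
# above to combine evaluators' relevancy scores for one physical parameter.
# All numbers here are illustrative, not the study's data.

def owa(scores, weights):
    """Aggregate a list of scores with an OWA operator.

    The scores are first sorted in descending order; each position in the
    sorted list (not each source) then receives the corresponding weight.
    """
    if abs(sum(weights) - 1.0) > 1e-9:
        raise ValueError("OWA weights must sum to 1")
    ordered = sorted(scores, reverse=True)
    return sum(w * s for w, s in zip(weights, ordered))

# Relevancy scores for one physical parameter from three evaluators:
scores = [0.9, 0.4, 0.7]

# A weight vector biased toward the higher-ranked scores:
weights = [0.5, 0.3, 0.2]

print(owa(scores, weights))  # 0.9*0.5 + 0.7*0.3 + 0.4*0.2 = 0.74
```

Because the weights attach to rank positions rather than to particular evaluators, the same operator can emphasize optimistic, pessimistic, or average-like aggregation simply by reshaping the weight vector.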
Procedia PDF Downloads 605
1064 Festival Gamification: Conceptualization and Scale Development
Authors: Liu Chyong-Ru, Wang Yao-Chin, Huang Wen-Shiung, Tang Wan-Ching
Abstract:
Although gamification has attracted attention and been applied in the tourism industry, the tourism literature on it remains limited. To contribute knowledge on festival gamification, it is therefore essential to start by establishing a Festival Gamification Scale (FGS). This study defines festival gamification as the extent to which a festival involves game elements and game mechanisms. Based on self-determination theory, this study developed an FGS through a multi-study method. In study one, five FGS dimensions were derived from a literature review, followed by twelve in-depth interviews. A total of 296 statements were extracted from the interviews and later narrowed down to 33 items under six dimensions. In study two, 226 survey responses were collected from a cycling festival for exploratory factor analysis, resulting in twenty items under five dimensions. In study three, 253 survey responses were obtained from a marathon festival for confirmatory factor analysis, resulting in the final sixteen items under five dimensions. Results of criterion-related validity then confirmed the positive effects of these five dimensions on flow experience. In study four, to examine the model extension of the developed five-dimensional, 16-item FGS, comprising the dimensions of relatedness, mastery, competence, fun, and narratives, cross-validation analysis was performed using 219 survey responses from a religious festival. For the tourism academy, the FGS could further be applied in other sub-fields such as destinations, theme parks, cruise trips, or resorts. The FGS serves as a starting point for examining the mechanism by which festival gamification changes tourists' attitudes and behaviors. Future studies could follow up on the FGS by testing outcomes of festival gamification or examining moderators that enhance those outcomes.
On the other hand, although the FGS has been tested in cycling, marathon, and religious festivals, all of these research settings are in Taiwan. Cultural differences in the FGS are another direction for contributing knowledge on festival gamification. This study also offers several valuable practical implications. First, the FGS could be used in tourist surveys to evaluate the extent of a festival's gamification. Based on the FGS performance assessment, festival management organizations and festival planners could compare relative scores across FGS dimensions and plan future improvements in gamifying the festival. Second, the FGS could be applied in positioning a gamified festival: festival management organizations and planners could first consider the features and type of their festival, and then gamify it by investing resources in the key FGS dimensions.
Keywords: festival gamification, festival tourism, scale development, self-determination theory
Procedia PDF Downloads 147
1063 Building on Previous Microvalving Approaches for Highly Reliable Actuation in Centrifugal Microfluidic Platforms
Authors: Ivan Maguire, Ciprian Briciu, Alan Barrett, Dara Kervick, Jens Ducrèe, Fiona Regan
Abstract:
With the ever-increasing myriad of applications of which microfluidic devices are capable, reliable fluidic actuation has remained fundamental to the success of these platforms. A number of approaches can be taken to integrate liquid actuation on microfluidic platforms, usually split into two primary categories: active microvalves and passive microvalves. Active microvalves require an externally induced change in a physical parameter for actuation to occur; passive microvalves actuate without external interaction, their natural physical parameters being overcome through interaction with the sample itself. The purpose of this paper is to illustrate how further improvements to past microvalve solutions can greatly enhance system reliability and performance, with both a novel active and a novel passive microvalve demonstrated. Covered within this scope are two alternative microvalve solutions for centrifugal microfluidic platforms: a revamped pneumatic dissolvable-film active microvalve (PAM) strategy and a spray-on, sol-gel-based hydrophobic passive microvalve (HPM) approach. Both the PAM and the HPM mechanisms were demonstrated on a centrifugal microfluidic platform consisting of alternating layers of 1.5 mm poly(methyl methacrylate) (PMMA) sheets (for reagent storage) and ~150 μm pressure-sensitive adhesive (PSA) sheets (for microchannel fabrication). The PAM approach differs from previous SOLUBON™ dissolvable-film methods by introducing a more reliable and predictable mechanism for delivering liquid to the microvalve site, thus significantly reducing premature activation. This approach has also shown excellent synchronicity when performed in multiplexed form. The HPM method utilises a new spray-on, low-curing-temperature (70°C) sol-gel material.
The resultant double-layer coating comprises a PMMA-adherent sol-gel as the bottom layer and an ultra-hydrophobic silica nanoparticle (SNP) film as the top layer. The optimal coating was integrated into microfluidic channels of varying cross-sectional area to assess the consistency of microvalve burst frequencies. It is hoped that these microvalving solutions, which can be easily added to centrifugal microfluidic platforms, will significantly improve automation reliability.
Keywords: centrifugal microfluidics, hydrophobic microvalves, lab-on-a-disc, pneumatic microvalves
Procedia PDF Downloads 189
1062 Radiation Stability of Structural Steel in the Presence of Hydrogen
Authors: E. A. Krasikov
Abstract:
As the service life of an operating nuclear power plant (NPP) increases, the degradation of aging components must receive more attention, and potential misunderstanding of it must be avoided. Integrity assurance analysis contributes to the effective maintenance of adequate plant safety margins. In essence, the reactor pressure vessel (RPV) is the key structural component determining the NPP lifetime. Environmentally induced cracking in the stainless steel corrosion-preventing cladding of RPVs has been recognized as one of the technical problems in the maintenance and development of light-water reactors. Extensive cracking leading to failure of the cladding was found after 13,000 net hours of operation in the JPDR (Japan Power Demonstration Reactor). Some of the cracks reached the base metal and penetrated further into the RPV in the form of localized corrosion. Failures of reactor internal components in both boiling water reactors and pressurized water reactors have increased after the accumulation of relatively high neutron fluences (5×10²⁰ cm⁻², E > 0.5 MeV). Therefore, in the case of cladding failure, the problem arises of hydrogen (as a corrosion product) embrittlement of irradiated RPV steel through exposure to the coolant. At present, despite notable progress in plasma physics, practical energy utilization from fusion reactors (FRs) is determined by the state of materials science problems. These include not only the routine problems of nuclear engineering but also a number of entirely new problems connected with the extreme conditions of materials operation: irradiation environment, hydrogenation, thermocycling, etc. The limited available data suggest that the combined effect of these factors is more severe than any one of them alone. To clarify the possible influence of these in-service synergistic phenomena on the properties of FR structural materials, we have studied hydrogen-irradiated steel interaction, including alternating hydrogenation and heat treatment (annealing).
Available information indicates that the life of the first wall could be extended by means of periodic in-place annealing. The effects of neutron fluence and irradiation temperature on steel/hydrogen interactions (adsorption, desorption, diffusion, mechanical properties at different loading velocities, post-irradiation annealing) were studied. Experiments clearly reveal that the higher the neutron fluence and the lower the irradiation temperature, the more hydrogen-radiation defects occur, with corresponding effects on the steel's mechanical properties. Hydrogen accumulation analyses and thermal desorption investigations were performed to establish evidence of hydrogen trapping at irradiation defects. Extremely high susceptibility to hydrogen embrittlement was observed in specimens irradiated at relatively low temperature; the susceptibility decreases with increasing irradiation temperature. To evaluate methods for assessing and predicting the residual lifetime of RPVs, more work should be done on the irradiated metal-hydrogen interaction in order to monitor the status of irradiated materials more reliably.
Keywords: hydrogen, radiation, stability, structural steel
Procedia PDF Downloads 273
1061 Single-Parent Families and Its Impact on the Psycho Child Development in Schools
Authors: Sylvie Sossou, Grégoire Gansou, Ildevert Egue
Abstract:
Introduction: The mission of the family and the school is to educate and train citizens of the city. But the family's values, parental roles, and respect for life are collapsing in their traditional African form. Indeed, laxity with regard to divorce and liberal ideas about child rearing influence the emotional life of the child. Several causes may contribute to the decline in academic performance. In order to seek a psychological solution to the issue, a study was conducted in 6 schools of the 9th district of Cotonou, a cosmopolitan city of Benin. Objective: To evaluate the impact of single parenthood on the child's psychological development. Materials and Methods: Questionnaires and interviews were used to gather verbal information. The questionnaires were administered to single parents and to their children aged 7 to 12 years (schoolchildren in the fourth, fifth, and sixth forms). The interviews were conducted with teachers and school leaders. We identified 209 children living with a single parent and 68 single parents. Results: Of the 209 children surveyed, 116 had the parental relational triangle broken in early childhood (before age 3). Psychologically, the separation caused sadness in 52 children, anger in 22, shame in 17, crying in 31, fear in 14, and silence in 58. Compared with children from intact families, these children experience feelings of aggression (11.48%), sadness (30.64%), shame (5.26%), tears (6.69%), jealousy (2.39%), and indifference (2.87%). For 44.15% of the children, the intention to marry one day is a challenge taken up in order to give their own offspring a happy childhood; 22.01% feel rejected, 11.48% are uncertain, and 25.36% gave no answer. 49.76% of the children want to see their family together; 7.65% are against it, to avoid disputes and, in many cases, to protect the mother from the father's physical abuse. 27.75% of the ex-partners decline responsibility in the care of the child.
Furthermore, family difficulties affect children's intellectual capacities: 37.32% of the children attribute their school difficulties to family problems, despite all the pressure from the single parent to see the child succeed. Single parenthood also affects intra-family relations: pressure (33.97%), nervousness (24.88%), overprotection (29.18%), and backbiting (11.96%) mark the lives of these families. Conclusion: At the end of the investigation, the results showed a relationship between psychological disorders, children's academic difficulties, and the quality of parental relationships. Other cases may exist, but limited resources meant that we were restricted to 6 schools. Early psychological treatment for these children is needed.
Keywords: single-parent, psycho child, school, Cotonou
Procedia PDF Downloads 391
1060 Critical Core Skills Profiling in the Singaporean Workforce
Authors: Bi Xiao Fang, Tan Bao Zhen
Abstract:
Soft skills, core competencies, and generic competencies are interchangeable terminologies often used to represent a similar concept. In the Singapore context, such skills are currently referred to as Critical Core Skills (CCS). In 2019, SkillsFuture Singapore (SSG) reviewed the Generic Skills and Competencies (GSC) framework first introduced in 2016, culminating in the development of the Critical Core Skills (CCS) framework comprising 16 soft skills classified into three clusters. The CCS framework is part of the Skills Framework, whose stated purpose is to create a common skills language for individuals, employers, and training providers. It was also developed with the objectives of building deep skills for a lean workforce, enhancing business competitiveness, and supporting employment and employability. This further helps to facilitate skills recognition and support the design of training programs for skills and career development. According to SSG, every job role requires a set of technical skills, the skills needed to perform the key tasks of the job, and a set of Critical Core Skills to perform well at work. There has been increasing emphasis on soft skills for the future of work. A recent study involving approximately 80 organizations across 28 sectors in Singapore revealed that more enterprises are beginning to recognize that soft skills support their employees' performance and business competitiveness. Though CCS are of high importance for the development of the workforce's employability, little attention has been paid to CCS use and profiling across occupations. A better understanding of how CCS are distributed across the economy would thus significantly enhance SSG's career guidance services as well as training providers' services to graduates and workers, and guide organizations in their hiring for soft skills. This CCS profiling study sought to understand how CCS are demanded in different occupations.
To achieve its research objectives, this study adopted a quantitative method to measure CCS use across different occupations in the Singaporean workforce. Based on the CCS framework developed by SSG, the research team took a formative approach to developing the CCS profiling tool, measuring both the importance of and self-efficacy in the use of CCS among the Singaporean workforce. Drawing on survey results from 2,500 participants, the study profiled respondents into seven occupation groups based on different patterns in the importance and confidence levels of CCS use. Each occupation group is labeled according to its most salient and demanded CCS. The CCS in each occupation group that may need further strengthening were also identified. The profiling of CCS use has significant implications for different stakeholders; for example, employers could leverage the profiling results to hire staff with the soft skills their jobs demand.
Keywords: employability, skills profiling, skills measurement, soft skills
Procedia PDF Downloads 96
1059 The Significance of Picture Mining in the Fashion and Design as a New Research Method
Authors: Katsue Edo, Yu Hiroi
Abstract:
Increasing attention has been paid to using pictures and photographs in social science research since the beginning of the 21st century. Meanwhile, we have been studying the usefulness of Picture Mining, one of these new picture-based research methods. Picture Mining is an explorative research analysis method that extracts useful information from pictures, photographs, and static or moving images. It is often compared with text mining methods. The Picture Mining concept includes observational research in the broad sense, because it also aims to analyze moving images (Ochihara and Edo 2013). In the recent literature, studies and reports using pictures are increasing due to environmental changes, both technological and social (Edo et al. 2013). Low-priced digital cameras and iPhones, high information transmission speeds, low costs for transferring information, and the high performance and resolution of mobile phone cameras have changed people's photographing behavior. Consequently, most people in developing countries now feel little resistance to taking and processing photographs. In these studies, this method of collecting data from respondents is often called 'participant-generated photography' or 'respondent-generated visual imagery', which focuses on the collection of data and its analysis (Pauwels 2011, Snyder 2012). But there are few systematic and conceptual studies that support the significance of these methods. In recent years we have worked to conceptualize these picture-based research methods and formalize theoretical findings (Edo et al. 2014). Inductively and through case studies, we have identified the fields where Picture Mining is most effective: 1) research in consumer and customer lifestyles; 2) new product development; 3) research in fashion and design.
Though we have found that Picture Mining should be useful in these fields and areas, we must verify these assumptions. In this study we focus on the field of fashion and design, to determine whether Picture Mining methods are truly reliable in this area. To do so, we conducted an empirical study of respondents' attitudes and behavior concerning pictures and photographs. We compared attitudes and behavior toward pictures of fashion with those toward pictures of meals, and found that taking pictures of fashion is not as easy as taking pictures of meals and food. Compared to meals and food, respondents seldom take pictures of fashion or upload them online, for instance to Facebook and Instagram, because of the difficulty of taking them. We conclude that pictures in the fashion area should be analyzed with more care, for some kind of bias might still exist even though the environment for pictures has changed drastically in recent years.
Keywords: empirical research, fashion and design, Picture Mining, qualitative research
Procedia PDF Downloads 363
1058 Radical Scavenging Activity of Protein Extracts from Pulse and Oleaginous Seeds
Authors: Silvia Gastaldello, Maria Grillo, Luca Tassoni, Claudio Maran, Stefano Balbo
Abstract:
Antioxidants are nowadays attractive not only for their countless benefits to human and animal health, but also for their prospective use as food preservatives in place of synthetic chemical molecules. In this study, the radical scavenging activity of six protein extracts from pulse and oleaginous seeds was evaluated. The selected matrices are Pisum sativum (yellow pea from two different origins), Carthamus tinctorius (safflower), Helianthus annuus (sunflower), Lupinus luteus cv. Mister (lupin), and Glycine max (soybean), since they are economically interesting for both human and animal nutrition. The seeds were ground and proteins extracted from 20 mg of powder with a specific vegetal-extraction kit. Proteins were quantified by the Bradford protocol, and scavenging activity was revealed using the DPPH assay, based on the decrease in absorbance of the DPPH radical (2,2-diphenyl-1-picrylhydrazyl) in the presence of antioxidant molecules. Different concentrations of the protein extract (1, 5, 10, 50, 100, 500 µg/ml) were mixed with DPPH solution (DPPH 0.004% in ethanol 70% v/v). Ascorbic acid was used as a scavenging activity standard reference at the same six concentrations as the protein extracts, while DPPH solution was used as control. Samples and standard were prepared in triplicate and incubated for 30 minutes in the dark at room temperature; absorbance was read at 517 nm (ABS30). The average and standard deviation of the absorbance values were calculated for each concentration of samples and standard. Statistical analysis using Student's t tests and p-values was performed to assess the significance of the difference in scavenging activity between the samples (or standard) and the control (ABSctrl). The percentage of antioxidant activity was calculated using the formula [(ABSctrl-ABS30)/ABSctrl]*100. The obtained results demonstrate that all matrices showed antioxidant activity. Ascorbic acid, used as standard, exhibits 96% scavenging activity at the concentration of 500 µg/ml.
Under the same conditions, sunflower, safflower, and yellow peas revealed the highest antioxidant performance among the matrices analyzed, with activities of 74%, 68%, and 70%, respectively (p < 0.005). Although lupin and soybean exhibit lower antioxidant activity than the other matrices, they still showed 46% and 36%, respectively. All these data suggest the possibility of using undervalued edible matrices as a source of antioxidants. However, further studies are necessary to investigate a possible synergistic effect of several matrices as well as the impact of industrial processes in a large-scale approach.
Keywords: antioxidants, DPPH assay, natural matrices, vegetal proteins
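The antioxidant-activity formula quoted above is easy to make concrete. The sketch below applies it to invented absorbance readings (the study's raw data are not reported in the abstract), averaging a hypothetical triplicate as described in the protocol.

```python
# Minimal sketch of the scavenging-activity formula from the abstract:
#     activity % = (ABSctrl - ABS30) / ABSctrl * 100
# All absorbance values below are illustrative, not the study's data.

def scavenging_activity(abs_ctrl, abs_30):
    """Percent radical scavenging from control and 30-min absorbances."""
    return (abs_ctrl - abs_30) / abs_ctrl * 100.0

# Control: DPPH solution alone, read at 517 nm.
abs_ctrl = 0.800

# Hypothetical triplicate readings for one extract concentration,
# averaged before applying the formula:
triplicate = [0.210, 0.205, 0.215]
mean_abs30 = sum(triplicate) / len(triplicate)

print(scavenging_activity(abs_ctrl, mean_abs30))
```

A lower 30-minute absorbance relative to the control means more DPPH radical was quenched, hence a higher scavenging percentage.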
Procedia PDF Downloads 433
1057 The Effect of Non-Surgical Periodontal Therapy on Metabolic Control in Children
Authors: Areej Al-Khabbaz, Swapna Goerge, Majedah Abdul-Rasoul
Abstract:
Introduction: The most prevalent periodontal disease among children is gingivitis, and it usually becomes more severe in adolescence. A number of intervention studies have suggested that resolution of periodontal inflammation can improve metabolic control in patients diagnosed with diabetes mellitus. Aim: To assess the effect of non-surgical periodontal therapy on the glycemic control of children diagnosed with diabetes mellitus. Method: Twenty-eight children with an established diagnosis of diabetes mellitus for at least one year were recruited. Informed consent and child assent forms were obtained from parents and children prior to enrolment. The dental examination of the participants was performed in the same week directly following their annual medical assessment. All patients had their glycosylated hemoglobin (HbA1c%) tested one week prior to their annual medical and dental visit and again 3 months after non-surgical periodontal therapy. All patients received a comprehensive periodontal examination. The periodontal assessment included clinical attachment loss, bleeding on probing, plaque score, plaque index, and gingival index. All patients were referred for non-surgical periodontal therapy, which included oral hygiene instruction and motivation followed by supragingival and subgingival scaling using ultrasonic and hand instruments. Statistical Analysis: Data were entered and analyzed using the Statistical Package for Social Science software (SPSS, Chicago, USA), version 18. Statistical analysis of the clinical findings was performed to detect differences between the two groups in terms of periodontal findings and HbA1c%. Binary logistic regression analysis was performed to examine which factors remained significant in multivariate analysis after adjusting for confounding effects.
The regression model used the dependent variable 'improved glycemic control'; the independent variables entered into the model were plaque index, gingival index, bleeding %, and plaque score. Statistical significance was set at p < 0.05. Results: A total of 28 children participated. The mean age of the participants was 13.3±1.92 years. The participants were divided into two groups: a compliant group (who received dental scaling) and a non-compliant group (who received oral hygiene instructions only). No statistical difference was found between the compliant and non-compliant groups in age, gender distribution, oral hygiene practice, or level of diabetes control. There was a significant difference between the compliant and non-compliant groups in the improvement of HbA1c% before and after periodontal therapy. Mean gingival index was the only variable significantly associated with improved glycemic control. In conclusion, this study has demonstrated that non-surgical mechanical periodontal therapy can improve HbA1c% control. The results confirm that children with diabetes mellitus who are compliant with dental care and have routine professional scaling may achieve better metabolic control than diabetic children who are erratic with dental care.
Keywords: children, diabetes, metabolic control, periodontal therapy
Procedia PDF Downloads 161
1056 Parallel Fuzzy Rough Support Vector Machine for Data Classification in Cloud Environment
Authors: Arindam Chaudhuri
Abstract:
Classification of data has been actively used as one of the most effective and efficient means of conveying knowledge and information to users. The primary focus has always been on techniques for extracting useful knowledge from data such that returns are maximized. With the emergence of huge datasets, existing classification techniques often fail to produce desirable results; the challenge lies in analyzing and understanding the characteristics of massive data sets by retrieving useful geometric and statistical patterns. We propose a supervised parallel fuzzy rough support vector machine (PFRSVM) for data classification in a cloud environment. The classification is performed by PFRSVM using a hyperbolic tangent kernel. The fuzzy rough set model takes care of sensitivity to noisy samples and handles impreciseness in the training samples, bringing robustness to the results. The membership function is a function of the center and radius of each class in feature space and is represented with the kernel; it plays an important role in sampling the decision surface. The success of PFRSVM is governed by choosing appropriate parameter values. The training samples are either linearly or nonlinearly separable, and the different input points make unique contributions to the decision surface. The algorithm is parallelized with a view to reducing training times, and the system is built on a support vector machine library using the Hadoop implementation of MapReduce. The algorithm is tested on large data sets to check its feasibility and convergence, and the performance of the classifier is also assessed in terms of the number of support vectors. The challenges encountered in implementing big data classification in machine learning frameworks are also discussed. The experiments were run in the cloud environment available at the University of Technology and Management, India, and the results are illustrated for Gaussian RBF and Bayesian kernels.
The variability in the prediction and generalization of PFRSVM is examined with respect to the values of the parameter C. The method effectively resolves the effects of outliers and the problems of imbalanced and overlapping classes, generalizes to unseen data, and relaxes the dependency between features and labels. The average classification accuracy of PFRSVM is better than that of other classifiers for both Gaussian RBF and Bayesian kernels. The experimental results on both synthetic and real data sets clearly demonstrate the superiority of the proposed technique.
Keywords: FRSVM, Hadoop, MapReduce, PFRSVM
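The class-center-and-radius membership function mentioned in this abstract is a common fuzzy SVM formulation. The sketch below implements a plain (non-kernelized) version of that idea in Python, under the assumption that membership decays linearly with distance from the class center; the data, function names, and the small delta offset are illustrative, not the paper's exact kernelized form.

```python
# Illustrative sketch of a class-center/radius fuzzy membership function:
# points near the class center get memberships near 1, while points near
# the class boundary (candidate outliers or noise) are down-weighted.
# Toy data only; not the paper's kernelized formulation.

import math

def euclid(a, b):
    """Euclidean distance between two equal-length tuples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def fuzzy_memberships(points, delta=1e-3):
    """Membership of each point: 1 - dist-to-class-center / (radius + delta).

    The small delta keeps the farthest point's membership strictly above 0.
    """
    n = len(points)
    dim = len(points[0])
    center = [sum(p[d] for p in points) / n for d in range(dim)]
    radius = max(euclid(p, center) for p in points)
    return [1.0 - euclid(p, center) / (radius + delta) for p in points]

# One class of training samples, with an outlier far from the center:
cls = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (5.0, 5.0)]
ms = fuzzy_memberships(cls)
print([round(m, 3) for m in ms])
```

In an FSVM these memberships would weight each sample's slack penalty, so the outlier at (5, 5) contributes almost nothing to where the decision surface is placed.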
Procedia PDF Downloads 491
1055 Requirements for the Development of Competencies to Mentor Trainee Teachers: A Case Study of Vocational Education Cooperating Teachers in Quebec
Authors: Nathalie Gagnon, Andréanne Gagné, Julie Courcy
Abstract:
Quebec's vocational education teachers experience an atypical induction process into the workplace and thus face unique challenges. In contrast to elementary and high school teachers, who must undergo initial teacher training in order to access the profession, vocational education teachers, in most cases, are hired based on their professional expertise in the trade they are teaching, without prior pedagogical training. In addition to creating significant stress, which does not foster the acquisition of teaching roles and skills, this approach also forces recruits into a particular posture during their practical training: that of juggling their dual identities as teacher and trainee simultaneously. Recruits are supported by Cooperating Teachers (CPs) who, as experienced educators, take a critical and constructive look at their practices, observe them in the classroom, give them constructive feedback, and encourage them in their reflective practice. Thus, the vocational setting CP also assumes a distinctive posture and role due to the characteristics of the trainees they support. Although it is recognized that preparation, training, and supervision of CPs are essential factors in improving the support provided to trainees, there is little research about how CPs develop their support skills, and very little research focuses on the distinct posture they occupy. However, in order for them to be properly equipped for the important role they play in recruits’ practical training, it is vital to know more about their experience. An individual’s competencies cannot be studied without first examining what characterizes their experience, how they experience any given situation on cognitive, emotional, and motivational levels, in addition to how they act and react in situ. Depending on its nature, the experience will or will not promote the development of a specific competency. 
The research from which this communication originates describes the overall experience of vocational education CPs in an effort to better understand the mechanisms linked to the development of their mentoring competencies. Experience and competence were, therefore, the two main theoretical concepts guiding the research. A case study methodology was chosen, as it is well suited to describing contemporary phenomena in rich detail within their real-life contexts. The data were collected through semi-structured interviews conducted with 15 vocational education CPs in Quebec (Canada), followed by a data-driven, semi-inductive analysis that let the categories emerge organically. Focusing on the development needs of vocational education CPs to improve their mentoring skills, this paper presents the results of our research, namely the importance of adequate training, better support from university supervisors, greater recognition of their role, and specific time slots dedicated to trainee support. The knowledge resulting from this research could improve the quality of support for trainee teachers in vocational education settings and lead to a more successful induction into the workplace. This communication also presents recommendations regarding the development of training systems that meet the specific needs of vocational education CPs.
Keywords: development of competencies, cooperating teacher, mentoring trainee teacher, practical training, vocational education
Procedia PDF Downloads 118
1054 Attitudes, Knowledge and Perceptions towards Cervical Cancer Messages among Female University Students
Authors: Anne Nattembo
Abstract:
Cervical cancer remains a major public health problem in developing countries, especially in Africa. Effective cervical cancer prevention communication requires identifying the behaviors and attitudes of a given population and increasing its awareness; thus, this study investigated awareness, attitudes, and behavior among female university students towards cervical cancer messages. The study objectives were to investigate the communication behavior of young adults towards cervical cancer, to understand female students' recognition of cervical cancer as a problem, to identify the frames related to cervical cancer and their impact on audience communication and participation behaviors, to identify the factors that influence behavioral intentions and level of involvement towards cervical cancer services, and to make recommendations on how to improve cervical cancer communication towards female university students. The researcher obtained data using semi-structured interviews and focus group discussions targeting 90 respondents. The semi-structured in-depth interviews were carried out on a one-on-one basis using a set of prepared questions with 53 respondents. All interviews were audio-tape recorded, and each interview was typed directly into Microsoft Word. Four focus group discussions were conducted with a total of 37 respondents: two female-only groups (10 respondents in one and 9 in the other), one mixed group of 12 participants, 5 of whom were male, and one male-only group of 6 participants. The key findings show that the participants preferred to receive and access cervical cancer information from doctors, although they were mainly receiving information from the radio.
In regards to the type of public the respondents represent, the majority were non-publics in the sense that they did not have knowledge about cervical cancer, had low levels of involvement, and had high constraint recognition regarding cervical cancer. The researcher identified the most salient audience frames among female university students towards cervical cancer: death, loss, and fear. These frames did not necessarily make cervical cancer an issue of concern among the female university students but rather an issue they distanced themselves from, as they did not perceive it as a risk. The study also identified the constraints respondents face in responding to cervical cancer campaign calls-to-action, which included stigma, lack of knowledge and access to services, and lack of recommendation from doctors. In regards to sex differences, females had more knowledge about cervical cancer than males. In conclusion, the study highlights the importance of interpersonal communication in risk and health communication, with a focus on health providers proactively sharing cervical cancer prevention information with their patients. Health providers' involvement in cervical cancer is very important in influencing behavior and compliance with cervical cancer calls-to-action. The study also provides recommendations for designing effective cervical cancer campaigns that will positively impact the audience, such as packaging cervical cancer messages that also target males as a way of increasing their involvement, running more campaigns to increase awareness of cervical cancer, and designing positively framed messages to counter the negative audience frames towards cervical cancer.
Keywords: cervical cancer communication, health communication, university students, risk communication
Procedia PDF Downloads 234
1053 Carbon Footprint Assessment and Application in Urban Planning and Geography
Authors: Hyunjoo Park, Taehyun Kim, Taehyun Kim
Abstract:
Human life, activity, and culture depend on the wider environment. Cities offer economic opportunities for goods and services but cannot exist without food, energy, and water supplied from their environments. Technological innovation in energy supply and transport speeds up the expansion of urban areas and their physical separation from agricultural land. As a result, the separation of urban and agricultural areas increases the energy demand for transporting food and goods between regions. As energy resources are consumed all over the world, the environmental impact crossing city boundaries is also growing. While advances in energy and other technologies can reduce the environmental impact of consumption, a gap remains between energy supply and demand with current technology, even in technically advanced countries. Therefore, reducing energy demand is more realistic than relying solely on technological development for sustainable development. The purpose of this study is to introduce the application of carbon footprint assessment in the fields of urban planning and geography. In urban studies, the carbon footprint has been assessed at different geographical scales, such as nation, city, region, household, and individual. Carbon footprint assessment for a nation or a city is possible using national or city-level statistics on energy consumption categories. By means of carbon footprint calculation, it is possible to compare ecological capacity and deficit among nations and cities. The carbon footprint also offers great insight into the geographical distribution of carbon intensity at a regional level in the agricultural field. The study presents the background of carbon footprint applications in urban planning and geography through case studies, such as identifying sustainable land-use measures. At the micro level, a footprint quiz or survey can be adopted to measure household and individual carbon footprints.
For example, the first case study collected carbon footprint data from a survey measuring home energy use and travel behavior of 2,064 households in eight cities in Gyeonggi-do, Korea. The second case study analyzed the effects of net and gross population densities on the carbon footprint of residents at an intra-urban scale in the capital city of Seoul, Korea. In this study, the individual carbon footprint of residents was calculated by converting the home and travel fossil fuel use of respondents to metric tons of carbon dioxide (tCO₂), multiplying consumption by the conversion factors equivalent to the carbon intensities of each energy source, such as electricity, natural gas, and gasoline. The carbon footprint is an important concept not only for mitigating climate change but also for sustainable development. As seen in the case studies, the carbon footprint may be measured and applied in various spatial units, including but not limited to countries and regions. These examples may provide new perspectives on carbon footprint application in planning and geography. In addition, concerns about the consumption of food, goods, and services can be included in carbon footprint calculations in urban planning and geography.
Keywords: carbon footprint, case study, geography, urban planning
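The conversion described above can be sketched in a few lines: energy use per source is multiplied by a carbon-intensity conversion factor and summed to metric tons of CO₂. Note that the factor values and the example consumption figures below are illustrative placeholders, not the ones used in the cited case studies.

```python
# Illustrative sketch of the carbon footprint conversion: energy use per
# source times a carbon-intensity factor, summed and expressed in tCO2.
# The factor values are assumed placeholders, not the study's figures.

CONVERSION_FACTORS_KG_CO2 = {  # kg CO2 per unit of consumption (assumed)
    "electricity_kwh": 0.45,
    "natural_gas_m3": 1.9,
    "gasoline_l": 2.3,
}

def household_footprint_tco2(consumption):
    """Sum energy consumption converted to metric tons of CO2 (tCO2)."""
    kg = sum(CONVERSION_FACTORS_KG_CO2[source] * amount
             for source, amount in consumption.items())
    return kg / 1000.0  # kg -> metric tons

# Hypothetical annual consumption for one surveyed household.
example = {"electricity_kwh": 3500, "natural_gas_m3": 800, "gasoline_l": 600}
print(round(household_footprint_tco2(example), 3))  # 4.475
```

Summing such per-household totals, grouped by city or density class, is what allows the kind of intra-urban comparison the Seoul case study performs.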
Procedia PDF Downloads 289
1052 Part Variation Simulations: An Industrial Case Study with an Experimental Validation
Authors: Narendra Akhadkar, Silvestre Cano, Christophe Gourru
Abstract:
Injection-molded parts are widely used in power system protection products. One of the biggest challenges in an injection molding process is shrinkage and warpage of the molded parts. All these geometrical variations may have an adverse effect on the quality of the product, functionality, cost, and time-to-market. The situation becomes more challenging in the case of intricate shapes and in mass production using multi-cavity tools. To control the effects of shrinkage and warpage, it is very important to correctly identify the input parameters that could affect product performance. With advances in computer-aided engineering (CAE), different tools are available to simulate the injection molding process. For our case study, we used the Moldflow Insight tool. Our aim is to predict the spread of the functional dimensions and geometrical variations of the part due to variations in input parameters such as material viscosity, packing pressure, mold temperature, melt temperature, and injection speed. The input parameters may vary during batch production or due to variations in the machine process settings. To perform an accurate product assembly variation simulation, the first step is to perform an individual part variation simulation to render realistic tolerance ranges. In this article, we present a method to simulate part variations arising from input parameter variation during batch production. The method is based on computer simulations and experimental validation using a full factorial design of experiments (DoE). The robustness of the simulation model is verified through a parameter-wise sensitivity analysis performed using simulations and experiments; all the results show a very good correlation in the material flow direction. There exists a non-linear interaction between the material and the input process variables.
It is observed that parameters such as packing pressure, material, and mold temperature play an important role in the spread of functional dimensions and geometrical variations. This method will allow us in the future to develop accurate, realistic virtual prototypes based on trusted simulated process variation and, therefore, to increase product quality and potentially decrease the time to market.
Keywords: correlation, molding process, tolerance, sensitivity analysis, variation simulation
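The full factorial DoE mentioned above enumerates every combination of input-parameter levels. A minimal sketch, assuming hypothetical two-level settings for four of the process parameters named in the abstract (the actual levels and factor count used in the study are not given):

```python
# Minimal full factorial design of experiments (DoE) sketch for the
# injection-molding input parameters named in the abstract. The levels
# are hypothetical two-level settings, not the study's values.
from itertools import product

factors = {
    "packing_pressure_MPa": [60, 80],
    "mold_temperature_C": [40, 60],
    "melt_temperature_C": [220, 240],
    "injection_speed_mm_s": [50, 100],
}

# Every combination of levels: 2^4 = 16 runs for a two-level, four-factor design.
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
print(len(runs))  # 16
```

Each entry in `runs` would correspond to one simulation (or molding trial), from which the spread of functional dimensions can be collected and the parameter-wise sensitivities estimated.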
Procedia PDF Downloads 179
1051 Knowledge, Attitude and Beliefs Towards Polypharmacy Amongst Older People Attending Family Medicine Clinic at the Aga Khan University Hospital, Nairobi, Kenya (AKUHN) Sub-Saharan Africa-Qualitative Study
Authors: Maureen Kamau, Gulnaz Mohamoud, Adelaide Lusambili, Njeri Nyanja
Abstract:
Life expectancy has increased over the last century, particularly among individuals 60 years and over. The World Health Organization estimates that the world's population of persons over 60 years will rise to 22 per cent by the year 2050. Ageing is associated with increasing disability, multiple chronic conditions, and an increase in the use of health services. These multiple chronic conditions are managed with polypharmacy. Polypharmacy has numerous adverse effects, including non-adherence, poor compliance with the various medications, reduced appetite, and risk of falls. Studies on polypharmacy and ageing are few and poorly understood, especially in low- and middle-income countries. The aim of this study was to explore the knowledge, attitudes, and beliefs of older people towards polypharmacy. A qualitative study of 15 patients aged 60 years and above, each taking more than five medications per day, was conducted at the Aga Khan University using semi-structured in-depth interviews. Three interviews were pilot interviews, and data analysis was performed on 12 interviews. Data were analyzed using NVivo 12 software. A thematic qualitative analysis was carried out guided by the Braun and Clarke (2006) framework. The themes identified were: knowledge of their co-morbidities and of the medication that older persons take, sources of information about medicines and storage of the medication, experiences and attitudes of older patients towards polypharmacy (both positive and negative), and older people's beliefs and their coping mechanisms with polypharmacy. The study participants had good knowledge of their multiple co-morbidities and of the medication they took. The patients had positive attitudes towards medication, as it enhanced their health and well-being and enabled them to perform their activities of daily living. There was a strong belief among older patients that the medications were necessary for their health.
All these factors enhanced compliance with the multiple medications. However, some older patients had negative attitudes due to the pill burden, side effects of the medication, and stigma associated with being ill. The cost of healthcare was a concern, with most of the patients interviewed relying on insurance to cover the cost of their medication. Older patients had accepted that the medications they were prescribed were necessary for their health, as they enabled them to complete their activities of daily living. Some concerns about the side effects of the medication arose, highlighting the need for patient education to ensure that patients are aware of the medications they take and their potential side effects. The effect of the COVID-19 pandemic on the healthcare of older patients was evident in the number of older patients who avoided coming to the hospital during the pandemic. The relationship between the primary care physician and the older patient is an important one, especially in LMICs such as Kenya, as many of the older patients trusted their doctors wholeheartedly to make the best decisions about their health and medication. Prescription review is important to avoid the use of potentially inappropriate medication.
Keywords: polypharmacy, older patients, multiple chronic conditions, Kenya, Africa, qualitative study, in-depth interviews, primary care
Procedia PDF Downloads 100
1050 Deep Learning for Qualitative and Quantitative Grain Quality Analysis Using Hyperspectral Imaging
Authors: Ole-Christian Galbo Engstrøm, Erik Schou Dreier, Birthe Møller Jespersen, Kim Steenstrup Pedersen
Abstract:
Grain quality analysis is a multi-parameterized problem that includes a variety of qualitative and quantitative parameters such as grain type classification, damage type classification, and nutrient regression. Currently, these parameters require human inspection, a multitude of instruments employing a variety of sensor technologies and predictive model types, or destructive and slow chemical analysis. This paper investigates the feasibility of applying near-infrared hyperspectral imaging (NIR-HSI) to grain quality analysis. For this study, two datasets of NIR hyperspectral images in the wavelength range of 900 nm - 1700 nm have been used. Both datasets contain images of sparsely and densely packed grain kernels. The first dataset contains ~87,000 image crops of bulk wheat samples from 63 harvests where the protein value has been determined by the FOSS Infratec NOVA, the gold industry standard for protein content estimation in bulk samples of cereal grain. The second dataset consists of ~28,000 image crops of bulk grain kernels from seven different wheat varieties and a single rye variety. Protein regression is the problem to solve in the first dataset, while variety classification is the problem to solve in the second. Deep convolutional neural networks (CNNs) have the potential to utilize spatio-spectral correlations within a hyperspectral image to simultaneously estimate the qualitative and quantitative parameters. CNNs can autonomously derive meaningful representations of the input data, reducing the need for the advanced preprocessing techniques required for classical chemometric model types such as artificial neural networks (ANNs) and partial least-squares regression (PLS-R). A comparison between different CNN architectures utilizing 2D and 3D convolution is conducted. These results are compared to the performance of ANNs and PLS-R. Additionally, a variety of preprocessing techniques from image analysis and chemometrics are tested.
These include centering, scaling, standard normal variate (SNV), Savitzky-Golay (SG) filtering, and detrending. The results indicate that the combination of NIR-HSI and CNNs has the potential to be the foundation for an automatic system unifying qualitative and quantitative grain quality analysis within a single sensor technology and predictive model type.
Keywords: deep learning, grain analysis, hyperspectral imaging, preprocessing techniques
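Of the chemometric preprocessing techniques listed, standard normal variate (SNV) is among the simplest: each spectrum is centered by its own mean and scaled by its own standard deviation, removing multiplicative scatter differences between samples. A minimal sketch (the spectra below are synthetic, not NIR-HSI data from the study):

```python
# Standard normal variate (SNV) preprocessing: each spectrum (row) is
# normalized by its own mean and standard deviation.
import numpy as np

def snv(spectra):
    """Apply SNV row-wise: (x - mean(x)) / std(x) per spectrum."""
    spectra = np.asarray(spectra, dtype=float)
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

# Two synthetic "spectra" differing only by a multiplicative factor.
X = np.array([[1.0, 2.0, 3.0], [10.0, 20.0, 30.0]])
Xs = snv(X)
# After SNV both rows are identical: zero mean, unit variance.
```

Applied per pixel spectrum of a hyperspectral image crop, this is one of the transforms whose benefit for the downstream CNN, ANN, and PLS-R models the paper evaluates.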
Procedia PDF Downloads 100
1049 Time of Week Intensity Estimation from Interval Censored Data with Application to Police Patrol Planning
Authors: Jiahao Tian, Michael D. Porter
Abstract:
Law enforcement agencies are tasked with crime prevention and crime reduction under limited resources. Having an accurate temporal estimate of the crime rate would be valuable in achieving such a goal. However, estimation is usually complicated by the interval-censored nature of crime data. We cast the problem of intensity estimation as a Poisson regression, using an EM algorithm to estimate the parameters. Two special penalties are added that provide smoothness over the time of day and the day of the week. The approach presented here provides accurate intensity estimates and can also uncover day-of-week clusters that share the same intensity patterns. Anticipating where and when crimes might occur is a key element of successful policing strategies. However, this task is complicated by the presence of interval-censored data. Censored data refers to data in which the event time is known only to lie within an interval instead of being observed exactly. This type of data is prevalent in the field of criminology because of the absence of witnesses for certain types of crime. Despite its importance, research on the temporal analysis of crime has lagged behind the spatial component. Inspired by the success of solving crime-related problems with statistical approaches, we propose a statistical model for the temporal intensity estimation of crime with censored data. The model is built on Poisson regression and has special penalty terms added to the likelihood. An EM algorithm was derived to obtain maximum likelihood estimates, and the resulting model shows superior performance to the competing model. Our research is in line with the Smart Policing Initiative (SPI) proposed by the Bureau of Justice Assistance (BJA) as an effort to support law enforcement agencies in building evidence-based, data-driven law enforcement tactics. The goal is to identify strategic approaches that are effective in crime prevention and reduction.
In our case, we allow agencies to deploy their resources over a relatively short period of time to achieve the maximum level of crime reduction. By analyzing a particular area within cities where data are available, our proposed approach can provide not only an accurate estimate of intensities for the time unit considered but also a time-varying crime incidence pattern. Both will be helpful in the allocation of limited resources, either by improving the existing patrol plan in light of the discovered day-of-week clusters or by supporting the deployment of any extra resources available.
Keywords: cluster detection, EM algorithm, interval censoring, intensity estimation
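The core EM idea for interval-censored counts can be illustrated with a toy sketch: each crime is known only to have occurred within a time window, so the E-step spreads the event over its window in proportion to the current intensity estimate, and the M-step re-estimates the per-hour Poisson rate. The event data are invented, and the paper's smoothness penalties and day-of-week clustering are omitted here for brevity:

```python
# Toy EM for interval-censored event counts on an hourly grid.
# Each event's time is known only to lie within (start_hour, end_hour).
import numpy as np

n_bins = 24  # hours of the day
events = [(8, 12), (8, 12), (20, 24), (22, 24), (9, 10)]  # censoring intervals
exposure = np.ones(n_bins)  # observation time per hour (uniform here)

lam = np.ones(n_bins)  # initial intensity guess
for _ in range(50):
    counts = np.zeros(n_bins)
    for start, end in events:
        # E-step: allocate the event across its interval in proportion
        # to the current intensity estimate.
        w = lam[start:end] / lam[start:end].sum()
        counts[start:end] += w
    # M-step: maximum likelihood Poisson rate per hour.
    lam = counts / exposure

# Total estimated intensity equals the number of observed events.
print(round(lam.sum(), 6))  # 5.0
```

The full model replaces this unpenalized M-step with a penalized Poisson regression so that neighboring hours and similar days share statistical strength.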
Procedia PDF Downloads 66
1048 Modification of Aliphatic-Aromatic Copolyesters with Polyether Block for Segmented Copolymers with Elastothemoplastic Properties
Authors: I. Irska, S. Paszkiewicz, D. Pawlikowska, E. Piesowicz, A. Linares, T. A. Ezquerra
Abstract:
Due to a number of advantages, such as high tensile strength, sensitivity to hydrolytic degradation, and biocompatibility, poly(lactic acid) (PLA) is one of the most common polyesters for biomedical and pharmaceutical applications. However, PLA is a rigid, brittle polymer with a low heat distortion temperature and a slow crystallization rate. In order to broaden the range of PLA applications, it is necessary to improve these properties. In recent years, a number of new strategies have evolved to obtain PLA-based materials with improved characteristics, including manipulation of crystallinity, plasticization, blending, and incorporation into block copolymers. Among these methods, the synthesis of aliphatic-aromatic copolyesters has attracted considerable attention, as they may combine the mechanical performance of aromatic polyesters with the biodegradability known from aliphatic ones. Given the need for highly flexible biodegradable polymers, in this contribution, a series of aromatic-aliphatic copolyesters based on poly(butylene terephthalate) and poly(lactic acid) (PBT-b-PLA), exhibiting superior mechanical properties, were copolymerized with an additional poly(tetramethylene oxide) (PTMO) soft block. The structure and properties of both series were characterized by means of attenuated total reflectance Fourier transform infrared spectroscopy (ATR-FTIR), nuclear magnetic resonance spectroscopy (¹H NMR), differential scanning calorimetry (DSC), wide-angle X-ray scattering (WAXS), and dynamic mechanical thermal analysis (DMTA). Moreover, the related changes in tensile properties have been evaluated and discussed. Lastly, the viscoelastic properties of the synthesized poly(ester-ether) copolymers were investigated in detail by step cycle tensile tests. The block lengths decreased as the treatment advanced, and block-random diblock terpolymers of (PBT-ran-PLA)-b-PTMO were obtained.
DSC and DMTA analysis confirmed unambiguously that the synthesized poly(ester-ether) copolymers are microphase-separated systems. The introduction of polyether co-units resulted in a decrease in the degree of crystallinity and the melting temperature. X-ray diffraction patterns revealed that only the PBT blocks are able to crystallize. The mechanical properties of the (PBT-ran-PLA)-b-PTMO copolymers result from a unique arrangement of immiscible hard and soft blocks, providing both strength and elasticity.
Keywords: aliphatic-aromatic copolymers, multiblock copolymers, phase behavior, thermoplastic elastomers
Procedia PDF Downloads 140
1047 Computational Characterization of Electronic Charge Transfer in Interfacial Phospholipid-Water Layers
Authors: Samira Baghbanbari, A. B. P. Lever, Payam S. Shabestari, Donald Weaver
Abstract:
Existing signal transmission models, although undoubtedly useful, have proven insufficient to explain the full complexity of information transfer within the central nervous system. The development of transformative models will necessitate a more comprehensive understanding of neuronal lipid membrane electrophysiology. Pursuant to this goal, the role of highly organized interfacial phospholipid-water layers emerges as a promising case study. A series of phospholipids in neural-glial gap junction interfaces as well as cholesterol molecules have been computationally modelled using high-performance density functional theory (DFT) calculations. Subsequent 'charge decomposition analysis' calculations have revealed a net transfer of charge from phospholipid orbitals through the organized interfacial water layer before ultimately finding its way to cholesterol acceptor molecules. The specific pathway of charge transfer from phospholipid via water layers towards cholesterol has been mapped in detail. Cholesterol is an essential membrane component that is overrepresented in neuronal membranes as compared to other mammalian cells; given this relative abundance, its apparent role as an electronic acceptor may prove to be a relevant factor in further signal transmission studies of the central nervous system. The timescales over which this electronic charge transfer occurs have also been evaluated by utilizing a system design that systematically increases the number of water molecules separating lipids and cholesterol. Memory loss through hydrogen-bonded networks in water can occur at femtosecond timescales, whereas existing action potential-based models are limited to micro or nanosecond scales. As such, the development of future models that attempt to explain faster timescale signal transmission in the central nervous system may benefit from our work, which provides additional information regarding fast timescale energy transfer mechanisms occurring through interfacial water. 
The study uses a dataset that includes six distinct phospholipids and a collection of cholesterol molecules. Ten optimized geometric characteristics (features) were employed to conduct binary classification through an artificial neural network (ANN), differentiating cholesterol from the various phospholipids. This stems from our understanding that all lipids within the first group function as electronic charge donors, while cholesterol serves as an electronic charge acceptor.
Keywords: charge transfer, signal transmission, phospholipids, water layers, ANN
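The classification step described above (ten geometric features per molecule, two classes) can be sketched with a single logistic unit. Both the synthetic feature vectors and the one-unit model below are stand-ins: the abstract does not specify the ANN architecture or the feature values.

```python
# Minimal binary classifier sketch: 10 features per "molecule", two
# classes (phospholipid donor vs. cholesterol acceptor). Synthetic,
# well-separated data; a single logistic unit stands in for the ANN.
import numpy as np

rng = np.random.default_rng(0)
n, d = 40, 10  # 40 synthetic molecules, 10 geometric features each
X = np.vstack([rng.normal(-1.0, 0.3, (n // 2, d)),   # class 0: "phospholipid"
               rng.normal(+1.0, 0.3, (n // 2, d))])  # class 1: "cholesterol"
y = np.array([0] * (n // 2) + [1] * (n // 2))

w, b = np.zeros(d), 0.0
for _ in range(200):  # gradient descent on the logistic loss
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad = p - y
    w -= 0.1 * (X.T @ grad) / n
    b -= 0.1 * grad.mean()

pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
print((pred == y).mean())  # training accuracy
```

On cleanly separable synthetic clusters like these, even this minimal model classifies the training set perfectly; the interest in the real dataset lies in whether the geometric features separate cholesterol from the phospholipids as cleanly.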
Procedia PDF Downloads 75
1046 Estimating Multidimensional Water Poverty Index in India: The Alkire Foster Approach
Authors: Rida Wanbha Nongbri, Sabuj Kumar Mandal
Abstract:
The Sustainable Development Goals (SDGs) for 2016-2030 were adopted in response to the Millennium Development Goals (MDGs), which focused on access to sustainable water and sanitation. For over a decade, water has been a significant subject explored in various facets of life. Our day-to-day life is significantly impacted by water poverty at the socio-economic level. Reducing water poverty is an important policy challenge, particularly in emerging economies like India, owing to its population growth and huge variation in topography and climatic factors. To design appropriate water policies and assess their effectiveness, a proper measurement of water poverty is essential. Against this backdrop, this study uses the Alkire-Foster (AF) methodology to estimate a multidimensional water poverty index for India at the household level. The methodology captures several attributes to understand the complex issues related to households' water deprivation. The study employs two rounds of Indian Human Development Survey data (IHDS 2005 and 2012), focusing on four dimensions of water poverty (water access, water quantity, water quality, and water capacity) and seven indicators capturing these four dimensions. In order to quantify water deprivation at the household level, the AF dual cut-off counting method is applied, and the Multidimensional Water Poverty Index (MWPI) is calculated as the product of the Headcount Ratio (incidence) and the average share of weighted deprivations (intensity). The results identify deprivation across all dimensions at the country level and show that a large proportion of households in India is deprived of quality water and suffers from poor water access in both the 2005 and 2012 survey rounds. The comparison between rural and urban households shows that a higher ratio of rural households are multidimensionally water poor compared to their urban counterparts.
Among the four dimensions of water poverty, water quality is found to be the most significant for both rural and urban households. In the 2005 round, almost 99.3% of households are water poor in at least one of the four dimensions, and among the water-poor households, the intensity of water poverty is 54.7%. These values do not change significantly in the 2012 round, but significant differences can be observed across the dimensions. States like Bihar, Tamil Nadu, and Andhra Pradesh rank highest in terms of MWPI, whereas Sikkim, Arunachal Pradesh, and Chandigarh rank lowest in the 2005 round. Similarly, in the 2012 round, Bihar, Uttar Pradesh, and Orissa rank highest in terms of MWPI, whereas Goa, Nagaland, and Arunachal Pradesh rank lowest. The policy implications of this study can be multifaceted. It can urge policy makers either to focus on impoverished households with lower intensity levels of water poverty to minimize the total number of water-poor households, or to focus on those households with a high intensity of water poverty to achieve an overall reduction in MWPI.
Keywords: Alkire-Foster (AF) methodology, deprivation, dual cut-off, multidimensional water poverty index (MWPI)
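The AF dual cut-off counting method described above can be sketched concretely. The first cut-off (applied per dimension) marks a household as deprived or not; the second, cross-dimension cut-off k classifies a household as multidimensionally water poor when its weighted deprivation share reaches k; MWPI is then H (headcount ratio) times A (average deprivation share among the poor), following the abstract. The weights, the value of k, and the four example households are illustrative assumptions, not the study's data:

```python
# Alkire-Foster dual cut-off counting sketch for the four water poverty
# dimensions. Weights, the poverty cutoff K, and the household data are
# illustrative assumptions.

DIMENSIONS = ["access", "quantity", "quality", "capacity"]
WEIGHTS = {dim: 0.25 for dim in DIMENSIONS}  # equal weights (assumed)
K = 0.5  # second cut-off: weighted deprivation share at or above which
         # a household counts as multidimensionally water poor (assumed)

# 1 = deprived in that dimension (after the first, dimension-level cut-off).
households = [
    {"access": 1, "quantity": 1, "quality": 1, "capacity": 0},
    {"access": 0, "quantity": 0, "quality": 1, "capacity": 0},
    {"access": 1, "quantity": 0, "quality": 1, "capacity": 1},
    {"access": 0, "quantity": 0, "quality": 0, "capacity": 0},
]

scores = [sum(WEIGHTS[d] * h[d] for d in DIMENSIONS) for h in households]
poor = [s for s in scores if s >= K]        # apply the cross-dimension cut-off
H = len(poor) / len(households)             # incidence (headcount ratio)
A = sum(poor) / len(poor) if poor else 0.0  # intensity among the water poor
MWPI = H * A
print(H, A, MWPI)  # 0.5 0.75 0.375
```

This is the decomposition behind the reported figures: a high H with moderate A (as in the 99.3% / 54.7% result for the union cut-off in 2005) multiplies into the headline MWPI.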
Procedia PDF Downloads 70