Search results for: number system
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 25495

2155 Electron Beam Melting Process Parameter Optimization Using Multi Objective Reinforcement Learning

Authors: Michael A. Sprayberry, Vincent C. Paquit

Abstract:

Process parameter optimization in metal powder bed electron beam melting (MPBEBM) is crucial to ensure the technology's repeatability, control, and continued industry adoption. Despite continued efforts to address the challenges via traditional design of experiments and process mapping techniques, these approaches have not yet yielded an on-the-fly optimization framework that can be adapted to MPBEBM systems. Additionally, data-intensive physics-based modeling and simulation methods are difficult to sustain for each metal AM alloy or system due to cost restrictions. To mitigate the challenge of resource-intensive experiments and models, this paper introduces a Multi-Objective Reinforcement Learning (MORL) methodology that frames MPBEBM parameter selection as an optimization problem. An off-policy MORL framework based on policy gradients is proposed to discover optimal sets of beam power (P) – beam velocity (v) combinations that maintain a steady-state melt pool depth and phase transformation. For this, an experimentally validated Eagar-Tsai melt pool model is used to simulate the MPBEBM environment, where the beam acts as the agent across the P–v space, maximizing returns for the uncertain powder bed environment by producing a melt pool and phase transformation closer to the optimum. The training process culminates in a set of process parameters {power, speed, hatch spacing, layer depth, and preheat} in which the state (P, v) with the highest returns corresponds to a refined process parameter mapping. The resultant objects and the mapping of returns onto the P–v space show convergence with experimental observations. The framework therefore provides a model-free multi-objective approach to discovery without the need for trial-and-error experiments.
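The off-policy, policy-gradient search over the P–v space described above can be sketched in miniature. The snippet below is a hedged toy, not the authors' implementation: it substitutes a made-up surrogate `depth_model` for the Eagar-Tsai simulation, uses an illustrative P–v grid and target depth, and optimizes a softmax policy over the grid with a REINFORCE-style update.

```python
import math
import random

# Hypothetical surrogate for the melt-pool simulation (NOT the Eagar-Tsai
# model): depth grows with beam power and shrinks with beam velocity.
def depth_model(P, v):
    return 0.05 * P / math.sqrt(v)

powers = [50, 100, 150, 200]    # W, illustrative grid
speeds = [0.2, 0.5, 1.0, 2.0]   # m/s, illustrative grid
target_depth = 5.0              # illustrative steady-state target

# Softmax policy over all (P, v) pairs, updated by REINFORCE.
actions = [(P, v) for P in powers for v in speeds]
prefs = {a: 0.0 for a in actions}

def sample_action():
    weights = [math.exp(prefs[a]) for a in actions]
    total = sum(weights)
    r = random.random() * total
    for a, w in zip(actions, weights):
        r -= w
        if r <= 0:
            return a
    return actions[-1]

random.seed(0)
baseline = 0.0  # running reward baseline to reduce gradient variance
for step in range(3000):
    a = sample_action()
    # Scalarized reward: negative deviation from the target melt-pool depth.
    reward = -abs(depth_model(*a) - target_depth)
    baseline += 0.01 * (reward - baseline)
    weights = [math.exp(prefs[b]) for b in actions]
    total = sum(weights)
    for b, w in zip(actions, weights):
        grad = (1.0 if b == a else 0.0) - w / total  # d log pi / d pref
        prefs[b] += 0.1 * (reward - baseline) * grad

best = max(actions, key=lambda a: prefs[a])
```

With the toy surrogate, the policy concentrates on P–v pairs whose simulated depth is near the target, mirroring the mapping of returns onto the P–v space described in the abstract.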

Keywords: additive manufacturing, metal powder bed fusion, reinforcement learning, process parameter optimization

2154 Quality Assurance Comparison of MapCheck 2, EPID, and Gafchromic® EBT3 Film for IMRT Treatment Planning

Authors: Khalid Iqbal, Saima Altaf, M. Akram, Muhammad Abdur Rafaye, Saeed Ahmad Buzdar

Abstract:

Objective: Verification of patient-specific intensity modulated radiation therapy (IMRT) plans using different 2-D detectors has become increasingly popular due to their ease of use and immediate readout of results. The purpose of this study was to test and compare various 2-D detectors for dosimetric quality assurance (QA) of IMRT, with a view to finding alternative QA methods. Material and Methods: Twenty IMRT patients (12 brain and 8 prostate) were planned on the Eclipse treatment planning system for a Varian Clinac DHX at both 6 MV and 15 MV. Verification plans for all patients were also made and delivered to MapCheck 2, EPID (electronic portal imaging device), and Gafchromic EBT3 film. Gamma index analyses were performed using different criteria to evaluate and compare the dosimetric results. Results: For EBT3 film, statistical analysis shows passing rates of 99.55%, 97.23%, and 92.9% at 6 MV and 99.53%, 98.3%, and 94.85% at 15 MV for the brain cases, using criteria of ±5%/3 mm, ±3%/3 mm, and ±3%/2 mm, respectively; for the prostate cases, using the ±5%/3 mm and ±3%/3 mm criteria, the passing rates are 94.55% and 90.45% at 6 MV and 95.25% and 95% at 15 MV. MapCheck 2 shows passing rates of 98.17%, 97.68%, and 86.78% at 6 MV and 94.87%, 97.46%, and 88.31% at 15 MV for the brain cases under the ±5%/3 mm, ±3%/3 mm, and ±3%/2 mm criteria, whereas the ±5%/3 mm and ±3%/3 mm criteria give passing rates of 97.7% and 96.4% at 6 MV and 98.75% and 98.05% at 15 MV for the prostate cases. For EPID at 6 MV, gamma analysis shows passing rates of 99.56%, 98.63%, and 98.4% for the brain cases and 100% and 99.9% for the prostate cases, using the same criteria as for MapCheck 2 and EBT3 film.
Conclusion: Excellent passing rates were obtained for all dosimeters when compared with the planar dose distributions for both 6 MV and 15 MV IMRT fields. EPID results are better than those of EBT3 film and MapCheck 2; part of this difference is likely real, and part is due to film handling and differences in treatment setup verification, which contribute to dose distribution differences. Overall, all three dosimeters give results within the limits recommended in AAPM Report 120.
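The gamma-index analysis underlying these passing rates combines a dose-difference criterion (e.g. ±3% of the maximum dose) with a distance-to-agreement criterion (e.g. 3 mm). A minimal 1-D global-gamma sketch, assuming uniformly sampled profiles and illustrative data (not the study's measurements):

```python
import math

# Hedged sketch of a 1-D global gamma-index test for IMRT QA.
# dd_pct: dose-difference criterion in % of the reference maximum dose;
# dta_mm: distance-to-agreement criterion in mm. Illustrative only.
def gamma_index(ref_pos, ref_dose, eval_pos, eval_dose, dd_pct=3.0, dta_mm=3.0):
    dmax = max(ref_dose)
    gammas = []
    for xe, de in zip(eval_pos, eval_dose):
        # Search the reference profile for the best combined agreement.
        g = min(
            math.sqrt(((xe - xr) / dta_mm) ** 2 +
                      (100.0 * (de - dr) / dmax / dd_pct) ** 2)
            for xr, dr in zip(ref_pos, ref_dose)
        )
        gammas.append(g)
    return gammas

def passing_rate(gammas):
    # A point passes when gamma <= 1 under the chosen criteria.
    return 100.0 * sum(g <= 1.0 for g in gammas) / len(gammas)
```

For identical measured and planned profiles the passing rate is 100%; tightening the criteria (e.g. moving from ±3%/3 mm to ±3%/2 mm) raises gamma values and lowers the passing rate, which is the trend seen in the results above.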

Keywords: gafchromic EBT3, radiochromic film dosimetry, IMRT verification, EPID

2153 Modeling and Simulation of the Structural, Electronic and Magnetic Properties of Fe-Ni Based Nanoalloys

Authors: Ece A. Irmak, Amdulla O. Mekhrabov, M. Vedat Akdeniz

Abstract:

There is a growing interest in the modeling and simulation of magnetic nanoalloys by various computational methods. Magnetic crystalline/amorphous nanoparticles (NP) are interesting materials from both the applied and fundamental points of view, as their properties differ from those of bulk materials and are essential for advanced applications such as high-performance permanent magnets, high-density magnetic recording media, drug carriers, and sensors in biomedical technology. As an important magnetic material, Fe-Ni based nanoalloys have promising applications in the chemical industry (catalysis, batteries), the aerospace and stealth industry (radar-absorbing materials, jet engine alloys), magnetic biomedical applications (drug delivery, magnetic resonance imaging, biosensors) and the computer hardware industry (data storage). The physical and chemical properties of nanoalloys depend not only on the particle or crystallite size but also on composition and atomic ordering. Therefore, computer modeling is an essential tool to predict structural, electronic, magnetic and optical behavior at the atomistic level and consequently reduce the time for designing and developing new materials with novel or enhanced properties. Although first-principles quantum mechanical methods provide the most accurate results, they require huge computational effort to solve the Schrödinger equation for even a few tens of atoms. On the other hand, the molecular dynamics method with appropriate empirical or semi-empirical interatomic potentials can give accurate results for the static and dynamic properties of larger systems in a short span of time. In this study, the structural evolution and the magnetic and electronic properties of Fe-Ni based nanoalloys have been studied using the molecular dynamics (MD) method in the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) and Density Functional Theory (DFT) in the Vienna Ab initio Simulation Package (VASP).
The effects of particle size (in the 2-10 nm range) and temperature (300-1500 K) on the stability and structural evolution of amorphous and crystalline Fe-Ni bulk/nanoalloys have been investigated by combining the molecular dynamics (MD) simulation method with the Embedded Atom Model (EAM). EAM is applicable to Fe-Ni based bimetallic systems because it considers both pairwise interatomic interaction potentials and electron densities. The structural evolution of Fe-Ni bulk and nanoparticles (NPs) has been studied by calculating radial distribution functions (RDF), interatomic distances, coordination numbers, and core-to-surface concentration profiles, as well as by Voronoi analysis and the dependence of surface energy on temperature and particle size. Moreover, spin-polarized DFT calculations were performed using a plane-wave basis set with generalized gradient approximation (GGA) exchange and correlation effects in the VASP-MedeA package to predict the magnetic and electronic properties of the Fe-Ni based alloys in bulk and nanostructured phases. The results of theoretical modeling and simulations of the structural evolution and the magnetic and electronic properties of Fe-Ni based nanostructured alloys were compared with experimental and other theoretical results published in the literature.
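The radial distribution function mentioned above can be computed from atomic coordinates by histogramming pair distances and normalizing by the ideal-gas shell count. A minimal sketch for a cubic periodic box with the minimum-image convention (illustrative only; a production analysis would use LAMMPS or similar tooling):

```python
import math

# Hedged sketch: radial distribution function g(r) for atoms in a cubic
# periodic box of side `box`, using the minimum-image convention.
def rdf(positions, box, dr=0.1, r_max=None):
    n = len(positions)
    if r_max is None:
        r_max = box / 2.0  # valid range under minimum image
    nbins = int(r_max / dr)
    hist = [0] * nbins
    for i in range(n):
        for j in range(i + 1, n):
            d2 = 0.0
            for k in range(3):
                dx = positions[i][k] - positions[j][k]
                dx -= box * round(dx / box)  # minimum-image displacement
                d2 += dx * dx
            r = math.sqrt(d2)
            if r < r_max:
                hist[int(r / dr)] += 2  # each pair counts for both atoms
    rho = n / box ** 3  # number density
    g = []
    for b in range(nbins):
        r_lo, r_hi = b * dr, (b + 1) * dr
        shell = 4.0 / 3.0 * math.pi * (r_hi ** 3 - r_lo ** 3)
        g.append(hist[b] / (n * rho * shell))  # normalize by ideal-gas count
    return g
```

Peaks of g(r) give the preferred interatomic distances, and integrating g(r) over the first peak yields the coordination number also reported in the study.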

Keywords: density functional theory, embedded atom model, Fe-Ni systems, molecular dynamics, nanoalloys

2152 Cadaveric Dissection versus Systems-Based Anatomy: Testing Final Year Student Surface Anatomy Knowledge to Compare the Long-Term Effectiveness of Different Course Structures

Authors: L. Sun, T. Hargreaves, Z. Ahmad

Abstract:

Newly-qualified Foundation Year 1 doctors in the United Kingdom are frequently expected to perform practical skills involving the upper limb in clinical practice (for example, venipuncture, cannulation, and blood gas sampling). However, a move towards systems-based undergraduate medical education in the United Kingdom often precludes or limits dedicated time to anatomy teaching with cadavers or prosections, favouring only applied anatomy in the context of pathology. The authors hypothesised that detailed anatomical knowledge may consequently be adversely affected, particularly with respect to long-term retention. A simple picture quiz and accompanying questionnaire testing the identification of 7 upper limb surface landmarks was distributed to a total of 98 final year medical students from two universities - one with a systems-based curriculum, and one with a dedicated longitudinal dissection-based anatomy module in the first year of study. Students with access to dissection and prosection-based anatomy teaching performed more strongly, with a significantly higher rate of correct identification of all but one of the landmarks. Furthermore, it was notable that none of the students who had previously undertaken a systems-based course scored full marks, compared with 20% of those who had participated in the more dedicated anatomy course. This data suggests that a traditional, dissection-based approach to undergraduate anatomy teaching is superior to modern system-based curricula, in terms of aiding long-term retention of anatomical knowledge pertinent to newly-qualified doctors. The authors express concern that this deficit in proficiency could be detrimental to patient care in clinical practice, and propose that, where dissection-led anatomy teaching is not available, further anatomy revision modules are implemented throughout undergraduate education to aid knowledge retention and support clinical excellence.

Keywords: dissection, education, surface anatomy, upper limb

2151 Boosting Economic Value in Ghana’s Film Industry: Rethinking Media Policy, Regulation and Copyright Law

Authors: Sela Adjei

Abstract:

This paper aims to rationalize the need for media policy implementation and copyright enforcement to address various challenges faced within Ghana’s film industry. After Ghana transitioned to democratic rule in 1992, critics and media professionals advocated for a national media policy. This advocacy subsequently resulted in agitation for media deregulation and a loosening of the state’s grip on state-owned media organizations. The reinstatement of constitutional rule in 1992 paved the way for the state to relax its monopoly of the media within the democratic context of a free market economy. The National Media Commission proposed a media policy and broadcast bill, which was presented to parliament but has still not been passed into law. This legislative lapse partly contributed to the influx of unregulated foreign content. Accessible foreign media content subsequently promoted a system of unfair competition that radically undermined locally produced content, putting a generation of thriving film producers out of work. Drawing on reflections from a series of structured interviews, focus group discussions and creative workshops, the findings of this study maintain that the various challenges confronting Ghanaian filmmakers centre on inadequate funding opportunities, copyright violation and policy implementation issues. Using the film industry structure and value chain analysis, the various challenges faced by the selected film producers were discussed and critically analyzed. A significant aspect of this study is the solution-driven approach adopted in outlining practical recommendations that will boost the aesthetic, cultural and economic value of Ghanaian film productions. Based on the discussions and conclusions drawn with the various stakeholders within Ghana’s creative industries, the paper makes a strong case for firm state regulation, copyright enforcement and policy implementation to grow Ghana’s film industry.

Keywords: film, value, copyright, media, policy, culture, regulation, economy

2150 Performance and Nutritional Evaluation of Moringa Leaves Dried in a Solar-Assisted Heat Pump Dryer Integrated with Thermal Energy Storage

Authors: Aldé Belgard Tchicaya Loemba, Baraka Kichonge, Thomas Kivevele, Juma Rajabu Selemani

Abstract:

Plants used for medicinal purposes are extremely perishable, owing to moisture-enhanced enzymatic and microorganism activity, climate change, and improper handling and storage. Experiments have shown that drying the medicinal plant without affecting the active nutrients, while controlling the moisture content as much as possible, can extend its shelf life. Different traditional and modern drying techniques for preserving medicinal plants have been developed, with some still being improved in Sub-Saharan Africa. However, many of these methods fail to address the most common issues encountered when drying medicinal plants, such as nutrient loss, long drying times, and a limited capacity to dry during evening or cloudy hours. Heat pump drying is an alternative drying method that results in no nutritional loss. Furthermore, combining a heat pump dryer with a solar energy storage system appears to be a viable option for all-weather drying without affecting the nutritional values of dried products. In this study, a solar-assisted heat pump dryer integrated with thermal energy storage was developed for drying moringa leaves. The study also discusses the performance analysis of the developed dryer as well as the proximate analysis of the dried moringa leaves. All experiments were conducted from 11 a.m. to 4 p.m. to assess the dryer's performance in “daytime mode”. Experimental results show that the drying time was significantly reduced, and the dryer demonstrated high performance in preserving all of the nutrients. In 5 hours of drying, the moisture content was reduced from 75.7% to 3.3%. The average COP value was 3.36, confirming the dryer's low energy consumption. The findings also revealed that after drying, the content of protein, carbohydrates, fats, fiber, and ash greatly increased.
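The reported figures follow from standard drying arithmetic: wet-basis moisture content and the heat-pump coefficient of performance (COP, heat delivered per unit work input). A small sketch with illustrative numbers (not the study's measured data):

```python
# Hedged arithmetic sketch of the drying quantities mentioned above.

def moisture_content_wb(mass_water, mass_total):
    """Wet-basis moisture content in percent."""
    return 100.0 * mass_water / mass_total

def cop(heat_delivered_kj, work_input_kj):
    """Coefficient of performance of the heat pump."""
    return heat_delivered_kj / work_input_kj

def water_to_remove(m_initial, mc_i_pct, mc_f_pct):
    """Mass of water to evaporate to move a batch of initial mass m_initial
    from an initial to a final wet-basis moisture content (both in %)."""
    dry = m_initial * (1.0 - mc_i_pct / 100.0)      # dry matter is conserved
    m_final = dry / (1.0 - mc_f_pct / 100.0)        # final batch mass
    return m_initial - m_final
```

For example, reducing 1 kg of fresh leaves from 75.7% to 3.3% wet-basis moisture, as reported above, requires evaporating roughly 0.75 kg of water.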

Keywords: heat pump dryer, efficiency, moringa leaves, proximate analysis

2149 Advanced Electron Microscopy Study of Fission Products in a TRISO Coated Particle Neutron Irradiated to 3.6 × 10²¹ n/cm² Fast Fluence at 1040 °C

Authors: Haiming Wen, Isabella J. Van Rooyen

Abstract:

Tristructural isotropic (TRISO)-coated fuel particles are designed as nuclear fuel for high-temperature gas reactors. The TRISO coating consists of layers of carbon buffer, inner pyrolytic carbon (IPyC), SiC, and outer pyrolytic carbon. The TRISO coating, especially the SiC layer, acts as a containment system for fission products produced in the kernel. However, release of certain metallic fission products across intact TRISO coatings has been observed for decades. Despite numerous studies, the mechanisms by which fission products migrate across the coating layers remain poorly understood. In this study, scanning transmission electron microscopy (STEM), energy dispersive X-ray spectroscopy (EDS), high-resolution transmission electron microscopy (HRTEM) and electron energy loss spectroscopy (EELS) were used to examine the distribution, composition and structure of fission products in a TRISO coated particle neutron irradiated to 3.6 × 10²¹ n/cm² fast fluence at 1040 °C. Precession electron diffraction was used to investigate the characters of grain boundaries where specific fission product precipitates are located. The retention fraction of ¹¹⁰ᵐAg in the investigated TRISO particle was estimated to be 0.19. A high density of nanoscale fission product precipitates was observed in the SiC layer close to the SiC-IPyC interface, most of which are rich in Pd, while Ag was not identified. Some Pd-rich precipitates contain U. Precipitates tend to have complex structure and composition. Although a precipitate may appear to have uniform contrast in STEM, EDS indicated that there may be composition variations throughout the precipitate, and HRTEM suggested that the precipitate may have several parts differing in crystal structure or orientation. Attempts were made to measure charge states of precipitates using EELS and to study their possible effect on precipitate transport.

Keywords: TRISO particle, fission product, nuclear fuel, electron microscopy, neutron irradiation

2148 Mucoadhesive Chitosan-Coated Nanostructured Lipid Carriers for Oral Delivery of Amphotericin B

Authors: S. L. J. Tan, N. Billa, C. J. Roberts

Abstract:

Oral delivery of amphotericin B (AmpB) potentially eliminates the constraints and side effects associated with intravenous administration, but remains challenging due to the physicochemical properties of the drug, which result in meagre bioavailability (0.3%). In an advanced formulation, 1) nanostructured lipid carriers (NLC) were formulated, as they can accommodate higher levels of cargo and restrict drug expulsion, and 2) a mucoadhesion feature was incorporated so as to impart sluggish transit of the NLC along the gastrointestinal tract and hence maximize uptake and improve the bioavailability of AmpB. The AmpB-loaded NLC formulation was successfully formulated via high shear homogenisation and ultrasonication. A chitosan coating was adsorbed onto the formed NLC. Physical properties of the formulations, namely particle size, zeta potential, encapsulation efficiency (%EE), aggregation states and mucoadhesion, as well as the effect of variable pH on the integrity of the formulations, were examined. The particle size of the freshly prepared AmpB-loaded NLC was 163.1 ± 0.7 nm, with a negative surface charge, and remained essentially stable over 120 days. Adsorption of chitosan caused a significant increase in particle size to 348.0 ± 12 nm, with the zeta potential shifting towards positive values. Interestingly, the chitosan-coated AmpB-loaded NLC (ChiAmpB NLC) showed a significant decrease in particle size upon storage, suggesting an 'anti-Ostwald' ripening effect. The AmpB-loaded NLC formulation showed a %EE of 94.3 ± 0.02%, and incorporation of chitosan increased the %EE significantly, to 99.3 ± 0.15%. This suggests that the addition of chitosan renders stability to the NLC formulation, interacting with the anionic segment of the NLC and preventing drug leakage. AmpB in both the NLC and ChiAmpB NLC showed polyaggregation, which is the non-toxic conformation.
The mucoadhesiveness of the ChiAmpB NLC formulation was observed in both acidic (pH 5.8) and near-neutral (pH 6.8) conditions, as opposed to the AmpB-loaded NLC formulation. Hence, the incorporation of chitosan into the NLC formulation not only imparted a mucoadhesive property but also protected against the expulsion of AmpB, which makes it well-primed as a potential oral delivery system for AmpB.

Keywords: Amphotericin B, mucoadhesion, nanostructured lipid carriers, oral delivery

2147 Enhancing Healthcare Data Protection and Security

Authors: Joseph Udofia, Isaac Olufadewa

Abstract:

Every day, the size of electronic health record data keeps increasing as new patients visit health practitioners and returning patients fulfil their appointments. As these data grow, so does their susceptibility to cyber-attacks from criminals waiting to exploit them. In the US, the damages from cyberattacks were estimated at $8 billion (2018), $11.5 billion (2019) and $20 billion (2021). These attacks usually involve the exposure of personally identifiable information (PII). Health data is considered PII, and its exposure carries significant impact. To this end, an enhancement of health policy and standards in relation to data security, especially among patients and their clinical providers, is critical to ensure ethical practices, confidentiality, and trust in the healthcare system. As clinical accelerators and applications that contain user data are used, it is expedient to review and revamp policies like the Payment Card Industry Data Security Standard (PCI DSS), the Health Insurance Portability and Accountability Act (HIPAA), and the Fast Healthcare Interoperability Resources (FHIR) standard, all aimed at ensuring data protection and security in healthcare. FHIR caters to healthcare data interoperability, as data is shared across different systems, from customers to health insurers and care providers. The astronomical cost of implementation has deterred players in the space from ensuring compliance, leading to susceptibility to data exfiltration and data loss. Though HIPAA homes in on the security of protected health information (PHI) and PCI DSS on the security of payment card data, they intersect in the shared goal of protecting sensitive information in line with industry standards.
With advancements in technology and the emergence of new technologies, it is necessary to revamp these policies to address their complexity and ambiguity, the cost barrier, and the ever-increasing threats in cyberspace. Healthcare data in the wrong hands is a recipe for disaster, and we must enhance its protection and security to protect the mental health of current and future generations.

Keywords: cloud security, healthcare, cybersecurity, policy and standard

2146 Psychometrics of the Farsi Version of the Newcastle Nursing Care Satisfaction Scale in Patients Admitted to the Internal and General Surgery Departments of Hospitals Affiliated with Ardabil University of Medical Sciences in 2017

Authors: Mansoureh Karimollahi, Mehriar Adrmohammadi, Mohsen Mohammadi

Abstract:

Introduction: Patient satisfaction with nursing care is considered an important indicator of the quality and effectiveness of the health care system, and improving the quality of care is not possible without paying attention to the opinions and expectations of patients. Because the scales for assessing satisfaction with nursing care in our country are not comprehensive and measure very few areas, this study psychometrically evaluated the Persian version of the Newcastle nursing care satisfaction scale in patients hospitalized in internal medicine and general surgery wards. Methods: This cross-sectional study was conducted on 200 patients admitted to the surgery and internal medicine departments of hospitals affiliated with Ardabil University of Medical Sciences. To evaluate criterion validity, the Newcastle nursing care satisfaction scale was compared, for the first time in Iran, with the good nursing care scale from the patients' point of view. The Newcastle nursing care satisfaction scale was used after translation and assessment of its validity and reliability. Results: Patients' satisfaction with and experience of nursing care were at a favorable level, with averages of 111.8 ± 14.2 and 69.07 ± 14.8, respectively. The total CVI was estimated at 0.96 for the experience section, 0.95 for the satisfaction section, and 0.96 for the whole scale. The content validity ratio (CVR) was 0.95 for the experience section, 0.95 for the satisfaction section, and 0.95 for the whole scale. Criterion validity was estimated with a correlation of 0.725. Construct validity was also confirmed using goodness-of-fit indices (X² = 1932.05, p = 0.013, KMO = 0.913). Convergent validity was estimated at 0.99 for the experience subscale and 0.98 for the satisfaction subscale.
The overall reliability and the reliabilities of the experience and satisfaction subscales were 94%, 92%, and 98%, respectively, indicating acceptable reliability of the questionnaire. Conclusion: The Persian version of the Newcastle nursing care satisfaction scale is a comprehensive tool that can be easily completed by patients and is easy to interpret; it has good validity and reliability, and its use in the surgery and internal medicine departments of patient care centers is recommended.
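The CVI and CVR figures reported above follow standard content-validity formulas: Lawshe's CVR for an item is (ne − N/2)/(N/2), where ne of N experts rate the item essential, and the item-level CVI is the share of experts rating the item relevant on a 4-point scale. A brief sketch with illustrative ratings (not the study's data):

```python
# Hedged sketch of the content-validity indices mentioned above.

def cvr(n_essential, n_experts):
    """Lawshe's content validity ratio: (ne - N/2) / (N/2)."""
    return (n_essential - n_experts / 2.0) / (n_experts / 2.0)

def i_cvi(ratings, relevant=(3, 4)):
    """Item-level CVI: fraction of experts rating the item 3 or 4
    on a 4-point relevance scale."""
    return sum(r in relevant for r in ratings) / len(ratings)

def s_cvi_ave(item_ratings):
    """Scale-level CVI (averaging method): mean of the item-level CVIs."""
    cvis = [i_cvi(r) for r in item_ratings]
    return sum(cvis) / len(cvis)
```

With these definitions, a section-level CVI of 0.95-0.96, as reported, means the expert panel judged nearly every item relevant.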

Keywords: psychometrics, Newcastle nursing care satisfaction scale, nursing care satisfaction, general surgery department

2145 A Posterior Predictive Model-Based Control Chart for Monitoring Healthcare

Authors: Yi-Fan Lin, Peter P. Howley, Frank A. Tuyl

Abstract:

Quality measurement and reporting systems are used in healthcare internationally. In Australia, the Australian Council on Healthcare Standards records and reports hundreds of clinical indicators (CIs) nationally across the healthcare system. These CIs are measures of performance in the clinical setting, and are used as a screening tool to help assess whether a standard of care is being met. Existing analysis and reporting of these CIs incorporate Bayesian methods to address sampling variation; however, such assessments are retrospective in nature, reporting upon the previous six or twelve months of data. The use of Bayesian methods within statistical process control for monitoring systems is an important pursuit to support more timely decision-making. Our research has developed and assessed a new graphical monitoring tool, similar to a control chart, based on the beta-binomial posterior predictive (BBPP) distribution to facilitate the real-time assessment of health care organizational performance via CIs. The BBPP charts have been compared with the traditional Bernoulli CUSUM (BC) chart by simulation. The more traditional “central” and “highest posterior density” (HPD) interval approaches were each considered to define the limits, and the multiple charts were compared via in-control and out-of-control average run lengths (ARLs), assuming that the parameter representing the underlying CI rate (proportion of cases with an event of interest) required estimation. Preliminary results have identified that the BBPP chart with HPD-based control limits provides better out-of-control run length performance than the central interval-based and BC charts. Further, the BC chart’s performance may be improved by using Bayesian parameter estimation of the underlying CI rate.
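The BBPP chart's core computation can be sketched as follows: given a Beta(a, b) posterior for the CI rate, the predictive distribution for the number of events y among the next n cases is beta-binomial, and an HPD-style control region collects the most probable y values until the desired coverage is reached. This is a hedged illustration of the general construction, not the authors' code; the parameters are arbitrary.

```python
import math

# Log of the Beta function, via log-gamma for numerical stability.
def log_beta(x, y):
    return math.lgamma(x) + math.lgamma(y) - math.lgamma(x + y)

# Beta-binomial posterior predictive pmf: probability of y events in the
# next n cases, given a Beta(a, b) posterior for the underlying CI rate.
def bbpp_pmf(y, n, a, b):
    return math.exp(math.log(math.comb(n, y))
                    + log_beta(y + a, n - y + b) - log_beta(a, b))

# HPD-style region: smallest set of y values whose total predictive
# probability reaches the requested coverage (greedy, highest pmf first).
def hpd_region(n, a, b, coverage=0.95):
    ranked = sorted(((bbpp_pmf(y, n, a, b), y) for y in range(n + 1)),
                    reverse=True)
    region, total = [], 0.0
    for p, y in ranked:
        region.append(y)
        total += p
        if total >= coverage:
            break
    return sorted(region)
```

Observed event counts falling outside the region would signal a possible change in the underlying CI rate, analogous to a point beyond traditional control limits.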

Keywords: average run length (ARL), bernoulli cusum (BC) chart, beta binomial posterior predictive (BBPP) distribution, clinical indicator (CI), healthcare organization (HCO), highest posterior density (HPD) interval

2144 Human Factors Interventions for Risk and Reliability Management of Defence Systems

Authors: Chitra Rajagopal, Indra Deo Kumar, Ila Chauhan, Ruchi Joshi, Binoy Bhargavan

Abstract:

Reliability and safety are essential for the success of mission-critical and safety-critical defense systems. Humans are part of the entire life cycle of defense systems development and deployment, and the majority of industrial accidents or disasters are attributed to human errors. Therefore, considerations of human performance and human reliability are critical in all complex systems, including defense systems. Defense systems operating from ground, naval and aerial platforms in diverse conditions impose unique physical and psychological challenges on their human operators. Some of the safety- and mission-critical defense systems with human-machine interactions are fighter planes, submarines, warships, combat vehicles, and missiles launched from aerial and naval platforms. Human roles and responsibilities are also going through a transition due to the infusion of artificial intelligence and cyber technologies. Human operators not accustomed to such challenges are more likely to commit errors, which may lead to accidents or loss events. In such a scenario, it is imperative to understand the human factors in defense systems for better system performance, safety, and cost-effectiveness. A case study using a Task Analysis (TA) based methodology for the assessment and reduction of human errors in an air and missile defense system, in the context of emerging technologies, is presented. Action-oriented task analysis techniques such as Hierarchical Task Analysis (HTA) and the Operator Action Event Tree (OAET), along with the Critical Action and Decision Event Tree (CADET) for cognitive task analysis, were used. Human factors assessment based on task analysis helps in realizing safe and reliable defense systems. These techniques helped in the identification of human errors during different phases of air and missile defence operations, contributing to the requirement of a safe, reliable and cost-effective mission.

Keywords: defence systems, reliability, risk, safety

2143 'Sextually' Active: Teens, 'Sexting' and Gendered Double Standards in the Digital Age

Authors: Annalise Weckesser, Alex Wade, Clara Joergensen, Jerome Turner

Abstract:

Introduction: Digital mobile technologies afford Generation M a number of opportunities in terms of communication, creativity and connectivity in their social interactions. Yet these young people’s use of such technologies is often the source of moral panic with accordant social anxiety especially prevalent in media representations of teen ‘sexting,’ or the sending of sexually explicit images via smartphones. Thus far, most responses to youth sexting have largely been ineffective or unjust with adult authorities sometimes blaming victims of non-consensual sexting, using child pornography laws to paradoxically criminalise those they are designed to protect, and/or advising teenagers to simply abstain from the practice. Prevention strategies are further skewed, with sex education initiatives often targeted at girls, implying that they shoulder the responsibility of minimising the risks associated with sexting (e.g. revenge porn and sexual predation). Purpose of Study: Despite increasing public interest and concern about ‘teen sexting,’ there remains a dearth of research with young people regarding their experiences of navigating sex and relationships in the current digital media landscape. Furthermore, young people's views on sexting are rarely solicited in the policy and educational strategies aimed at them. To address this research-policy-education gap, an interdisciplinary team of four researchers (from anthropology, media, sociology and education) have undertaken a peer-to-peer research project to co-create a sexual health intervention. Methods: In the winter of 2015-2016, the research team conducted serial group interviews with four cohorts of students (aged 13 to 15) from a secondary school in the West Midlands, UK. To facilitate open dialogue, girls and boys were interviewed separately, and each group consisted of no more than four pupils. 
The team employed a range of participatory techniques to elicit young people’s views on sexting, its consequences, and its interventions. A final focus group session was conducted with all 14 male and female participants to explore developing a peer-to-peer ‘safe sexting’ education intervention. Findings: This presentation will highlight the ongoing, ‘old school’ sexual double standards at work within this new digital frontier. In the sharing of ‘nudes’ (teens’ preferred term for ‘sexting’) via social media apps (e.g. Snapchat and WhatsApp), girls felt sharing images was inherently risky and feared being blamed and ‘slut-shamed.’ In contrast, boys were seen to gain in social status if they accumulated nudes of female peers. Further, if boys had nudes of themselves shared without consent, they felt they were expected to simply ‘tough it out.’ The presentation will also explore what forms of support teens desire to help them in their day-to-day navigation of the digitally mediated, heteronormative performances of teen femininity and masculinity expected of them. Conclusion: This is the first research project within the UK conducted with, rather than about, teens and the phenomenon of sexting. It marks a timely and important contribution to the nascent but growing body of knowledge on gender, sexual politics and the digital mobility of sexual images created by and circulated amongst young people.

Keywords: teens, sexting, gender, sexual politics

Procedia PDF Downloads 238
2142 Poverty Reduction in European Cities: Local Governments’ Strategies and Programmes to Reduce Poverty; Interview Results from Austria

Authors: Melanie Schinnerl, Dorothea Greiling

Abstract:

In the context of the 2020 strategy, the fight against poverty has returned to the center of national political efforts. This served as motivation for an Austrian research grant-funded project to focus on the under-researched local government level, with the aim of identifying municipal best-practice cases and deriving policy implications for Austria. Designing effective poverty reduction strategies is a complex challenge which calls for an integrated multi-actor approach. Cities are increasingly confronted with combating poverty, even in rich EU member states. In doing so, cities face substantial demographic, cultural, economic and social challenges as well as changing welfare state regimes. Furthermore, there is a low willingness of (right-wing) governments to support the poor. Against this background, the research questions are: 1. How do local governments define poverty? 2. Who are the main risk groups and what are the most pressing problems when fighting urban poverty? 3. What are regarded as successful anti-poverty initiatives? 4. What is the underlying welfare state concept? To address the research questions, a multi-method approach was chosen, consisting of a systematic literature analysis, a comprehensive document analysis, and expert interviews. For interpreting the data, the project follows the qualitative-interpretive paradigm. Municipal approaches for reducing poverty are compared based on deductively as well as inductively identified criteria. In addition to an intensive literature analysis, 40 interviews have been conducted in Austria since the project started in March 2018. From the other countries, 14 responses have been collected, providing a first insight. Regarding the definition of poverty, the EU-SILC definition, as well as counting the persons who receive need-based minimum social benefits (the Austrian form of social welfare), are the predominant approaches in Austria.
In addition to homeless people, single-parent families, unskilled persons, long-term unemployed persons, migrants (first and second generation), refugees and families with at least three children were frequently mentioned. The most pressing challenges for Austrian cities are: expected reductions of social budgets, great insecurity about the central government's social policy reform plans, the growing number of homeless people and a lack of affordable housing. Together with affordable housing, old-age poverty will gain more importance in the future. The Austrian best-practice examples suggested by interviewees focused primarily on homeless people, children and young people (up to age 25). Central government policy changes have already had negative effects on programs for refugees and elderly unemployed persons. Social housing in Vienna was frequently mentioned as an international best-practice case that other growing cities can learn from. The results from Austria indicate a change towards the social investment state, which primarily focuses on children and labour market integration. The first insights from the other countries indicate that affordable housing and labor market integration are cross-cutting issues. Inherited poverty and old-age poverty seem to be more pressing outside Austria.

Keywords: anti-poverty policies, European cities, empirical study, social investment

Procedia PDF Downloads 120
2141 Changing Emphases in Mental Health Research Methodology: Opportunities for Occupational Therapy

Authors: Jeffrey Chase

Abstract:

Historically, the profession of Occupational Therapy was closely tied to the treatment of those suffering from mental illness; more recently, and especially in the U.S., the percentage of OTs identifying as working in the mental health area has declined significantly despite the estimate that by 2020 behavioral health disorders will surpass physical illnesses as the major cause of disability worldwide. In the U.S., less than 10% of OTs identify themselves as working with the mentally ill and/or practicing in mental health settings. Such a decline has implications for both those suffering from mental illness and the profession of Occupational Therapy. One reason cited for the decline of OT in mental health has been the limited research in the discipline addressing mental health practice. Despite significant advances in technology and growth in the field of neuroscience, major institutions and funding sources such as the National Institute of Mental Health (NIMH) have noted that research into the etiology and treatment of mental illness has met with limited success over the past 25 years. One major reason posited by NIMH is that research has been limited by how we classify individuals, which has relied mostly on what is observable. A new classification system being developed by NIMH, the Research Domain Criteria (RDoC), aims to look beyond mere descriptors of disorders for common neural, genetic, and physiological characteristics that cut across multiple supposedly separate disorders. The hope is that classifying individuals along RDoC measures will improve both reliability and validity, resulting in greater advances in the field. As a result of this change, NIH and NIMH will prioritize research funding for projects using the RDoC model. Multiple disciplines across many different settings will be required for RDoC or similar classification systems to be developed.
During this shift in research methodology, OT has an opportunity to reassert itself in the research and treatment of mental illness, both by developing new ways to classify individuals more validly and by documenting the legitimacy of previously ill-defined and poorly validated disorders such as sensory integration disorder.

Keywords: global mental health and neuroscience, research opportunities for ot, greater integration of ot in mental health research, research and funding opportunities, research domain criteria (rdoc)

Procedia PDF Downloads 277
2140 Multilocus Phylogenetic Approach Reveals Informative DNA Barcodes for Studying Evolution and Taxonomy of Fusarium Fungi

Authors: Alexander A. Stakheev, Larisa V. Samokhvalova, Sergey K. Zavriev

Abstract:

Fusarium fungi are among the most devastating plant pathogens distributed all over the world. Significant reduction of grain yield and quality caused by Fusarium leads to multi-billion dollar annual losses to world agricultural production. These organisms can also cause infections in immunocompromised persons and produce a wide range of mycotoxins, such as trichothecenes, fumonisins, and zearalenone, which are hazardous to human and animal health. Identification of Fusarium fungi based on the morphology of spores and spore-forming structures, colony color and appearance on specific culture media is often very complicated due to the high similarity of these features for closely related species. Modern Fusarium taxonomy increasingly uses data from crossing experiments (biological species concept) and genetic polymorphism analysis (phylogenetic species concept). A number of novel Fusarium sibling species have been established using DNA barcoding techniques. Species recognition is best made with the combined phylogeny of intron-rich protein-coding genes and ribosomal DNA sequences. However, the internal transcribed spacer (ITS), which is considered to be the universal DNA barcode for Fungi, is not suitable for the genus Fusarium because of its insufficient variability between closely related species and the presence of non-orthologous copies in the genome. Nowadays, the translation elongation factor 1 alpha (TEF1α) gene is the “gold standard” of Fusarium taxonomy, but the search for novel informative markers is still needed. In this study, we used two novel DNA markers, frataxin (FXN) and heat shock protein 90 (HSP90), to discover phylogenetic relationships between Fusarium species.
Multilocus phylogenetic analysis based on partial sequences of TEF1α, FXN, HSP90, as well as the intergenic spacer of ribosomal DNA (IGS), beta-tubulin (β-TUB) and phosphate permease (PHO) genes has been conducted for 120 isolates of 19 Fusarium species from different climatic zones of Russia and neighboring countries using maximum likelihood (ML) and maximum parsimony (MP) algorithms. Our analyses revealed that the FXN and HSP90 genes can be considered informative phylogenetic markers, suitable for evolutionary and taxonomic studies of the Fusarium genus. It has been shown that the PHO gene possesses more variable (22%) and parsimony-informative (19%) characters than the other markers, including TEF1α (12% and 9%, respectively), when used for elucidating phylogenetic relationships between F. avenaceum and its closest relatives – F. tricinctum, F. acuminatum, F. torulosum. Application of the novel DNA barcodes confirmed that F. arthrosporioides does not represent a separate species but only a subspecies of F. avenaceum. Phylogeny based on partial PHO and FXN sequences revealed the presence of a separate cluster of four F. avenaceum strains which were closer to F. torulosum than to the major F. avenaceum clade. The strain F-846 from Moldova, morphologically identified as F. poae, formed a separate lineage in all the constructed dendrograms and could potentially be considered a separate species, but more information is needed to confirm this conclusion. Variable sites in PHO sequences were used to develop, for the first time, specific qPCR-based diagnostic assays for F. acuminatum and F. torulosum. This work was supported by the Russian Foundation for Basic Research (grant № 15-29-02527).
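The share of parsimony-informative characters quoted above follows a standard definition: an alignment column is parsimony informative when at least two distinct character states each occur in at least two sequences. A minimal sketch of that count (the toy alignment is invented, not Fusarium data):

```python
from collections import Counter

def parsimony_informative_fraction(alignment):
    """Fraction of alignment columns that are parsimony informative."""
    ncols = len(alignment[0])
    informative = 0
    for i in range(ncols):
        counts = Counter(seq[i] for seq in alignment)
        # informative: at least two states, each present in >= 2 sequences
        if sum(1 for c in counts.values() if c >= 2) >= 2:
            informative += 1
    return informative / ncols

aln = ["ACGTA",
       "ACGTC",
       "AGGAA",
       "AGGAC"]
print(parsimony_informative_fraction(aln))  # -> 0.6 (columns 2, 4 and 5)
```

The same column scan, run over the real PHO and TEF1α alignments, is what underlies percentages such as 19% versus 9% above.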

Keywords: DNA barcode, fusarium, identification, phylogenetics, taxonomy

Procedia PDF Downloads 324
2139 Implementation of Dozer Push Measurement under Payment Mechanism in Mining Operation

Authors: Anshar Ajatasatru

Abstract:

The decline of coal prices over the past years has significantly increased the awareness of effective mining operation. A viable step must be undertaken to become more cost competitive while striving for best mining practice, especially at Melak Coal Mine in East Kalimantan, Indonesia. This paper aims to show how an effective dozer push measurement method can be implemented as it is controlled by a contract rate on the unit basis of USD ($) per bcm. The method emerges from an idea of daily dozer push activity that continually shifts the overburden until the final target design by mine planning. Volume calculation is then performed by calculating the volume each time overburden is removed within a determined distance, using the cut and fill method from a high-precision GNSS system which is applied to the dozer as guidance to ensure the optimum result of overburden removal. The accumulation of daily to weekly dozer push volume is found to be 95 bcm, which is multiplied by the average sell rate of $0.95; thus, the monthly revenue amounts to $90.25. Furthermore, the payment mechanism is then based on push distance and push grade. The push distance interval determines the rates, which vary from $0.90 to $2.69 per bcm and are influenced by the push slope grade, from -25% to +25%. The payable rates for dozer push operation shall specifically follow currency adjustment and are to be added to the monthly overburden volume claim; therefore, the sell rate of overburden volume per bcm may fluctuate depending on the real-time exchange rate of the Jakarta Interbank Spot Dollar Rate (JISDOR). The result indicates that dozer push measurement can be a viable surface mining alternative since it enables refinement of work methods, operating cost and productivity improvement, apart from reducing the risk of poor rented-equipment performance.
In addition, a contract-rate payment mechanism based on dozer push operation scheduling will ultimately deliver clients almost 45% cost reduction in the form of low and consistent costs.
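The revenue arithmetic above can be sketched as follows. The distance bands and the rates inside them are hypothetical illustrations of the quoted $0.90 to $2.69 range; the 95 bcm volume and the $0.95 average rate are taken from the abstract:

```python
def push_rate(distance_m, bands):
    """Look up the contract rate ($/bcm) for a given push distance."""
    for limit, rate in bands:
        if distance_m <= limit:
            return rate
    return bands[-1][1]

# Hypothetical distance bands spanning the quoted $0.90-$2.69 range
BANDS = [(20, 0.90), (50, 1.40), (100, 2.00), (float("inf"), 2.69)]

volume_bcm = 95     # accumulated dozer push volume (from the abstract)
avg_rate = 0.95     # average sell rate in $/bcm (from the abstract)
revenue = round(volume_bcm * avg_rate, 2)
print(revenue)      # -> 90.25, the quoted monthly revenue in USD
```

In practice the per-bcm rate would be looked up per push (by distance and grade) before accumulating the claim, and then adjusted by the JISDOR exchange rate as described.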

Keywords: contract rate, cut-fill method, dozer push, overburden volume

Procedia PDF Downloads 318
2138 Learning from Dendrites: Improving the Point Neuron Model

Authors: Alexander Vandesompele, Joni Dambre

Abstract:

The diversity in dendritic arborization, as first illustrated by Santiago Ramon y Cajal, has always suggested a role for dendrites in the functionality of neurons. In the past decades, thanks to new recording techniques and optical stimulation methods, it has become clear that dendrites are not merely passive electrical components. They are observed to integrate inputs in a non-linear fashion and actively participate in computations. Regardless, in simulations of neural networks, dendritic structure and functionality are often overlooked. Especially in a machine learning context, when designing artificial neural networks, point neuron models such as the leaky integrate-and-fire (LIF) model are dominant. These models mimic the integration of inputs at the neuron soma and ignore the existence of dendrites. In this work, the LIF point neuron model is extended with a simple form of dendritic computation. This gives the LIF neuron increased capacity to discriminate spatiotemporal input sequences, a dendritic functionality observed in another study. Simulations of the spiking neurons are performed using the Bindsnet framework. In the common LIF model, incoming synapses are independent. Here, we introduce a dependency between incoming synapses such that the post-synaptic impact of a spike is not only determined by the weight of the synapse, but also by the activity of other synapses. This is a form of short-term plasticity where synapses are potentiated or depressed by the preceding activity of neighbouring synapses. This is a straightforward way to prevent inputs from simply summing linearly at the soma. To implement this, each pair of synapses on a neuron is assigned a variable, representing the synaptic relation. This variable determines the magnitude of the short-term plasticity. These variables can be chosen randomly or, more interestingly, can be learned using a form of Hebbian learning.
We use Spike-Timing-Dependent Plasticity (STDP), commonly used to learn synaptic strength magnitudes. If all neurons in a layer receive the same input, they tend to learn the same patterns through STDP. Adding inhibitory connections between the neurons creates a winner-take-all (WTA) network. This causes the different neurons to learn different input sequences. To illustrate the impact of the proposed dendritic mechanism, even without learning, we attach five input neurons to two output neurons. One output neuron is a regular LIF neuron, the other output neuron is a LIF neuron with dendritic relationships. Then, the five input neurons are allowed to fire in a particular order. The membrane potentials are reset and subsequently the five input neurons are fired in the reversed order. As the regular LIF neuron linearly integrates its inputs at the soma, the membrane potential response to both sequences is similar in magnitude. In the other output neuron, due to the dendritic mechanism, the membrane potential response is different for both sequences. Hence, the dendritic mechanism improves the neuron’s capacity for discriminating spatiotemporal sequences. Dendritic computations improve LIF neurons even if the relationships between synapses are established randomly. Ideally, however, a learning rule is used to improve the dendritic relationships based on input data. It is possible to learn synaptic strength with STDP, to make a neuron more sensitive to its input. Similarly, it is possible to learn dendritic relationships with STDP, to make the neuron more sensitive to spatiotemporal input sequences. Feeding structured data to a WTA network with dendritic computation leads to a significantly higher number of discriminated input patterns. Without the dendritic computation, output neurons are less specific and may, for instance, be activated by a sequence in reverse order.
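A minimal sketch of the five-input experiment described above (not the authors' Bindsnet implementation; all constants are illustrative). Each spike's impact is scaled by the recent activity of neighbouring synapses via a pairwise relation matrix, which makes the peak response order-sensitive, whereas a plain LIF neuron with equal weights is not:

```python
def lif_response(spike_order, weights, relation=None, tau=0.9):
    """Peak membrane potential when inputs fire one per time step."""
    v, peak = 0.0, 0.0
    trace = [0.0] * len(weights)        # short-term activity trace per synapse
    for syn in spike_order:
        gain = 1.0
        if relation is not None:        # neighbours modulate this spike's impact
            gain += sum(relation[syn][j] * trace[j] for j in range(len(weights)))
        v = v * tau + weights[syn] * gain
        peak = max(peak, v)
        trace = [x * tau for x in trace]
        trace[syn] = 1.0
    return peak

weights = [1.0] * 5
# asymmetric relation: synapse i is potentiated when synapse i-1 fired just before
rel = [[0.5 if j == i - 1 else 0.0 for j in range(5)] for i in range(5)]

fwd = [0, 1, 2, 3, 4]
rev = fwd[::-1]
# plain LIF with equal weights: order-insensitive (linear summation at the soma)
assert abs(lif_response(fwd, weights) - lif_response(rev, weights)) < 1e-9
# dendritic LIF: the forward order recruits the relations, the reverse does not
assert lif_response(fwd, weights, rel) > lif_response(rev, weights, rel)
```

With a learned rather than hand-picked relation matrix, the same order-sensitivity is what lets the WTA network discriminate more input patterns.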

Keywords: dendritic computation, spiking neural networks, point neuron model

Procedia PDF Downloads 134
2137 Distributed Listening in Intensive Care: Nurses’ Collective Alarm Responses Unravelled through Auditory Spatiotemporal Trajectories

Authors: Michael Sonne Kristensen, Frank Loesche, James Foster, Elif Ozcan, Judy Edworthy

Abstract:

Auditory alarms play an integral role in intensive care nurses’ daily work. Most medical devices in the intensive care unit (ICU) are designed to produce alarm sounds in order to make nurses aware of immediate or prospective safety risks. The utilisation of sound as a carrier of crucial patient information is highly dependent on nurses’ presence - both physically and mentally. For ICU nurses, especially the ones who work with stationary alarm devices at the patient bed space, it is a challenge to display ‘appropriate’ alarm responses at all times as they have to navigate with great flexibility in a complex work environment. While being primarily responsible for a small number of allocated patients, they are often required to engage with other nurses’ patients, relatives, and colleagues at different locations inside and outside the unit. This work explores the social strategies used by a team of nurses to comprehend and react to the information conveyed by the alarms in the ICU. Two main research questions guide the study: To what extent do alarms from a patient bed space reach the relevant responsible nurse by direct auditory exposure? By which means do responsible nurses get informed about their patients’ alarms when not directly exposed to the alarms? A comprehensive video-ethnographic field study was carried out to capture and evaluate alarm-related events in an ICU. The study involved close collaboration with four nurses who wore eye-level cameras and ear-level binaural audio recorders during several work shifts. At all times, the entire unit was monitored by multiple video and audio recorders. From a data set of hundreds of hours of recorded material, information about the nurses’ location, social interaction, and alarm exposure at any point in time was coded in a multi-channel replay interface.
The data shows that responsible nurses’ direct exposure to and awareness of the alarms of their allocated patients vary significantly depending on workload, social relationships, and the location of the patient’s bed space. Distributed listening is deliberately employed by the nursing team as a social strategy to respond adequately to alarms, but the patterns of information flow prompted by alarm-related events are not uniform. Auditory Spatiotemporal Trajectory (AST) is proposed as a methodological label to designate the integration of temporal, spatial and auditory load information. As a mixed-method metric, it provides tangible evidence of how nurses’ individual alarm-related experiences differ from one another and from stationary points in the ICU. Furthermore, it is used to demonstrate how alarm-related information reaches the individual nurse through principles of social and distributed cognition, and how that information relates to the actual alarm event. Thereby it bridges a long-standing gap in the literature on medical alarm utilisation between, on the one hand, initiatives to measure objective data of the medical sound environment without consideration for any human experience, and, on the other hand, initiatives to study subjective experiences of the medical sound environment without detailed evidence of the objective characteristics of the environment.

Keywords: auditory spatiotemporal trajectory, medical alarms, social cognition, video-ethnography

Procedia PDF Downloads 191
2136 Performance and Damage Detection of Composite Structural Insulated Panels Subjected to Shock Wave Loading

Authors: Anupoju Rajeev, Joanne Mathew, Amit Shelke

Abstract:

In the current study, a new type of Composite Structural Insulated Panel (CSIP), which can replace conventional wooden structural materials, is developed and its performance against shock loading is investigated. The CSIP is made of Fibre Cement Board (FCB)/aluminum as the facesheet and expanded polystyrene foam as the core material. As tornadoes occur very often in western countries, it is advisable to monitor the health of CSIPs during their lifetime. Hence, the composite structure is installed with three smart sensors at definite locations. Each smart sensor is fabricated with an embedded half stainless phononic crystal sensor attached to both ends of a nylon shaft that can resist shock and impact on the facesheet as well as the polystyrene foam core and safeguards the system. In addition to the granular crystal sensors, accelerometers are used in the horizontal and vertical spanning with a definite offset distance. To estimate the health and damage of the CSIP panel using the granular crystal sensors, shock wave loading experiments are conducted. During the experiments, the time-of-flight response from the granular sensors is measured. The main objective of conducting shock wave loading experiments on the CSIP panels is to study the effect and sustaining capacity of the CSIP panels in extremely hazardous situations like tornadoes and hurricanes, which are very common in western countries. The effects have been replicated using a shock tube, an instrument that can be used to create the same wind and pressure intensity as a tornado for the experimental study. Numerous experiments have been conducted to investigate the flexural strength of the CSIP. Furthermore, the study includes damage detection using the three smart sensors embedded in the CSIPs during the shock wave loading.

Keywords: composite structural insulated panels, damage detection, flexural strength, sandwich structures, shock wave loading

Procedia PDF Downloads 147
2135 The Layout Analysis of Handwriting Characters and the Fusion of Multi-style Ancient Books’ Background

Authors: Yaolin Tian, Shanxiong Chen, Fujia Zhao, Xiaoyu Lin, Hailing Xiong

Abstract:

Ancient books are significant culture inheritors, and their background textures convey potential historical information. However, multi-style texture recovery of ancient books has received little attention. Restricted by insufficient ancient textures and a complex handling process, the generation of ancient textures confronts new challenges. For instance, training without sufficient data usually brings about overfitting or mode collapse, so some of the outputs are prone to be fake. Recently, image generation and style transfer based on deep learning have been widely applied in computer vision. Breakthroughs within the field make it possible to conduct research on multi-style texture recovery of ancient books. Under these circumstances, we proposed a network of layout analysis and an image fusion system. Firstly, we trained models using Deep Convolutional Generative Adversarial Networks (DCGAN) to synthesize multi-style ancient textures; then, we analyzed layouts based on the Position Rearrangement (PR) algorithm that we proposed to adjust the layout structure of foreground content; at last, we realized our goal by fusing rearranged foreground texts and generated backgrounds. In the experiments, diversified samples such as ancient Yi, Jurchen, and Seal script were selected as our training sets. Then, the performances of different fine-tuned models were gradually improved by adjusting the DCGAN model in parameters as well as structure. In order to evaluate the results scientifically, the cross-entropy loss function and the Fréchet Inception Distance (FID) were selected as our assessment criteria. Eventually, we got model M8 with the lowest FID score. Compared with the DCGAN model proposed by Radford et al., the FID score of M8 improved by 19.26%, enhancing the quality of the synthetic images profoundly.
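As a side note on the FID criterion used above: for Gaussian-fitted activations the FID has a closed form, ||μ₁−μ₂||² + Tr(Σ₁+Σ₂−2(Σ₁Σ₂)^{1/2}). The univariate special case below illustrates it with made-up moments (the real metric compares multivariate Gaussians fitted to Inception features of real versus generated images):

```python
import math

def fid_1d(mu1, var1, mu2, var2):
    """Frechet distance between two 1-D Gaussians (univariate FID)."""
    return (mu1 - mu2) ** 2 + var1 + var2 - 2 * math.sqrt(var1 * var2)

# identical distributions score 0; diverging moments grow the score
assert fid_1d(0.0, 1.0, 0.0, 1.0) == 0.0
print(fid_1d(0.0, 1.0, 1.0, 4.0))  # -> 1 + 1 + 4 - 2*2 = 2.0
```

A lower score means the generated-texture statistics sit closer to the real ones, which is why the model with the lowest FID (M8) was retained.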

Keywords: deep learning, image fusion, image generation, layout analysis

Procedia PDF Downloads 160
2134 Optimizing the Design Parameters of Acoustic Power Transfer Model to Achieve High Power Intensity and Compact System

Authors: Ariba Siddiqui, Amber Khan

Abstract:

The need for bio-implantable devices in the field of medical sciences has been increasing day by day; however, the charging of these devices is a major issue. Batteries, a very common method of powering implants, have a limited lifetime and bulky nature. Therefore, as a replacement for batteries, acoustic power transfer (APT) technology is being accepted as the most suitable technique to wirelessly power medical implants in the present scenario. The basic model of APT consists of piezoelectric transducers that work on the principle of the converse piezoelectric effect at the transmitting end and the direct piezoelectric effect at the receiving end. This paper provides mechanistic insight into the parameters affecting the design and efficient working of acoustic power transfer systems. The optimum design considerations have been presented that will help to compress the size of the device and augment the intensity of the pressure wave. A COMSOL model of the PZT (Lead Zirconate Titanate) transducer was developed. The model was simulated and analyzed over a frequency spectrum. The simulation results showed that the efficiency of these devices is strongly dependent on the frequency of operation, and a wrong choice of operating frequency leads to high absorption of the acoustic field inside the tissue (medium), poor power strength, and heavy transducers, which in effect influence the overall configuration of the acoustic systems. Considering all the tradeoffs, the simulations were performed again with an optimum frequency (900 kHz), which resulted in the reduction of the transducer's thickness to 1.96 mm and augmented the power strength to an intensity of 432 W/m². Thus, the results obtained after the second simulation contribute to lower attenuation, lightweight systems, and high power intensity, and also comply with the safety limits provided by the U.S. Food and Drug Administration (FDA).
It was also found that the chosen operating frequency enhances the directivity of the acoustic wave at the receiver side.
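As a rough plausibility check of the numbers above (a sketch, not the authors' COMSOL model): a thickness-mode piezoelectric plate resonates at its fundamental when the thickness equals half an acoustic wavelength, t = c/(2f). The effective sound speed used below is back-solved from the reported figures rather than taken from a material datasheet:

```python
# t = c / (2 f): half-wavelength thickness-mode resonance.
# c_eff is an assumed effective longitudinal speed in PZT, chosen to
# match the reported geometry; bulk PZT values vary by composition.
c_eff = 3530.0          # m/s (assumption, implied by the reported numbers)
f = 900e3               # Hz, the optimized operating frequency
t_mm = c_eff / (2 * f) * 1e3
print(round(t_mm, 2))   # -> 1.96, the reported transducer thickness in mm
```

The inverse relation t ∝ 1/f is also why raising the operating frequency thins and lightens the transducer, at the cost of higher tissue absorption.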

Keywords: acoustic power, bio-implantable, COMSOL, Lead Zirconate Titanate, piezoelectric, transducer

Procedia PDF Downloads 175
2133 Evaluation of Organizational Culture and Its Effects on Innovation in the IT Sector: A Case Study from UAE

Authors: Amir M. Shikhli, Refaat H. Abdel-Razek, Salaheddine Bendak

Abstract:

Innovation is considered to be one of the key factors that influence the long-term success of any company. The problem of many organizations in developing countries is trying to implement innovation without a strong basis within the organizational culture to support it. The objective of this study is to assess the effects of organizational culture on innovation in one of the biggest information technology organizations in the UAE, Injazat Data System. First, the Organizational Culture Assessment Instrument (OCAI) was used as a survey and the Competing Values Framework as a model to analyze the existing culture within the organization and determine its characteristics. Following that, a modified version of the Community Innovation Survey (CIS) was used to determine the innovation types introduced by the organization. Then multiple linear regression analysis was used to find out the effects of the existing organizational culture on innovation. Results show that the existing organizational culture is composed of a combination of Hierarchy (29.4%), Clan (25.8%), Market (24.9%) and Adhocracy (19.9%). Results of the second survey show that the organization focuses on organizational innovation (26.8%), followed by market and product innovations (25.6%) and finally process innovation (22.0%). Regression analysis results reveal that for each innovation type there is a recommended combination of the four culture types. For product innovation, the combination is 47.4% Clan, 17.9% Adhocracy, 1.0% Market and 33.3% Hierarchy; for process innovation it is 19.7% Clan, 45.2% Adhocracy, 32.0% Market and 3.1% Hierarchy; for organizational innovation the combination is 5.4% Clan, 32.7% Adhocracy, 6.0% Market and 55.9% Hierarchy; and for market innovation it is 25.5% Clan, 42.6% Adhocracy, 32.6% Market and 8.4% Hierarchy. Based on these recommended combinations, this study suggests two ways to enhance the innovation culture in the organization.
First, if the management decides on the innovation type to be enhanced, a comparison between the existing culture and the recommended combination for the selected innovation type will yield differences in the percentages of each culture type. Further analysis should then show how to modify the existing culture to match the recommended combination. Second, if the innovation type is not selected but the management wants to enhance the innovation culture in the organization, the differences in the percentages of each culture type will lead to the recommended combination of culture types that gives the narrowest gap between the existing culture and the recommended combination.
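The gap analysis described above can be sketched as follows. The percentages are taken from the abstract, while the summed-absolute-difference gap metric is an assumption for illustration:

```python
# Existing culture profile and recommended combinations (from the abstract)
EXISTING = {"clan": 25.8, "adhocracy": 19.9, "market": 24.9, "hierarchy": 29.4}

RECOMMENDED = {
    "product":        {"clan": 47.4, "adhocracy": 17.9, "market": 1.0,  "hierarchy": 33.3},
    "process":        {"clan": 19.7, "adhocracy": 45.2, "market": 32.0, "hierarchy": 3.1},
    "organizational": {"clan": 5.4,  "adhocracy": 32.7, "market": 6.0,  "hierarchy": 55.9},
    "market":         {"clan": 25.5, "adhocracy": 42.6, "market": 32.6, "hierarchy": 8.4},
}

def total_gap(existing, target):
    """Sum of absolute per-culture-type differences (assumed metric)."""
    return sum(abs(existing[k] - target[k]) for k in existing)

gaps = {name: total_gap(EXISTING, combo) for name, combo in RECOMMENDED.items()}
print(min(gaps, key=gaps.get))  # innovation type whose combination is closest
```

Under this metric the narrowest gap happens to be product innovation (51.4 points), just ahead of market innovation (51.7); a different gap metric could rank them differently.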

Keywords: developing countries, organizational culture, innovation types, product innovation, process innovation, organizational innovation, marketing innovation

Procedia PDF Downloads 275
2132 Tri/Tetra-Block Copolymeric Nanocarriers as a Potential Ocular Delivery System of Lornoxicam: Experimental Design-Based Preparation, in-vitro Characterization and in-vivo Estimation of Transcorneal Permeation

Authors: Alaa Hamed Salama, Rehab Nabil Shamma

Abstract:

Introduction: Polymeric micelles that can deliver drugs to intended sites of the eye have attracted much scientific attention recently. The aim of this study was to review the aqueous-based formulation of drug-loaded polymeric micelles that hold significant promise for ophthalmic drug delivery. This study investigated the synergistic performance of mixed polymeric micelles made of linear and branched poly(ethylene oxide)-poly(propylene oxide) for the more effective encapsulation of Lornoxicam (LX) as a hydrophobic model drug. Methods: The co-micellization process of 10% binary systems combining different weight ratios of the highly hydrophilic poloxamers, Synperonic® PE/P84 and Synperonic® PE/F127, and the hydrophobic poloxamine counterpart (Tetronic® T701) was investigated by means of photon correlation spectroscopy and cloud point. The drug-loaded micelles were tested for their solubilizing capacity towards LX. Results: Results showed a sharp solubility increase from 0.46 mg/ml up to more than 4.34 mg/ml, representing a more than nine-fold increase. The optimized formulation was selected to achieve maximum drug solubilizing power and clarity with the lowest possible particle size. The optimized formulation was characterized by ¹H NMR analysis, which revealed complete encapsulation of the drug within the micelles. Further investigations by histopathological and confocal laser studies revealed the non-irritant nature and good corneal penetrating power of the proposed nano-formulation. Conclusion: An LX-loaded polymeric nanomicellar formulation was fabricated, allowing easy application of the drug in the form of clear eye drops that do not cause blurred vision or discomfort, thus achieving high patient compliance.

Keywords: confocal laser scanning microscopy, histopathological studies, lornoxicam, micellar solubilization

Procedia PDF Downloads 450
2131 A General Framework for Measuring the Internal Fraud Risk of an Enterprise Resource Planning System

Authors: Imran Dayan, Ashiqul Khan

Abstract:

Internal corporate fraud, which is fraud carried out by internal stakeholders of a company, affects the well-being of the organisation just like its external counterpart. Even if such an act is carried out for the short-term benefit of a corporation, the act is ultimately harmful to the entity in the long run. Internal fraud is often carried out by relying upon aberrations from usual business processes. Business processes are the lifeblood of a company in the modern managerial context. Such processes are developed and fine-tuned over time as a corporation grows through its life stages. Modern corporations have embraced technological innovations into their business processes, and Enterprise Resource Planning (ERP) systems being at the heart of such business processes is a testimony to that. Since ERP systems record a huge amount of data in their event logs, the logs are a treasure trove for anyone trying to detect any sort of fraudulent activity hidden within the day-to-day business operations and processes. This research utilises the ERP systems in place within corporations to assess the likelihood of prospective internal fraud by developing a framework for measuring the risk of fraud through Process Mining techniques, and hence finds risky designs and loose ends within these business processes. This framework helps not only in identifying existing cases of fraud in the records of the event log, but also in signalling the overall riskiness of certain business processes, and hence draws attention to carrying out a redesign of such processes to reduce the chance of future internal fraud while improving internal control within the organisation.
The research adds value by applying the concepts of Process Mining into the analysis of data from modern day applications of business process records, which is the ERP event logs, and develops a framework that should be useful to internal stakeholders for strengthening internal control as well as provide external auditors with a tool of use in case of suspicion. The research proves its usefulness through a few case studies conducted with respect to big corporations with complex business processes and an ERP in place.
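As a hedged illustration of the process-mining idea described above, the sketch below flags event-log traces that deviate from an allowed transition map, a minimal form of conformance checking. The activity names, the transition whitelist, and the risk score are hypothetical examples, not the paper's actual framework.

```python
# Minimal conformance-checking sketch: flag ERP event-log traces whose
# activity transitions are not permitted by a simple process model.
# Activity names and the allowed-transition map are illustrative.

ALLOWED = {
    "create_po": {"approve_po"},
    "approve_po": {"receive_goods"},
    "receive_goods": {"record_invoice"},
    "record_invoice": {"pay_invoice"},
    "pay_invoice": set(),
}

def deviations(trace):
    """Return the (from, to) transitions in a trace not permitted by ALLOWED."""
    return [(a, b) for a, b in zip(trace, trace[1:])
            if b not in ALLOWED.get(a, set())]

def risk_score(log):
    """Fraction of traces in the event log containing at least one deviation."""
    flagged = sum(1 for trace in log if deviations(trace))
    return flagged / len(log)

log = [
    ["create_po", "approve_po", "receive_goods", "record_invoice", "pay_invoice"],
    ["create_po", "pay_invoice"],  # payment without approval or receipt: risky
]
print(risk_score(log))  # 0.5
```

In a real framework the transition model would be mined from the log itself rather than hand-written, and the score would feed into the broader risk assessment.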

Keywords: enterprise resource planning, fraud risk framework, internal corporate fraud, process mining

Procedia PDF Downloads 337
2130 Performance Enrichment of Deep Feed Forward Neural Network and Deep Belief Neural Networks for Fault Detection of Automobile Gearbox Using Vibration Signal

Authors: T. Praveenkumar, Kulpreet Singh, Divy Bhanpuriya, M. Saimurugan

Abstract:

This study analysed the classification accuracy of machine learning techniques for gearbox faults. Gearboxes are widely used for mechanical power transmission in rotating machines. Their rotating components, such as bearings, gears, and shafts, tend to wear with prolonged usage, causing fluctuating vibrations. Improving the dependability of mechanical components like a gearbox is hampered by their sealed design, which makes visual inspection difficult. One way of detecting impending failure is to detect a change in the vibration signature. The current study applies various machine learning algorithms to such vibration signals to obtain the fault classification accuracy of an automotive 4-speed synchromesh gearbox. Experimental data in the form of vibration signals were acquired from a 4-speed synchromesh gearbox using a data acquisition system (DAQ). Statistical features were extracted from the acquired vibration signals under various operating conditions, and the extracted features were given as input to the algorithms for fault classification. Supervised machine learning algorithms such as support vector machines (SVM) and deep learning algorithms such as the deep feed forward neural network (DFFNN) and deep belief networks (DBN) were used for fault classification. A fusion of the DBN and DFFNN classifiers was architected to further enhance the classification accuracy and reduce computational complexity. The fault classification accuracy of each algorithm was thoroughly studied, tabulated, and graphically analysed for the fused and individual algorithms. In conclusion, the fusion of the DBN and DFFNN algorithms yielded the best classification accuracy and was selected for fault detection due to its faster computational processing and greater efficiency.
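The pipeline described above, statistical features extracted from vibration signals and fed to a classifier, can be sketched as follows. This is a minimal illustration using synthetic signals and an SVM stand-in for the classifier stage; it is not the paper's gearbox data or its DBN/DFFNN fusion architecture.

```python
# Sketch of a vibration-based fault-classification pipeline:
# extract statistical features per signal, then train a classifier.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def features(signal):
    """Statistical features commonly extracted from vibration signals."""
    return [signal.mean(), signal.std(),
            np.sqrt(np.mean(signal ** 2)),   # RMS
            np.max(np.abs(signal))]          # peak amplitude

# Synthetic "healthy" vs "faulty" signals (faults modelled as higher energy)
healthy = [rng.normal(0, 1.0, 1024) for _ in range(50)]
faulty = [rng.normal(0, 2.5, 1024) for _ in range(50)]
X = np.array([features(s) for s in healthy + faulty])
y = np.array([0] * 50 + [1] * 50)

clf = SVC(kernel="rbf").fit(X, y)
print(clf.score(X, y))
```

A deep-learning classifier or a fusion of several classifiers would replace the SVC at the final step, operating on the same feature matrix.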

Keywords: deep belief networks, DBN, deep feed forward neural network, DFFNN, fault diagnosis, fusion of algorithms, vibration signal

Procedia PDF Downloads 117
2129 Spatiotemporal Variability of Snow Cover and Snow Water Equivalent over Eurasia

Authors: Yinsheng Zhang

Abstract:

Changes in the extent and amount of snow cover in Eurasia are of great interest because of their vital impacts on the global climate system and regional water resource management. This study investigated the spatial and temporal variability of the snow cover extent (SCE) and snow water equivalent (SWE) of continental Eurasia using the Northern Hemisphere Equal-Area Scalable Earth Grid (EASE-Grid) Weekly SCE data for 1972–2006 and the Global Monthly EASE-Grid SWE data for 1979–2004. The results indicated that, in general, the spatial extent of snow cover significantly decreased during spring and summer, but varied little during autumn and winter over Eurasia in the study period. The date at which snow cover began to disappear in spring advanced significantly, whereas the timing of snow cover onset in autumn did not vary significantly during 1972–2006. The snow cover persistence period declined significantly in the western Tibetan Plateau as well as in parts of Central Asia and northwestern Russia, but varied little in other parts of Eurasia. ‘Snow-free breaks’ (SFBs) with intermittent snow cover in the cold season were mainly observed in the Tibetan Plateau and Central Asia, causing a low sensitivity of the snow cover persistence period to the timings of snow cover onset and disappearance over areas with shallow snow. The averaged SFBs lasted 1–14 weeks in the Tibetan Plateau during 1972–2006, and the maximum intermittence could reach 25 weeks in some extreme years. At a seasonal scale, the SWE usually peaked in February or March but fell gradually from April onwards across Eurasia. Both the annual mean and annual maximum SWE decreased significantly during 1979–2004 in most parts of Eurasia, except for eastern Siberia as well as northwestern and northeastern China.
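Statements such as "significantly decreased" typically rest on a trend test over the annual series. A minimal sketch of that kind of test, using synthetic spring SCE values rather than the EASE-Grid data, might look like:

```python
# Ordinary least-squares trend with p-value on an annual series,
# the usual basis for "significant decrease" statements.
# The SCE values below are synthetic placeholders, not EASE-Grid data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
years = np.arange(1972, 2007)
# Synthetic spring SCE (10^6 km^2): a declining trend plus interannual noise
sce = 26.0 - 0.05 * (years - 1972) + rng.normal(0, 0.3, years.size)

res = stats.linregress(years, sce)
print(f"trend = {res.slope:.3f} per year, p = {res.pvalue:.4f}")
significant = res.pvalue < 0.05
```

Studies of this kind often use the non-parametric Mann-Kendall test instead, which makes no normality assumption; the least-squares version above is simply the shortest self-contained illustration.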

Keywords: Eurasia, snow cover extent, snow cover persistence period, snow-free breaks, onset and disappearance timings, snow water equivalent

Procedia PDF Downloads 148
2128 Effect of Oil Viscosity and Brine Salinity/Viscosity on Water/Oil Relative Permeability and Residual Saturations

Authors: Sami Aboujafar

Abstract:

Oil recovery in petroleum reservoirs is greatly affected by fluid-rock and fluid-fluid interactions. These interactions directly control rock wettability, capillary pressure, and relative permeability curves. Laboratory core floods and centrifuge experiments were conducted on sandstone and carbonate cores to study the effect of low and high brine salinity and viscosity, and of oil viscosity, on residual saturations and relative permeability. Drainage and imbibition relative permeabilities in two-phase systems were measured; refined lab oils with different viscosities, heavy and light, and several brine salinities were used. Sensitivity analyses with different values for the salinity and viscosity of the fluids, oil and water, were carried out to investigate the effect of these properties on water/oil relative permeability, residual oil saturation, and oil recovery. Experiments were conducted on core material from viscous/heavy and light oil fields. A history-matching core-flood simulator was used to study how the relative permeability curves and end-point saturations were affected by different fluid properties, using several correlations. The results were compared with field data and literature data. The results indicate a correlation between the oil viscosity and/or brine salinity and the residual oil saturation and water relative permeability end point: increasing oil viscosity reduces Krw@Sor and increases Sor. The remaining oil saturation from laboratory measurements might be too high due to experimental procedures, the capillary end effect, and early termination of the experiment, especially when using heavy/viscous oil; similarly, Krw@Sor may be too low. The effect of wettability on the observed results is also discussed. A consistent relationship has been drawn between the fluid parameters, water/oil relative permeability, and residual saturations, and a descriptor may be derived to define different flow behaviors.
The results of this work will have application to producing fields, and the methodologies developed could have wider application to sandstone and carbonate reservoirs worldwide.
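To make quantities such as Sor and Krw@Sor concrete, a standard Corey-type parameterization of the water/oil relative permeability curves can be sketched as below. The endpoints and exponents are illustrative values, not measurements from this study, and the abstract does not state which correlations its simulator actually used.

```python
# Corey-type relative permeability curves. Sor is the residual oil
# saturation; corey_krw evaluated at sw = 1 - Sor gives the water
# endpoint Krw@Sor. All parameter values below are illustrative.

def corey_krw(sw, swc, sor, krw_max, nw=2.0):
    """Water relative permeability at water saturation sw."""
    swn = (sw - swc) / (1.0 - swc - sor)  # normalized water saturation
    swn = min(max(swn, 0.0), 1.0)
    return krw_max * swn ** nw

def corey_kro(sw, swc, sor, kro_max, no=2.0):
    """Oil relative permeability at water saturation sw."""
    son = (1.0 - sw - sor) / (1.0 - swc - sor)  # normalized oil saturation
    son = min(max(son, 0.0), 1.0)
    return kro_max * son ** no

# At sw = 1 - Sor the water curve reaches its endpoint Krw@Sor
swc, sor, krw_max = 0.15, 0.30, 0.4
print(corey_krw(1.0 - sor, swc, sor, krw_max))  # 0.4
```

The trends reported above would appear here as parameter shifts: higher oil viscosity corresponds to a larger sor and a smaller krw_max.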

Keywords: history matching core flood simulator, oil recovery, relative permeability, residual saturations

Procedia PDF Downloads 339
2127 Synthesis of Highly Stable Multi-Functional Iron Oxide Nanoparticles for Active Mitochondrial Targeting in Immunotherapy

Authors: Masome Moeni, Roya Abedizadeh, Elham Aram, Hamid Sadeghi-Abandansari, Davood Sabour, Robert Menzel, Ali Hassanpour

Abstract:

Mitochondria-targeting immunogenic cell death (MT-ICD) inducers are designed to trigger the intrinsic apoptosis signalling pathway in malignant cells and revive the antitumour immune system. However, MT-ICD inducers are considered non-specific, which can undermine their ability to initiate mitochondria-selective oxidative stress and cause high toxicity. Iron oxide nanoparticles (IONPs) are ideal candidate vehicles for immunotherapy due to their biocompatibility, modifiable surface chemistry, magnetic characteristics, and multi-functional applications in a single platform. These nanoparticles can facilitate real-time imaging, providing an effective strategy to analyse the pharmacokinetic parameters of a nano-formula, including blood circulation time and targeted, controlled release in the tumour microenvironment. To our knowledge, the conjugation of IONPs with an MT-ICD inducer and oxaliplatin (a chemotherapeutic agent used for the treatment of colorectal cancer) for immunotherapy has not been investigated. Herein, IONPs were generated via a co-precipitation reaction at high temperature, followed by coating of the colloidal suspension with tetraethyl orthosilicate and 3-aminopropyltriethoxysilane to optimize their biocompatibility, prevent aggregation, and maintain stability at physiological pH; the particles were then functionalized with (3-carboxypropyl)triphenylphosphonium bromide for mitochondrial delivery. Analytical results demonstrated the successful functionalization of the IONPs. In particular, the colloidal particles of the doped IONPs exhibited excellent stability and dispersibility. The resultant particles were also successfully loaded with oxaliplatin for active mitochondrial targeting in immunotherapy, with the functionalized IONPs retaining their superparamagnetic characteristics and stable structure at nanoscale particle sizes.

Keywords: immunotherapy, mitochondria, cancer, iron oxide nanoparticle

Procedia PDF Downloads 76
2126 Validation Study of Radial Aircraft Engine Model

Authors: Lukasz Grabowski, Tytus Tulwin, Michal Geca, P. Karpinski

Abstract:

This paper presents a radial aircraft engine model created in the AVL Boost software. The model is a one-dimensional physical model of the engine, which enables investigation of the impact of ignition system design on engine performance (power, torque, fuel consumption). In addition, the model allows research under variable environmental conditions reflecting varied flight conditions (altitude, humidity, cruising speed). Before the simulation research, the model parameters were identified and the model was validated. To verify the take-off power of the gasoline radial aircraft engine model, a validation study was carried out. The first stage of the identification was completed with reference to the technical documentation provided by the engine manufacturer and to experiments on a test stand with the real engine. The second stage involved a comparison of simulation results with the results of engine stand tests performed on a WSK ’PZL-Kalisz’ engine loaded by a propeller in a special test bench. Identification of the model parameters involved comparing the test results to the simulation in terms of the pressure behind the throttles, the pressure in the inlet pipe, the time course of pressure in the first inlet pipe, power, and specific fuel consumption. Accordingly, the required coefficients and the error of the simulation relative to the real-object experiments were determined. The obtained pressure time course and values are consistent with the experimental results, and the engine power and specific fuel consumption agree closely with the bench tests. The mapping error does not exceed 1.5%, which positively verifies the combustion model and allows engine performance to be predicted if the combustion process is modified. Subsequent tests verified the complete model.
The maximum mapping error for the pressure behind the throttles and the inlet pipe pressure is 4%, which proves the model of the inlet duct in the engine with the charging compressor to be correct.
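The mapping error quoted above can be read as a maximum relative deviation of the simulation from the bench measurements. A minimal sketch of that comparison, with placeholder pressure samples rather than the actual test data, is:

```python
# Maximum relative "mapping error" of simulation vs. measurement,
# as a percentage. The pressure samples below are illustrative
# placeholders, not the WSK bench-test data.

def mapping_error(measured, simulated):
    """Maximum relative error (%) of simulated values against measurements."""
    return max(abs(s - m) / abs(m) for m, s in zip(measured, simulated)) * 100.0

measured = [101.3, 98.7, 95.2, 97.9]   # e.g. inlet-pipe pressure samples, kPa
simulated = [100.9, 99.5, 95.9, 97.1]

err = mapping_error(measured, simulated)
print(f"{err:.2f}%")
```

Applied to the real traces, this is the figure that stays below 1.5% for the combustion-model quantities and below 4% for the inlet-duct pressures.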

Keywords: 1D-model, aircraft engine, performance, validation

Procedia PDF Downloads 337