2340 Comparative Study of Equivalent Linear and Non-Linear Ground Response Analysis for Rapar District of Kutch, India
Authors: Kulin Dave, Kapil Mohan
Abstract:
Earthquakes are considered the most destructive rapid-onset disasters human beings are exposed to. The losses they cause are sufficient reason for careful consideration in the design of structures and facilities. Seismic hazard analysis is one tool that can be used for earthquake-resistant design, and ground response analysis is one of its most crucial and decisive steps. The Rapar district of Kutch, Gujarat falls in Zone 5 of the earthquake zone map of India and thus has high seismicity, which is why it was selected for analysis. In total, 8 bore-log datasets were studied at different locations in and around Rapar district. Soil engineering properties were analyzed, and relevant empirical correlations were used to calculate the maximum shear modulus (Gmax) and shear wave velocity (Vs) for the soil layers. The soil was modeled using the pressure-dependent Modified Kondner-Zelasko (MKZ) model, and the reference curves used for fitting were Seed and Idriss (1970) for sand and Darendeli (2001) for clay. Both equivalent linear (EL) and non-linear (NL) ground response analyses were carried out with the Masing hysteretic loading/unloading formulation for comparison, using the commercially available DEEPSOIL v. 7.0 software. The study quantifies ground response in terms of the acceleration time-history generated at the top of the soil column, the response spectrum at 5% damping, and the Fourier amplitude spectrum. Moreover, the variation of peak ground acceleration (PGA), maximum displacement, maximum strain (in %), maximum stress ratio, and mobilized shear stress with depth is also calculated. From the study, PGA values estimated in rocky strata are nearly the same as the bedrock motion, and marginal amplification is observed in sandy silts and silty clays by both analyses. The NL analysis gives conservative results for maximum displacement as compared to the EL analysis.
Maximum strains predicted by the two analyses are very close to each other. Overall, the NL analysis is more efficient and realistic because it follows the actual hyperbolic stress-strain relationship, considers stiffness degradation, and mobilizes stresses generated due to pore water pressure.
Keywords: DEEPSOIL v 7.0, ground response analysis, pressure-dependent Modified Kondner-Zelasko model, MKZ model, response spectra, shear wave velocity
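The Gmax and Vs calculation mentioned above can be sketched in a few lines. This is an illustrative Python sketch, not the authors' code: the SPT-based correlation Vs = 97.0·N^0.314 (Imai and Tonouchi, 1982) is one of several published options, and the correlations, density value, and N-value below are assumptions for illustration; the study's actual correlations may differ.

```python
def vs_from_spt(n_value):
    """Estimate shear wave velocity Vs (m/s) from SPT blow count N
    using the Imai and Tonouchi (1982) correlation (one of many)."""
    return 97.0 * n_value ** 0.314

def gmax(density, vs):
    """Small-strain shear modulus Gmax = rho * Vs**2 (Pa)."""
    return density * vs ** 2

# Hypothetical sandy silt layer: N = 20, density 1800 kg/m^3
vs = vs_from_spt(20)
g = gmax(1800.0, vs)
print(f"Vs = {vs:.1f} m/s, Gmax = {g / 1e6:.1f} MPa")
```

The same two steps are repeated layer by layer down the bore-log to build the soil column used in the EL and NL analyses.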
Procedia PDF Downloads 138
2339 Augmenting Navigational Aids: The Development of an Assistive Maritime Navigation Application
Abstract:
On the bridge of a ship, the officers look for visual aids to navigation in order to reconcile the outside world with the position communicated by the digital navigation system. Aids to navigation include lighthouses, lightships, sector lights, beacons, buoys, and others. They are designed to help navigators calculate their position, establish their course, or avoid dangers. In poor visibility and dense traffic areas, it can be very difficult to identify these critical aids. This paper presents the use of Augmented Reality (AR) as a means of presenting digital information about these aids to support navigation. To date, nautical mobile AR applications have been limited to the leisure industry. If proved viable, this prototype can facilitate the creation of similar applications to help commercial officers with navigation. Adopting a user-centered design approach, the team developed the prototype based on insights from initial research carried out on board several ships. The prototype, built on a Nexus 9 tablet with Wikitude, features a head-up display of the navigational aids (lights) in the area, presented in AR, and a bird’s-eye view mode presented on a simplified map. The application employs the aids-to-navigation data managed by Hydrographic Offices and the tablet’s sensors: GPS, gyroscope, accelerometer, compass, and camera. Sea trials on board a Navy ship and a commercial ship revealed the end-users’ interest in using the application and the possibility of presenting further data in AR. The application calculates the GPS position of the ship and the bearing and distance to the navigational aids, all with a high level of accuracy. However, testing highlighted several issues that need to be resolved as the prototype is developed further. The prototype stretched the capabilities of Wikitude, loading over 500 objects during tests in a major port.
This overloaded the display and required over 45 seconds to load the data. Extra filters for the navigational aids are therefore being considered in order to declutter the screen. At night, the camera is not powerful enough to distinguish all the lights in the area. Magnetic interference from the ship's bridge also generated a continuous compass error in the AR display that varied between 5 and 12 degrees. The deviation of the compass was consistent over the whole testing duration, so the team is now looking at allowing users to manually calibrate the compass. For the use of AR in professional maritime contexts, further development of existing AR tools and hardware is expected to be needed. Designers will also need to follow a user-centered design approach in order to create better interfaces and display technologies for enhanced navigation solutions.
Keywords: compass error, GPS, maritime navigation, mobile augmented reality
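The bearing-and-distance computation the prototype performs from the ship's GPS fix to each aid can be illustrated with the standard haversine and forward-azimuth formulas. This is a hedged sketch: the coordinates below are hypothetical, and the actual application presumably relies on Wikitude's own geolocation facilities rather than hand-rolled geodesy.

```python
import math

def distance_bearing(lat1, lon1, lat2, lon2, r_earth=6371000.0):
    """Great-circle distance (m) and initial true bearing (deg)
    from point 1 (ship) to point 2 (navigational aid)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    # Haversine distance
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    dist = 2 * r_earth * math.asin(math.sqrt(a))
    # Forward azimuth (initial bearing), normalized to 0-360 deg
    y = math.sin(dlmb) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlmb)
    brg = (math.degrees(math.atan2(y, x)) + 360.0) % 360.0
    return dist, brg

# Hypothetical ship position and buoy position
d, b = distance_bearing(50.89, -1.40, 50.90, -1.38)
print(f"{d:.0f} m at {b:.0f} deg true")
```

Note that the bearing here is relative to true north; the compass-deviation problem reported in the trials affects how that true bearing is aligned with the camera view, not the calculation itself.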
Procedia PDF Downloads 335
2338 Estimating Estimators: An Empirical Comparison of Non-Invasive Analysis Methods
Authors: Yan Torres, Fernanda Simoes, Francisco Petrucci-Fonseca, Freddie-Jeanne Richard
Abstract:
Non-invasive sampling is an alternative to collecting genetic samples directly from the animal: non-invasive samples are collected without manipulating the animal (e.g., scats, feathers, and hairs). Nevertheless, their use has some limitations, the main issue being degraded DNA, which leads to poorer extraction efficiency and genotyping errors. These errors delayed the widespread use of non-invasive genetic information for some years. Genotyping errors can be limited by using analysis methods that accommodate the errors and singularities of non-invasive samples. Genotype-matching and population-estimation algorithms are important analysis tools that have been adapted to deal with those errors. Despite this recent development of analysis methods, an empirical comparison of their performance is still lacking. A comparison of methods on datasets differing in size and structure is useful for future studies, since non-invasive samples are a powerful tool for obtaining information, especially on endangered and rare populations. To compare the analysis methods, four datasets obtained from the Dryad digital repository were used. Three matching algorithms (Cervus, Colony, and Error Tolerant Likelihood Matching - ETLM) were used for matching genotypes, and two algorithms (Capwire and BayesN) for population estimation. The three matching algorithms showed different patterns of results. ETLM produced a smaller number of unique individuals and recaptures. A similarity in the matched genotypes between Colony and Cervus was observed, which is not a surprise given the similarity of their pairwise-likelihood and clustering algorithms. The matches produced by ETLM showed almost no similarity with the genotypes matched by the other methods.
The different clustering algorithm and error model of ETLM seem to lead to a more stringent selection, although the processing time and interface friendliness of ETLM were the worst among the compared methods. The population estimators performed differently depending on the dataset; there was a consensus between the estimators for only one dataset. BayesN produced both higher and lower estimates than Capwire. Unlike Capwire, BayesN does not consider the total number of recaptures, only the recapture events, which makes the estimator sensitive to data heterogeneity, i.e., different capture rates between individuals. In these examples, homogeneity of capture rates seems to be crucial for BayesN to work properly. Both methods are user-friendly and have reasonable processing times. An extended analysis with simulated genotype data could clarify the sensitivity of the algorithms. The present comparison of the matching methods indicates that Colony seems to be the most appropriate for general use, considering the balance of time, interface, and robustness. The heterogeneity of the recaptures strongly affected the BayesN estimates, leading to over- and underestimation of population numbers. Capwire is therefore advisable for general use, since it performs better over a wide range of situations.
Keywords: algorithms, genetics, matching, population
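Neither Capwire nor BayesN is reimplemented here, but the capture-recapture idea both build on can be sketched with the classical Chapman-corrected Lincoln-Petersen estimator for a two-session study. The counts below are invented for illustration, and the estimator's equal-catchability assumption is exactly what the capture-rate heterogeneity discussed above violates.

```python
def chapman_estimate(n1, n2, m):
    """Chapman-corrected Lincoln-Petersen estimate of population size.

    n1: unique genotypes detected in session 1 ("marked")
    n2: unique genotypes detected in session 2
    m:  genotypes detected in both sessions (recaptures)
    """
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# Hypothetical: 40 genotypes in session 1, 35 in session 2, 10 seen in both
print(round(chapman_estimate(40, 35, 10), 1))  # 133.2
```

When some individuals are much easier to sample than others (heterogeneous capture rates), m is inflated by the easily-sampled animals and such simple estimators are biased downward, which is the same failure mode the abstract reports for heterogeneity-sensitive estimation.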
Procedia PDF Downloads 148
2337 Improving the Technology of Assembly by Use of Computer Calculations
Authors: Mariya V. Yanyukina, Michael A. Bolotov
Abstract:
Assembly accuracy is the degree of accordance between the actual values of the parameters obtained during assembly and the values specified in the assembly drawings and technical specifications. The assembly accuracy depends not only on the quality of the production process but also on the correctness of the assembly process. Therefore, preliminary calculations of the assembly stages are carried out to verify the correspondence of the real geometric parameters to their acceptable values. In the aviation industry, most calculations involve interacting dimensional chains, which greatly complicates the task; solving such problems requires a special approach. The purpose of this article is to improve the technology of assembly of aviation units by means of computer calculations. A typical example of an assembly unit with an interacting dimensional chain is the turbine wheel of a gas turbine engine. The dimensional chain of the turbine wheel is formed by the geometric parameters of the disk and the set of blades. The interaction consists in the formation of two chains: the first is formed by the dimensions that determine the location of the grooves for the installation of the blades together with the dimensions of the blade roots; the second is formed by the dimensions of the airfoil shroud platform. The interaction of the two chains arises through the force circuits formed by the middle parts of the turbine blades. The calculation of the dimensional chain of the turbine wheel is timely because of the need to improve the assembly technology of this unit.
The task at hand contains geometric and mathematical components; therefore, its solution can be implemented following this algorithm: 1) research and analysis of production errors in the geometric parameters; 2) development of a parametric model in a CAD system; 3) creation of a set of CAD models of parts, taking into account actual or generalized distributions of the errors of the geometric parameters; 4) calculation of the model in a CAE system, loading various combinations of part models; 5) accumulation of statistics and analysis. The main task is to pre-simulate the assembly process by calculating the interacting dimensional chains. The article describes the approach to the solution from the point of view of mathematical statistics, implemented in Matlab. Within the framework of the study, measurement data on the turbine wheel components (blades and disks) are available, from which it is expected that the assembly process of the unit can be optimized by solving the dimensional chains.
Keywords: accuracy, assembly, interacting dimension chains, turbine
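Steps 3-5 of the algorithm above amount to a Monte Carlo tolerance-stack simulation. The study itself used Matlab with real measured distributions; the sketch below uses Python with invented normal distributions and hypothetical nominal dimensions purely to illustrate the idea of sampling part errors and accumulating statistics on the closing link of a chain.

```python
import random
import statistics

def simulate_gap(n_trials=20_000, seed=42):
    """Monte Carlo assembly of a two-link dimensional chain.

    Each trial draws a disk-groove width and a blade-root width from
    (hypothetical) error distributions and records the resulting fit gap,
    the closing link of the chain."""
    rng = random.Random(seed)
    gaps = []
    for _ in range(n_trials):
        groove = rng.gauss(10.00, 0.02)  # groove width, mm (invented)
        root = rng.gauss(9.95, 0.02)     # blade root width, mm (invented)
        gaps.append(groove - root)
    return statistics.mean(gaps), statistics.stdev(gaps)

mean_gap, sd_gap = simulate_gap()
print(f"gap = {mean_gap:.3f} +/- {3 * sd_gap:.3f} mm (3-sigma)")
```

For independent links the standard deviations add in quadrature (here sqrt(0.02^2 + 0.02^2) ≈ 0.028 mm), and the simulation recovers this; its real value appears once the measured error distributions are non-normal or the chains interact, as in the turbine wheel.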
Procedia PDF Downloads 375
2336 A Comprehensive Review of Electronic Health Records Implementation in Healthcare
Authors: Lateefat Amao, Misagh Faezipour
Abstract:
Implementing electronic health records (EHR) in healthcare is a pivotal transition aimed at digitizing and optimizing the management of patient health information. The expectations associated with this transition are high, extending to other health information systems (HIS) and health technologies as well. This multifaceted process involves careful planning and execution to improve the quality and efficiency of patient care, especially as healthcare technology is a sensitive niche. Key considerations include a thorough needs assessment, judicious vendor selection, robust infrastructure development, and the training and adaptation of healthcare professionals. Comprehensive training programs, data migration from legacy systems, interoperability, and security and regulatory compliance are imperative for healthcare staff to navigate EHR systems adeptly. The purpose of this work is to offer a comprehensive review of the literature on EHR implementation. It explores the impact of this health technology on health practices, highlights challenges and barriers to its successful use, and offers practical strategies that can contribute to its success in healthcare. The paper emphasizes the wide range of experiences and results connected with EHR adoption in the medical field, especially across different types of healthcare organizations.
Keywords: healthcare, electronic health records, EHR implementation, patient care, interoperability
Procedia PDF Downloads 86
2335 Changing the Landscape of Fungal Genomics: New Trends
Authors: Igor V. Grigoriev
Abstract:
Understanding the biological processes encoded in fungi is instrumental in addressing the future food, feed, and energy demands of the growing human population. Genomics is a powerful and quickly evolving tool for understanding these processes. The Fungal Genomics Program of the US Department of Energy Joint Genome Institute (JGI) partners with researchers around the world to explore fungi in several large-scale genomics projects that are changing the fungal genomics landscape. The key trends of these changes are: (i) the rapidly increasing scale of sequencing and analysis, (ii) approaches that go beyond culturable fungi to explore fungal ‘dark matter,’ or unculturables, and (iii) functional genomics and multi-omics data integration. The power of comparative genomics has recently been demonstrated in several JGI projects targeting mycorrhizae, plant pathogens, wood decay fungi, and sugar-fermenting yeasts. The largest JGI project, ‘1000 Fungal Genomes,’ aims at exploring diversity across the Fungal Tree of Life in order to better understand fungal evolution and to build a catalogue of genes, enzymes, and pathways for biotechnological applications. At this point, at least 65% of the over 700 known families have one or more reference genomes sequenced, enabling metagenomics studies of microbial communities and their interactions with plants. For many of the remaining families, no representative species are available from culture collections. To sequence the genomes of unculturable fungi, two approaches have been developed: (a) sequencing DNA from the fruiting bodies of macrofungi and (b) single-cell genomics using fungal spores. The latter has been tested using zoospores from the early-diverging fungi and resulted in several near-complete genomes from underexplored branches of the Fungal Tree, including the first genomes of Zoopagomycotina. The genome sequence serves as a reference for transcriptomics studies, the first step towards functional genomics.
In the JGI fungal mini-ENCODE project, transcriptomes of the model fungus Neurospora crassa grown on a spectrum of carbon sources have been collected to build regulatory gene networks. Epigenomics is another tool for understanding gene regulation, and recently introduced single-molecule sequencing platforms not only provide better genome assemblies but can also detect DNA modifications. For example, the 6mC methylome was surveyed across many diverse fungi, and the highest levels of 6mC methylation among Eukaryota were reported. Finally, data production at such scale requires data integration to enable efficient analysis. Over 700 fungal genomes and other -omes have been integrated in the JGI MycoCosm portal and equipped with comparative genomics tools, enabling researchers to address a broad spectrum of biological questions and applications for bioenergy and biotechnology.
Keywords: fungal genomics, single cell genomics, DNA methylation, comparative genomics
Procedia PDF Downloads 211
2334 Relationship Demise After Having Children: An Analysis of Abandonment and Nuclear Family Structure vs. Supportive Community Cultures
Authors: John W. Travis
Abstract:
There is an epidemic of couples separating after a child is born into a family, generally with the father leaving emotionally or physically in the first few years after the birth. This separation creates high levels of stress for both parents, especially the primary parent, leaving her (or him) less available to the infant for healthy attachment and nurturing. The deterioration of the couple’s bond leaves the parents increasingly under-resourced and the dependent child in a compromised environment, with an increased likelihood of developing an attachment disorder. Objectives: To understand the dynamics of a couple once the additional and extensive demands of a newborn are added to a nuclear family structure, and to identify effective ways to support all members of the family to thrive. Qualitative studies interviewed men, women, and couples after pregnancy and the early years as a family regarding key destructive factors, as well as effective tools for the couple to retain a strong bond. In-depth analysis of a few cases, including the author’s own experience, reveals deeper insights into subtle factors, replicated in wider studies. In a self-assessment survey, many fathers reported feeling abandoned due to the close bond of the mother-baby unit and, in turn, withdrawing themselves, leaving the mother without the support and closeness that would resource her for the baby. Fathers reported various types of abandonment, from the partner back to the father's own mother, with whom he did not experience adequate connection as a child. The study identified a key destructive factor to be unrecognized wounding from childhood carried into the relationship. The study culminated in the naming of Male Postpartum Abandonment Syndrome (MPAS), describing the epidemic in industrialized cultures where the nuclear family is the primary configuration.
A growing family system often collapses without a minimum number of adult caregivers, approximately four per infant (3.87), which allows for proper healing and caretaking. In cases with no additional family or community beyond one or two parents, the layers of abandonment and trauma result in the deterioration of the couple’s relationship and ultimately the family structure. The solution includes engaging the community in support of new families. The study identified (and recommends) specific resources to assist couples in recognizing and healing trauma and disconnection at multiple levels. Recommendations include wider awareness and availability of resources for healing childhood wounds and greater community-building efforts to support couples so that the whole family can thrive.
Keywords: abandonment, attachment, community building, family and marital functioning, healing childhood wounds, infant wellness, intimacy, marital satisfaction, relationship quality, relationship satisfaction
Procedia PDF Downloads 230
2333 Engineering Topology of Ecological Model for Orientation Impact of Sustainability Urban Environments: The Spatial-Economic Modeling
Authors: Moustafa Osman Mohammed
Abstract:
The modeling of a spatial-economic database is crucial in relating economic network structure to social development. Sustainability within the spatial-economic model gives attention to green businesses complying with Earth’s systems. The natural exchange patterns of ecosystems have consistent and periodic cycles that preserve energy and material flows in systems ecology. When network topology influences formal and informal communication to function in systems ecology, ecosystems are postulated to balance the basic level of spatial sustainable outcome (i.e., project compatibility success). These instrumentalities impact various aspects of the second level of spatial sustainable outcomes (i.e., participant social security satisfaction). The sustainability outcomes are modeled as a composite structure based on a network analysis model to calculate the prosperity of panel databases for efficiency value from 2005 to 2025. The database models the spatial structure to represent the state-of-the-art value-orientation impact and the corresponding complexity of sustainability issues (e.g., build a consistent database necessary to approach the spatial structure; construct the spatial-economic-ecological model; develop a set of sustainability indicators associated with the model; allow quantification of social, economic, and environmental impact; use value-orientation as a set of important sustainability policy measures), and to demonstrate the reliability of the spatial structure. The structure of the spatial-ecological model is established for management schemes from the perspective of pollutants from multiple sources through input-output criteria. These criteria evaluate the spillover effect in order to conduct Monte Carlo simulations and sensitivity analysis in a unique spatial structure. The balance within “equilibrium patterns,” such as collective biosphere features, has a composite index of many distributed feedback flows.
These have a dynamic structure related to physical and chemical properties that extends gradually to incremental patterns. While these spatial structures argue from ecological modeling of resource savings, static loads are not decisive from an artistic/architectural perspective. The model attempts to unify analytic and analogical spatial structure for the development of urban environments in a relational database setting, using optimization software to integrate the spatial structure, where the process is based on the engineering topology of systems ecology.
Keywords: ecological modeling, spatial structure, orientation impact, composite index, industrial ecology
Procedia PDF Downloads 73
2332 Estimation Cytokines IL-2, IL-4, IL-8 in Serum and Nasal Secretions of Patients with Various Forms of Chronic Polypoid Rhinosinusitis
Authors: U. N. Vokhidov, U. S. Khasanov, A. A. Ismailova
Abstract:
Background: Current research indicates that cytokines play a major role in the development of chronic polypoid rhinosinusitis. The aim of this study was to compare the levels of IL-2, IL-4, and IL-8 in the peripheral blood and nasal secretions of patients with various forms of chronic polypoid rhinosinusitis. Material and methods: We studied 50 patients with chronic polypoid rhinosinusitis receiving hospital treatment in the ENT department of the 3rd clinic of Tashkent Medical Academy. A comprehensive study was carried out, including morphological examination and an immunological study of blood and nasal secretions for IL-2, IL-4, and IL-8. Results: The immunological studies of peripheral blood showed that in patients with ‘eosinophilic’ polyps, IL-2 and IL-4 were increased, while in patients with ‘neutrophilic’ polyps, IL-2 and IL-8 were increased. The immunological investigation of nasal secretions taken from patients with polypoid rhinosinusitis showed the same pattern: in patients with ‘eosinophilic’ polyps, IL-2 and IL-4 were increased; in patients with ‘neutrophilic’ polyps, IL-2 and IL-8 were increased. Conclusion: In patients with ‘eosinophilic’ polyps, the immune profile indicates an allergic state of the body, while in patients with ‘neutrophilic’ polyps it indicates inflammation; the otolaryngologist must take this into account when choosing a treatment strategy for the prevention of recurrence of the disease.
Keywords: chronic polypoid rhinosinusitis, immunology, cytokines, nasal secretion
Procedia PDF Downloads 223
2331 The Effectiveness of a Self-Efficacy Psychoeducational Programme to Enhance Outcomes of Patients with End-Stage Renal Disease
Authors: H. C. Chen, S. W. C. Chan, K. Cheng, A. Vathsala, H. K. Sran, H. He
Abstract:
Background: End-stage renal disease (ESRD) is the last stage of chronic kidney disease. The number of patients with ESRD has increased worldwide due to growing aging, diabetic, and hypertensive populations. Patients with ESRD suffer from physical illness and psychological distress due to complex treatment regimens, which often affect their social and psychological functioning. As a result, patients may fail to perform daily self-care and self-management and consequently experience worsening conditions. Aims: The study aims to examine the effectiveness of a self-efficacy psychoeducational programme on the primary outcome (self-efficacy) and secondary outcomes (psychological wellbeing, treatment adherence, and quality of life) in patients with ESRD on haemodialysis in Singapore. Methodology: A randomised controlled, two-group pretest and repeated posttest design will be carried out. A total of 154 participants (n = 154) will be recruited. The participants in the control group will receive routine treatment. The participants in the intervention group will receive a self-efficacy psychoeducational programme in addition to routine treatment. The programme is a two-session educational intervention delivered in one week, consisting of a booklet, two consecutive sessions of face-to-face individual education, and an abdominal breathing exercise. Outcome measurements include the Dialysis Specific Self-Efficacy Scale, Kidney Disease Quality of Life-36, Hospital Anxiety and Depression Scale, Renal Adherence Attitudes Questionnaire, and Renal Adherence Behaviour Questionnaire. The questionnaires will be administered at baseline and at the 1-, 3-, and 6-month follow-ups. A process evaluation will be conducted with semi-structured face-to-face interviews. Quantitative data will be analysed using SPSS 21.0 software; qualitative data will be analysed by content analysis.
Significance of the study: This study will identify a clinically useful and potentially effective approach to help patients with end-stage renal disease on haemodialysis by enhancing their self-efficacy in self-care behaviour and thereby improving their psychological well-being, treatment adherence, and quality of life. It will provide information for developing clinical guidelines to improve patients’ disease self-management, enhance health-related outcomes, and help reduce the disease burden.
Keywords: end-stage renal disease (ESRD), haemodialysis, psychoeducation, self-efficacy
Procedia PDF Downloads 326
2330 Effects of the Gratitude Program on the Gratitude, Well-Being, Perceived Stress, and Stress Coping of Nurses
Authors: Yu H. Chen, Li C. Chen, Hsiang Y. Wu, Wan Y. Chen, Yin S. Lai, Sarah S. Chen
Abstract:
Little has been done to customize an appropriate gratitude program for nurses, who work in high-stress environments. The purpose of this study is to design such a program and to investigate its effects. According to research by Kaohsiung Medical University’s Positive Psychology Center, the only one of its kind in Taiwan, gratitude is among the top five strengths of nurses. Instead of adapting an older model from past research, the Gratitude Workshop was developed with a quasi-experimental approach and designed around five dimensions that emphasize gratitude: thanking others, thanking one's surroundings, cherishing what one has, appreciating hardships, and appreciating the present. A sample of 84 nurses was randomly selected from the Kaohsiung Municipal Ta-Tung Hospital; 43 of whom participated in the nine-hour Gratitude Workshop that spanned three weeks, while the other 41 formed the waitlist control group. The pretest and posttest included five questionnaires: the Inventory of Undergraduates' Gratitude, The Gratitude Questionnaire-6, the Mental Health Continuum-Short Form, the Perceived Stress Scale, and the Stress Coping Strategies Questionnaire. The results showed that the Gratitude Workshop improved gratitude, well-being, and perceived stress among the nurses; however, on the Stress Coping Strategies Questionnaire, the Gratitude Workshop improved only the regulation of emotions.
Keywords: gratitude, nurses, positive psychology, well-being
Procedia PDF Downloads 390
2329 A Systematic Review on the Usage of the CRISPR-Cas System in the Treatment of Osteoarthritis (OA)
Authors: Atiqah Binti Ab Aziz
Abstract:
Background: It has been estimated that about 250 million people all over the world suffer from osteoarthritis (OA). OA is thus a major health problem in urgent need of better treatment. Problem statement: Current therapies for OA temporarily relieve clinical symptoms and manage pain rather than preventing or curing OA; total knee replacement, performed at the end stage of the disease, is considered the only cure available. Objectives: This article aimed to explore the potential of treating osteoarthritis via the CRISPR-Cas system. Methods: Articles relating to the application of the CRISPR-Cas system to osteoarthritis were extracted, categorized, and reviewed following the PRISMA method using the PubMed search engine, covering articles published from November 2016 to November 2021. Results: Thirty articles were screened. Articles that were not in English, whose full text was not available, or that were not original articles were excluded. Ultimately, 13 articles were reviewed. Discussion: This review provides an introduction to CRISPR and discusses the mechanisms of action in the extracted studies for OA treatment. Conclusions: It can be seen that not much medical research utilizes the CRISPR-Cas system in the treatment of OA. Exploring the extent of the use of the CRISPR-Cas system in OA treatment is therefore important to determine the research gap and to identify where further investigation is needed, in order to avoid redundancy and ensure the novelty of future research.
Keywords: osteoarthritis, treatment, CRISPR, review, therapy
Procedia PDF Downloads 176
2328 Closed Mitral Valvotomy: A Safe and Promising Procedure
Authors: Sushil Kumar Singh, Kumar Rahul, Vivek Tewarson, Sarvesh Kumar, Shobhit Kumar
Abstract:
Objective: Rheumatic mitral stenosis continues to be a major public health problem in developing countries. Diastolic dysfunction occurs when the left atrium (LA) is unable to fill the left ventricle (LV) at normal LA pressures due to impaired relaxation and impaired compliance. The assessment of LV diastolic function and filling pressures is of clinical importance for identifying underlying cardiac disease, guiding its treatment, and assessing prognosis. 2D echocardiography can detect diastolic dysfunction with excellent sensitivity and minimal risk compared with the gold standard of invasive pressure-volume measurements. Material and method: This was a one-year study of twenty-nine patients with isolated rheumatic severe mitral stenosis. Data were analyzed preoperatively and postoperatively (at one-month follow-up). The transthoracic 2D echocardiographic parameters of diastolic function are transmitral flow, pulmonary venous flow, mitral annular tissue Doppler, and color M-mode Doppler. In our study, the mitral valve orifice area, ejection fraction, deceleration time, E/A ratio, E/E’ ratio, myocardial performance index of the left ventricle (Tei index), and mitral inflow propagation velocity were included in the echocardiographic evaluation. The statistical analysis was performed with SPSS Version 15.0 software. Result: Twenty-nine patients underwent successful closed mitral commissurotomy for isolated mitral stenosis. The outcome measures were observed preoperatively and at one-month follow-up. The majority of patients were in NYHA grade III (69.0%) in the preoperative period, improving to NYHA grade I (48.3%) after closed mitral commissurotomy. Post-surgery, the mitral valve area increased from 0.77 ± 0.13 to 2.32 ± 0.26 cm², and the ejection fraction increased from 61.38 ± 4.61 to 64.79 ± 3.22%.
There was a decrease in deceleration time from 231.55 ± 49.31 to 168.28 ± 14.30 ms, in the E/A ratio from 1.70 ± 0.54 to 0.89 ± 0.39, and in the E/E’ ratio from 14.59 ± 3.34 to 8.86 ± 3.03. In addition, there was improvement in the Tei index from 0.50 ± 0.03 to 0.39 ± 0.06 and in mitral inflow propagation velocity from 47.28 ± 3.71 to 57.86 ± 3.19 cm/sec. Peri-operatively and at follow-up, there was no incidence of severe mitral regurgitation (MR), no thromboembolic incident, and no mortality.
Keywords: closed mitral valvotomy, mitral stenosis, open mitral commissurotomy, balloon mitral valvotomy
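The pre/post comparisons above are classic paired-sample tests; a minimal sketch (with illustrative numbers, not the study data) of the paired t statistic such an SPSS analysis computes:

```python
import math
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired-samples t statistic: t = mean(d) / (sd(d) / sqrt(n))."""
    diffs = [a - b for a, b in zip(pre, post)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / math.sqrt(n))

# Hypothetical deceleration times (ms) for 5 patients, pre and post surgery
pre = [230, 240, 225, 250, 235]
post = [170, 165, 172, 168, 160]
t = paired_t(pre, post)  # a large positive t indicates a significant decrease
```

A statistic this large, at n - 1 = 4 degrees of freedom, would correspond to p << 0.05.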
Procedia PDF Downloads 90
2327 Dynamic Two-Way FSI Simulation for a Blade of a Small Wind Turbine
Authors: Alberto Jiménez-Vargas, Manuel de Jesús Palacios-Gallegos, Miguel Ángel Hernández-López, Rafael Campos-Amezcua, Julio Cesar Solís-Sanchez
Abstract:
An optimal wind turbine blade design must be capable of capturing as much energy as possible from the wind source available at the area of interest. Often, an optimal design requires large quantities of material and complicated processes that make the wind turbine more expensive and, therefore, less cost-effective. For the construction and installation of a wind turbine, the blades may account for up to 20% of the overall price, and they are all the more important because they are part of the rotor system that is in charge of transmitting the energy from the wind to the power train, and where the static and dynamic design loads for the whole wind turbine are produced. The aim of this work is the development of a blade fluid-structure interaction (FSI) simulation that allows the identification of the major damage zones during normal production conditions, so that better design and optimization decisions can be taken. The simulation is a dynamic case, since we have a time-history wind velocity as the inlet condition instead of a constant wind velocity. The process begins with the free-to-use software NuMAD (NREL), used to model the blade and assign its material properties; the 3D model is then exported to the ANSYS Workbench platform where, before setting up the FSI system, a modal analysis is performed to identify natural frequencies and mode shapes. The FSI analysis is carried out with the two-way technique, which begins with a CFD simulation to obtain the pressure distribution on the blade surface; these results are then used as the boundary condition for the FEA simulation to obtain the deformation levels for the first time step. For the second time step, the CFD simulation is reconfigured automatically with the next time step's inlet wind velocity and the deformation results from the previous time step. The analysis continues this iterative cycle, solving time step by time step, until the entire load case is completed.
This work is part of a set of projects managed by a national consortium called “CEMIE-Eólico” (Mexican Center in Wind Energy Research), created to strengthen technological and scientific capacities, promote the creation of specialized human resources, and link academia with the private sector in the national territory. The analysis belongs to the design of a rotor system for a 5 kW wind turbine intended to be installed at the Isthmus of Tehuantepec, Oaxaca, Mexico.
Keywords: blade, dynamic, FSI, wind turbine
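The two-way coupling described above alternates CFD and FEA within each time step; a schematic sketch of that iterative cycle, where cfd_step and fea_step are placeholder stand-ins for the solvers (not the ANSYS workflow itself):

```python
# Schematic two-way FSI time-stepping loop. The solver functions below are
# illustrative placeholders, not real CFD/FEA computations.
def cfd_step(wind_velocity, deformation):
    # Placeholder: pressure load grows with wind speed, slightly reduced by deformation
    return 0.5 * wind_velocity**2 * (1.0 - 0.01 * deformation)

def fea_step(pressure):
    # Placeholder: deformation proportional to the applied pressure field
    return 0.002 * pressure

def run_fsi(wind_history):
    deformation = 0.0
    results = []
    for v in wind_history:                     # time-history inlet velocity, not a constant
        pressure = cfd_step(v, deformation)    # CFD reconfigured with current deformation
        deformation = fea_step(pressure)       # FEA fed by the CFD pressure results
        results.append((pressure, deformation))
    return results

history = run_fsi([8.0, 10.0, 12.0])  # m/s samples of the wind time series
```

Each tuple in the result pairs the pressure load with the deformation it produced, mirroring the per-time-step data exchange of the two-way technique.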
Procedia PDF Downloads 486
2326 Using Structured Analysis and Design Technique Method for Unmanned Aerial Vehicle Components
Authors: Najeh Lakhoua
Abstract:
Introduction: Scientific developments and techniques for the systemic approach have generated several names for it: systems analysis, structural analysis. The main purpose of these reflections is to find a multi-disciplinary approach which organizes knowledge, creates a universal design language, and controls complex systems. In fact, system analysis is structured sequentially in steps: the observation of the system by various observers and in various aspects, the analysis of interactions and regulatory chains, the modeling that takes into account the evolution of the system, and the simulation and real tests in order to obtain consensus. Thus the system approach allows two types of analysis, according to the structure and the function of the system. The purpose of this paper is to present an application of system analysis to Unmanned Aerial Vehicle (UAV) components in order to represent the architecture of this system. Method: Various analysis methods are proposed in the literature to carry out global analysis from different points of view, such as the SADT method (Structured Analysis and Design Technique) and Petri nets. The methodology adopted in order to contribute to the system analysis of an Unmanned Aerial Vehicle is proposed in this paper and is based on the use of SADT. In fact, we present a functional analysis, based on the SADT method, of the UAV components (body, power supply and platform, computing, sensors, actuators, software, loop principles, flight controls, and communications). Results: In this part, we present the application of the SADT method to the functional analysis of the UAV components. This SADT model is composed exclusively of actigrams. It starts with the main function ‘To analyse the UAV components’. This function is then broken into sub-functions, and this process is developed until the last decomposition level has been reached (levels A1, A2, A3 and A4).
Recall that SADT techniques are semi-formal; for the same subject, different correct models can be built without knowing with certainty which model is the right one or, at least, the best. In fact, this kind of model allows users sufficient freedom in its construction, and so the subjective factor introduces a supplementary dimension into its validation. That is why the validation step, on the whole, necessitates the confrontation of different points of view. Conclusion: In this paper, we presented an application of system analysis to Unmanned Aerial Vehicle components. This application of system analysis is based on the SADT method (Structured Analysis and Design Technique). The functional analysis demonstrated the usefulness of the SADT method and its ability to describe complex dynamic systems.
Keywords: system analysis, unmanned aerial vehicle, functional analysis, architecture
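The top-down decomposition described above (a root function broken into levels A1..A4) is naturally a tree; a minimal sketch, where the sub-function labels are illustrative groupings of the component list and not the paper's actual actigrams:

```python
# Sketch of an SADT actigram hierarchy as a nested dict. The A1..A4 labels
# are hypothetical groupings of the UAV components named in the abstract.
sadt = {
    "A0: To analyse the UAV components": {
        "A1: Analyse body, power supply and platform": {},
        "A2: Analyse computing, sensors and actuators": {},
        "A3: Analyse software and loop principles": {},
        "A4: Analyse flight controls and communications": {},
    }
}

def count_functions(node):
    """Total number of functions (actigram boxes) in the decomposition tree."""
    return sum(1 + count_functions(child) for child in node.values())

total = count_functions(sadt)  # 1 root function + 4 sub-functions = 5
```

Deeper decomposition levels would simply nest further dicts under A1..A4.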
Procedia PDF Downloads 209
2325 Simulation Research of Diesel Aircraft Engine
Authors: Łukasz Grabowski, Michał Gęca, Mirosław Wendeker
Abstract:
This paper presents the simulation results for a new opposed-piston diesel engine to power a light aircraft. Created in AVL Boost, the model covers the entire charge passage, from the inlet up to the outlet, and includes fuel injection into the cylinders and in-cylinder combustion. The calculation uses the module for two-stroke engines. The model was built from the sub-models available in this software, each complemented with parameters in line with the design premise. Since engine weight, which results from geometric dimensions, is fundamental in aircraft engines, two stroke-length configurations were studied. For each value, selected operating conditions defined by crankshaft speed were calculated. The required power was achieved by changing the air-fuel ratio (AFR), and the brake specific fuel consumption (BSFC) was also studied. For stroke S1, the BSFC was lowest at all three operating points. This difference is approximately 1-2%, which means higher overall engine efficiency, but the amount of fuel injected into the cylinders is larger by several mg for S1. The maximum cylinder pressure is lower for S2 because the compressor gearing remained the same and the boost pressure was identical in both cases. Calculations for various values of boost pressure were the next stage of the study. In each calculation case, the amount of fuel was changed to achieve the required engine power. First, the intake system dimensions were modified, i.e. the duct connecting the compressor and the air cooler, so that its diameter D = 40 mm was equal to the diameter of the compressor outlet duct. The impact of duct length was also examined, in order to reduce the flow pulsation during the operating cycle. For the geometry of the intake system selected in this way, calculations were performed for various values of boost pressure, which was changed by modifying the gear driving the compressor.
This allowed the engine to reach the required cruising power of N = 68 kW. Due to the mechanical power consumed by the compressor, a high pressure ratio results in worsened overall engine efficiency. The change in BSFC from 210 g/kWh to nearly 270 g/kWh shows this correlation, with the overall engine efficiency reduced by about 8%. Acknowledgement: This work has been realized in cooperation with The Construction Office of WSK "PZL-KALISZ" S.A. and is part of Grant Agreement No. POIR.01.02.00-00-0002/15 financed by the Polish National Centre for Research and Development.
Keywords: aircraft, diesel, engine, simulation
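The link between BSFC and overall efficiency follows from dividing 3.6 MJ (one kWh of brake work) by the fuel energy spent per kWh; a quick check, assuming a typical diesel lower heating value of about 42.8 MJ/kg (an assumed value, not stated in the text):

```python
def overall_efficiency(bsfc_g_per_kwh, lhv_mj_per_kg=42.8):
    """Overall efficiency from brake specific fuel consumption.
    1 kWh of brake work = 3.6 MJ; fuel energy spent = BSFC * LHV."""
    fuel_energy_mj = bsfc_g_per_kwh / 1000.0 * lhv_mj_per_kg
    return 3.6 / fuel_energy_mj

eta_low = overall_efficiency(210)   # ~0.40 at BSFC = 210 g/kWh
eta_high = overall_efficiency(270)  # ~0.31 at BSFC = 270 g/kWh
```

The drop of roughly 9 percentage points is consistent with the ~8% reduction in overall efficiency noted above.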
Procedia PDF Downloads 210
2324 Investigations into the in situ Enterococcus faecalis Biofilm Removal Efficacies of Passive and Active Sodium Hypochlorite Irrigant Delivered into Lateral Canal of a Simulated Root Canal Model
Authors: Saifalarab A. Mohmmed, Morgana E. Vianna, Jonathan C. Knowles
Abstract:
The issue of apical periodontitis has received considerable critical attention. Bacteria integrate into communities, attach to surfaces, and consequently form biofilms. The biofilm structure provides bacteria with a range of protective mechanisms against antimicrobial agents and enhances pathogenicity (e.g. apical periodontitis). Sodium hypochlorite (NaOCl) has become the irrigant of choice for the elimination of bacteria from the root canal system based on its antimicrobial properties. The aim of the study was to investigate the effect of different agitation techniques on the efficacy of 2.5% NaOCl in eliminating the biofilm from the surface of the lateral canal, using residual biofilm and removal rate of biofilm as outcome measures. The effect of canal complexity (lateral canal) on the efficacy of the irrigation procedure was also assessed. Forty root canal models (n = 10 per group) were manufactured using 3D printing and resin materials. Each model consisted of two halves of an 18 mm length root canal with apical size 30 and taper 0.06, and a lateral canal of 3 mm length and 0.3 mm diameter located 3 mm from the apical terminus. E. faecalis biofilms were grown on the apical 3 mm and the lateral canal of the models for 10 days in Brain Heart Infusion broth. Biofilms were stained using crystal violet for visualisation. The model halves were reassembled, attached to an apparatus, and tested under a fluorescence microscope. A syringe-and-needle irrigation protocol was performed using 9 mL of 2.5% NaOCl irrigant for 60 seconds. The irrigant was either left stagnant in the canal or activated for 30 seconds using manual (gutta-percha), sonic, or ultrasonic methods. Images were then captured every second using an external camera. The percentages of residual biofilm were measured using image analysis software. The data were analysed using generalised linear mixed models.
The greatest removal was associated with the ultrasonic group (66.76%), followed by the sonic (45.49%), manual (43.97%), and passive irrigation (control) (38.67%) groups. No marked reduction in the efficiency of NaOCl to remove biofilm was found between the simple and complex anatomy models (p = 0.098). The removal efficacy of NaOCl on the biofilm was limited to the 1 mm level of the lateral canal. The agitation of NaOCl results in better penetration of the irrigant into the lateral canals, and ultrasonic agitation improved the removal of bacterial biofilm.
Keywords: 3D printing, biofilm, root canal irrigation, sodium hypochlorite
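The residual-biofilm percentages were obtained with image analysis software; a minimal sketch of the underlying pixel arithmetic, assuming binary masks (1 = stained biofilm pixel) segmented from the before/after fluorescence images — toy data, not the study's images:

```python
def residual_biofilm_percent(mask_before, mask_after):
    """Percent of originally biofilm-covered pixels still covered after irrigation."""
    before = sum(sum(row) for row in mask_before)
    after = sum(sum(row) for row in mask_after)
    return 100.0 * after / before

# Toy 3x4 masks: 8 biofilm pixels before irrigation, 3 remain after
before = [[1, 1, 1, 0],
          [1, 1, 1, 0],
          [1, 1, 0, 0]]
after = [[1, 0, 0, 0],
         [1, 1, 0, 0],
         [0, 0, 0, 0]]
pct = residual_biofilm_percent(before, after)  # 37.5 percent residual biofilm
```

Computing this per frame, across the one-second image captures, gives the removal-rate time course used as the second outcome measure.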
Procedia PDF Downloads 233
2323 Autism Spectrum Disorder Classification Algorithm Using Multimodal Data Based on Graph Convolutional Network
Authors: Yuntao Liu, Lei Wang, Haoran Xia
Abstract:
Machine learning has shown extensive application in the development of classification models for autism spectrum disorder (ASD) using neural image data. This paper proposes a fusion multi-modal classification network based on a graph neural network. First, the brain is segmented into 116 regions of interest using a medical segmentation template (AAL, Anatomical Automatic Labeling). The image features of sMRI and the signal features of fMRI are extracted, which build the node and edge embedding representations of the brain map. Then, we construct a dynamically updated brain map neural network and propose a method based on a dynamic brain map adjacency matrix update mechanism and a learnable graph to further improve the accuracy of autism diagnosis and recognition. Based on the Autism Brain Imaging Data Exchange I dataset (ABIDE I), we reached a prediction accuracy of 74% between ASD and TD subjects. Besides, to study biomarkers that can help doctors analyze the disease and to improve interpretability, we extracted the top five maximum and minimum ROI weights. This work provides a meaningful way for brain disorder identification.
Keywords: autism spectrum disorder, brain map, supervised machine learning, graph network, multimodal data, model interpretability
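The graph-convolution step at the core of such a network propagates node (ROI) features through a normalized adjacency matrix before applying a learned linear map; a dependency-free sketch on a toy 3-ROI graph (a generic GCN layer, not the paper's actual architecture):

```python
def matmul(a, b):
    """Plain list-of-lists matrix product."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)] for row in a]

def gcn_layer(adj, feats, weights):
    """One graph-convolution step: average neighbour features (with self-loops),
    then apply a learned linear map. Uses row-normalization of adj + I."""
    n = len(adj)
    a_hat = [[adj[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    norm = [[v / sum(row) for v in row] for row in a_hat]
    return matmul(matmul(norm, feats), weights)

# Toy brain graph: ROI 0 connected to ROIs 1 and 2; 2-dim node features
adj = [[0, 1, 1], [1, 0, 0], [1, 0, 0]]
feats = [[1.0, 0.0], [0.0, 1.0], [0.0, 1.0]]
w = [[1.0, 0.0], [0.0, 1.0]]  # identity weights, for clarity
out = gcn_layer(adj, feats, w)
```

The dynamic update mechanism described above would amount to recomputing adj between layers from the current node embeddings.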
Procedia PDF Downloads 80
2322 Improving Literacy Level Through Digital Books for Deaf and Hard of Hearing Students
Authors: Majed A. Alsalem
Abstract:
In our contemporary world, literacy is an essential skill that enables students to increase their efficiency in managing the many assignments they receive that require understanding and knowledge of the world around them. In addition, literacy enhances student participation in society improving their ability to learn about the world and interact with others and facilitating the exchange of ideas and sharing of knowledge. Therefore, literacy needs to be studied and understood in its full range of contexts. It should be seen as social and cultural practices with historical, political, and economic implications. This study aims to rebuild and reorganize the instructional designs that have been used for deaf and hard-of-hearing (DHH) students to improve their literacy level. The most critical part of this process is the teachers; therefore, teachers will be the center focus of this study. Teachers’ main job is to increase students’ performance by fostering strategies through collaborative teamwork, higher-order thinking, and effective use of new information technologies. Teachers, as primary leaders in the learning process, should be aware of new strategies, approaches, methods, and frameworks of teaching in order to apply them to their instruction. Literacy from a wider view means acquisition of adequate and relevant reading skills that enable progression in one’s career and lifestyle while keeping up with current and emerging innovations and trends. Moreover, the nature of literacy is changing rapidly. The notion of new literacy changed the traditional meaning of literacy, which is the ability to read and write. New literacy refers to the ability to effectively and critically navigate, evaluate, and create information using a range of digital technologies. The term new literacy has received a lot of attention in the education field over the last few years. 
New literacy provides multiple means of engagement, especially for those with disabilities and other diverse learning needs. For example, using a number of online tools in the classroom provides students with disabilities new ways to engage with the content, take in information, and express their understanding of it. This study will provide teachers with high-quality training sessions to meet the needs of DHH students so as to increase their literacy levels, and will build a platform between regular instructional designs and digital materials that students can interact with. The intervention applied in this study will be to train teachers of DHH students to base their instructional designs on the Technology Acceptance Model (TAM) theory. Based on the power analysis conducted for this study, 98 teachers need to be included. Teachers will be chosen randomly to increase internal and external validity and to provide a representative sample of the population that this study aims to measure, as well as a base for further studies. This study is still in progress, and the initial results are promising, showing how students have engaged with digital books.
Keywords: deaf and hard of hearing, digital books, literacy, technology
Procedia PDF Downloads 493
2321 Wind Energy Resources Assessment and Micrositting on Different Areas of Libya: The Case Study in Darnah
Authors: F. Ahwide, Y. Bouker, K. Hatem
Abstract:
This paper presents a long-term wind data analysis in terms of annual and diurnal variations at different areas of Libya. Wind speed and direction data, recorded every ten minutes over a period of at least two years, are used in the analysis. The ‘WindPRO’ software and an Excel workbook were used for the wind statistics and energy calculations. As for Derna, average speeds at 10 m, 20 m, and 40 m are respectively 6.57 m/s, 7.18 m/s, and 8.09 m/s. The highest wind speeds are observed at SSW, followed by the S, WNW and NW sectors; the lowest wind speeds are observed between the N and E sectors. The most frequent wind directions are NW and NNW; hence, wind turbines can be installed facing these directions. The most powerful sector is NW (29.4% of total expected wind energy), followed by SSW (19.9%), NNW (11.9%), WNW (8.6%) and S (8.2%). Furthermore, in Al-Maqrun the most powerful sector is W (26.8% of total expected wind energy), followed by WSW (12.3%) and WNW (9.5%), while in Goterria the most powerful sector is S (14.8% of total expected wind energy), followed by SSE, SE, and WSW. In Misalatha, the most powerful sector is S, by far, representing 28.5% of the expected power, followed by SSE and SE. As for Tarhuna, it is by far SSE and SE, each representing twice the expected energy of the third most powerful sector (NW). In Al-Asaaba, it is SSE, by far, representing 50% of the expected power, followed by S. It can be noted that the high frequency of south-direction winds, which come from the desert, could cause a high frequency of dust episodes. This fact should therefore be taken into account in order to take appropriate measures to prevent wind turbine deterioration. In the Excel workbook, an estimation of the annual energy yield at the positions of the Derna, Al-Maqrun, Tarhuna, and Al-Asaaba meteorological masts has been done, considering a generic wind turbine of 1.65 MW (mtORRES, TWT 82-1.65MW) at the position of the meteorological mast. Three other turbines have also been tested.
At 80 m, the estimated energy yield for Derna, Al-Maqrun, Tarhuna, and Al-Asaaba is 6.78 GWh or 3390 equivalent hours, 5.80 GWh or 2900 equivalent hours, 4.91 GWh or 2454 equivalent hours, and 5.08 GWh or 2541 equivalent hours, respectively. These seem fair values in the context of a possible wind energy project in these areas, considering a value of 2400 equivalent hours as an approximate limit for a wind farm to be economically profitable. Furthermore, an estimation of the annual energy yield at the positions of the Misalatha, Azizyah and Goterria meteorological masts has been done, considering a generic wind turbine of 2 MW. We found that, at 80 m, the estimated energy yield is 3.12 GWh or 1557 equivalent hours, 4.47 GWh or 2235 equivalent hours, and 4.07 GWh or 2033 equivalent hours, respectively. These seem very poor values in the context of a possible wind energy project in these areas, considering the same limit of 2400 equivalent hours for a wind farm to be economically profitable. In any case, more data and a detailed wind farm study would be necessary to draw conclusions.
Keywords: wind turbines, wind data, energy yield, micrositting
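The "equivalent hours" figures follow from dividing the annual energy yield by the turbine's rated power; a quick sketch using the 2 MW generic-turbine values from the text (small differences from the quoted hours are rounding in the source figures):

```python
def equivalent_hours(annual_energy_gwh, rated_power_mw):
    """Annual full-load equivalent hours = annual yield / rated power."""
    return annual_energy_gwh * 1000.0 / rated_power_mw

# 2 MW generic turbine at the three southern sites (yields from the text, in GWh)
sites = {"Misalatha": 3.12, "Azizyah": 4.47, "Goterria": 4.07}
hours = {name: equivalent_hours(e, 2.0) for name, e in sites.items()}
# Profitability screen against the ~2400 equivalent-hour threshold used above
viable = {name: h >= 2400 for name, h in hours.items()}
```

All three sites fall below the threshold, matching the text's conclusion that these are poor values for a wind farm.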
Procedia PDF Downloads 191
2320 Spatial Mapping of Variations in Groundwater of Taluka Islamkot Thar Using GIS and Field Data
Authors: Imran Aziz Tunio
Abstract:
Islamkot is an underdeveloped sub-district (Taluka) in the Tharparkar district of Sindh province, Pakistan, located between latitude 24°25'19.79"N to 24°47'59.92"N and longitude 70° 1'13.95"E to 70°32'15.11"E. Islamkot has an arid desert climate, and the region is generally devoid of perennial rivers, canals, and streams. It is highly dependent on rainfall, which is not considered a reliable surface water source, and groundwater has been the only key source of water for many centuries. To assess the groundwater potential, an electrical resistivity survey (ERS) was conducted in Islamkot Taluka. Groundwater investigations at 128 Vertical Electrical Sounding (VES) stations were carried out to determine the groundwater potential and obtain qualitative and quantitative layered resistivity parameters. The PASI Model 16 GL-N Resistivity Meter was used, employing a Schlumberger electrode configuration with half current electrode spacing (AB/2) ranging from 1.5 to 100 m and potential electrode spacing (MN/2) from 0.5 to 10 m. The data were acquired with a maximum current electrode spacing of 200 m. The data processing for the delineation of dune sand aquifers involved data inversion, and the interpretation of the inversion results was aided by forward modeling. The measured geo-electrical parameters were examined with Interpex IX1D software, and the apparent resistivity curves and synthetic model layered parameters were mapped in the ArcGIS environment using the Inverse Distance Weighting (IDW) interpolation technique. Qualitative interpretation of the vertical electrical sounding (VES) data shows that the number of geo-electrical layers in the area varies from three to four, with different resistivity values detected. Out of 128 VES model curves, 42 are three-layered and 86 are four-layered. The resistivity of the first subsurface layer (loose surface sand) varied from 16.13 Ωm to 3353.3 Ωm and its thickness from 0.046 m to 17.52 m.
The resistivity of the second subsurface layer (semi-consolidated sand) varied from 1.10 Ωm to 7442.8 Ωm and its thickness from 0.30 m to 56.27 m. The resistivity of the third subsurface layer (consolidated sand) varied from 0.00001 Ωm to 3190.8 Ωm and its thickness from 3.26 m to 86.66 m. The resistivity of the fourth subsurface layer (silt and clay) varied from 0.0013 Ωm to 16264 Ωm and its thickness from 13.50 m to 87.68 m. The Dar Zarrouk parameters are: longitudinal unit conductance S from 0.00024 to 19.91 mho; transverse unit resistance T from 7.34 to 40080.63 Ωm²; longitudinal resistivity RS from 1.22 to 3137.10 Ωm; and transverse resistivity RT from 5.84 to 3138.54 Ωm. The ERS data and Dar Zarrouk parameters were mapped, which revealed that the study area has groundwater potential in the subsurface.
Keywords: electrical resistivity survey, GIS & RS, groundwater potential, environmental assessment, VES
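The Dar Zarrouk parameters are simple sums over the layered model: S = Σ hᵢ/ρᵢ, T = Σ hᵢρᵢ, with longitudinal resistivity H/S and transverse resistivity T/H for total thickness H; a sketch for a hypothetical three-layer column (illustrative values, not a surveyed sounding):

```python
def dar_zarrouk(layers):
    """layers: list of (thickness_m, resistivity_ohm_m) tuples.
    Returns (S, T, rho_longitudinal, rho_transverse)."""
    H = sum(h for h, _ in layers)
    S = sum(h / rho for h, rho in layers)   # longitudinal unit conductance (mho)
    T = sum(h * rho for h, rho in layers)   # transverse unit resistance (ohm*m^2)
    return S, T, H / S, T / H

# Hypothetical column: loose sand, semi-consolidated sand, silt/clay
S, T, rho_l, rho_t = dar_zarrouk([(5.0, 200.0), (30.0, 50.0), (40.0, 10.0)])
```

Mapping S, T, and the two resistivities per VES station, as done above with IDW in ArcGIS, highlights zones where conductive (potentially water-bearing) layers dominate the column.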
Procedia PDF Downloads 113
2319 Profiling Risky Code Using Machine Learning
Authors: Zunaira Zaman, David Bohannon
Abstract:
This study explores the application of machine learning (ML) for detecting security vulnerabilities in source code. The research aims to assist organizations with large application portfolios and limited security testing capabilities in prioritizing security activities. ML-based approaches offer benefits such as increased confidence scores, tuning of false positives and negatives, and automated feedback. The initial approach, using natural language processing techniques to extract features, achieved 86% accuracy during the training phase but suffered from overfitting and performed poorly on unseen datasets during testing. To address these issues, the study proposes using the abstract syntax tree (AST) for Java and C++ codebases to capture code semantics and structure and to generate path-context representations for each function. The Code2Vec model architecture is used to learn distributed representations of source code snippets for training a machine-learning classifier for vulnerability prediction. The study evaluates the performance of the proposed methodology using two datasets and compares the results with existing approaches. The Devign dataset yielded 60% accuracy in predicting vulnerable code snippets and helped resist overfitting, while the Juliet Test Suite was used to predict specific vulnerabilities such as OS command injection, cryptographic, and cross-site scripting vulnerabilities. The Code2Vec model achieved 75% accuracy and a 98% recall rate in predicting OS command injection vulnerabilities. The study concludes that even partial AST representations of source code can be useful for vulnerability prediction. The approach has the potential for automated intelligent analysis of source code, including vulnerability prediction on unseen source code. State-of-the-art models using natural language processing techniques and CNN models with ensemble modelling techniques did not generalize well on unseen data and faced overfitting issues.
However, predicting vulnerabilities in source code using machine learning poses challenges such as the high dimensionality and complexity of source code, imbalanced datasets, and identifying specific types of vulnerabilities. Future work will address these challenges and expand the scope of the research.
Keywords: code embeddings, neural networks, natural language processing, OS command injection, software security, code properties
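The AST-based feature extraction described above can be illustrated with Python's own ast module; a toy sketch that collects root-to-leaf paths of node type names, a simplification of (not a reimplementation of) Code2Vec's leaf-to-leaf path contexts:

```python
import ast

def node_type_paths(source):
    """Collect root-to-leaf paths of AST node type names from Python source.
    A toy stand-in for Code2Vec-style path-context extraction."""
    tree = ast.parse(source)
    paths = []

    def walk(node, prefix):
        label = prefix + [type(node).__name__]
        children = list(ast.iter_child_nodes(node))
        if not children:
            paths.append("/".join(label))
        for child in children:
            walk(child, label)

    walk(tree, [])
    return paths

# Paths through a Call to os.system hint at OS-command-injection patterns
code = "def risky(cmd):\n    import os\n    os.system(cmd)\n"
paths = node_type_paths(code)
```

In the full approach, such paths (over Java/C++ ASTs, paired between leaves) are embedded and fed to the classifier.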
Procedia PDF Downloads 112
2318 A Geometric Based Hybrid Approach for Facial Feature Localization
Authors: Priya Saha, Sourav Dey Roy Jr., Debotosh Bhattacharjee, Mita Nasipuri, Barin Kumar De, Mrinal Kanti Bhowmik
Abstract:
Biometric face recognition technology (FRT) has gained a lot of attention due to its extensive variety of applications from both security and non-security perspectives. It has emerged as a secure solution for the identification and verification of a person's identity. Although other biometric methods such as fingerprint scans and iris scans are available, FRT is established as an efficient technology for its user-friendliness and contactless operation. Accurate facial feature localization plays an important role in many facial analysis applications, including biometrics and emotion recognition, but certain factors make it a challenging task. On the human face, expressions can be seen in the subtle movements of facial muscles and are influenced by internal emotional states. These non-rigid facial movements cause noticeable alterations in the locations of facial landmarks and their usual shapes, sometimes creating occlusions in facial feature areas that make face recognition a difficult problem. The paper proposes a new hybrid technique for automatic landmark detection in both neutral and expressive frontal and near-frontal face images. The method uses thresholding, sequential searching, and other image processing techniques to locate the landmark points on the face. Also, Graphical User Interface (GUI) based software is designed that can automatically detect 16 landmark points around the eyes, nose, and mouth that are most affected by changes in the facial muscles. The proposed system has been tested on the widely used JAFFE and Cohn Kanade databases, as well as on the DeitY-TU face database, which was created in the Biometrics Laboratory of Tripura University under a research project funded by the Department of Electronics & Information Technology, Govt. of India. The performance of the proposed method has been evaluated in terms of error measure and accuracy.
The method has a detection rate of 98.82% on the JAFFE database, 91.27% on the Cohn Kanade database, and 93.05% on the DeitY-TU database. We have also carried out a comparative study of our proposed method with techniques developed by other researchers. Future work will focus on emotion-oriented systems through Action Unit (AU) detection based on the located features.
Keywords: biometrics, face recognition, facial landmarks, image processing
Procedia PDF Downloads 416
2317 Testing Depression in Awareness Space: A Proposal to Evaluate Whether a Psychotherapeutic Method Based on Spatial Cognition and Imagination Therapy Cures Moderate Depression
Authors: Lucas Derks, Christine Beenhakker, Michiel Brandt, Gert Arts, Ruud van Langeveld
Abstract:
Background: The method Depression in Awareness Space (DAS) is a psychotherapeutic intervention technique based on the principles of spatial cognition and imagination therapy with spatial components. The basic assumptions are: mental space is the primary organizing principle in the mind, and all psychological issues can be treated by first locating and then relocating the conceptualizations involved. Most clinical experience was gathered over the last 20 years in the area of social issues (with the social panorama model). The latter work led to the conclusion that a mental object (image) gains emotional impact when it is placed more centrally, closer, and higher in the visual field, and vice versa. Changing the locations of mental objects in space thus alters the (socio-)emotional meaning of the relationships. The experience of depression seems always to be associated with darkness. Psychologists tend to see the link between depression and darkness as a metaphor; however, clinical practice hints at the existence of more literal forms of darkness. Aims: The aim of the method Depression in Awareness Space is to reduce the distress of clients with depression in clinical counseling practice, as a reliable alternative method of psychological therapy for the treatment of depression. The method aims at making dark areas smaller, lighter, and more transparent in order to identify the problem or the cause of the depression which lies behind the darkness. It was hypothesized that the darkness is a subjective side effect of the neurological process of repression. After reducing the dark clouds, the real problem behind the depression becomes more visible, allowing the client to work on it and in that way reduce their feelings of depression. This makes repression of the issue obsolete. Results: Clients could easily get into their 'sadness' when asked to do so, and finding the location of the dark zones proved fairly easy as well.
In a recent pilot study with five participants with mild depressive symptoms (measured on two different scales and tested against an untreated control group with similar symptoms), the first results were also very promising. If the mental spatial approach to depression can be proven to be effective, this would be very good news. The Society of Mental Space Psychology is now seeking sponsoring for an up-scaled experiment. Conclusions: For spatial cognition and the research into spatial psychological phenomena, the discovery of dark areas can be a step forward. Beyond pure scientific interest, this discovery has a clinical implication if darkness can be connected to depression; darkness then seems to be more than a metaphorical expression. Progress can be monitored with measurement tools that quantify the level of depressive symptoms and by reviewing the areas of darkness.
Keywords: depression, spatial cognition, spatial imagery, social panorama
Procedia PDF Downloads 173
2316 Issues of Accounting of Lease and Revenue according to International Financial Reporting Standards
Authors: Nadezhda Kvatashidze, Elena Kharabadze
Abstract:
It is broadly known that a lease is a flexible means of funding enterprises. Leasing reduces the risks related to access to and possession of assets, as well as to obtaining funding; therefore, it is important to refine lease accounting. The lease accounting regulations under the previously applicable standard (International Accounting Standard 17) made concealment of liabilities possible. As a result, information users got inaccurate and incomplete information and had to resort to an additional assessment of the off-balance-sheet lease liabilities. In order to address the problem, the International Accounting Standards Board decided to change the approach to lease accounting. With the deficiencies of the applicable standard taken into account, the new standard (IFRS 16 ‘Leases’) aims at supplying appropriate and fair lease-related information to the users. Save for certain exclusions, the lessee is obliged to recognize all lease agreements in its financial report. The approach was determined by the fact that, under a lease agreement, rights and obligations arise in the form of assets and liabilities. Immediately upon conclusion of the lease agreement, the lessee takes an asset at its disposal and assumes the obligation to effect the lease-related payments, thereby meeting the recognition criteria defined by the Conceptual Framework for Financial Reporting; the payments are to be entered into the financial report. The new lease accounting standard secures the supply of quality and comparable information to financial information users. The International Accounting Standards Board and the US Financial Accounting Standards Board also jointly developed IFRS 15 ‘Revenue from Contracts with Customers’.
The standard establishes detailed practical criteria for revenue recognition, such as identification of the performance obligations in the contract, determination of the transaction price and its components (especially variable consideration and other important components), and the passage of control over the asset to the customer. IFRS 15 ‘Revenue from Contracts with Customers’ is very similar to the relevant US standards and includes requirements that are more specific and consistent than those of the standards previously in place. The new standard will change recognition terms and techniques in industries such as construction, telecommunications (mobile and cable networks), licensing (media, science, franchising), real property, software, etc. Keywords: assessment of the lease assets and liabilities, contractual liability, division of contract, identification of contracts, contract price, lease identification, lease liabilities, off-balance sheet, transaction value
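Under IFRS 16, the lease liability is initially measured as the present value of the remaining lease payments. A minimal sketch of that computation is below; the payment schedule, discount rate, and term are illustrative assumptions, not figures from the abstract.

```python
# Hedged sketch: initial measurement of an IFRS 16 lease liability as the
# present value of fixed lease payments. All numbers are illustrative.

def lease_liability(annual_payment: float, rate: float, years: int) -> float:
    """Present value of equal lease payments made at the end of each year,
    discounted at the rate implicit in the lease (or incremental borrowing rate)."""
    return sum(annual_payment / (1 + rate) ** t for t in range(1, years + 1))

# A 5-year lease of 10,000 per year discounted at 5%:
liability = lease_liability(annual_payment=10_000, rate=0.05, years=5)
print(round(liability, 2))  # → 43294.77
```

The point of the new standard is that this amount, previously left off-balance sheet for operating leases under IAS 17, is now recognized as a liability (with a corresponding right-of-use asset) at lease commencement.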
Procedia PDF Downloads 323
2315 Lab-on-Chip Multiplexed qPCR Analysis Utilizing Melting Curve Analysis Detects Up to 144 Alleles with Sub-hour Turn-around Time
Authors: Jeremy Woods, Fanqing Chen
Abstract:
Rapid genome testing currently provides results in, at best, hours to days, yet certain clinical decisions that could be guided by genetic test results need those results in minutes to hours. Methods of genetic Point of Care Testing (POCT) are therefore required if genetic data are to guide management in a wide variety of critical and emergent medical situations, such as neonatal sepsis, chemotherapy administration in endometrial cancer, and glucose-6-phosphate dehydrogenase deficiency (G6PD)-associated neonatal hyperbilirubinemia. We developed a POCT “lab-on-chip” technology capable of identifying up to 144 alleles in under an hour. The test requires no specialized training to utilize and is suitable for deployment in clinics and hospitals for use by non-laboratory personnel such as nurses. We developed a multiplexed qPCR-based sample-to-answer system with melting curve analysis capable of detecting up to 144 alleles, utilizing the Kelliop RapidSeq126 PCR platform combined with a single-use microfluidic cartridge. The RapidSeq126 is the size of a standard desktop printer, and the microfluidic cartridges are smaller than a deck of playing cards; the system was thus deployable in the outpatient setting for clinical trials of MT-RNR1 genotyping. The sample (a buccal swab from volunteers or plasmids in media) used for DNA extraction was placed in the cartridge sample inlet prior to inserting the cartridge into the RapidSeq126. The microfluidic cartridge was composed of heat-resistant polymer with a sample inlet, 100 µm conduits, liquid and solid reagents, valves, an extraction chamber, a lyophilization chamber, 12 PCR reaction chambers, and a waste chamber. No human effort was required for processing the sample and performing the assay other than placing the sample in the cartridge and the cartridge in the RapidSeq126.
The RapidSeq126 demonstrated ex vivo detection in plasmids and in vivo detection from human volunteer samples of up to 144 alleles per microfluidic cartridge, and did not require specialized laboratory training to operate. Efficacy was proven for several applications, such as multiple microsatellite instability (MSI) sites (SULF/RYR3/MRE11/ACVR2A/DIDO1/SEC31A/BTBD7), endometrial cancer POLE exonuclease domain (EMD) mutation status, and G6PD variants such as those commonly associated with hemolysis (c.202G>A, c.376A>G, c.680G>A>T, c.968T>C, 404A>C, c.871G>A). The RapidSeq126 system was also able to identify the three MT-RNR1 variants associated with aminoglycoside-induced sensorineural hearing loss (m.1555A>G, m.1095T>C, m.1494C>T). Results were provided in under an hour in a sample-to-answer fashion requiring no processing other than inserting the cartridge with the sample into the RapidSeq126, and were delivered in a digital, HL7-compliant format suitable for interfacing with Electronic Healthcare Records (EHRs). The RapidSeq126 system provides a solution for emergency and critical medical situations requiring results in a matter of minutes to hours. The HL7-compliant data format enables the RapidSeq126 to interface directly with EHRs to generate best practice advisories and further reduce errors and time to diagnosis by providing digital results. Keywords: genetic testing, pharmacogenomics, point of care testing, rapid genetic testing
Procedia PDF Downloads 18
2314 A Script for Presentation to the Management of a Teaching Hospital on DXplain Clinical Decision Support System
Authors: Jacob Nortey
Abstract:
Introduction: In recent years, enormous advances in scientific medical knowledge have been coupled with advances in technology. Despite these successes, the diagnosis and treatment of diseases have become complex. According to the Ibero-American Study of Adverse Effects (IBEAS), about 10% of hospital patients suffer secondary damage during the care process, and approximately 2% die from it. Many clinical decision support systems have been developed to help mitigate healthcare medical errors. Method: Relevant databases, including those specific to clinical decision support systems, were searched using Google Scholar, PubMed, and general Google searches. The articles were then screened for a comprehensive overview of the functionality, consultative style, and statistical usage of the DXplain clinical decision support system. Results: Inferences drawn from the articles showed high usage of the DXplain clinical decision support system for problem-based learning among students in developed countries, as against little or no usage among students in low- and middle-income countries. The results also indicated high usage among general practitioners. Conclusion: Despite the challenges DXplain presents, the benefits of its usage to clinicians and students are enormous. Keywords: DXplain, clinical decision support system, diagnosis, support systems
Procedia PDF Downloads 85
2313 University Building: Discussion about the Effect of Numerical Modelling Assumptions for Occupant Behavior
Authors: Fabrizio Ascione, Martina Borrelli, Rosa Francesca De Masi, Silvia Ruggiero, Giuseppe Peter Vanoli
Abstract:
The refurbishment of public buildings is one of the key factors of the energy efficiency policy of European states. Educational buildings account for the largest share of the oldest building stock, with interesting potential for demonstrating best practice with regard to high-performance, low- and zero-carbon design, and for becoming exemplar cases within the community. In this context, this paper discusses the critical issues of the energy refurbishment of a university building in the heating-dominated climate of southern Italy. More in detail, the importance of using validated models is examined exhaustively by proposing an analysis of uncertainties due to modelling assumptions, mainly referring to the adoption of stochastic schedules for occupant behavior and equipment or lighting usage. Indeed, today, most commercial tools provide designers with a library of possible schedules with which thermal zones can be described. Very often, users do not pay close attention to differentiating thermal zones or to modifying and adapting predefined profiles, and the design results are affected positively or negatively without any warning about it. Data such as occupancy schedules, internal loads, and the interaction between people and windows or plant systems represent some of the largest sources of variability during energy modelling and in understanding calibration results. This is mainly due to the adoption of discrete, standardized, conventional schedules, with important consequences for the prediction of energy consumption. The problem is difficult to examine and to solve. In this paper, a sensitivity analysis is presented to understand the order of magnitude of the error committed by varying the deterministic schedules used for occupancy, internal loads, and the lighting system. This is a typical uncertainty for a case study such as the presented one, where there is no regulation system for the HVAC system and the occupant therefore cannot interact with it.
More in detail, starting from the adopted schedules, created according to questionnaire responses, which allowed a good calibration of the energy simulation model, several different scenarios are tested. Two types of analysis are presented: the reference building is compared with these scenarios in terms of the percentage difference in the projected total electric energy need and natural gas request, and then the different entries of consumption are analyzed, with the calibration indexes also compared for the more interesting cases. Moreover, the same simulations are run for the optimal refurbishment solution, and the variation in the predicted energy saving and global cost reduction is evidenced. This parametric study aims to underline the effect of the modelling assumptions made when describing thermal zones on the evaluation of performance indexes. Keywords: energy simulation, modelling calibration, occupant behavior, university building
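The first comparison described above, percentage difference of each schedule scenario from the calibrated reference, can be sketched as below. The kWh figures and scenario names are illustrative placeholders, not results from the paper.

```python
# Hedged sketch of the scenario comparison: annual electric energy need is
# simulated under the calibrated occupancy schedule and under alternative
# deterministic schedules; each scenario is reported as a percentage
# difference from the reference. All numbers are invented for illustration.

def pct_diff(scenario_kwh: float, reference_kwh: float) -> float:
    """Signed percentage difference of a scenario from the calibrated reference."""
    return 100.0 * (scenario_kwh - reference_kwh) / reference_kwh

reference = 152_000.0  # calibrated (questionnaire-based) schedule, kWh/year
scenarios = {          # alternative predefined schedules, kWh/year
    "office_default": 141_500.0,
    "continuous_occupancy": 168_300.0,
    "reduced_lighting": 146_900.0,
}

for name, kwh in scenarios.items():
    print(f"{name}: {pct_diff(kwh, reference):+.1f}%")
```

The same scaffolding extends to the natural gas request and to the individual consumption entries, swapping in the corresponding simulated totals.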
Procedia PDF Downloads 144
2312 Sexual Satisfaction in Women with Polycystic Ovarian Syndrome
Authors: Nashi Khan, Amina Khalid
Abstract:
Aim: The purpose of this research was to assess psychiatric morbidity and the level of sexual satisfaction among women with polycystic ovarian syndrome, to compare them with women with general medical conditions, and to examine the correlation between psychiatric morbidity and sexual satisfaction among these women. Design: A cross-sectional research design was used. Method: A total of 176 (M age = 30, SD = 5.83) women were recruited from both private and public sector hospitals in Pakistan. Half of the participants, 88 (50%), were diagnosed with polycystic ovarian syndrome (cases), whereas the other 50% belonged to the control group. Data were collected using a semi-structured interview. The Sexual Satisfaction Scale for Women (SSS-W) was administered to measure the level of sexual satisfaction, and psychiatric morbidity was assessed with the Symptom Checklist-Revised. Results: Results showed that participants' depression and anxiety levels had a significant negative correlation with their sexual satisfaction level, whereas anxiety and depression shared a significant positive correlation. There was a significant difference in the scores for sexual satisfaction, depression, and anxiety between cases and controls. These results suggest that women suffering from polycystic ovarian syndrome tend to be less sexually satisfied and experience relatively more symptoms of depression and anxiety compared to controls. Keywords: level of sexual satisfaction, psychiatric morbidity, polycystic ovarian syndrome
Procedia PDF Downloads 466
2311 Geomorphology and Flood Analysis Using Light Detection and Ranging
Authors: George R. Puno, Eric N. Bruno
Abstract:
The natural landscape of the Philippine archipelago, combined with the current realities of climate change, makes the country vulnerable to flood hazards. Flooding has become a recurring natural disaster in the country, resulting in loss of lives and properties. Musimusi is among the rivers that exhibited inundation, particularly at the inhabited floodplain portion of its watershed. During such events, rescue operations and the distribution of relief goods become a problem due to the lack of high-resolution flood maps to help the local government unit identify the most affected areas. In the attempt to minimize the impact of flooding, hydrologic modelling with high-resolution mapping is becoming more challenging and important. This study focused on the analysis of flood extent as a function of different geomorphologic characteristics of the Musimusi watershed. The methods include the delineation of morphometric parameters in the Musimusi watershed using Geographic Information System (GIS) and geometric calculation tools. A Digital Terrain Model (DTM), one of the derivatives of Light Detection and Ranging (LiDAR) technology, was used to determine the extent of river inundation through the application of the Hydrologic Engineering Center-River Analysis System (HEC-RAS) and Hydrologic Modeling System (HEC-HMS) models. A digital elevation model (DEM) from Synthetic Aperture Radar (SAR) was used to delineate the watershed boundary and river network. Datasets such as mean sea level, river cross sections, river stage, discharge, and rainfall were also used as input parameters. The curve number (CN), vegetation, and soil properties were calibrated based on the existing condition of the site. Results showed that the drainage density value of the watershed is low, which indicates that the basin has highly permeable subsoil and thick vegetative cover. The watershed's elongation ratio of 0.9 implies that the floodplain portion of the watershed is susceptible to flooding.
The bifurcation ratio value of 2.1 indicates a higher risk of flooding in localized areas of the watershed. The circularity ratio value (1.20) indicates that the basin is circular in shape, with high runoff discharge and low permeability of the subsoil. The heavy rainfall of 167 mm brought by Typhoon Seniang on December 29, 2014, characterized by high intensity and long duration with a return period of 100 years, produced outflows of 316 m³s⁻¹. A portion of the floodplain zone (1.52%) suffered inundation, with a maximum depth of 2.76 m. The information generated in this study is helpful to the local disaster risk reduction and management council in monitoring the affected sites for more appropriate decisions, so that the cost of rescue operations and relief goods distribution is minimized. Keywords: flooding, geomorphology, mapping, watershed
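The morphometric indices reported above (drainage density, elongation ratio, bifurcation ratio, circularity ratio) follow standard textbook formulas from basic watershed geometry. A minimal sketch is below; the input geometry (area, basin length, perimeter, stream counts) is illustrative, not measured data from the Musimusi basin.

```python
# Hedged sketch of standard morphometric indices; inputs are illustrative.
import math

def drainage_density(total_stream_length_km: float, area_km2: float) -> float:
    """Total stream length per unit basin area (km per km^2)."""
    return total_stream_length_km / area_km2

def elongation_ratio(area_km2: float, basin_length_km: float) -> float:
    """Diameter of a circle with the basin's area, divided by basin length."""
    return (2.0 / basin_length_km) * math.sqrt(area_km2 / math.pi)

def bifurcation_ratio(n_streams_order_u: int, n_streams_order_u1: int) -> float:
    """Number of streams of one order over the number of the next higher order."""
    return n_streams_order_u / n_streams_order_u1

def circularity_ratio(area_km2: float, perimeter_km: float) -> float:
    """Basin area over the area of a circle with the same perimeter."""
    return 4.0 * math.pi * area_km2 / perimeter_km ** 2

# Example geometry chosen so the elongation ratio lands near the reported 0.9:
print(round(elongation_ratio(area_km2=120.0, basin_length_km=13.0), 2))  # → 0.95
print(bifurcation_ratio(21, 10))  # → 2.1
```

In practice these inputs come from the GIS delineation step (stream network lengths and counts per Strahler order, basin area and perimeter polygons), after which the indices are read off directly.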
Procedia PDF Downloads 233