Search results for: reliability and validity
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2552

302 Self-rated Health as a Predictor of Hospitalizations in Patients with Bipolar Disorder and Major Depression: A Prospective Cohort Study of the United Kingdom Biobank

Authors: Haoyu Zhao, Qianshu Ma, Min Xie, Yunqi Huang, Yunjia Liu, Huan Song, Hongsheng Gui, Mingli Li, Qiang Wang

Abstract:

Rationale: Bipolar disorder (BD) and major depressive disorder (MDD), severe chronic illnesses that restrict patients’ psychosocial functioning and reduce their quality of life, are both categorized as mood disorders. Emerging evidence suggests that self-rated health (SRH) is a well-validated measure and that it predicts the risk of various health outcomes, including mortality and health care costs. Compared with lengthier multi-item patient-reported outcome (PRO) measures, SRH has been shown to predict mortality and healthcare utilization comparably well. However, to our knowledge, no study has assessed the association between SRH and hospitalization among people with mental disorders. Therefore, our study aims to determine the association between SRH and subsequent all-cause hospitalizations in patients with BD and MDD. Methods: We conducted a prospective cohort study of people with BD or MDD in the UK from 2006 to 2010 using UK Biobank touchscreen questionnaire data and linked administrative health databases. The association between SRH and 2-year all-cause hospitalizations was assessed using proportional hazards regression after adjustment for sociodemographics, lifestyle behaviors, previous hospitalization use, the Elixhauser comorbidity index, and environmental factors. Results: A total of 29,966 participants were identified, experiencing 10,279 hospitalization events. In this cohort, the average age was 55.88 (SD 8.01) years, 64.02% were female, and 3,029 (10.11%), 15,972 (53.30%), 8,313 (27.74%), and 2,652 (8.85%) participants reported excellent, good, fair, and poor SRH, respectively. Among patients reporting poor SRH, 54.19% had a hospitalization event within 2 years, compared with 22.65% of those with excellent SRH. In the adjusted analysis, patients with good, fair, and poor SRH had 1.31 (95% CI 1.21-1.42), 1.82 (95% CI 1.68-1.98), and 2.45 (95% CI 2.22-2.70) times the hazard of hospitalization, respectively, compared with those with excellent SRH. Conclusion: SRH was independently associated with subsequent all-cause hospitalizations in patients with BD or MDD. This large study facilitates rapid interpretation of SRH values and underscores the need for proactive SRH screening in this population, which might inform resource allocation and enhance detection of high-risk populations.
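
As a minimal sketch of the covariate-adjusted survival analysis described above (not the authors’ code; the file and column names are hypothetical), a Cox proportional hazards model could be fitted with the lifelines library:

```python
# Minimal sketch of a covariate-adjusted Cox proportional hazards model,
# analogous to the analysis described above. Column names are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("cohort.csv")  # hypothetical file: one row per participant

# Follow-up time (years), hospitalization indicator, SRH coded as dummies
# against the "excellent" reference category, plus adjustment covariates.
covariates = ["time_to_event", "hospitalized",
              "srh_good", "srh_fair", "srh_poor",
              "age", "sex", "elixhauser_index", "prior_hospitalizations"]

cph = CoxPHFitter()
cph.fit(df[covariates], duration_col="time_to_event", event_col="hospitalized")
cph.print_summary()  # hazard ratios with 95% confidence intervals
```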

Keywords: severe mental illnesses, hospitalization, risk prediction, patient-reported outcomes

Procedia PDF Downloads 138
301 Quantum Mechanics as a Limiting Case of Relativistic Mechanics

Authors: Ahmad Almajid

Abstract:

The idea of unifying quantum mechanics with general relativity is still a dream for many researchers. Physics offers only two paths: Einstein's path, based mainly on particle mechanics, and the path of Paul Dirac and others, based on wave mechanics. The incompatibility of the two approaches stems from the radical difference in their initial assumptions and in the mathematical nature of each approach. Logical thinking in modern physics leads us to two problems: - In quantum mechanics, despite its success, the measurement problem and the interpretation of the wave function remain obscure. - In special relativity, despite the success of the equivalence of rest mass and energy, the energy becoming infinite at the speed of light is contrary to logic, because the speed of light is not infinite and the mass of the particle is not infinite either. These contradictions arise from the overlap of relativistic and quantum mechanics in the neighborhood of the speed of light. In order to solve these problems, one must understand how to move from relativistic mechanics to quantum mechanics, or rather how to unify them in a way different from Dirac's method, in order to go along with God or Nature since, as Einstein said, "God doesn't play dice." From De Broglie's hypothesis of wave-particle duality, Léon Brillouin's definition of the new proper time was deduced, and thus the quantum Lorentz factor was obtained. Finally, using the Euler-Lagrange equation, new equations in quantum mechanics are derived. In this paper, the two problems in modern physics mentioned above are solved; it can be said that this new approach to quantum mechanics will enable us to unify it with general relativity quite simply. If experiments prove the validity of the results of this research, we will be able in the future to transport matter at speeds close to the speed of light. This research yielded three important results: 1- the Lorentz quantum factor; 2- Planck energy as a limiting case of Einstein energy; 3- real quantum mechanics, in which new equations for quantum mechanics match and exceed Dirac's equations; these equations were reached in a completely different way from Dirac's method. These equations show that quantum mechanics is a limiting case of relativistic mechanics. At the Solvay Conference in 1927, the debate about quantum mechanics between Bohr, Einstein, and others reached its climax: while Bohr suggested that unobserved particles are in a probabilistic state, Einstein made his famous claim, "God does not play dice." Thus, Einstein was right, especially when he did not accept the principle of indeterminacy in quantum theory, although experiments support quantum mechanics. However, the results of our research indicate that God really does not play dice; when the electron disappears, it turns into amicable particles or an elastic medium, according to the above equations. Likewise, Bohr was also right when he indicated that there must be a science like quantum mechanics to monitor and study the motion of subatomic particles, but the picture in front of him was blurry and unclear, so he resorted to the probabilistic interpretation.

Keywords: Lorentz quantum factor, Planck’s energy as a limiting case of Einstein’s energy, real quantum mechanics, new equations for quantum mechanics

Procedia PDF Downloads 57
300 Measuring the Resilience of e-Governments Using an Ontology

Authors: Onyekachi Onwudike, Russell Lock, Iain Phillips

Abstract:

The variability that exists across governments, their departments, and the provisioning of services has been an area of concern in the E-Government domain. There is a need for reuse and integration across government departments, which is accompanied by varying degrees of risks and threats. There is also the need for assessment, prevention, preparation, response, and recovery when dealing with these risks or threats. The ability of a government to cope with the emerging changes that occur within it is known as resilience. In order to forge ahead with concerted efforts to manage reuse and integration induced risks or threats to governments, the ambiguities contained within resilience must be addressed. Enhancing resilience in the E-Government domain is synonymous with reducing the risks governments face in provisioning services as well as in reusing components across departments. Therefore, it can be said that resilience is responsible for the reduction in a government’s vulnerability to changes. In this paper, we present the use of an ontology to measure the resilience of governments. This ontology is made up of a well-defined construct for the taxonomy of resilience. A specific class known as ‘Resilience Requirements’ is added to the ontology. This class embraces the concept of resilience into the E-Government domain ontology. Considering that the E-Government domain is a highly complex one, made up of different departments offering different services, the reliability and resilience of the E-Government domain have become more complex and critical to understand. We present questions that can help a government assess how prepared it is in the face of risks and what steps can be taken to recover from them. These questions can be asked with the use of queries. The ontology includes a case study section that is used to explore ways in which government departments can become resilient to the different kinds of risks and threats they may face. A collection of resilience tools and resources has been developed in our ontology to encourage governments to take steps to prepare for emergencies and risks that a government may face with the integration of departments and reuse of components across government departments. To achieve this, the ontology has been extended by rules. We present two tools for understanding resilience in the E-Government domain as a risk analysis target, and the output of these tools when applied to resilience in the E-Government domain. We introduce the classification of resilience using the defined taxonomy and the modelling of existent relationships based on the defined taxonomy. The ontology is constructed on formal theory, and it provides a semantic reference framework for the concept of resilience. Key terms which fall under the purview of resilience with respect to E-Governments are defined. Terms are made explicit, and the relationships that exist between risks and resilience are made explicit. The overall aim of the ontology is to use it within standards that would be followed by all governments for government-based resilience measures.
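
The abstract notes that resilience questions can be posed as queries over the ontology. A purely illustrative sketch of such a query, using rdflib with hypothetical class and property names (not the authors’ ontology), might look as follows:

```python
# Illustrative sketch only: querying a resilience ontology with rdflib.
# The ontology file, class names, and property names are hypothetical.
from rdflib import Graph

g = Graph()
g.parse("egov_resilience.owl", format="xml")  # hypothetical ontology file

# Which departments have a resilience requirement without a recovery plan?
query = """
PREFIX egov: <http://example.org/egov#>
SELECT ?dept ?req WHERE {
    ?dept a egov:Department ;
          egov:hasResilienceRequirement ?req .
    FILTER NOT EXISTS { ?req egov:hasRecoveryPlan ?plan }
}
"""
for dept, req in g.query(query):
    print(dept, "is missing a recovery plan for", req)
```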

Keywords: e-government, ontology, relationships, resilience, risks, threats

Procedia PDF Downloads 318
299 Debriefing Practices and Models: An Integrative Review

Authors: Judson P. LaGrone

Abstract:

Simulation-based education, once a luxury component of nursing curricula, now serves as a vital element of an individual’s learning experience. A debriefing occurs after the simulation scenario or clinical experience is completed, allowing the instructor(s) or trained professional(s) acting as debriefer to guide a reflection with the purpose of acknowledging, assessing, and synthesizing the thought processes, decision-making processes, and actions/behaviors performed during the scenario or clinical experience. Debriefing is a vital component of the simulation process and educational experience, allowing the learner(s) to progressively build upon past experiences and current scenarios within a safe and welcoming environment, with a guided dialog to enhance future practice. The aim of this integrative review was to assess current practices of debriefing models in simulation-based education for health care professionals and students. The following databases were utilized for the search: CINAHL Plus, Cochrane Database of Systematic Reviews, EBSCO (ERIC), PsycINFO (Ovid), and Google Scholar. The advanced search option was used to narrow down the search of articles (full text, Boolean operators, English language, peer-reviewed, published in the past five years). Key terms included debrief, debriefing, debriefing model, debriefing intervention, psychological debriefing, simulation, simulation-based education, simulation pedagogy, health care professional, nursing student, and learning process. Included studies focus on debriefing after clinical scenarios of nursing students, medical students, and interprofessional teams conducted between 2015 and 2020. Common themes were identified after analysis of the articles matching the search criteria. Several debriefing models are addressed in the literature, with similar effectiveness for participants in clinical simulation-based pedagogy. Themes identified included (a) the importance of debriefing in simulation-based pedagogy, (b) the environment in which debriefing takes place as an important consideration, (c) the individuals who should conduct the debrief, (d) the length of the debrief, and (e) the methodology of the debrief. Debriefing models supported by theoretical frameworks and facilitated by trained staff are vital for a successful debriefing experience. Models ranged from self-debriefing, facilitator-led debriefing, and video-assisted debriefing to rapid cycle deliberate practice and reflective debriefing. A recurring finding centered on the emphasis on continued research into systematic tool development and analysis of the validity and effectiveness of current debriefing practices. There is a lack of consistency of debriefing models among nursing curricula, with an increasing rate of ill-prepared faculty to facilitate the debriefing phase of the simulation.

Keywords: debriefing model, debriefing intervention, health care professional, simulation-based education

Procedia PDF Downloads 131
298 Reactions of 4-Aryl-1H-1,2,3-Triazoles with Cycloalkenones and Epoxides: Synthesis of 2,4- and 1,4-Disubstituted 1,2,3-Triazoles

Authors: Ujjawal Kumar Bhagat, Kamaluddin, Rama Krishna Peddinti

Abstract:

The Huisgen 1,3-dipolar [3+2] cycloaddition of organic azides and alkynes often gives mixtures of both regioisomers, the 1,4- and 1,5-disubstituted 1,2,3-triazoles. Later, metal-catalyzed ‘click chemistry’, such as the copper(I)-catalyzed azide-alkyne cycloaddition (CuAAC), was used for the regioselective synthesis of 1,4-disubstituted 1,2,3-triazoles as sole products. Likewise, the ruthenium-catalyzed azide-alkyne cycloaddition (RuAAC) ‘click reaction’ is used for the synthesis of 1,5-disubstituted 1,2,3-triazoles as a single isomer. The synthesis of 1,4- and 1,5-disubstituted 1,2,3-triazoles has become the gold standard of ‘click chemistry’ due to its reliability, specificity, and biocompatibility. The 1,4- and 1,5-disubstituted 1,2,3-triazoles have emerged as some of the most powerful entities across a variety of biological properties, including antibacterial, antitubercular, antitumor, antifungal, and antiprotozoal activities. Some of the 1,4,5-trisubstituted 1,2,3-triazoles exhibit Hsp90-inhibiting properties. The 1,4-disubstituted 1,2,3-triazoles also play a big role in the area of materials science. Triazole-derived oligomeric and polymeric structures are potential materials for the preparation of organic optoelectronics, silicone elastomers, and unimolecular block copolymers. By virtue of hydrogen bonding and dipole interactions, the 1,2,3-triazole moiety readily associates with biological targets. Since 4-aryl-1H-1,2,3-triazoles are stable entities, they are chemically robust and rather unreactive. In this regard, the addition of 4-aryl-1H-1,2,3-triazoles as nucleophiles to α,β-unsaturated carbonyls and their nucleophilic substitution with epoxides constitute a powerful and challenging synthetic approach for the generation of disubstituted 1,2,3-triazoles. Herein, we have developed the aza-Michael addition of 4-aryl-1H-1,2,3-triazoles to 2-cycloalken-1-ones in the presence of an organic base (DABCO) in acetonitrile, leading to the formation of disubstituted 1,2,3-triazoles. The reaction provides the 1,4-disubstituted triazoles, 3-(4-aryl-1H-1,2,3-triazol-1-yl)cycloalkanones, as major products, along with the 1,5-disubstituted 1,2,3-triazoles as minor regioisomers, in excellent combined chemical yields (up to 99%). The nucleophilic behavior of 4-aryl-1H-1,2,3-triazoles was also tested in the ring opening of meso-epoxides in the presence of organic bases (DABCO/Et3N) in acetonitrile, furnishing the two regioisomers, 1,4- and 1,5-disubstituted 1,2,3-triazoles. Thus, the novelty of this methodology is the synthesis of diversified disubstituted 1,2,3-triazoles under metal-free conditions. The results will be presented in detail.

Keywords: aza-Michael addition, cycloalkenones, epoxides, triazoles

Procedia PDF Downloads 298
297 Criticality Assessment Model for Water Pipelines Using Fuzzy Analytical Network Process

Authors: A. Assad, T. Zayed

Abstract:

Water networks (WNs) are responsible for providing adequate amounts of safe, high-quality water to the public. Like other critical infrastructure systems, WNs are subject to deterioration, which increases the number of breaks and leaks and lowers water quality. In Canada, 35% of water assets require critical attention, and there is a significant gap between the needed and the implemented investments. Thus, the need for efficient rehabilitation programs is becoming more urgent given the paradigm of aging infrastructure and tight budgets. The first step towards developing such programs is to formulate a performance index that reflects the current condition of water assets along with their criticality. While numerous studies in the literature have focused on various aspects of condition assessment and reliability, limited efforts have investigated the criticality of such components. Critical water mains are those whose failure causes significant economic, environmental, or social impacts on a community. Including criticality in computing the performance index will serve as a prioritizing tool for the optimal allocation of the available resources and budget. In this study, several social, economic, and environmental factors that dictate the criticality of water pipelines have been elicited from the literature. Expert opinions were sought to provide pairwise comparisons of the importance of these factors. Subsequently, fuzzy logic along with the Analytical Network Process (ANP) was utilized to calculate the weights of the criteria factors. Multi-Attribute Utility Theory (MAUT) was then employed to integrate the aforementioned weights with the attribute values of several pipelines in the Montreal WN. The result is a criticality index, between 0 and 1, that quantifies the severity of the consequence of failure of each pipeline. A novel contribution of this approach is that it accounts for both the interdependency between criteria factors and the inherent uncertainties in calculating criticality. The practical value of the current study is represented by the automated tool, implemented in Excel and MATLAB, which can be used by utility managers and decision makers in planning future maintenance and rehabilitation activities where high efficiency in the use of materials and time resources is required.
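
As a rough illustration of the MAUT aggregation step described above (not the authors’ Excel-MATLAB tool; the weights and attribute values are assumed for illustration), the criticality index can be computed as a weighted sum of normalized criterion utilities:

```python
# Minimal sketch of the MAUT aggregation step: combining criterion weights
# (e.g., obtained from a fuzzy ANP) with normalized attribute values per pipeline.
# Weights and attribute values below are hypothetical.
import numpy as np

# Weights for social, economic, and environmental consequence factors (sum to 1)
weights = np.array([0.40, 0.35, 0.25])

# Rows: pipelines; columns: normalized (0-1) utility of each criterion
attributes = np.array([
    [0.9, 0.6, 0.3],   # pipeline A
    [0.4, 0.8, 0.7],   # pipeline B
    [0.2, 0.3, 0.5],   # pipeline C
])

# Additive MAUT: criticality index in [0, 1] for each pipeline
criticality = attributes @ weights
for name, c in zip("ABC", criticality):
    print(f"Pipeline {name}: criticality = {c:.2f}")
```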

Keywords: water networks, criticality assessment, asset management, fuzzy analytical network process

Procedia PDF Downloads 127
296 A Case Study of Determining the Times of Overhauls and the Number of Spare Parts for Repairable Items in Rolling Stocks with Simulation

Authors: Ji Young Lee, Jong Woon Kim

Abstract:

It is essential to secure high availability of railway vehicles to realize high quality and efficiency of railway service. Once availability decreases, the planned railway service cannot be provided unless more cars are reserved, additional cars are purchased, or the frequency of railway service is decreased. Such a situation would be a big loss in terms of the quality and cost of railway service, so operators make various efforts to achieve high availability of railway vehicles. To secure high availability, the idle time of the vehicle needs to be reduced, and the following methods are applied to railway vehicles. First, through modularized design, the exchange time for line replaceable units is reduced, so that railway vehicles can be put back into service quickly. Second, to reduce periodic preventive maintenance time, short-period preventive maintenance is carried out in a test-oriented manner to minimize maintenance time, and reliability is secured through overhauls of each main component. With such design changes for railway vehicles, modularized components are exchanged first at the time of vehicle failure or overhaul so that vehicles can be put back into service quickly, and the exchanged components are then repaired or overhauled. Therefore, spare components are required for any future failures or overhauls. Moreover, as components are modularized and component costs are high, it is considerably important to procure reasonable quantities of spare components. In particular, when a number of railway vehicles are put into service simultaneously, their overhaul times come almost at the same time. Thus, for some vehicles, components need to be exchanged and overhauled before the appointed overhaul period so that these components can be secured as spare parts for the next vehicle’s component overhaul. For this reason, component overhaul times and spare parts quantities should be decided at the same time. This study deals with the timing of overhauls for repairable components of railway vehicles and the calculation of spare parts quantities in consideration of future failures and overhauls. However, as railway vehicles are used according to the service schedule, and maintenance work cannot proceed until service has closed, it is quite difficult to resolve this problem mathematically. Therefore, a simulation software system is used in this study to analyze the timing of overhauls for repairable components of railway vehicles and the spare parts required for the railway systems.
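
A minimal Monte Carlo sketch of the sizing problem described above (not the authors’ simulation model; the fleet size, overhaul interval, and turnaround time are assumed for illustration) is shown below:

```python
# Illustrative Monte Carlo sketch (not the authors' model): estimating how many
# spare modules are needed so that overhauls never wait for a component.
# All numbers (fleet size, intervals, turnaround) are assumed for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_vehicles = 20          # vehicles put into service at nearly the same time
overhaul_interval = 4.0  # years between component overhauls
repair_turnaround = 0.25 # years to overhaul a removed module
horizon = 12.0           # years simulated
n_runs = 1000

max_concurrent = []
for _ in range(n_runs):
    # Small random scatter in when each vehicle actually reaches its overhaul
    offsets = rng.normal(0.0, 0.2, n_vehicles)
    events = []  # (start, end) of each module's time in the workshop
    for off in offsets:
        t = overhaul_interval + off
        while t < horizon:
            events.append((t, t + repair_turnaround))
            t += overhaul_interval
    # Spares needed = maximum number of modules in the workshop at once
    times = np.linspace(0, horizon, 2000)
    in_shop = sum((times >= s) & (times < e) for s, e in events)
    max_concurrent.append(in_shop.max())

print("Spare modules covering 95% of runs:", int(np.percentile(max_concurrent, 95)))
```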

Keywords: overhaul time, rolling stocks, simulation, spare parts

Procedia PDF Downloads 309
295 Active Power Filters and their Smart Grid Integration - Applications for Smart Cities

Authors: Pedro Esteban

Abstract:

Most installations nowadays are exposed to many power quality problems, and they also face numerous challenges in complying with grid code and energy efficiency requirements. The reason behind this is that they are not designed to support the nonlinear, unbalanced, and variable loads and generators that make up a large percentage of modern electric power systems. These problems and challenges become especially critical when designing green buildings and smart cities. They are caused by equipment typically found in these installations, such as variable speed drives (VSD), transformers, lighting, battery chargers, double-conversion UPS (uninterruptible power supply) systems, highly dynamic loads, single-phase loads, fossil fuel generators, and renewable generation sources, to name a few. Moreover, events like capacitor switching (from existing capacitor banks or passive harmonic filters), auto-reclose operations of transmission and distribution lines, or the starting of large motors also contribute to these problems and challenges. Active power filters (APF) are one of the fastest-growing power electronics technologies for solving power quality problems and meeting grid code and energy efficiency requirements across a wide range of segments and applications. They are a high-performance, flexible, compact, modular, and cost-effective type of power electronics solution that provides an instantaneous and effective response in low or high voltage electric power systems. They enable longer equipment lifetime, higher process reliability, improved power system capacity and stability, and reduced energy losses, complying with the most demanding power quality and energy efficiency standards and grid codes. Several types of active power filters are available nowadays, including active harmonic filters (AHF), static var generators (SVG), active load balancers (ALB), hybrid var compensators (HVC), and low harmonic drives (LHD). All these devices can be used in smart city applications, bringing several technical and economic benefits.

Keywords: power quality improvement, energy efficiency, grid code compliance, green buildings, smart cities

Procedia PDF Downloads 93
294 Participatory Budgeting in South African Local Government: A Right or Illusion

Authors: Oliver Fuo

Abstract:

One of the central features of post-apartheid constitutional reform was the establishment of local government as a distinct sphere of government in the Constitution of the Republic of South Africa, 1996. Local government, constituted by about 279 wall-to-wall municipalities, has legislative and executive powers vested in democratically elected municipal councils to govern areas within their jurisdiction, subject only to limits imposed by the Constitution. In addition, unlike in the past, where municipalities merely played a service delivery role, they are now mandated to realise an expanded developmental mandate – to pursue social justice and sustainable development; to contribute, together with national and provincial government, to the realisation of socio-economic rights entrenched in the Bill of Rights; and to facilitate public participation in local governance. In order to finance their developmental programmes, municipalities receive equitable allocations from national government and have legal powers to generate additional finances by charging rates on property and imposing surcharges on services provided. In addition to their general obligation to foster public participation in local governance, municipalities are required by law to facilitate public participation in their budgeting processes. This requirement is generally consistent with recent trends in local government democratic reforms, which call for inclusive budget planning and implementation whereby citizens, civil society and NGOs participate in the allocation of resources. This trend is best captured in the concept of participatory budgeting. This paper specifically analyses the legal and policy framework for participatory budgeting at the local government level in South Africa. Using Borbet South Africa (Pty) Ltd and Others v Nelson Mandela Bay Municipality 2014 (5) SA 256 (ECP) as an example, this paper argues that the legal framework for participatory budgeting creates an illusory right for citizens to participate in municipal budgeting processes. This challenge is further compounded by the barrenness of the jurisprudence of the courts that interpret the obligation of municipalities in this regard. It is submitted that the wording of s 27(4) of the Municipal Finance Management Act (MFMA) 53 of 2003 – which expressly stipulates that non-compliance by a municipality with a provision relating to the budget process, or a provision in any legislation relating to the approval of a budget-related policy, does not affect the validity of an annual or adjustments budget – is problematic, as it seems to trivialise the obligation to facilitate public participation in budgeting processes. It is submitted that where this provision is abused by municipal officials, this could lead to the sidelining of the real interests of communities in local budgets. This research is based on a critical and integrated review of primary and secondary sources of law.

Keywords: courts and jurisprudence, local government law, participatory budgeting, South Africa

Procedia PDF Downloads 357
293 Observation of Inverse Blech Length Effect during Electromigration of Cu Thin Film

Authors: Nalla Somaiah, Praveen Kumar

Abstract:

Scaling of transistors and, hence, interconnects is very important for the enhanced performance of microelectronic devices. Scaling of devices creates significant complexity, especially in multilevel interconnect architectures, wherein current crowding occurs at the corners of interconnects. Such current crowding creates hot spots at the respective corners, resulting in a non-uniform temperature distribution in the interconnect as well. This non-uniform temperature distribution, which is exacerbated with continued scaling of devices, creates a temperature gradient in the interconnect. In particular, the increased current density at corners and the associated temperature rise due to Joule heating accelerate electromigration-induced failures in interconnects, especially at corners. This has been the classic reliability issue associated with metallic interconnects. Herein, it is generally understood that electromigration-induced damage can be avoided if the length of the interconnect is smaller than a critical length, often termed the Blech length. Interestingly, the effect of the non-negligible temperature gradients generated at these corners, in terms of thermomigration and electromigration-thermomigration coupling, has not attracted enough attention. Accordingly, in this work, the interplay between electromigration and temperature gradient induced mass transport was studied using a standard Blech structure. In this particular sample structure, the majority of the current is forcefully directed into the low resistivity metallic film from a high resistivity underlayer film, resulting in current crowding at the edges of the metallic film. In this study, a 150 nm thick Cu metallic film was deposited on a 30 nm thick W underlayer film in the configuration of the Blech structure. A series of Cu thin strips, with lengths of 10, 20, 50, 100, 150, and 200 μm, was fabricated. A current density of ≈ 4 × 10¹⁰ A/m² was passed through the Cu and W films at a temperature of 250 °C. Herein, along with the expected forward migration of Cu atoms from the cathode to the anode at the cathode end of the Cu film, backward migration from the anode towards the center of the Cu film was also observed. Interestingly, smaller-length samples consistently showed enhanced migration at the cathode end, thus indicating the existence of an inverse Blech length effect in the presence of a temperature gradient. A finite-element-based model showing the interplay between electromigration and thermomigration driving forces has been developed to explain this observation.
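
For reference, the classical Blech condition mentioned above is commonly written as follows (standard formulation quoted as background, not a result of this work): electromigration damage is suppressed when the current density-length product stays below a critical value set by the back stress the confined line can support.

```latex
% Classical Blech condition (standard form, quoted for reference)
j\,L \;<\; (jL)_c \;=\; \frac{\Delta\sigma_{\max}\,\Omega}{Z^{*}\,e\,\rho}
```

where $j$ is the current density, $L$ the line length, $\Delta\sigma_{\max}$ the maximum back stress, $\Omega$ the atomic volume, $Z^{*}$ the effective charge number, $e$ the elementary charge, and $\rho$ the resistivity.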

Keywords: Blech structure, electromigration, temperature gradient, thin films

Procedia PDF Downloads 235
292 Reliability of Dry Tissues Sampled from Exhumed Bodies in DNA Analysis

Authors: V. Agostini, S. Gino, S. Inturri, A. Piccinini

Abstract:

In cases of corpse identification or paternity testing performed on an exhumed alleged father, we usually seek and acquire organic samples such as bones and/or bone fragments, teeth, nails, and muscle fragments. DNA analysis of these cadaveric matrices usually leads to successful identification, but it often happens that the typing results are not satisfactory, with highly degraded, partial, or even non-interpretable genetic profiles. To aggravate the interpretative panorama deriving from the analysis of such 'classical' organic matrices, we must add a long and laborious treatment of the sample, from mechanical fragmentation up to the protracted decalcification phase. These steps greatly increase the chance of sample contamination. In the present work, instead, we report the use of 'unusual' cadaveric matrices, demonstrating that their forensic genetic analysis can lead to better results in less time and with lower reagent costs. We report six cases, the result of on-field experience, in which eye swabs and cartilage were sampled and analyzed, allowing clear single genetic profiles, useful for identification purposes, to be obtained. In all cases, we used standard DNA tissue extraction protocols (as reported in the user manuals of manufacturers such as QIAGEN or Invitrogen / Thermo Fisher Scientific), thus bypassing the long and difficult phases of mechanical fragmentation and decalcification of bone samples. PCR was carried out using the PowerPlex® Fusion System kit (Promega), and capillary electrophoresis was carried out on an ABI PRISM® 310 Genetic Analyzer (Applied Biosystems®), with GeneMapper ID v3.2.1 (Applied Biosystems®) software. The software Familias (version 3.1.3) was employed for kinship analysis. The genetic results achieved proved to be much better than those obtained from the analysis of bones or nails, both from the qualitative and quantitative points of view and in terms of costs and timing. In this way, by using the standard procedure of DNA extraction from tissue, it is possible to obtain, in a shorter time and with maximum efficiency, an excellent genetic profile, which proves to be useful and easily interpreted for later paternity tests and/or identification of human remains.

Keywords: DNA, eye swabs and cartilage, identification human remains, paternity testing

Procedia PDF Downloads 89
291 The Effect of Air Filter Performance on Gas Turbine Operation

Authors: Iyad Al-Attar

Abstract:

Air filters are widely used in gas turbine applications to ensure that the large mass flow (500 kg/s) of clean air reaches the compressor. The continuous demand for high availability and reliability has highlighted the critical role of air filter performance in providing enhanced air quality. In addition to being challenged by different environments (tropical, coastal, hot), gas turbines confront a wide array of atmospheric contaminants with various concentrations and particle size distributions that lead to performance degradation and component deterioration. Therefore, the role of air filters is of paramount importance, since a fouled compressor can reduce the power output and availability of the gas turbine to over 70% throughout operation. Consequently, accurate filter performance prediction is a critical tool in their selection, considering their role in minimizing the economic impact of outages. In fact, the actual performance of Efficient Particulate Air (EPA) filters used in gas turbines tends to deviate from the performance predicted by laboratory results. This experimental work investigates the initial pressure drop and fractional efficiency curves of full-scale pleated V-shaped EPA filters used globally in gas turbines. The investigation involved examining the effect of different operational conditions, such as flow rates (500 to 5,000 m³/h), and design parameters, such as pleat count (28, 30, 32, and 34 pleats per 100 mm). This experimental work has highlighted the underlying reasons behind the reduction in filter permeability with increasing flow rate and pleat density. The reasons, which lead to surface area losses of the filtration media, are due to one or a combination of the following effects: pleat crowding, deflection of the entire pleated panel, pleat distortion at the corner of the pleat, and/or filtration medium compression. This paper also demonstrates that increasing the flow rate has a more pronounced effect on filter performance than the pleating density. This experimental work suggests that a valid comparison of pleat densities should be based on the effective surface area, namely the area that participates in the filtration process, and not the total surface area the pleat density provides. Throughout this study, an optimal pleat count that satisfies both the initial pressure drop and efficiency requirements may not necessarily have existed.
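
A first-order way to see why a loss of effective pleat area raises the pressure drop is the Darcy description of flow through the filter medium (a standard approximation quoted as background, not the authors’ model):

```latex
% Darcy-law estimate for a pleated medium (first-order approximation)
\Delta p \;\approx\; \frac{\mu\, t}{k}\, U_{\text{media}}
        \;=\; \frac{\mu\, t}{k}\,\frac{Q}{A_{\text{eff}}}
```

where $\mu$ is the air viscosity, $t$ the medium thickness, $k$ its permeability, $Q$ the volumetric flow rate, and $A_{\text{eff}}$ the effective (participating) filtration area; pleat crowding, panel deflection, pleat distortion, and medium compression all reduce $A_{\text{eff}}$ and therefore raise $\Delta p$ at a given flow rate.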

Keywords: filter efficiency, EPA Filters, pressure drop, permeability

Procedia PDF Downloads 217
290 Energy Efficient Refrigerator

Authors: Jagannath Koravadi, Archith Gupta

Abstract:

In a world with constantly growing energy prices and growing concerns about global climate change caused by increased energy consumption, it is becoming more and more essential to save energy wherever possible. Refrigeration systems are among the major, bulk energy-consuming systems nowadays in the industrial, residential, and household sectors. Refrigeration systems with considerable cooling requirements consume a large amount of electricity and thereby contribute greatly to running costs. Therefore, a great deal of attention is being paid throughout the world to improving the performance of refrigeration systems. The Coefficient of Performance (COP) of a refrigeration system is used for determining the system's overall efficiency. The operating cost to the consumer and the overall environmental impact of a refrigeration system in turn depend on the COP, or efficiency, of the system. The COP of a refrigeration system should therefore be as high as possible. Slight modifications to the technical elements of modern refrigeration systems have the potential to reduce energy consumption, and improvements in simple operational practices with minimal expense can have a beneficial impact on the COP of the system. Thus, the challenge is to determine the changes that can be made in a refrigeration system in order to improve its performance, reduce operating costs and power requirements, improve environmental outcomes, and achieve a higher COP. The opportunity here, and a better solution to this challenge, is to incorporate energy-saving modifications in conventional refrigeration systems. Energy efficiency, in addition to improving COP, can deliver a range of savings, such as reduced operation and maintenance costs, improved system reliability, improved safety, increased productivity, better matching of refrigeration load and equipment capacity, reduced resource consumption and greenhouse gas emissions, a better working environment, and reduced energy costs. The present work aims at fabricating a working model of a refrigerator that provides effective heat recovery from the superheated refrigerant with the help of an efficient de-superheater. The temperatures of the refrigerant and water in the de-superheater at different intervals of time are measured to determine the quantity of waste heat recovered. It is found that the COP of the system improves by about 6% with the de-superheater, the power input to the compressor decreases by 4%, and the refrigeration capacity increases by 4%.
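
For reference, the performance measure discussed above is defined as the ratio of refrigeration effect to compressor work (textbook definition; the percentage figures reported here are the authors’ measurements):

```latex
% Coefficient of performance of a vapour-compression refrigeration system
\mathrm{COP} \;=\; \frac{\dot{Q}_{\text{evap}}}{\dot{W}_{\text{comp}}}
```

so a modification such as the de-superheater described here, which raises the refrigeration capacity $\dot{Q}_{\text{evap}}$ while lowering the compressor input $\dot{W}_{\text{comp}}$, acts on both terms of the ratio and therefore raises the COP.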

Keywords: coefficient of performance, de-superheater, refrigerant, refrigeration capacity, heat recovery

Procedia PDF Downloads 303
289 Flexible Current Collectors for Printed Primary Batteries

Authors: Vikas Kumar

Abstract:

Portable batteries are a reliable source of mobile energy to power smart wearable electronics, medical devices, communications, and other Internet of Things (IoT) devices. There is a continuous increase in demand for thinner, more flexible batteries with high energy density and reliability to meet these requirements. For a flexible battery, the factors that affect these properties are the stability of the current collectors, the electrode materials, and their interfaces with the corrosive electrolytes. State-of-the-art conventional and flexible batteries utilise carbon as the electrode and current collector, which causes high internal resistance (~100 ohms) and limits the peak current to ~1 mA. This makes them unsuitable for a wide range of applications. Replacing the carbon parts with metallic components would reduce the internal resistance (and hence reduce parasitic loss) but significantly increases the risk of corrosion due to galvanic interactions within the battery. To overcome these challenges, low-cost nickel (Ni) electroplated on copper (Cu) was studied as a potential anode current collector for a zinc-manganese oxide primary battery with different concentrations of NH4Cl/ZnCl2 electrolyte. Using electrical impedance spectroscopy (EIS), we monitored the open circuit potential (OCP) of electroplated nickel (of different thicknesses) in different concentrations of electrolyte to optimise the thickness of the Ni coating. Our results show that electroless Ni coatings suffer excessive corrosion in these electrolytes. Corrosion rates of the Ni coatings for different concentrations of electrolyte have been calculated with Tafel analysis. These results suggest that, for electroplated Ni, channelling and/or open porosity is a major issue, which was confirmed by morphological analysis. These channels are an easy pathway for the electrolyte to penetrate through the Ni and corrode the Ni/Cu interface completely. We further investigated the incorporation of a special printed graphene layer on the Ni to provide corrosion protection in this corrosive electrolyte medium. We find that the incorporation of the printed graphene layer provides corrosion protection to the Ni, enhances the chemical bonding between the active materials and the current collector, and also decreases the overall internal resistance of the battery system.
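
For reference, Tafel-type corrosion analysis of the kind mentioned above commonly relies on the following standard relations (quoted here as background, not taken from the authors’ data):

```latex
% Stern-Geary relation and corrosion-rate conversion (standard forms)
i_{\mathrm{corr}} \;=\; \frac{\beta_a\,\beta_c}{2.303\,(\beta_a+\beta_c)}\cdot\frac{1}{R_p},
\qquad
\mathrm{CR} \;=\; \frac{K\, i_{\mathrm{corr}}\, EW}{\rho}
```

where $\beta_a$ and $\beta_c$ are the anodic and cathodic Tafel slopes, $R_p$ the polarization resistance, $EW$ the equivalent weight and $\rho$ the density of the coating metal, and $K$ a units-conversion constant (for example, about $3.27\times10^{-3}$ mm·g/(µA·cm·yr) when CR is expressed in mm/yr).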

Keywords: corrosion, electrical impedance spectroscopy, flexible battery, graphene, metal current collector

Procedia PDF Downloads 105
288 Emigration Improves Life Standard of Families Left Behind: An Evidence from Rural Area of Gujrat-Pakistan

Authors: Shoaib Rasool

Abstract:

Migration trends in rural areas of Gujrat are increasing day by day among illiterate people, who consider migration a source of attraction and are drawn by the charm of the destination. It affects the life standard of the families left behind in both positive and negative ways, in the context of poverty, socio-economic status, and living standards. It also promotes material items as well as social indicators of living: housing conditions, children's schooling, health-seeking behavior, and, to some extent, the family environment. The aim of the present study is to analyze the socio-economic conditions regarding the life standard of emigrant families left behind in rural areas of Gujrat district, Pakistan. A survey design was used on 150 families selected from rural areas of Gujrat district through a purposive sampling technique. A well-structured questionnaire was administered by the researcher to explore the study objectives and for further data collection. The measurement tool was pretested on 20 families to check its workability and reliability before the actual data collection. Statistical tests were applied to draw results and conclusions. The preliminary findings of the study show that emigration has had deep socio-economic impacts on the life standards of rural families left behind in Gujrat. These families improved their life status and living standard through remittances. Emigration is one of the major sources of development of the household economy, and it also alleviates poverty at the household level as well as at the community and country levels. The rationale behind migration varies individually and geographically. Popular attractions considered in Pakistan include securing high status, improvement in health conditions, copying others, getting married to acquire nationality, using unfair means, and opting for educational visas. Emigrants not only send remittances but also return with newly acquired skills and valuable knowledge to their country of origin, because emigrants learn new methods of living and working. There are also women migrants who experience downward social mobility by engaging in jobs that are beneath their educational qualifications.

Keywords: emigration, life standard, families, left behind, rural area, Gujrat

Procedia PDF Downloads 420
287 Linking Information Systems Capabilities for Service Quality: The Role of Customer Connection and Environmental Dynamism

Authors: Teng Teng, Christos Tsinopoulos

Abstract:

The purpose of this research is to explore the link between IS capabilities, customer connection, and quality performance in the service context, with an investigation of the impact of the firm’s stable and dynamic environments. The application of Information Systems (IS) has had a significant effect on contemporary service operations. Firms invest in IS with the presumption that they will facilitate operations processes so that their performance will improve. Yet, IS resources by themselves are not sufficiently 'unique', and thus it is more useful and theoretically relevant to focus on the processes they affect. One such organisational process, which has attracted a lot of research attention from supply chain management scholars, is the integration of customer connection, where IS-enabled customer connection enhances communication and contact processes, and with such integration of customer resources comes greater success for the firm in its ability to develop a good understanding of customer needs and to set accurate customer expectations. Nevertheless, prior studies on IS capabilities have focused on either one specific type of technology or operationalised it as a highly aggregated concept. Moreover, although conceptual frameworks have established that customer integration is valuable in service provision, there is much to learn about the practices of integrating customer resources. In this research, IS capabilities have been broken down into three dimensions based on the framework of Wade and Hulland: IT for supply chain activities (ITSCA), flexible IT infrastructure (ITINF), and IT operations shared knowledge (ITOSK); the focus is on their impact on the operational performance of service firms. With this background, this paper addresses the following questions: -How do IS capabilities affect the integration of customer connection and service quality? -What is the relationship between environmental dynamism and the relationship of customer connection and service quality? A survey of 156 service establishments was conducted, and the data were analysed to determine the role of customer connection in mediating the effects of IS capabilities on firms’ service quality. Confirmatory factor analysis was used to check convergent validity, and there is a good fit for the structural model. The moderating effect of environmental dynamism on the relationship between customer connection and service quality is analysed. Results show that ITSCA, ITINF, and ITOSK have a positive influence on the degree of integration of customer connection. In addition, customer connection is positively related to service quality; this relationship is further emphasised when firms work in a dynamic environment. This research takes a step towards quelling concerns about the business value of IS, contributing to the development and validation of the measurement of IS capabilities in the service operations context. Additionally, it adds to the emerging body of literature linking customer connection to the operational performance of service firms. Managers of service firms should consider the strength of the mediating role of customer connection when investing in IT-related technologies and policies. In particular, service firms developing IS capabilities should simultaneously implement processes that encourage supply chain integration.
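
As a minimal sketch of one way to test the moderating effect described above (a simple interaction-term regression, not the authors’ structural equation model; variable names are hypothetical):

```python
# Minimal sketch of a moderation test via an interaction term (not the authors'
# structural model). Variable names and the data file are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey.csv")  # hypothetical: one row per service establishment

# Service quality regressed on customer connection, environmental dynamism,
# and their interaction; a significant interaction indicates moderation.
model = smf.ols(
    "service_quality ~ customer_connection * environmental_dynamism",
    data=df,
).fit()
print(model.summary())
```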

Keywords: customer connection, environmental dynamism, information systems capabilities, service quality, service supply chain

Procedia PDF Downloads 120
286 Inverterless Grid Compatible Micro Turbine Generator

Authors: S. Ozeri, D. Shmilovitz

Abstract:

Micro-Turbine Generators (MTG) are small power plants that consist of a high-speed gas turbine driving an electrical generator. MTGs may be fueled by either natural gas or kerosene and may also use sustainable and recycled green fuels such as biomass, landfill, or digester gas. The typical ratings of MTGs range from 20 kW up to 200 kW. The primary use of MTGs is as backup for sensitive load sites such as hospitals, and they are also considered a feasible power source for Distributed Generation (DG), providing on-site generation in proximity to remote loads. MTGs have the compressor, the turbine, and the electrical generator mounted on a single shaft. For this reason, the electrical energy is generated at high frequency and is incompatible with the power grid. Therefore, MTGs must also contain a power conditioning unit to generate an AC voltage at the grid frequency. Presently, this power conditioning unit consists of a rectifier followed by a DC/AC inverter, both rated at the MTG's full power. The losses of the power conditioning unit account for some 3-5%. Moreover, the full-power processing stage is a bulky and costly piece of equipment that also lowers the overall system reliability. In this study, we propose a new type of power conditioning stage in which only a small fraction of the power is processed. A low-power converter is used only to program the rotor current (i.e., the excitation current, which is substantially lower). Thus, the MTG's output voltage is shaped to the desired amplitude and frequency by proper programming of the excitation current. The control is realized by causing the rotor current to track the electrical frequency (which is related to the shaft frequency) with a difference that is exactly equal to the line frequency. Since the phasor of the rotation speed and the phasor of the rotor magnetic field are multiplied, the spectrum of the MTG generator voltage contains the sum and the difference components. The desired difference component is at the line frequency (50/60 Hz), whereas the unwanted sum component is at about twice the electrical frequency of the stator. The unwanted high-frequency component can be filtered out by a low-pass filter, leaving only the low-frequency output. This approach allows elimination of the large power conditioning unit incorporated in conventional MTGs. Instead, a much smaller and cheaper fractional-power stage can be used. The proposed technology is also applicable to other high-rotation-speed generator sets such as aircraft power units.
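
The frequency-mixing principle described above can be illustrated numerically. The toy sketch below (assumed frequencies, not the authors’ controller) multiplies a rotor excitation at the shaft frequency minus the line frequency with the shaft-frequency term and shows that the product contains only the difference (line-frequency) component and a high-frequency sum component:

```python
# Toy numerical sketch of the mixing principle described above: multiplying a
# rotor excitation at (f_shaft - f_line) with the shaft rotation at f_shaft
# yields components at the difference (f_line) and the sum (2*f_shaft - f_line).
# All frequencies are assumed for illustration only.
import numpy as np

f_shaft = 1000.0            # electrical frequency tied to shaft speed (Hz)
f_line = 50.0               # desired grid frequency (Hz)
t = np.linspace(0, 0.2, 200_000)

rotor = np.cos(2 * np.pi * (f_shaft - f_line) * t)   # programmed excitation
stator = np.cos(2 * np.pi * f_shaft * t)             # rotation-induced term
mixed = rotor * stator  # = 0.5*cos(2*pi*f_line*t) + 0.5*cos(2*pi*(2*f_shaft - f_line)*t)

spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
peaks = freqs[spectrum > 0.25 * spectrum.max()]
print("Dominant components (Hz):", np.round(peaks, 1))
# Expected: ~50 Hz (kept after low-pass filtering) and ~1950 Hz (filtered out)
```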

Keywords: gas turbine, inverter, power multiplier, distributed generation

Procedia PDF Downloads 212
285 Development of a Data-Driven Method for Diagnosing the State of Health of Battery Cells, Based on the Use of an Electrochemical Aging Model, with a View to Their Use in Second Life

Authors: Desplanches Maxime

Abstract:

Accurate estimation of the remaining useful life of lithium-ion batteries for electronic devices is crucial. Data-driven methodologies encounter challenges related to data volume and acquisition protocols, particularly in capturing a comprehensive range of aging indicators. To address these limitations, we propose a hybrid approach that integrates an electrochemical model with state-of-the-art data analysis techniques, yielding a comprehensive database. Our methodology involves infusing an aging phenomenon into a Newman model, leading to the creation of an extensive database capturing various aging states based on non-destructive parameters. This database serves as a robust foundation for subsequent analysis. Leveraging advanced data analysis techniques, notably principal component analysis and t-Distributed Stochastic Neighbor Embedding, we extract pivotal information from the data. This information is harnessed to construct a regression function using either random forest or support vector machine algorithms. The resulting predictor demonstrates a 5% error margin in estimating remaining battery life, providing actionable insights for optimizing usage. Furthermore, the database was built from the Newman model calibrated for aging and performance using data from a European project called Teesmat. The model was then initialized numerous times with different aging values, for instance, with varying thicknesses of SEI (Solid Electrolyte Interphase). This comprehensive approach ensures a thorough exploration of battery aging dynamics, enhancing the accuracy and reliability of our predictive model. Of particular importance is our reliance on the database generated through the integration of the electrochemical model. This database serves as a crucial asset in advancing our understanding of aging states. Beyond its capability for precise remaining life predictions, this database-driven approach offers valuable insights for optimizing battery usage and adapting the predictor to various scenarios. This underscores the practical significance of our method in facilitating better decision-making regarding lithium-ion battery management.
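
As a minimal sketch of the regression step described above (dimensionality reduction followed by a random-forest predictor of remaining useful life; the data here are synthetic stand-ins, not the model-generated database):

```python
# Minimal sketch: dimensionality reduction + random-forest regression of
# remaining useful life. Features and targets are synthetic stand-ins; the
# authors' database comes from an aged Newman model.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# X: non-destructive aging indicators per simulated cell; y: remaining life.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 30))
y = X[:, :3].sum(axis=1) + 0.1 * rng.normal(size=2000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = make_pipeline(PCA(n_components=10),
                      RandomForestRegressor(n_estimators=200, random_state=0))
model.fit(X_train, y_train)
print("R^2 on held-out cells:", round(model.score(X_test, y_test), 3))
```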

Keywords: Li-ion battery, aging, diagnostics, data analysis, prediction, machine learning, electrochemical model, regression

Procedia PDF Downloads 43
284 Results of Three-Year Operation of 220kV Pilot Superconducting Fault Current Limiter in Moscow Power Grid

Authors: M. Moyzykh, I. Klichuk, L. Sabirov, D. Kolomentseva, E. Magommedov

Abstract:

Modern city electrical grids are forced to increase their density due to the increasing number of customers and the requirements for reliability and resiliency. However, progress in this direction is often limited by the capabilities of existing network equipment. New energy sources or grid connections increase the level of short-circuit currents in the adjacent network, which can exceed the maximum ratings of equipment: the breaking capacity of circuit breakers and the thermal and dynamic current withstand qualities of disconnectors, cables, and transformers. A superconducting fault current limiter (SFCL) is a modern solution designed to deal with the increasing fault current levels in power grids. The key feature of this device is its instant (less than 2 ms) limitation of the current level due to the nature of the superconductor. In 2019, Moscow utilities installed a SuperOx SFCL in the city power grid to test the capabilities of this novel technology. It became the first SFCL in the Russian energy system and is currently the most powerful SFCL in the world. Modern SFCLs use second-generation high-temperature superconductor (2G HTS). Despite its name, HTS still requires the low temperatures of liquid nitrogen for operation. As a result, the Moscow SFCL is built with a cryogenic system to cool the superconductor. The cryogenic system consists of three cryostats (one per phase) that contain the superconducting parts and are filled with liquid nitrogen, three cryocoolers, one water chiller, three cryopumps, and pressure builders. All these components are controlled by an automatic control system. The SFCL has been operating continuously on the city grid for over three years. During that period of operation, numerous faults occurred, including cryocooler failure, chiller failure, pump failure, and others (such as a cryogenic system power outage). All these faults were eliminated without shutting down the SFCL, thanks to the specially designed cryogenic system backups and the quick responses of the grid operator and the SuperOx crew. The paper will describe in detail the results of SFCL operation and cryogenic system maintenance, and what measures were taken to solve and prevent similar faults in the future.

Keywords: superconductivity, current limiter, SFCL, HTS, utilities, cryogenics

Procedia PDF Downloads 58
283 The Properties of Risk-based Approaches to Asset Allocation Using Combined Metrics of Portfolio Volatility and Kurtosis: Theoretical and Empirical Analysis

Authors: Maria Debora Braga, Luigi Riso, Maria Grazia Zoia

Abstract:

Risk-based approaches to asset allocation are portfolio construction methods that do not rely on the input of expected returns for the asset classes in the investment universe and only use risk information. They include the Minimum Variance strategy (MV strategy), the traditional (volatility-based) Risk Parity strategy (SRP strategy), the Most Diversified Portfolio strategy (MDP strategy) and, for many, the Equally Weighted strategy (EW strategy). All the mentioned approaches were based on portfolio volatility as the reference risk measure, but in 2023 the Kurtosis-based Risk Parity strategy (KRP strategy) and the Minimum Kurtosis strategy (MK strategy) were introduced. Understandably, they used the fourth root of the portfolio fourth moment as a proxy for portfolio kurtosis in order to work with a homogeneous function of degree one. This paper contributes mainly theoretically and methodologically to the framework of risk-based asset allocation approaches with two steps forward. First, a new and more flexible objective function considering a linear combination (with positive coefficients that sum to one) of portfolio volatility and portfolio kurtosis is used to alternatively serve a risk minimization goal or a homogeneous risk distribution goal. Hence, the new basic idea consists in extending the achievement of typical risk-based approaches' goals to a combined risk measure. To give the rationale behind operating with such a risk measure, it is worth remembering that volatility and kurtosis are expressions of uncertainty, to be read as dispersion of returns around the mean, and that both preserve adherence to a symmetric framework and consideration of the entire returns distribution, but also that they differ from each other in that the former captures the "normal" / "ordinary" dispersion of returns, while the latter is able to catch extreme dispersion. Therefore, a combined risk metric built from two individual metrics that focus on the same phenomenon but are differently sensitive to its intensity allows the asset manager, by varying the "relevance coefficient" associated with the individual metrics in the objective function, to express a wide set of plausible investment goals for the portfolio construction process and to serve investors differently concerned with tail risk and traditional risk. Since this is the first study that implements risk-based approaches using a combined risk measure, it becomes of fundamental importance to investigate the portfolio effects triggered by this innovation. The paper also offers a second contribution. Until the recent advent of the MK strategy and the KRP strategy, efforts to highlight interesting properties of risk-based approaches were inevitably directed towards the traditional MV strategy and SRP strategy. Previous literature established an increasing order in terms of portfolio volatility, starting from the MV strategy, through the SRP strategy, arriving at the EW strategy, and provided the mathematical proof of the "equalization effect" concerning marginal risks when the MV strategy is considered, and concerning risk contributions when the SRP strategy is considered. Regarding the validity of similar conclusions when referring to the MK strategy and the KRP strategy, a theoretical demonstration is still pending. This paper fills this gap.
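
A minimal numerical sketch of the combined objective described above (illustrative only; the return data and the relevance coefficient are assumed) minimizes a convex combination of portfolio volatility and the fourth root of the portfolio fourth moment over long-only weights:

```python
# Minimal sketch of the combined risk objective described above:
# lam * portfolio volatility + (1 - lam) * (portfolio fourth moment)^(1/4),
# minimized over long-only weights. Return data and lam are assumed.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
returns = rng.standard_t(df=5, size=(1500, 4)) / 100   # synthetic asset returns
lam = 0.5                                               # "relevance coefficient"

def combined_risk(w):
    port = returns @ w
    dev = port - port.mean()
    vol = np.sqrt(np.mean(dev ** 2))
    kurt_proxy = np.mean(dev ** 4) ** 0.25              # fourth root of 4th moment
    return lam * vol + (1 - lam) * kurt_proxy

n = returns.shape[1]
res = minimize(combined_risk, x0=np.full(n, 1 / n),
               bounds=[(0, 1)] * n,
               constraints={"type": "eq", "fun": lambda w: w.sum() - 1})
print("Minimum combined-risk weights:", np.round(res.x, 3))
```

Varying `lam` between 1 (pure volatility) and 0 (pure kurtosis proxy) traces the range of investment goals the combined measure can express.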

Keywords: risk parity, portfolio kurtosis, risk diversification, asset allocation

Procedia PDF Downloads 42
282 Improving Literacy Level Through Digital Books for Deaf and Hard of Hearing Students

Authors: Majed A. Alsalem

Abstract:

In our contemporary world, literacy is an essential skill that enables students to manage efficiently the many assignments they receive, assignments that require understanding and knowledge of the world around them. In addition, literacy enhances students’ participation in society, improving their ability to learn about the world and interact with others and facilitating the exchange of ideas and the sharing of knowledge. Literacy therefore needs to be studied and understood in its full range of contexts; it should be seen as a set of social and cultural practices with historical, political, and economic implications. This study aims to rebuild and reorganize the instructional designs that have been used for deaf and hard-of-hearing (DHH) students in order to improve their literacy level. The most critical part of this process is the teachers; therefore, teachers are the central focus of this study. Teachers’ main job is to increase students’ performance by fostering strategies through collaborative teamwork, higher-order thinking, and effective use of new information technologies. Teachers, as primary leaders in the learning process, should be aware of new strategies, approaches, methods, and frameworks of teaching in order to apply them in their instruction. Literacy, in a wider view, means the acquisition of adequate and relevant reading skills that enable progression in one’s career and lifestyle while keeping up with current and emerging innovations and trends. Moreover, the nature of literacy is changing rapidly. The notion of new literacy has changed the traditional meaning of literacy, the ability to read and write; new literacy refers to the ability to effectively and critically navigate, evaluate, and create information using a range of digital technologies. The term has received a lot of attention in the education field over the last few years. New literacy provides multiple ways of engagement, especially for those with disabilities and other diverse learning needs. For example, using online tools in the classroom gives students with disabilities new ways to engage with the content, take in information, and express their understanding of it. This study will provide teachers with high-quality training sessions to meet the needs of DHH students and thereby increase their literacy levels. It will build a platform between regular instructional designs and digital materials that students can interact with. The intervention applied in this study will train teachers of DHH students to base their instructional designs on the Technology Acceptance Model (TAM). Based on the power analysis conducted for this study, 98 teachers need to be included. Teachers will be chosen randomly to increase internal and external validity, to provide a representative sample of the population the study aims to measure, and to provide a basis for further studies. This study is still in progress, and the initial results are promising, showing how students have engaged with digital books.

Keywords: deaf and hard of hearing, digital books, literacy, technology

Procedia PDF Downloads 465
281 Evaluation of a Commercial Back-Analysis Package in the Condition Assessment of Railways

Authors: Shadi Fathi, Moura Mehravar, Mujib Rahman

Abstract:

Over the years, increased demands on railways, the emergence of high-speed trains and heavy axle loads, and the ageing and deterioration of existing tracks have been imposing costly maintenance actions on the railway sector. The need to develop a fast and cost-efficient non-destructive assessment method for the structural evaluation of railway tracks is therefore critically important. The layer modulus is the main parameter used in the structural design and evaluation of the railway track substructure (foundation). Among many recently developed NDTs, the Falling Weight Deflectometer (FWD) test, widely used in pavement evaluation, has shown promising results for railway track substructure monitoring. The surface deflection data collected by the FWD are used to estimate the moduli of the substructure layers through back-analysis. Although different commercially available back-analysis programs are used for pavement applications, only a limited number of research-based techniques have so far been developed for railway track evaluation. In this paper, the suitability, accuracy, and reliability of the BAKFAA software are investigated. The main rationale for selecting BAKFAA is that it has a relatively straightforward user interface, is freely available, and is widely used in highway and airport pavement evaluation. As part of the study, a finite element (FE) model of a railway track section near Leominster station, Herefordshire, UK, subjected to the FWD test was developed and validated against available field data. A virtual experimental database (including 218 sets of FWD testing data) was then generated using the FE model and employed as the measured database for the BAKFAA software. This database was generated by varying the moduli of each track substructure layer over a predefined range. The BAKFAA predictions were compared against cone penetration test (CPT) data (available from the literature; conducted near Leominster station on the same section where the FWD testing was performed). The results reveal that BAKFAA overestimates the moduli of each substructure layer. To align BAKFAA with the CPT data, this study introduces a correlation model that makes BAKFAA applicable to railway applications.
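
The sketch below illustrates the generic back-analysis idea only: layer moduli are iteratively adjusted until a forward model reproduces the measured FWD deflection bowl. The linear "sensitivity" forward model and all numbers are placeholders standing in for BAKFAA's layered-elastic computation; this is not the BAKFAA algorithm itself.

```python
# Conceptual sketch of deflection-basin back-analysis (not BAKFAA itself):
# layer moduli are adjusted until a forward model reproduces the measured FWD
# deflection bowl. The linear "forward model" here is a stand-in for a proper
# layered-elastic or FE computation.
import numpy as np
from scipy.optimize import least_squares

# Hypothetical sensitivity of each geophone deflection (microns) to the
# compliance (1/E) of three substructure layers.
SENS = np.array([[8.0, 5.0, 3.0],
                 [5.0, 4.0, 2.5],
                 [3.0, 3.0, 2.0],
                 [1.5, 2.0, 1.8]]) * 1e3

def forward_deflections(moduli_mpa):
    return SENS @ (1.0 / np.asarray(moduli_mpa))

measured = forward_deflections([120.0, 80.0, 45.0])   # synthetic "field" bowl

def residuals(log_e):
    # Work in log-moduli so the optimizer keeps moduli positive.
    return forward_deflections(np.exp(log_e)) - measured

fit = least_squares(residuals, x0=np.log([200.0, 200.0, 200.0]))
print("back-calculated layer moduli (MPa):", np.round(np.exp(fit.x), 1))
```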

Keywords: back-analysis, BAKFAA, railway track substructure, falling weight deflectometer (FWD), cone penetration test (CPT)

Procedia PDF Downloads 108
280 A Comparative Study of Positive and Negative Electronic Word-of-Mouth on the SERVQUAL Scale: The Case of an Armed Forces General Hospital in Taiwan

Authors: Po-Chun Lee, Li-Lin Liang, Ching-Yuan Huang

Abstract:

Purpose: Research on electronic word-of-mouth (eWOM) and online reviews has been widely used in service industry management research in recent years. The SERVQUAL scale is the most commonly used method to measure service quality. The purpose of this research is therefore to combine eWOM and online reviews with the SERVQUAL scale in a comparative study of the positive and negative electronic word-of-mouth reviews of an armed forces general hospital in Taiwan. Data sources: This research obtained online word-of-mouth comment data on Google Maps for a military hospital in Taiwan over the past ten years through Internet data-mining technology. Research methods: This study uses semantic content analysis to classify word-of-mouth reviews according to the revised PZB SERVQUAL scale and then carries out statistical analysis. Results of data synthesis: The results disclosed that the negative reviews of this military hospital in Taiwan have been increasing year by year, and that under the COVID-19 epidemic positive word-of-mouth has shown a downward trend. Among the five SERVQUAL determinants of PZB, positive word-of-mouth reviews performed best on “Assurance,” with a positive review rate of 58.89%, followed by “Responsiveness” at 43.33%. In negative word-of-mouth reviews, “Assurance” performed worst, accounting for 70.99% of negative reviews, followed by “Responsiveness” at 29.01%. Conclusions: The important conclusions of this study are that the total number of electronic word-of-mouth reviews of the military hospital has shown positive growth in recent years, that positive word-of-mouth has declined since the COVID-19 outbreak, and that negative word-of-mouth has grown substantially. Regardless of whether comments are positive or negative, what patients care most about is “Assurance,” the professional attitude and skills of the medical staff, which most urgently needs to be strengthened. In addition, good “Reliability” helps build positive word-of-mouth, whereas poor “Responsiveness” can easily lead to the spread of negative word-of-mouth. This study suggests that the hospital should focus its service-quality management and audits on these dimensions.
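
A minimal sketch of the tallying step that follows the semantic coding is given below; the coded records are invented for illustration and do not reproduce the study's data.

```python
# Illustrative tally (not the study's data): each review has already been coded
# with a sentiment and a SERVQUAL dimension; compute the share of each
# dimension within positive and within negative reviews.
from collections import Counter

coded_reviews = [                      # hypothetical coded records
    ("positive", "Assurance"), ("positive", "Responsiveness"),
    ("positive", "Assurance"), ("negative", "Assurance"),
    ("negative", "Responsiveness"), ("negative", "Assurance"),
]

for sentiment in ("positive", "negative"):
    counts = Counter(dim for s, dim in coded_reviews if s == sentiment)
    total = sum(counts.values())
    shares = {dim: f"{100 * n / total:.1f}%" for dim, n in counts.items()}
    print(sentiment, shares)
```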

Keywords: quality of medical service, electronic word-of-mouth, armed forces general hospital

Procedia PDF Downloads 153
279 Study of Elastic-Plastic Fatigue Crack in Functionally Graded Materials

Authors: Somnath Bhattacharya, Kamal Sharma, Vaibhav Sonkar

Abstract:

Composite materials emerged in the middle of the 20th century as a promising class of engineering materials, providing new prospects for modern technology. Recently, a new class of composite materials known as functionally graded materials (FGMs) has drawn considerable attention from the scientific community. In general, FGMs are defined as composite materials in which the composition or microstructure, or both, are locally varied so that a certain variation of the local material properties is achieved. This gradual change in composition and microstructure produces a corresponding gradient in properties and performance. FGMs are synthesized in such a way that they possess continuous spatial variations in the volume fractions of their constituents to yield a predetermined composition. These variations lead to the formation of a non-homogeneous macrostructure with continuously varying mechanical and/or thermal properties in one or more directions. Lightweight functionally graded composites with high strength-to-weight and stiffness-to-weight ratios have been used successfully in the aircraft industry and in other engineering applications, such as the electronics industry and thermal barrier coatings. In the present work, elastic-plastic crack growth problems (using the Ramberg-Osgood model) in an FGM plate under cyclic load have been explored by the extended finite element method; the validity of linear elastic fracture mechanics theory is limited to brittle materials, which motivates the elastic-plastic treatment. Both edge and centre crack problems have been solved, additionally including holes, inclusions, and minor cracks under plane stress conditions, with both soft and hard inclusions implemented. A rectangular plate of functionally graded material, 100 mm long and 200 mm high, with 100% copper-nickel alloy on the left side and 100% ceramic (alumina) on the right side, is considered, and an exponential gradation of properties is imparted in the x-direction. A uniform traction of 100 MPa is applied to the top edge of the rectangular domain along the y-direction. In some problems, the domain contains a major crack along with minor cracks and/or holes and/or inclusions. The major crack is located at the centre of the left edge or the centre of the domain. The discontinuities, such as minor cracks, holes, and inclusions, are added either singly or in combination with each other. On the basis of this study, it is found that minor cracks have the least effect on the failure crack length of the domain, soft inclusions have a moderate effect, and holes have the greatest effect. It is also observed that, in each case, crack growth before failure is greater when hard inclusions are present in place of soft inclusions.
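
The two material ingredients named above, the exponential gradation of properties across the plate and the Ramberg-Osgood strain law, can be sketched as follows; the moduli, yield stress, and hardening constants are placeholder values, not the calibrated properties used in the paper.

```python
# Sketch of the material model described above: an exponential gradation of
# Young's modulus from the Cu-Ni side (x = 0) to the alumina side (x = W), and
# a Ramberg-Osgood total-strain law. All constants are illustrative placeholders.
import numpy as np

W = 100.0                        # plate width, mm
E_left, E_right = 150e3, 380e3   # MPa: placeholder Cu-Ni alloy and alumina moduli

def youngs_modulus(x):
    # Exponential gradation E(x) = E_left * exp(beta * x), matching E_right at x = W.
    beta = np.log(E_right / E_left) / W
    return E_left * np.exp(beta * x)

def ramberg_osgood_strain(sigma, E, sigma_y=250.0, alpha=0.01, n=5.0):
    # Total strain = elastic part + Ramberg-Osgood plastic part.
    return sigma / E + alpha * (sigma_y / E) * (sigma / sigma_y) ** n

for x in (0.0, 50.0, 100.0):
    E = youngs_modulus(x)
    eps = ramberg_osgood_strain(100.0, E)
    print(f"x = {x:5.1f} mm   E = {E / 1e3:6.1f} GPa   strain at 100 MPa = {eps:.5f}")
```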

Keywords: elastic-plastic, fatigue crack, functionally graded materials, extended finite element method (XFEM)

Procedia PDF Downloads 366
278 An Evaluation and Guidance for mHealth Apps

Authors: Tareq Aljaber

Abstract:

The number of mobile health apps is growing quickly; it nearly doubled between 2015 and 2016. However, there is a lack of an effective evaluation framework to verify the usability and reliability of mobile health education applications, a framework that would save time and effort for the numerous user groups. This abstract describes a framework for evaluating mobile applications, specifically mobile health education applications, along with a selection guidance tool to assist different users in choosing the most suitable mobile health education apps. The framework is intended to meet the requirements and needs of the different stakeholder groups and to enhance the development of mobile health education applications with software engineering approaches by producing new and more effective techniques to evaluate such software. This abstract highlights the significance and consequences of mobile health education apps before turning to what is required to create an effective evaluation framework for them. The framework is explained together with its evaluation metrics: an efficient hybrid of selected heuristic evaluation (HE) and usability evaluation (UE) metrics that enables determination of the usefulness and usability of health education mobile apps. Moreover, the qualitative and quantitative outcomes of the framework were examined using the Epocrates mobile app in addition to some other mobile apps. The proposed framework, An Evaluation Framework for Mobile Health Education Apps, consists of a hybrid of five metrics selected from a larger set in usability evaluation and heuristic evaluation, identified on the basis of 15 unstructured interviews with software developers (SD), health professionals (HP), and patients (P). These five metrics correspond to explicit facets of usability recognised through a requirements analysis of typical stakeholders of mobile health apps. The five hybrid metrics were distributed across 24 specific questionnaire questions, which are available on request from the first author. The questionnaire was sent to 81 participants distributed across three sets of stakeholders, software developers (SD), health professionals (HP), and patients/general users (P/GU), for the purpose of ranking three sets of mobile health education applications. Finally, the questionnaire data helped us achieve our aims: profiling the different stakeholders, profiling the different mobile health education application packages, ranking the different mobile health education applications, and guiding the construction of the selection guidance tool that complements the Evaluation Framework for Mobile Health Education Apps.
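
A minimal sketch of how questionnaire ratings over the five hybrid metrics might be aggregated to rank apps is shown below; the metric names, the apps other than Epocrates, and all scores are hypothetical.

```python
# Hypothetical aggregation (not the authors' instrument): average each app's
# questionnaire ratings over a few illustrative HE/UE metrics and rank the apps.
import statistics

ratings = {   # app -> (metric, score) pairs pooled over all stakeholder groups
    "Epocrates": [("learnability", 4), ("error prevention", 5), ("content clarity", 4)],
    "AppB":      [("learnability", 3), ("error prevention", 3), ("content clarity", 4)],
    "AppC":      [("learnability", 2), ("error prevention", 4), ("content clarity", 3)],
}

overall = {app: statistics.mean(score for _, score in items)
           for app, items in ratings.items()}
for app, score in sorted(overall.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{app:10s} mean rating {score:.2f}")
```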

Keywords: evaluation framework, heuristic evaluation, usability evaluation, metrics

Procedia PDF Downloads 377
277 Solar Photovoltaic Foundation Design

Authors: Daniel John Avutia

Abstract:

Solar photovoltaic (PV) development relies on the sunlight hours available in a particular region to generate electricity. A potential area is assessed through its inherent solar radiation intensity, measured in watts per square meter. Solar energy development involves the feasibility, design, construction, operation, and maintenance of the relevant infrastructure; this paper focuses on the design and construction aspects. Africa and Australasia have the longest sunlight hours per day and the highest solar radiation per square meter, about 7 sunlight hours/day and 5 kWh/m²/day, respectively. Solar PV support configurations consist of fixed-tilt and tracker system structures; the latter was introduced to improve the power generation efficiency of the former through its sun-tracking movement. The installation of solar PV foundations involves rammed piles, drilled/grouted piles, and shallow raft reinforced concrete structures. This paper presents a case study of two solar PV projects in Africa and Australia, discussing the foundation design considerations and the associated construction cost implications of the selected foundation systems. Solar PV foundations represent up to one fifth of the civil works costs in a project. Therefore, the selection of the most structurally sound and feasible foundation for the prevailing ground conditions is critical for solar PV development. The design wind speed, measured by anemometers, governs the pile embedment depth for rammed and drilled/grouted foundation systems. The lateral pile deflection and vertical pull-out resistance of piles increase proportionally with the embedment depth for uniform pile geometry and geology. The pile driving rate may also be used to anticipate the lateral resistance and skin friction restraining the pile. Rammed pile foundations are the most structurally suitable owing to pile skin friction and ease of installation in various geological conditions. The competitiveness of solar PV projects within the renewable energy mix is governed by lowering capital expenditure, improving power generation efficiency, and advances in power storage technology. Power generation reliability and efficiency are areas for further research within the renewable energy niche.
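
As a simplified illustration of the uplift check implied by the wind-governed embedment depth, the sketch below estimates pull-out capacity from unit skin friction acting over the embedded shaft area; the pile diameter, skin friction, and uplift demand are assumed values, not project data.

```python
# Simplified pull-out check for a rammed/driven pile foundation (illustrative
# values only): capacity = unit skin friction * shaft perimeter * embedment.
import math

def pile_pullout_capacity(diameter_m, embedment_m, unit_skin_friction_kpa):
    shaft_area = math.pi * diameter_m * embedment_m   # embedded shaft surface, m^2
    return unit_skin_friction_kpa * shaft_area        # capacity in kN

uplift_demand_kn = 25.0   # hypothetical wind-induced uplift per pile
for depth in (1.0, 1.5, 2.0, 2.5):
    capacity = pile_pullout_capacity(0.15, depth, 40.0)
    fos = capacity / uplift_demand_kn
    print(f"embedment {depth:.1f} m   capacity {capacity:6.1f} kN   FoS {fos:4.2f}")
```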

Keywords: design, foundations, piles, solar

Procedia PDF Downloads 165
276 Validating Chronic Kidney Disease-Specific Risk Factors for Cardiovascular Events Using National Data: A Retrospective Cohort Study of the Nationwide Inpatient Sample

Authors: Fidelis E. Uwumiro, Chimaobi O. Nwevo, Favour O. Osemwota, Victory O. Okpujie, Emeka S. Obi, Omamuyovbi F. Nwoagbe, Ejiroghene Tejere, Joycelyn Adjei-Mensah, Christopher N. Ekeh, Charles T. Ogbodo

Abstract:

Several risk factors associated with cardiovascular events have been identified as specific to Chronic Kidney Disease (CKD). This study endeavors to validate these CKD-specific risk factors using up-to-date national-level data, thereby highlighting the crucial significance of confirming the validity and generalizability of findings obtained from previous studies conducted on smaller patient populations. The study utilized the nationwide inpatient sample database to identify adult hospitalizations for CKD from 2016 to 2020, employing validated ICD-10-CM/PCS codes. A comprehensive literature review was conducted to identify both traditional and CKD-specific risk factors associated with cardiovascular events. Risk factors and cardiovascular events were defined using a combination of ICD-10-CM/PCS codes and statistical commands. Only risk factors with specific ICD-10 codes and hospitalizations with complete data were included in the study. Cardiovascular events of interest included cardiac arrhythmias, sudden cardiac death, acute heart failure, and acute coronary syndromes. Univariate and multivariate regression models were employed to evaluate the association between chronic kidney disease-specific risk factors and cardiovascular events while adjusting for the impact of traditional CV risk factors such as old age, hypertension, diabetes, hypercholesterolemia, inactivity, and smoking. A total of 690,375 hospitalizations for CKD were included in the analysis. The study population was predominantly male (375,564, 54.4%) and primarily received care at urban teaching hospitals (512,258, 74.2%). The mean age of the study population was 61 years (SD 0.1), and 86.7% (598,555) had a CCI of 3 or more. At least one traditional risk factor for CV events was present in 84.1% of all hospitalizations (580,605), while 65.4% (451,505) included at least one CKD-specific risk factor for CV events. The incidence of CV events in the study was as follows: acute coronary syndromes (41,422; 6%), sudden cardiac death (13,807; 2%), heart failure (404,560; 58.6%), and cardiac arrhythmias (124,267; 18%). 91.7% (113,912) of all cardiac arrhythmias were atrial fibrillations. Significant odds of cardiovascular events on multivariate analyses included: malnutrition (aOR: 1.09; 95% CI: 1.06–1.13; p<0.001), post-dialytic hypotension (aOR: 1.34; 95% CI: 1.26–1.42; p<0.001), thrombophilia (aOR: 1.46; 95% CI: 1.29–1.65; p<0.001), sleep disorder (aOR: 1.17; 95% CI: 1.09–1.25; p<0.001), and post-renal transplant immunosuppressive therapy (aOR: 1.39; 95% CI: 1.26–1.53; p<0.001). The study validated malnutrition, post-dialytic hypotension, thrombophilia, sleep disorders, and post-renal transplant immunosuppressive therapy, highlighting their association with increased risk for cardiovascular events in CKD patients. No significant association was observed between uremic syndrome, hyperhomocysteinemia, hyperuricemia, hypertriglyceridemia, leptin levels, carnitine deficiency, anemia, and the odds of experiencing cardiovascular events.
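
Adjusted odds ratios of the kind reported above are typically obtained from multivariable logistic regression, with exponentiated coefficients and confidence intervals. The sketch below shows that pattern on synthetic data; the variables, effect sizes, and sample are illustrative and unrelated to the NIS extraction.

```python
# Illustrative multivariable logistic regression on synthetic data (not the NIS
# analysis): adjusted odds ratios are the exponentiated coefficients for each
# risk factor after adjusting for the other covariates in the model.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5000
# Binary indicators: malnutrition, thrombophilia, old age, hypertension (assumed prevalences).
X = rng.binomial(1, [0.3, 0.1, 0.4, 0.5], size=(n, 4))
logit = -1.0 + 0.10 * X[:, 0] + 0.35 * X[:, 1] + 0.50 * X[:, 2] + 0.40 * X[:, 3]
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))   # simulated CV-event outcome

model = sm.Logit(y, sm.add_constant(X)).fit(disp=False)
odds_ratios = np.exp(model.params[1:])
conf_int = np.exp(model.conf_int()[1:])
names = ["malnutrition", "thrombophilia", "old age", "hypertension"]
for name, orat, (lo, hi) in zip(names, odds_ratios, conf_int):
    print(f"{name:13s} aOR {orat:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```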

Keywords: cardiovascular events, cardiovascular risk factors in CKD, chronic kidney disease, nationwide inpatient sample

Procedia PDF Downloads 46
275 Accreditation and Quality Assurance of Nigerian Universities: The Management Imperative

Authors: F. O Anugom

Abstract:

The general functions of the university include, among other things, teaching, research, and community service. Universities are recognized as the apex of learning, accumulating and imparting knowledge and skills of all kinds to students to enable them to be productive, earn their living, and make optimum contributions to national development. This is equivalent to the production of human capital in the form of the high-level manpower needed to administer education, serve society, and manage the economy. Quality has become a matter of major importance for university education in Nigeria. Accreditation is the systematic review of educational programmes to ensure that acceptable standards of education, scholarship, and infrastructure are being maintained; it ensures that institutions maintain quality. The process is designed to determine whether or not an institution has met or exceeded the published standards for accreditation and whether it is achieving its mission and stated purposes. Ensuring quality assurance in the accreditation process rests with university management, which justifies the need for this study. This study therefore examined accreditation and quality assurance as a management imperative. Three research questions and three hypotheses guided the study. The design was a correlational survey with a population of 2,893 university administrators, from which 578 heads of department and deans of faculties were sampled. The instrument for data collection, titled the Programme Accreditation Exercise Scale, showed a high level of reliability. The research questions were answered with Pearson’s r statistics, and the t-test statistic was used to test the hypotheses. It was found, among other things, that the quality of accredited programmes depends on the level of funding of universities in Nigeria. It was also indicated that the quality of programme accreditation and the physical facilities of universities in Nigeria are strongly related, and that programme accreditation is positively related to staffing in Nigerian universities. Based on the findings of the study, the researcher recommends that academic administrators be included in the teams that ensure quality programmes in the universities, that private sector partnership be encouraged to fund programmes so as to ensure programme quality, and that independent agencies be engaged to monitor the activities of accreditation teams to avoid bias.
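
A small sketch of the statistics named above (Pearson's r for the research questions, an independent-samples t-test for the hypotheses) is given below on synthetic scores; the variables and group sizes are assumptions for illustration only.

```python
# Illustrative use of the statistics named above (Pearson r, independent t-test)
# on synthetic ratings; not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
funding = rng.normal(50, 10, 200)                          # hypothetical funding scores
accreditation_quality = 0.6 * funding + rng.normal(0, 8, 200)

r, p = stats.pearsonr(funding, accreditation_quality)
print(f"Pearson r = {r:.2f}, p = {p:.3g}")

heads = rng.normal(3.8, 0.5, 120)                          # hypothetical HoD ratings
deans = rng.normal(3.6, 0.5, 80)                           # hypothetical Dean ratings
t, p = stats.ttest_ind(heads, deans)
print(f"t = {t:.2f}, p = {p:.3g}")
```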

Keywords: accreditation, quality assurance, national universities commission, physical facilities, staffing

Procedia PDF Downloads 177
274 Designing Automated Embedded Assessment to Assess Student Learning in a 3D Educational Video Game

Authors: Mehmet Oren, Susan Pedersen, Sevket C. Cetin

Abstract:

Despite the frequently criticized disadvantages of traditional paper-and-pencil assessment, it remains the most frequently used method in our schools. Although such assessments provide acceptable measurement, they are not capable of measuring all the aspects and the richness of learning and knowledge. Many assessments used in schools also decontextualize assessment from learning; they focus on learners’ standing on a particular topic but do not capture how student learning changes over time. For these reasons, many scholars advocate using simulations and games (S&G) as assessment tools with significant potential to overcome the problems of traditionally used methods. S&G can benefit from changes in technology and provide a contextualized medium for assessment and teaching. Furthermore, S&G can serve as an instructional tool rather than a method to test student learning at a particular point in time. To investigate the potential of educational games as an assessment and teaching tool, this study presents the implementation and validation of an automated embedded assessment (AEA), which can constantly monitor student learning in the game and assess performance without interrupting learning. The experiment was conducted in an undergraduate engineering course (Digital Circuit Design) with 99 participating students over a period of five weeks in the Spring 2016 semester. The purpose of this research is to examine whether the proposed AEA is valid for assessing student learning in a 3D educational game and to present the implementation steps. To address this question, the study inspects three aspects of the AEA for validation. First, the evidence-centered design model was used to lay out the design and measurement steps of the assessment. Then, a confirmatory factor analysis was conducted to test whether the assessment measures the targeted latent constructs. Finally, the assessment scores were compared with an external measure (a validated test of student learning in digital circuit design) to evaluate the convergent validity of the assessment. The results of the confirmatory factor analysis showed that the fit of the model with three latent factors and one higher-order factor was acceptable (RMSEA = 0.00, CFI = 1, TLI = 1.013, WRMR = 0.390). All of the observed variables loaded significantly on the latent factors in the latent factor model. In the second analysis, a multiple regression analysis was used to test whether the external measure significantly predicts students’ performance in the game. The results of the regression indicated that the two predictors explained 36.3% of the variance (R² = .36, F(2,96) = 27.42, p < .001). Students’ posttest scores significantly predicted game performance (β = .60, p < .001). These statistical results show that the AEA can distinctly measure three major components of the digital circuit design course. It is hoped that this study can help researchers understand how to design an AEA and showcase an implementation by providing an example methodology for validating this type of assessment.
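
The convergent-validity step can be sketched as a regression of game scores on the external measure, reporting R², F, and standardized coefficients as above; the synthetic data, the second predictor, and the resulting effect sizes below are assumptions, not the study's results.

```python
# Illustrative convergent-validity check (synthetic data, not the study's):
# regress game-based assessment scores on an external posttest plus a second
# predictor and report R^2, F, and standardized coefficients.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 99
posttest = rng.normal(70, 10, n)                 # external validated test (hypothetical)
pretest = rng.normal(60, 10, n)                  # assumed second predictor
game_score = 0.6 * posttest + 0.1 * pretest + rng.normal(0, 8, n)

def standardize(a):
    return (a - a.mean()) / a.std(ddof=1)

X = sm.add_constant(np.column_stack([standardize(posttest), standardize(pretest)]))
fit = sm.OLS(standardize(game_score), X).fit()
print(f"R^2 = {fit.rsquared:.3f}, F = {fit.fvalue:.2f}")
print("standardized betas (posttest, pretest):", np.round(fit.params[1:], 2))
```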

Keywords: educational video games, automated embedded assessment, assessment validation, game-based assessment, assessment design

Procedia PDF Downloads 401
273 Psychological Wellbeing, Lifestyle, and Negative and Positive Affect among Adults

Authors: Rahat Zaman

Abstract:

The present study investigated psychological well-being, lifestyle, and positive and negative affect among adults. The sample comprised 221 adults drawn from all over Pakistan. Psychological well-being was measured with the Psychological Well-Being Scale developed by Ryff and Keyes (1995), lifestyle with the Health-Promoting Lifestyle Profile developed by Walker et al. (1995), and positive and negative affect with the PANAS developed by Watson, Clark, and Tellegen (1988). To check the properties of the scales, alpha reliability coefficients were calculated. To test the hypotheses of the research, correlation, independent-samples t-tests, and ANOVA were computed. It was hypothesized that there would be a positive relationship between psychological well-being, lifestyle, and positive affect. The results show that psychological well-being, lifestyle, and positive affect are positively related, supporting this hypothesis. The research also examined relationships among the study variables according to the demographics of the sample; respondents varied in their dominant affect levels with respect to their psychological well-being and lifestyles. Significant gender differences were found in life appreciation, nutrition, and negative affect. Single and married individuals differed significantly on autonomy, environmental mastery, life appreciation, nutrition, and stress management. Individuals also showed significant differences with respect to their living situation: members of joint and nuclear families differed significantly in personal growth, autonomy, health responsibility, social support, physical activity, and stress management. When divided by socioeconomic status, the sample showed significant differences in environmental mastery, personal growth, purpose in life, life appreciation, health responsibility, physical activity, stress management, and negative affect. Age-wise analysis showed significant differences in autonomy, personal growth, purpose in life, life appreciation, nutrition, and stress management. By province, significant differences were found in life appreciation, nutrition, social support, physical activity, stress management, and both positive and negative affect. Implications of the results are discussed.
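
A brief sketch of the alpha reliability check mentioned above is given below, computing Cronbach's alpha from item-level responses; the simulated item scores are illustrative only.

```python
# Cronbach's alpha for a set of scale items (synthetic responses, for
# illustration of the reliability check mentioned above).
import numpy as np

def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)      # rows = respondents, cols = items
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(4)
latent = rng.normal(0, 1, 221)                  # 221 respondents, as in the sample size
responses = np.column_stack([latent + rng.normal(0, 0.8, 221) for _ in range(6)])
print(f"alpha = {cronbach_alpha(responses):.2f}")
```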

Keywords: wellbeing, healthy lifestyle, self acceptance, positive

Procedia PDF Downloads 44