Search results for: Riener muscle model
8774 Determination of Influence Lines for Train Crossings on a Tied Arch Bridge to Optimize the Construction of the Hangers
Authors: Martin Mensinger, Marjolaine Pfaffinger, Matthias Haslbeck
Abstract:
The maintenance and expansion of the railway network represent a central task for transport planning in the future. In addition to the ultimate limit states, the aspects of resource conservation and sustainability are increasingly necessary to include in the basic engineering. Therefore, as part of the AiF research project, ‘Integrated assessment of steel and composite railway bridges in accordance with sustainability criteria’, the entire lifecycle of engineering structures is involved in planning and evaluation, offering a way to optimize the design of steel bridges. In order to reduce life cycle costs and increase the profitability of steel structures, it is particularly necessary to consider the demands on hanger connections resulting from fatigue. To allow an accurate analysis, a number of simulations were conducted as part of the research project on a finite element model of a reference bridge, which gives an indication of the internal forces in the individual structural components of a tied arch bridge, depending on the stress incurred by various types of trains. The calculations were carried out on a detailed FE model, which allows an extraordinarily accurate modeling of the stiffness of all parts of the construction, as it is made up of surface elements. The results point, on the one hand, to a large impact of the detailing on fatigue-related stress ranges and, on the other hand, depict construction-specific characteristics in the development of the stresses. Comparative calculations with varied axle-load distributions also provide information about the sensitivity of the results to the applied loading and axle distribution.
The calculated influence lines help to achieve an optimized hanger connection design with improved durability, which helps to reduce the maintenance costs of rail networks, and they provide practical guidance for detailing.
Keywords: fatigue, influence line, life cycle, tied arch bridge
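Influence lines of the kind determined in this project relate a moving unit load's position to an internal force at a fixed section, and the response to a train is the superposition of the ordinates under each axle. As a minimal illustration (a simply supported beam, not the bridge model used in the study; all dimensions are invented):

```python
def moment_influence(x, c, span):
    """Influence-line ordinate for the bending moment at section c of a
    simply supported beam of length `span`, unit load at position x."""
    if x <= c:
        return x * (span - c) / span
    return c * (span - x) / span

def moment_for_axles(axle_positions, axle_loads, c, span):
    """Superpose influence-line ordinates for a train of axle loads."""
    return sum(P * moment_influence(x, c, span)
               for x, P in zip(axle_positions, axle_loads)
               if 0.0 <= x <= span)

# Unit load directly at mid-span of a 20 m beam: ordinate = L/4 = 5.0
print(moment_influence(10.0, 10.0, 20.0))  # 5.0
# Two 100 kN axles straddling mid-span
print(moment_for_axles([5.0, 15.0], [100.0, 100.0], 10.0, 20.0))  # 500.0 kNm
```

Stepping a full axle configuration along the span and recording the resulting force history is what produces the stress ranges entering a fatigue check.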
Procedia PDF Downloads 330
8773 Investigation of Existing Guidelines for Four-Legged Angular Telecommunication Tower
Authors: Sankara Ganesh Dhoopam, Phaneendra Aduri
Abstract:
Lattice towers are lightweight structures whose design is primarily governed by the effects of wind loading. Ensuring a precise assessment of wind loads on the tower structure, antennas, and associated equipment is vital for the safety and efficiency of tower design. Earlier, no Indian standard was available for the design of telecom towers. Instead, the industry conventionally relied on the general building wind loading standard for calculating loads on tower components and on the transmission line tower design standard for designing the angular members of the towers. Subsequently, the Bureau of Indian Standards (BIS) revised these standards and the angular member design standard. While transmission line towers designed to the above standard are validated by full-scale model tests, telecom angular towers are designed using the same standard with an overload factor (factor of safety) but without full-scale tower model testing. The general construction in steel design code follows a limit state design approach and is applicable to the design of general structures involving angles and tubes, but it is not used for the angle member design of towers. Recently, in response to evolving industry needs, the BIS introduced a new standard titled “Isolated Towers, Masts, and Poles using structural steel - Code of practice” for the design of telecom towers. This study focuses on a 40 m four-legged angular tower to compare loading calculations and member designs between the old and new standards. Additionally, a comparative analysis aligning the new code provisions with international loading and design standards, with a specific focus on American standards, has been carried out.
This paper elaborates the code-based provisions used for load and member design calculations, including the influence of the "ka" area averaging factor introduced in the new wind load standard.
Keywords: telecom, angular tower, PLS tower, GSM antenna, microwave antenna, IS 875(Part-3):2015, IS 802(Part-1/sec-2):2016, IS 800:2007, IS 17740:2022, ANSI/TIA-222G, ANSI/TIA-222H
Procedia PDF Downloads 83
8772 Prediction of Wind Speed by Artificial Neural Networks for Energy Application
Authors: S. Adjiri-Bailiche, S. M. Boudia, H. Daaou, S. Hadouche, A. Benzaoui
Abstract:
In this work, the change in wind speed with altitude is calculated and described by a neural network model. Measured data, namely wind speed and direction, temperature, and humidity at 10 m, are used as input data, with the wind speed at 50 m above sea level as the target. The wind speeds predicted by the network are compared with values extrapolated to 50 m above sea level. The results show that the prediction by the method of artificial neural networks is very accurate.
Keywords: MATLAB, neural network, power law, vertical extrapolation, wind energy, wind speed
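The workflow described above, learning the 10 m-to-50 m mapping and comparing it against power-law extrapolation, can be sketched with a small one-hidden-layer network. Everything below is a hedged illustration on synthetic data; the Hellmann exponent of 0.14, the network size, and all ranges are assumptions, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the measured 10 m data (assumed, illustrative only)
n = 512
v10 = rng.uniform(2.0, 12.0, n)            # wind speed at 10 m [m/s]
temp = rng.uniform(5.0, 35.0, n)           # temperature [deg C]
hum = rng.uniform(20.0, 90.0, n)           # relative humidity [%]
X = np.column_stack([v10, temp, hum])

# Target: power-law extrapolation v50 = v10 * (50/10)**alpha, alpha assumed 0.14
y = v10 * (50.0 / 10.0) ** 0.14

# Standardize inputs and target
Xs = (X - X.mean(0)) / X.std(0)
ym, ys = y.mean(), y.std()
yt = (y - ym) / ys

# One hidden layer (tanh), trained by full-batch gradient descent
W1 = rng.normal(0.0, 0.5, (3, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(4000):
    h = np.tanh(Xs @ W1 + b1)              # hidden activations
    pred = (h @ W2 + b2).ravel()
    e = (pred - yt)[:, None] / n           # gradient of 0.5 * mean squared error
    gW2, gb2 = h.T @ e, e.sum(0)
    dh = (e @ W2.T) * (1.0 - h ** 2)       # backpropagate through tanh
    gW1, gb1 = Xs.T @ dh, dh.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

v50_pred = (np.tanh(Xs @ W1 + b1) @ W2 + b2).ravel() * ys + ym
r2 = 1.0 - np.sum((v50_pred - y) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"R^2 vs power-law target: {r2:.3f}")
```

In the paper's setting the target would be the measured 50 m speed rather than a power-law value, and the comparison against the power-law extrapolation is what establishes the network's advantage.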
Procedia PDF Downloads 693
8771 Modelling of Recovery and Application of Low-Grade Thermal Resources in the Mining and Mineral Processing Industry
Authors: S. McLean, J. A. Scott
Abstract:
This research focuses on improving sustainable operation through the recovery and reuse of waste heat in process water streams, an area in the mining industry that is often overlooked. There are significant advantages to the application of this topic, including economic and environmental benefits. The smelting process in the mining industry presents an opportunity to recover waste heat and apply it to alternative uses, thereby enhancing the overall process. This applied research has been conducted at the Sudbury Integrated Nickel Operations smelter site, in particular on the water cooling towers. The aim was to determine and optimize methods for appropriate recovery and subsequent upgrading of thermally low-grade heat lost from the water cooling towers in a manner that makes it useful for repurposing in applications, such as within an acid plant. This would be valuable to mining companies as it would be an opportunity to reduce the cost of the process, as well as decrease environmental impact and primary fuel usage. The waste heat from the cooling towers needs to be upgraded before it can be beneficially applied, as lower temperatures result in a decrease in the number of potential applications. Temperature and flow rate data were collected from the water cooling towers at an acid plant over two years. The research includes process control strategies and the development of a model capable of determining if the proposed heat recovery technique is economically viable, as well as assessing any environmental impact from the reduction in net energy consumption by the process. Therefore, comprehensive cost and impact analyses are carried out to determine the best area of application for the recovered waste heat. This method will allow engineers to easily identify the value of thermal resources available to them and determine if a full feasibility study should be carried out.
The rapid scoping model developed will be applicable to any site that generates large amounts of waste heat. Results show that heat pumps are an economically viable solution for this application, allowing for reduced cost and CO₂ emissions.
Keywords: environment, heat recovery, mining engineering, sustainability
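The economic-viability comparison behind such a scoping model can be reduced to a unit-cost calculation: heat delivered by a heat pump costs the electricity price divided by the coefficient of performance (COP), against the fuel price divided by the boiler efficiency. The prices and COP below are illustrative assumptions only, not values from the Sudbury study:

```python
def heat_cost_per_mwh(energy_price, efficiency):
    """Cost of one MWh of delivered heat for a given conversion efficiency
    (COP for a heat pump, combustion efficiency for a boiler)."""
    return energy_price / efficiency

# Assumed prices: electricity 90 $/MWh, natural gas 40 $/MWh (illustrative)
hp = heat_cost_per_mwh(90.0, 3.5)       # heat pump upgrading waste heat, COP 3.5
boiler = heat_cost_per_mwh(40.0, 0.85)  # conventional gas boiler

print(f"heat pump: {hp:.2f} $/MWh, boiler: {boiler:.2f} $/MWh")
```

A full scoping model would add capital cost and payback period on top of this operating-cost comparison, but the sign of the difference above is what decides whether the feasibility study proceeds.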
Procedia PDF Downloads 111
8770 Creating Energy Sustainability in an Enterprise
Authors: John Lamb, Robert Epstein, Vasundhara L. Bhupathi, Sanjeev Kumar Marimekala
Abstract:
As we enter the new era of Artificial Intelligence (AI) and Cloud Computing, we rely mostly on the machine learning and natural language processing capabilities of AI and on energy-efficient hardware and software devices in almost every industry sector. In these industry sectors, much emphasis is on developing new and innovative methods for producing and conserving energy and for slowing the depletion of natural resources. The core pillars of sustainability are economic, environmental, and social, which are also informally referred to as the 3 P's (People, Planet and Profits). The 3 P's play a vital role in creating a core sustainability model in the enterprise. Natural resources are continually being depleted, so there is more focus on and growing demand for renewable energy. With this growing demand, there is also a growing concern in many industries about how to reduce carbon emissions and conserve natural resources while adopting sustainability in corporate business models and policies. In our paper, we discuss the driving forces, such as climate change, natural disasters, pandemics, disruptive technologies, corporate policies, scaled business models, and emerging social media and AI platforms, that influence the three main pillars of sustainability (3 P's). Through this paper, we aim to bring an overall perspective on enterprise strategies, with a primary focus on bringing about cultural shifts in adopting energy-efficient operational models.
Overall, many industries across the globe are incorporating core sustainability principles such as reducing energy costs, reducing greenhouse gas (GHG) emissions, reducing waste and increasing recycling, adopting advanced monitoring and metering infrastructure, and reducing server footprint and compute resources (shared IT services, cloud computing, and application modernization) with the vision of a sustainable environment.
Keywords: climate change, pandemic, disruptive technology, government policies, business model, machine learning and natural language processing, AI, social media platform, cloud computing, advanced monitoring, metering infrastructure
Procedia PDF Downloads 111
8769 Artificial Neural Network Based Model for Detecting Attacks in Smart Grid Cloud
Authors: Sandeep Mehmi, Harsh Verma, A. L. Sangal
Abstract:
Ever since the idea was floated of using computing services as a commodity that can be delivered like other utilities, e.g., electricity and telephone, the scientific fraternity has directed its research towards a new area called utility computing. New paradigms like cluster computing and grid computing came into existence while edging closer to utility computing. With the advent of the internet, the demand for anytime, anywhere access to resources that could be provisioned dynamically as a service gave rise to the next-generation computing paradigm known as cloud computing. Today, cloud computing has become one of the most aggressively growing computing paradigms, resulting in a growing rate of applications in the area of IT outsourcing. Besides catering to computational and storage demands, cloud computing has economically benefitted almost all fields: education, research, entertainment, medicine, banking, military operations, weather forecasting, business, and finance, to name a few. The smart grid is another discipline that direly needs to benefit from the advantages of cloud computing. The smart grid is a new technology that has revolutionized the power sector by automating the transmission and distribution system and integrating smart devices. A cloud-based smart grid can fulfill the storage requirements of the unstructured and uncorrelated data generated by smart sensors, as well as the computational needs of self-healing, load balancing, and demand response features. However, security issues such as confidentiality, integrity, availability, accountability, and privacy need to be resolved for the development of the smart grid cloud. In recent years, a number of intrusion prevention techniques have been proposed in the cloud, but hackers/intruders still manage to bypass the security of the cloud. Therefore, precise intrusion detection systems need to be developed in order to secure critical information infrastructure like the smart grid cloud.
Considering the success of artificial neural networks in building robust intrusion detection, this research proposes an artificial neural network based model for detecting attacks in the smart grid cloud.
Keywords: artificial neural networks, cloud computing, intrusion detection systems, security issues, smart grid
Procedia PDF Downloads 318
8768 Finite Element Analysis of Hollow Structural Shape (HSS) Steel Brace with Infill Reinforcement under Cyclic Loading
Authors: Chui-Hsin Chen, Yu-Ting Chen
Abstract:
Special concentrically braced frames are one of the seismic load resisting systems; they dissipate seismic energy when bracing members within the frames undergo yielding and buckling while sustaining their axial tension and compression load capacities. Most of the inelastic deformation of a buckling bracing member concentrates in the mid-length region. While experiencing cyclic loading, this region dissipates most of the seismic energy input into the frame. Such a concentration makes the braces vulnerable to failure modes associated with low-cycle fatigue. In this research, a strategy to improve the cyclic behavior of the conventional steel bracing member is proposed by filling the Hollow Structural Shape (HSS) member with reinforcement. It prevents the local section from concentrating the large plastic deformations caused by cyclic loading. The infill helps spread the plastic hinge region over a wider area, hence postponing the initiation of local buckling or even the rupture of the braces. The finite element method is introduced to simulate the complicated bracing member behavior and the member-versus-infill interaction under cyclic loading. Fifteen 3-D-element-based models are built with the ABAQUS software. The verification of the FEM model is done against cyclic test data for unreinforced (UR) HSS bracing members and bending test data for aluminum honeycomb plates. Numerical models include UR and filled HSS bracing members with various compactness ratios based on the specifications of AISC-2016 and AISC-1989. The primary variables to be investigated include the relative bending stiffness and the material of the filling reinforcement. The distributions of von Mises stress and equivalent plastic strain (PEEQ) are used as indices to tell the strengths and shortcomings of each model. The results indicate that the change in the relative bending stiffness of the infill is much more influential than the change of material in use in increasing the energy dissipation capacity.
Strengthening the relative bending stiffness of the reinforcement results in additional energy dissipation capacity to the extent of 24% and 46% in the models based on AISC-2016 (16-series) and AISC-1989 (89-series), respectively. HSS members with infill show growth in η_LB, the normalized energy accumulated until the onset of local buckling, compared to UR bracing members. The 89-series infill-reinforced members have 117% to 166% more energy dissipation capacity than unreinforced 16-series members. The flexural rigidity of the infill should be less than 29% and 13% of that of the member section itself for 16-series and 89-series bracing members, respectively, thereby guaranteeing the spreading of the plastic hinge and its occurrence within the reinforced section. If the parameters are properly configured, the ductility, energy dissipation capacity, and fatigue life of HSS SCBF bracing members can be improved prominently by the infill-reinforced method.
Keywords: special concentrically braced frames, HSS, cyclic loading, infill reinforcement, finite element analysis, PEEQ
Procedia PDF Downloads 93
8767 Condition Monitoring for Controlling the Stability of the Rotating Machinery
Authors: A. Chellil, I. Gahlouz, S. Lecheb, A. Nour, S. Chellil, H. Mechakra, H. Kebir
Abstract:
In this paper, an experimental study of the instability of a separator rotor is presented, under dynamic loading in the harmonic analysis condition. The stresses acting on the rotor are analyzed. Calculations of the different energies and of the virtual work of the aerodynamic loads on the rotor are developed. Numerical calculations on the three-dimensional model show that defects have a negative effect on the stability of the rotor. Experimentally, the study of the rotor in the transient regime allowed the vibratory responses due to unbalance and various excitations to be determined.
Keywords: rotor, frequency, finite element, spectrum
Procedia PDF Downloads 382
8766 Arterial Compliance Measurement Using Split Cylinder Sensor/Actuator
Authors: Swati Swati, Yuhang Chen, Robert Reuben
Abstract:
Coronary stents are devices resembling the shape of a tube which are placed in coronary arteries to keep the arteries open in the treatment of coronary arterial diseases. Coronary stents are routinely deployed to clear atheromatous plaque. The stent essentially applies an internal pressure to the artery because its structure is cylindrically symmetrical, and this may introduce some abnormalities in the final arterial shape. The goal of the project is to develop segmented circumferential arterial compliance measuring devices which can (eventually) be deployed in vivo. The segmentation of the device will allow the mechanical asymmetry of any stenosis to be assessed. The purpose will be to assess the quality of arterial tissue for applications in tailored stents and in the assessment of aortic aneurysm. Arterial distensibility measurement is of utmost importance for diagnosing cardiovascular diseases and for the prediction of future cardiac events or coronary artery diseases. In order to arrive at some generic outcomes, a preliminary experimental set-up has been devised to establish the measurement principles for the device at macro-scale. The measurement methodology consists of a strain gauge system monitored by LABVIEW software in a real-time fashion. This virtual instrument employs a balloon within a gelatine model contained in a split cylinder with strain gauges fixed on it. The instrument allows automated measurement of the effect of air pressure on the gelatine and measurement of strain with respect to time and pressure during inflation. A simple creep compliance model has been applied to the results for the purpose of extracting some measures of arterial compliance. The results obtained from the experiments have been used to study the effect of air pressure on strain at varying time intervals. The results clearly demonstrate that with a decrease in arterial volume and an increase in arterial pressure, arterial strain increases, thereby decreasing the arterial compliance.
The measurement system could lead to the development of portable, inexpensive and small equipment and could prove to be an efficient automated compliance measurement device.
Keywords: arterial compliance, atheromatous plaque, mechanical symmetry, strain measurement
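The quantity being extracted, compliance as the volume change per unit pressure change (C = ΔV/ΔP), can be estimated from recorded pressure-volume data by a linear fit. The numbers below are synthetic placeholders, not measurements from the rig described above:

```python
import numpy as np

# Synthetic pressure-volume record (illustrative units: kPa, mL)
pressure = np.array([10.0, 15.0, 20.0, 25.0, 30.0])
volume = 2.0 + 0.05 * pressure        # a perfectly linear vessel for the example

# Compliance C = dV/dP, estimated as the slope of a least-squares line
compliance, intercept = np.polyfit(pressure, volume, 1)
print(f"C = {compliance:.3f} mL/kPa")  # 0.050 for this synthetic data
```

On real data the fit would be made piecewise or against a creep model, since the pressure-volume relation of arterial tissue is time-dependent and nonlinear, which is exactly why the study applies a creep compliance model rather than a single slope.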
Procedia PDF Downloads 279
8765 Bayesian Networks Scoping the Climate Change Impact on Winter Wheat Freezing Injury Disasters in Hebei Province, China
Authors: Xiping Wang, Shuran Yao, Liqin Dai
Abstract:
Many studies report that winters are getting warmer and that the minimum air temperature is clearly rising, both important evidence of climate warming. The exacerbated air temperature fluctuation, which tends to bring more severe weather variation, is another important consequence of recent climate change, and it has induced more disasters for crop growth in certain regions. Hebei Province is an important winter wheat growing province in the north of China that has recently endured more winter freezing injury, affecting local winter wheat crop management. A winter wheat freezing injury assessment Bayesian network framework was established with the objectives of estimating, assessing and predicting winter wheat freezing disasters in Hebei Province. In this framework, the freezing disasters were classified into three severity degrees (SI) across the three types of freezing, i.e., freezing caused by severe cold at any time in the winter, freezing caused by a long extremely cold duration in the winter, and freeze-after-thaw early in the season after winter. The factors influencing winter wheat freezing SI include the time of freezing occurrence, the growth status of seedlings, soil moisture, winter wheat variety, the longitude of the target region and, most variable of all, the climate factors. The climate factors included in this framework are the daily mean and range of air temperature, the extreme minimum temperature and the number of days during a severe cold weather process, the number of days with the temperature lower than critical temperature values, and the accumulated negative temperature in a potential freezing event. The Bayesian network model was evaluated against actual weather data and crop records at selected sites in Hebei Province.
With the multi-stage influences of the various factors, the forecast and assessment of the event-based target variables, freezing injury occurrence and its damage to winter wheat production, were shown to be better scoped by the Bayesian network model.
Keywords: Bayesian networks, climatic change, freezing injury, winter wheat
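As a toy illustration of the kind of inference such a framework performs (the structure, states, and probabilities below are invented for the example, not taken from the study), a two-parent discrete network can be queried by direct enumeration of the joint distribution:

```python
# P(severity | cold_event, seedling_weak) for a minimal discrete Bayesian network
p_cold = {True: 0.3, False: 0.7}           # prior: severe cold event occurs
p_weak = {True: 0.4, False: 0.6}           # prior: seedlings in weak condition
# Conditional probability table: P(severity = "high" | cold, weak)
p_high = {(True, True): 0.8, (True, False): 0.4,
          (False, True): 0.2, (False, False): 0.05}

def posterior_cold_given_high():
    """P(cold | severity = high) by enumerating the joint distribution."""
    joint = {(c, w): p_cold[c] * p_weak[w] * p_high[(c, w)]
             for c in (True, False) for w in (True, False)}
    evidence = sum(joint.values())
    return sum(v for (c, _), v in joint.items() if c) / evidence

print(round(posterior_cold_given_high(), 3))  # 0.686
```

The real framework has many more nodes (timing, soil moisture, variety, climate factors) and three severity states, but every query reduces to the same sum-over-joint computation, typically performed by more efficient exact or approximate inference.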
Procedia PDF Downloads 408
8764 Mitigation of Lithium-ion Battery Thermal Runaway Propagation Through the Use of Phase Change Materials Containing Expanded Graphite
Authors: Jayson Cheyne, David Butler, Iain Bomphray
Abstract:
In recent years, lithium-ion batteries have been used increasingly for electric vehicles and large energy storage systems due to their high power density and long lifespan. Despite this, thermal runaway remains a significant safety problem because of its uncontrollable and irreversible nature, which can lead to fires and explosions. In large-scale lithium-ion packs and modules, thermal runaway propagation between cells can escalate fire hazards and cause significant damage. Thus, safety measures are required to mitigate thermal runaway propagation. The current research explores composite phase change materials (PCM) containing expanded graphite (EG) for thermal runaway mitigation. PCMs are an area of significant interest for battery thermal management due to their ability to absorb substantial quantities of heat during phase change. Moreover, the introduction of EG can support heat transfer from the cells to the PCM (owing to its high thermal conductivity) and provide shape stability to the PCM during phase change. During the research, a thermal model was established for an array of 16 cylindrical cells to simulate heat dissipation with and without the composite PCM. Two conditions were modeled, including the behavior during charge/discharge cycles (i.e., throughout regular operation) and thermal runaway. Furthermore, parameters including cell spacing, composite PCM thickness, and EG weight percentage (wt%) were varied to establish the optimal material parameters for enabling thermal runaway mitigation and effective thermal management. Although numerical modeling is still ongoing, initial findings suggest that a 3 mm PCM containing 15 wt% EG can effectively suppress thermal runaway propagation while maintaining shape stability. The next step in the research is to validate the model through controlled experimental tests.
Additionally, given the perceived fire safety concerns relating to PCM materials, fire safety tests, including UL-94 and Limiting Oxygen Index (LOI), shall be conducted to explore the flammability risk.
Keywords: battery safety, electric vehicles, phase change materials, thermal management, thermal runaway
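The heat-absorbing role of the PCM can be sized with a simple energy balance: sensible heat up to the melting point plus latent heat of fusion. The masses and material properties below are rough, assumed values for illustration, not properties of the composite studied here:

```python
def pcm_heat_absorbed(mass_kg, cp_j_per_kg_k, delta_t_k, latent_j_per_kg):
    """Energy a PCM absorbs while heating through delta_t_k and fully melting."""
    return mass_kg * (cp_j_per_kg_k * delta_t_k + latent_j_per_kg)

# Assumed: 50 g of PCM, cp = 2000 J/(kg K), heated 20 K to its melting point,
# latent heat 180 kJ/kg (typical order of magnitude for paraffin-based PCMs)
q = pcm_heat_absorbed(0.05, 2000.0, 20.0, 180e3)
print(f"{q / 1e3:.1f} kJ absorbed")  # 11.0 kJ
```

Comparing this absorbed energy against the heat released by a cell in runaway is the first-order check of whether a given PCM thickness can arrest propagation; the EG fraction then governs how quickly that heat can actually reach the PCM.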
Procedia PDF Downloads 145
8763 Supporting Women's Economic Development in Rural Papua New Guinea
Authors: Katja Mikhailovich, Barbara Pamphilon
Abstract:
Farmer training in Papua New Guinea has focused mainly on technology transfer approaches. This has primarily benefited men and often excluded women, whose literacy, low education and role in subsistence crops have precluded participation in formal training. The paper discusses an approach that uses both a brokerage model of agricultural extension to link smallholders with private sector agencies and an innovative family teams approach that aims to support the economic empowerment of women in families and encourages sustainable and gender equitable farming and business practices.
Keywords: women, economic development, agriculture, training
Procedia PDF Downloads 391
8762 A Machine Learning Model for Dynamic Prediction of Chronic Kidney Disease Risk Using Laboratory Data, Non-Laboratory Data, and Metabolic Indices
Authors: Amadou Wurry Jallow, Adama N. S. Bah, Karamo Bah, Shih-Ye Wang, Kuo-Chung Chu, Chien-Yeh Hsu
Abstract:
Chronic kidney disease (CKD) is a major public health challenge with high prevalence, rising incidence, and serious adverse consequences. Developing effective risk prediction models is a cost-effective approach to predicting and preventing its complications. This study aimed to develop an accurate machine learning model that can dynamically identify individuals at risk of CKD using various kinds of diagnostic data, with or without laboratory data, at different follow-up points. Creatinine is a key component used to predict CKD. These models will enable affordable and effective screening for CKD even with incomplete patient data, such as the absence of creatinine testing. This retrospective cohort study included data on 19,429 adults provided by a private research institute and screening laboratory in Taiwan, gathered between 2001 and 2015. Univariate Cox proportional hazard regression analyses were performed to determine the variables with high prognostic value for predicting CKD. We then identified interacting variables and grouped them according to diagnostic data categories. Our models used three types of data gathered at three points in time: non-laboratory data, laboratory data, and metabolic indices. Next, we used subgroups of variables within each category to train two machine learning models (Random Forest and XGBoost). Our machine learning models can dynamically discriminate between individuals at risk of developing CKD. All the models performed well using all three kinds of data, with or without laboratory data. Using only non-laboratory data (such as age, sex, body mass index (BMI), and waist circumference), both models predict chronic kidney disease as accurately as models using laboratory and metabolic indices data. Our machine learning models have demonstrated the use of different categories of diagnostic data for CKD prediction, with or without laboratory data.
The machine learning models are simple to use and flexible because they work even with incomplete data and can be applied in any clinical setting, including settings where laboratory data is difficult to obtain.
Keywords: chronic kidney disease, glomerular filtration rate, creatinine, novel metabolic indices, machine learning, risk prediction
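The with-and-without-laboratory-data comparison at the heart of the study can be sketched as two random forest models trained on different feature subsets. The synthetic cohort and labeling rule below are invented for the sketch (scikit-learn assumed available); they stand in for, and make no claims about, the Taiwanese data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
age = rng.uniform(20, 80, n)
bmi = rng.uniform(18, 40, n)
waist = bmi * 2.5 + rng.normal(0, 3, n)              # correlated with BMI
creatinine = 0.6 + 0.01 * age + 0.02 * (bmi - 18) + rng.normal(0, 0.05, n)
ckd = (creatinine > 1.3).astype(int)                 # invented labeling rule

X_all = np.column_stack([age, bmi, waist, creatinine])
accs = {}
for name, X in [("with lab", X_all), ("non-lab only", X_all[:, :3])]:
    Xtr, Xte, ytr, yte = train_test_split(X, ckd, random_state=0)
    accs[name] = RandomForestClassifier(random_state=0).fit(Xtr, ytr).score(Xte, yte)
    print(f"{name}: accuracy {accs[name]:.2f}")
```

Because the non-laboratory features drive the creatinine in this synthetic setup, the lab-free model remains accurate, which mirrors the study's finding that non-laboratory data alone can screen effectively when creatinine testing is unavailable.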
Procedia PDF Downloads 105
8761 Teachers’ Protective Factors of Resilience Scale: Factorial Structure, Validity and Reliability Issues
Authors: Athena Daniilidou, Maria Platsidou
Abstract:
Recently developed scales have specifically addressed teachers’ resilience. Although they have benefited the field, they do not include some of the critical protective factors of teachers’ resilience identified in the literature. To address this limitation, we aimed at designing a more comprehensive scale for measuring teachers’ resilience which encompasses various personal and environmental protective factors. To this end, two studies were carried out. In Study 1, 407 primary school teachers were tested with the new scale, the Teachers’ Protective Factors of Resilience Scale (TPFRS). Similar scales, such as the Multidimensional Teachers’ Resilience Scale and the Teachers’ Resilience Scale, were used to test the convergent validity, while the Maslach Burnout Inventory and the Teachers’ Sense of Efficacy Scale were used to assess the discriminant validity of the new scale. The factorial structure of the TPFRS was checked with confirmatory factor analysis, and a good fit of the model to the data was found. Next, item response theory analysis using a two-parameter logistic model (2PL) was applied to check the items within each factor. It revealed that 9 items did not fit the corresponding factors well, and they were removed. The final version of the TPFRS includes 29 items, which assess six protective factors of teachers’ resilience: values and beliefs (5 items, α=.88), emotional and behavioral adequacy (6 items, α=.74), physical well-being (3 items, α=.68), relationships within the school environment (6 items, α=.73), relationships outside the school environment (5 items, α=.84), and the legislative framework of education (4 items, α=.83). Results show that it presents satisfactory convergent and discriminant validity. Study 2, in which 964 primary and secondary school teachers were tested, confirmed the factorial structure of the TPFRS as well as its discriminant validity, which was tested with the Schutte Emotional Intelligence Scale-Short Form.
In conclusion, our results showed that the TPFRS is a new multi-dimensional instrument valid for assessing teachers’ protective factors of resilience, and it can be safely used in future research and interventions in the teaching profession.
Keywords: resilience, protective factors, teachers, item response theory
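The two-parameter logistic (2PL) model used for item screening gives the probability of endorsing item i as a function of the latent trait θ, an item discrimination a_i and a difficulty b_i. A minimal sketch (parameter values invented for illustration):

```python
import math

def p_2pl(theta, a, b):
    """2PL item response function: P(endorse | theta) = 1 / (1 + exp(-a(theta - b)))."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# A discriminating item (a = 2.0) vs. a weak one (a = 0.4), both with b = 0.0
for theta in (-1.0, 0.0, 1.0):
    print(f"theta={theta:+.1f}  strong={p_2pl(theta, 2.0, 0.0):.3f}  "
          f"weak={p_2pl(theta, 0.4, 0.0):.3f}")
```

Items whose fitted curves are nearly flat (low a) barely separate respondents along the trait, which is the kind of misfit that leads to items being dropped in an analysis like the one above.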
Procedia PDF Downloads 100
8760 Simulation of Channel Models for Device-to-Device Application of 5G Urban Microcell Scenario
Authors: H. Zormati, J. Chebil, J. Bel Hadj Tahar
Abstract:
Next generation wireless transmission technology (5G) is expected to support the development of channel models for higher frequency bands; hence, the characterization of high frequency bands is the most important issue in radio propagation research for 5G. Multiple urban microcellular measurements have therefore been carried out at 60 GHz. In this paper, the collected data is uniformly analyzed with a focus on the path loss (PL); the objective is to compare the simulation results of several studied channel models with the purpose of testing the performance of each one.
Keywords: 5G, channel model, 60 GHz channel, millimeter-wave, urban microcell
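One widely used candidate in such path-loss comparisons is the close-in (CI) free-space reference distance model, PL(d) = FSPL(1 m) + 10 n log10(d) + X_sigma, where n is a path loss exponent fitted to measurements. A sketch at 60 GHz, ignoring shadowing (the exponent n = 2.3 is an assumed, illustrative urban-microcell value, not a result from this paper):

```python
import math

def ci_path_loss_db(d_m, f_ghz, n):
    """Close-in reference distance path loss model (1 m reference, no shadowing)."""
    fspl_1m = 32.4 + 20.0 * math.log10(f_ghz)   # free-space path loss at 1 m [dB]
    return fspl_1m + 10.0 * n * math.log10(d_m)

for d in (1, 10, 100):
    print(f"d = {d:>3} m: PL = {ci_path_loss_db(d, 60.0, 2.3):.1f} dB")
```

Fitting n (and the shadowing standard deviation) to the measured 60 GHz data, then comparing the fitted curve against alternative models, is the kind of evaluation the paper performs.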
Procedia PDF Downloads 319
8759 Smartphone-Based Human Activity Recognition by Machine Learning Methods
Authors: Yanting Cao, Kazumitsu Nawata
Abstract:
As smartphones are upgraded, their software and hardware are getting smarter, so smartphone-based human activity recognition can be made more refined, complex, and detailed. In this context, we analyzed a set of experimental data obtained by observing and measuring 30 volunteers performing six activities of daily living (ADL). Due to the large sample size, and especially the 561-feature vector with time and frequency domain variables, cleaning these intractable features and training a proper model becomes extremely challenging. After a series of feature selections and parameter adjustments, a well-performing SVM classifier has been trained.
Keywords: smart sensors, human activity recognition, artificial intelligence, SVM
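The feature-selection-then-SVM pipeline described above can be sketched with scikit-learn on synthetic data shaped like the HAR set (561 features, few of them informative). The dataset, the k = 50 selection, and the hyperparameters are assumptions for the sketch, not the study's actual settings:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in: 561 features, only 20 informative, 6 activity classes
X, y = make_classification(n_samples=1200, n_features=561, n_informative=20,
                           n_classes=6, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

clf = make_pipeline(StandardScaler(),
                    SelectKBest(f_classif, k=50),   # keep the 50 strongest features
                    SVC(kernel="rbf", C=10.0))
acc = clf.fit(Xtr, ytr).score(Xte, yte)
print(f"test accuracy: {acc:.2f}")
```

Fitting the scaler and selector inside the pipeline ensures they are learned on the training split only, which is the standard guard against leakage when pruning a high-dimensional feature vector like this one.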
Procedia PDF Downloads 144
8758 Thulium Laser Design and Experimental Verification for NIR and MIR Nonlinear Applications in Specialty Optical Fibers
Authors: Matej Komanec, Tomas Nemecek, Dmytro Suslov, Petr Chvojka, Stanislav Zvanovec
Abstract:
Nonlinear phenomena in the near- and mid-infrared region are attracting scientific attention mainly due to the possibilities of supercontinuum generation and its subsequent utilization for ultra-wideband applications such as absorption spectroscopy or optical coherence tomography. Thulium-based fiber lasers provide access to high-power ultrashort pump pulses in the vicinity of 2000 nm, which can be easily exploited for various nonlinear applications. The paper presents a simulation and experimental study of a pulsed thulium laser for near-infrared (NIR) and mid-infrared (MIR) nonlinear applications in specialty optical fibers. In the first part of the paper, the thulium laser is discussed. The laser is based on a gain-switched seed laser and a series of amplification stages for obtaining output peak powers on the order of kilowatts for pulses shorter than 200 ps at full-width at half-maximum. The pulsed thulium laser is first studied in simulation software, focusing on seed laser properties. Afterward, a thulium-based pre-amplification stage is discussed, with a focus on low-noise signal amplification, high signal gain, and the elimination of pulse distortions during pulse propagation in the gain medium. Following the pre-amplification stage, a second gain stage is evaluated, incorporating a shorter thulium fiber with an increased rare-earth dopant ratio. Lastly, a power-booster stage is analyzed, where peak powers of kilowatts should be achieved. The results of the analytical study are further validated by the experimental campaign. The simulation model is then corrected based on real components: parameters such as real insertion losses, cross-talk, polarization dependencies, etc. are included. The second part of the paper evaluates the utilization of nonlinear phenomena and their specific features in the vicinity of 2000 nm, compared to e.g. 1550 nm, and presents supercontinuum modelling based on the pulsed output of the thulium laser.
The supercontinuum generation simulation provides reasonably accurate results once the fiber dispersion profile is precisely defined and the fiber nonlinearity is known; in addition, the input pulse shape and peak power must be known, which is assured by the experimental measurement of the studied pulsed thulium laser. The supercontinuum simulation model is put in relation to the designed and characterized specialty optical fibers, which are discussed in the third part of the paper. The focus is placed on silica and mainly on non-silica fibers (fluoride, chalcogenide, lead-silicate) in their conventional, microstructured, or tapered variants. Parameters such as the dispersion profile and nonlinearity of the exploited fibers were characterized either with an accurate model developed in the COMSOL software or by direct experimental measurement to achieve even higher precision. The paper then combines all three studied topics and presents a possible application of such a thulium pulsed laser system working with specialty optical fibers.
Keywords: nonlinear phenomena, specialty optical fibers, supercontinuum generation, thulium laser
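Before a full supercontinuum run of the kind described above, a standard first check is the soliton order N = √(L_D/L_NL), which depends on exactly the quantities the abstract lists as prerequisites: fiber dispersion, fiber nonlinearity, pulse duration, and peak power. The sketch below is illustrative only; the parameter values in the test are assumptions, not the authors' fiber data.

```python
import math

def soliton_number(gamma, p0, t0, beta2):
    """Soliton order N = sqrt(L_D / L_NL) = sqrt(gamma * p0 * t0**2 / |beta2|).

    Units: gamma [1/(W*m)], p0 [W] (peak power), t0 [s] (pulse duration),
    beta2 [s^2/m] (group-velocity dispersion). In the anomalous-dispersion
    regime, N >> 1 favors soliton-fission-driven spectral broadening.
    All inputs here are illustrative placeholders, not measured values.
    """
    l_d = t0 ** 2 / abs(beta2)   # dispersion length
    l_nl = 1.0 / (gamma * p0)    # nonlinear length
    return math.sqrt(l_d / l_nl)
```

With kilowatt peak powers and ~100 ps pulses, N comes out very large, consistent with strong nonlinear broadening around 2000 nm.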
Procedia PDF Downloads 321
8757 Determining Components of Deflection of the Vertical in Owerri West Local Government, Imo State Nigeria Using Least Square Method
Authors: Chukwu Fidelis Ndubuisi, Madufor Michael Ozims, Asogwa Vivian Ndidiamaka, Egenamba Juliet Ngozi, Okonkwo Stephen C., Kamah Chukwudi David
Abstract:
Deflection of the vertical is a quantity used in reducing geodetic measurements related to geoidal networks to the ellipsoidal plane, and it is essential in geoid modeling. Computing the deflection-of-the-vertical components of a point in a given area makes it possible to evaluate the standard errors along the north-south and east-west directions. A combined approach to determining the deflection components provides improved results but is labor intensive without an appropriate method. The least squares method uses redundant observations to model a set of observations that obeys certain geometric conditions. This research work aims to compute the deflection-of-the-vertical components for Owerri West Local Government Area of Imo State using the geometric method as the field technique. In this method, a combination of Global Positioning System observations in static mode and precise leveling was used: the geodetic coordinates of points established within the study area were determined by GPS observation, and the orthometric heights through precise leveling. By least squares, using a Matlab program, the estimated deflection-of-the-vertical components for the common station were -0.0286 and -0.0001 arc seconds for the north-south and east-west components, respectively. The associated standard errors of the processed vectors of the network were computed as 5.5911e-005 and 1.4965e-004 arc seconds for the north-south and east-west components, respectively. Therefore, including the derived deflection-of-the-vertical components in the ellipsoidal model will yield higher observational accuracy, since a purely ellipsoidal model is not tenable for high-quality work owing to its large observational error.
It is important to include the determined deflection-of-the-vertical components for Owerri West Local Government in Imo State, Nigeria.
Keywords: deflection of the vertical, ellipsoidal height, least square, orthometric height
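The least squares adjustment of the two deflection components can be sketched for a generic set of observation equations A·x = l with x = (ξ, η), the north-south and east-west components, solved via the normal equations. This is a minimal illustration of the technique, not the authors' Matlab program; the design matrix and observations in the test are hypothetical.

```python
def least_squares_2param(A, l):
    """Solve the overdetermined observation equations A x = l for the
    two-parameter vector x = (xi, eta) via the normal equations:
    N = A^T A, u = A^T l, x = N^{-1} u. Rows of A are the coefficients
    of one observation equation; l holds the reduced observations.
    """
    n11 = sum(a[0] * a[0] for a in A)
    n12 = sum(a[0] * a[1] for a in A)
    n22 = sum(a[1] * a[1] for a in A)
    u1 = sum(a[0] * li for a, li in zip(A, l))
    u2 = sum(a[1] * li for a, li in zip(A, l))
    det = n11 * n22 - n12 * n12          # determinant of the 2x2 normal matrix
    xi = (n22 * u1 - n12 * u2) / det
    eta = (-n12 * u1 + n11 * u2) / det
    return xi, eta
```

Redundant observations (more rows than unknowns) are what allow the standard errors of the estimates to be evaluated afterward from the residuals.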
Procedia PDF Downloads 209
8756 Developing Manufacturing Process for the Graphene Sensors
Authors: Abdullah Faqihi, John Hedley
Abstract:
Biosensors play a significant role in the healthcare sector and in scientific and technological progress. Developing electrodes that are easy to manufacture and deliver better electrochemical performance is advantageous for diagnostics and biosensing. They can be implemented extensively in various analytical tasks such as drug discovery, food safety, medical diagnostics, process control, security and defence, in addition to environmental monitoring. Biosensor development aims to create high-performance electrochemical electrodes for diagnostics and biosensing. A biosensor is a device that inspects the biological and chemical reactions generated by a biological sample: it carries out biological detection via a linked transducer and converts the biological response into an electrical signal. Stability, selectivity, and sensitivity are the dynamic and static characteristics that dictate the quality and performance of biosensors. In this research, an experimental study of the laser scribing technique for processing graphene oxide (GO) inside a vacuum chamber is presented. The effect of laser scribing on the reduction of GO was investigated under two conditions: atmosphere and vacuum. A GO solution was coated onto a LightScribe DVD, and the laser scribing technique was applied to reduce the GO layers and generate reduced graphene oxide (rGO). The micro-details of the morphological structures of rGO and GO were visualised and examined using scanning electron microscopy (SEM) and Raman spectroscopy. The first electrode was a traditional graphene-based electrode model, made under normal atmospheric conditions, whereas the second was a graphene electrode fabricated under vacuum using a vacuum chamber. The purpose was to control the vacuum conditions, such as the air pressure and the temperature, during the fabrication process.
The parameters assessed include the layer thickness and the surrounding environment. The results presented show high accuracy and repeatability, achieved at low production cost.
Keywords: laser scribing, LightScribe DVD, graphene oxide, scanning electron microscopy
Procedia PDF Downloads 122
8755 The MoEDAL-MAPP* Experiment - Expanding the Discovery Horizon of the Large Hadron Collider
Authors: James Pinfold
Abstract:
The MoEDAL (Monopole and Exotics Detector at the LHC) experiment, deployed at IP8 on the Large Hadron Collider ring, was the first dedicated search experiment to take data at the LHC, in 2010. It was designed to search for highly ionizing particle (HIP) avatars of new physics such as magnetic monopoles, dyons, Q-balls, multiply charged particles, massive slowly moving charged particles, and long-lived massive charged SUSY particles. We shall report on our Run-2 searches for magnetic monopoles and dyons produced in p-p collisions and photon fusion. In more detail, we will report our most recent result in this arena: the search for magnetic monopoles produced via the Schwinger mechanism in Pb-Pb collisions. The MoEDAL detector is being reinstalled for the LHC's Run-3 to continue the search for electrically and magnetically charged HIPs with enhanced instantaneous luminosity, improved detector efficiency, and a factor of ten lower thresholds for HIPs. As part of this effort, we will search for massive, long-lived, singly and multiply charged particles from various scenarios for which MoEDAL has a competitive sensitivity. An upgrade to MoEDAL, the MoEDAL Apparatus for Penetrating Particles (MAPP), is now the LHC's newest detector. The MAPP detector, positioned in UA83, expands the physics reach of MoEDAL to include sensitivity to feebly charged particles with charge, or effective charge, as low as 10⁻³ e (where e is the electron charge). Also, in conjunction with MoEDAL's trapping detector, the MAPP detector gives us a unique sensitivity to extremely long-lived charged particles, and MAPP also has some sensitivity to long-lived neutral particles. The addition of an outrigger detector for MAPP-1, to increase its acceptance for more massive milli-charged particles, is currently at the Technical Proposal stage.
Additionally, we will briefly report on the plans for the MAPP-2 upgrade to the MoEDAL-MAPP experiment for the High-Luminosity LHC (HL-LHC). This phase of the experiment is designed to maximize MoEDAL-MAPP's sensitivity to very long-lived neutral messengers of physics beyond the Standard Model. We envisage this detector being deployed in the UGC1 gallery near IP8.
Keywords: LHC, beyond the standard model, dedicated search experiment, highly ionizing particles, long-lived particles, milli-charged particles
Procedia PDF Downloads 68
8754 Numerical Study on the Static Characteristics of Novel Aerostatic Thrust Bearings Possessing Elastomer Capillary Restrictor and Bearing Surface
Authors: S. W. Lo, S.-H. Lu, Y. H. Guo, L. C. Hsu
Abstract:
In this paper, a novel design of aerostatic thrust bearing is proposed and analyzed numerically. The capillary restrictor and bearing disk are made of elastomers such as silicone and PU. The viscoelasticity of the elastomer helps the capillary expand for greater air flux and, at the same time, allows a conicity of the bearing surface to form when the air pressure is raised. The bearing therefore has a better passive-compensation capability. In the present example, compared with the typical model, the new designs can nearly double the load capacity and offer four times the static stiffness.
Keywords: aerostatic, bearing, elastomer, static stiffness
Procedia PDF Downloads 377
8753 Corpus-Based Neural Machine Translation: Empirical Study Multilingual Corpus for Machine Translation of Opaque Idioms - Cloud AutoML Platform
Authors: Khadija Refouh
Abstract:
Culture-bound expressions have been a bottleneck for natural language processing (NLP) and comprehension, especially in the case of machine translation (MT). In the last decade, the field of machine translation has greatly advanced. Neural machine translation (NMT) has recently achieved considerable gains in translation quality, outperforming previous traditional translation systems in many language pairs. NMT applies artificial intelligence (AI) and deep neural networks to language processing. Despite this development, serious challenges remain when NMT translates culture-bound expressions, especially for low-resource language pairs such as Arabic-English and Arabic-French, which is not the case with well-established pairs such as English-French. Machine translation of opaque idioms from English into French is likely to be more accurate than translating them from English into Arabic. For example, the Google Translate application translated the sentence “What a bad weather! It rains cats and dogs.” as “يا له من طقس سيء! تمطر القطط والكلاب” in the target language Arabic, which is an inaccurate literal translation. The translation of the same sentence into the target language French was “Quel mauvais temps! Il pleut des cordes.”, where the Google Translate application used the corresponding accurate French idiom. This paper aims to perform NMT experiments towards better translation of opaque idioms using a high-quality, clean multilingual corpus, collected analytically from human-generated idiom translations. AutoML Translation, a Google neural machine translation platform, is used as a custom translation model to improve the translation of opaque idioms. The automatic evaluation of the custom model will be compared to that of the Google NMT using the Bilingual Evaluation Understudy score (BLEU).
BLEU is an algorithm for evaluating the quality of text that has been machine-translated from one natural language to another. Human evaluation is integrated to test the reliability of the BLEU score. The researcher will examine syntactic, lexical, and semantic features using Halliday's functional theory.
Keywords: multilingual corpora, natural language processing (NLP), neural machine translation (NMT), opaque idioms
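The BLEU idea can be sketched for a single candidate/reference pair: the geometric mean of modified n-gram precisions, multiplied by a brevity penalty. Real evaluations of a custom model use corpus-level BLEU, smoothing, and possibly multiple references; the unsmoothed single-reference toy below is only an illustration of the metric's mechanics.

```python
import math
from collections import Counter

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU against one reference (no smoothing).

    Modified n-gram precision clips each candidate n-gram count by its
    count in the reference (via Counter intersection); the brevity
    penalty punishes candidates shorter than the reference.
    """
    cand, ref = candidate.split(), reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        c_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
        r_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        overlap = sum((c_ngrams & r_ngrams).values())   # clipped matches
        total = max(1, sum(c_ngrams.values()))
        if overlap == 0:
            return 0.0        # unsmoothed: one empty n-gram level zeroes the score
        log_precisions.append(math.log(overlap / total))
    bp = 1.0 if len(cand) >= len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * math.exp(sum(log_precisions) / max_n)
```

A literal rendering of an opaque idiom typically shares few higher-order n-grams with an idiomatic reference, which is exactly what depresses its BLEU score.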
Procedia PDF Downloads 149
8752 Simulation of Reflectometry in Alborz Tokamak
Authors: S. Kohestani, R. Amrollahi, P. Daryabor
Abstract:
Microwave diagnostics such as reflectometry are receiving growing attention in magnetic confinement fusion research. In order to obtain a better understanding of plasma confinement physics, more detailed measurements of the density profile and its fluctuations may be required. A 2D full-wave simulation of ordinary-mode propagation has been written in an effort to model effects seen in reflectometry experiments. The code uses the finite-difference time-domain method with a perfectly matched layer absorbing boundary to solve Maxwell's equations. The code has been used to simulate the reflectometer measurement in the Alborz Tokamak.
Keywords: reflectometry, simulation, ordinary mode, tokamak
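The core of a full-wave FDTD code can be illustrated with a minimal 1D vacuum sketch of the Yee leapfrog update. This is not the authors' 2D code: a first-order Mur condition (exact at Courant number 1 in 1D) stands in for the perfectly matched layer, and the grid, source position, and pulse parameters are arbitrary illustrative choices.

```python
import math

def fdtd_1d(nx=200, nt=400, src=40):
    """Bare-bones 1-D FDTD (Yee scheme, normalized units, Courant S = 1).

    ez and hy live on staggered half-cells and are leapfrogged in time;
    a Gaussian pulse is injected as an additive (soft) source, and
    first-order Mur boundaries absorb the outgoing waves, playing the
    role the PML plays in the 2-D reflectometry code.
    """
    ez = [0.0] * nx            # electric field
    hy = [0.0] * (nx - 1)      # magnetic field, staggered half a cell
    ez_left = ez_right = 0.0   # stored neighbor values for the Mur ABC
    for n in range(nt):
        for i in range(nx - 1):                    # H update
            hy[i] += ez[i + 1] - ez[i]
        for i in range(1, nx - 1):                 # E update (interior)
            ez[i] += hy[i] - hy[i - 1]
        ez[src] += math.exp(-((n - 60) / 20.0) ** 2)   # soft Gaussian source
        ez[0], ez_left = ez_left, ez[1]            # Mur ABC, exact at S = 1
        ez[-1], ez_right = ez_right, ez[-2]
    return ez
```

A 2D O-mode code adds the plasma response (density-dependent current or permittivity), which turns the launched wave back at the cutoff layer; the leapfrog structure stays the same.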
Procedia PDF Downloads 420
8751 Microbial Effects of Iron Elution from Hematite into Seawater Mediated via Dissolved Organic Matter
Authors: Apichaya Aneksampant, Xuefei Tu, Masami Fukushima, Mitsuo Yamamoto
Abstract:
The restoration of seaweed beds has been pursued using a fertilization technique that supplies dissolved iron to barren coastal areas. The fertilizer is composed of iron oxides as a source of iron and compost as a source of humic substances (HSs), which can serve as iron chelators that stabilize the dissolved species under oxic seawater conditions. However, the mechanisms of iron elution from iron oxide surfaces have not been sufficiently elucidated. In particular, the roles of microbial activities in the elution of iron from the fertilizer are not well understood. In the present study, a fertilizer (iron oxide/compost = 1/1, v/v) was incubated in a water tank on the Mashike coast, Hokkaido, Japan. Microorganisms in the fertilizer after 6 months were isolated and identified as Exiguobacterium oxidotolerans sp. (T-2-2). The identified bacteria were inoculated into a Postgate B medium, prepared in artificial seawater, for an iron elution test. Hematite was used as a model iron oxide and anthraquinone-2,7-disulfonate (AQDS) as a model for HSs. The elution test was performed in the presence and absence of bacterial inoculation. ICP-AES was used to analyze total iron, and a colorimetric technique using ferrozine was employed for the determination of ferrous ion. During the incubation period, the samples containing hematite and T-2-2, both in the presence and absence of AQDS, continuously showed iron elution, which reached its highest concentration after 9 days of incubation and then decreased slightly to stabilize within 20 days. In comparison, only a trace amount of iron was observed in the sample without T-2-2, suggesting that the iron elution into seawater can be attributed to bacterial activities. The level of total organic carbon (TOC) in the culture solution with hematite decreased; this may be due to the adsorption of the organic compound AQDS onto the hematite surfaces. The decrease in the UV-vis absorption of AQDS in the culture solution also supports the TOC results indicating that AQDS was adsorbed onto the hematite surfaces.
AQDS can enhance iron elution, while the adsorption of organic matter suppresses iron elution from hematite.
Keywords: anthraquinone-2,7-disulfonate, barren ground, E. oxidotolerans sp., hematite, humic substances, iron elution
Procedia PDF Downloads 379
8750 Exploration of Cone Foam Breaker Behavior Using Computational Fluid Dynamic
Authors: G. St-Pierre-Lemieux, E. Askari Mahvelati, D. Groleau, P. Proulx
Abstract:
Mathematical modeling has become an important tool for the study of foam behavior. Computational fluid dynamics (CFD) can be used to investigate the behavior of foam around foam breakers to better understand the mechanisms leading to the ‘destruction’ of foam. The focus of this investigation was the simple cone foam breaker, whose performance has been documented in numerous studies. While the optimal pumping angle is known from the literature, the contributions of pressure drop, shearing, and centrifugal forces to foam syneresis are subject to speculation. This work provides a screening of those factors against changes in the cone angle and foam rheology. The CFD simulation was performed with the open-source OpenFOAM toolkit on a full three-dimensional model discretized using hexahedral cells. The geometry was generated using a Python script and then meshed with blockMesh. The OpenFOAM volume-of-fluid (VOF) solver interFoam was used to obtain a detailed description of the interfacial forces, and the k-omega SST model was used to calculate the turbulence fields. The cone configuration allows the use of a rotating-wall boundary condition. In each case, a pair of immiscible fluids, foam/air or water/air, was used, with the foam modeled as a shear-thinning (Herschel-Bulkley) fluid. The results were compared to our measurements and to results found in the literature, first by computing the pumping rate of the cone, and second by the liquid break-up at the exit of the cone. A 3D-printed version of the cones, submerged in foam (shaving cream or soap solution) and water at speeds varying between 400 RPM and 1500 RPM, was also used to validate the modeling results by measuring the torque exerted on the shaft. While most of the literature focuses on cone behavior in Newtonian fluids, this work explores its behavior in a shear-thinning fluid, which better reflects the apparent rheology of foam.
These simulations shed new light on the behavior of the cone within the foam and allow the computation of the shearing, pressure, and velocity of the fluid, making it possible to better evaluate the efficiency of the cones as foam breakers. This study thus contributes to clarifying, at least in part, the mechanisms behind foam breaker performance using modern CFD techniques.
Keywords: bioreactor, CFD, foam breaker, foam mitigation, OpenFOAM
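The shear-thinning foam rheology enters the solver through an effective viscosity. A sketch of the Herschel-Bulkley law is shown below, regularized at zero shear by capping the viscosity the way OpenFOAM's HerschelBulkley transport model caps it at nu0; the parameter values are illustrative, not the foam parameters used in the study.

```python
def apparent_viscosity(gamma_dot, tau0=10.0, k=5.0, n=0.35, mu0=1e4):
    """Herschel-Bulkley effective viscosity:

        mu(gamma_dot) = tau0 / gamma_dot + k * gamma_dot**(n - 1)

    tau0 is the yield stress, k the consistency, n < 1 the shear-thinning
    index. The cap mu0 regularizes the singularity as the shear rate goes
    to zero (un-yielded foam behaves as a very viscous solid-like medium).
    Parameter values here are placeholders for illustration.
    """
    if gamma_dot <= 0.0:
        return mu0
    return min(mu0, tau0 / gamma_dot + k * gamma_dot ** (n - 1.0))
```

Because n < 1, the apparent viscosity drops steeply in the strongly sheared region near the rotating cone wall, which is what lets the cone pump and break the foam while the bulk stays nearly rigid.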
Procedia PDF Downloads 206
8749 Cognitive Dissonance in Robots: A Computational Architecture for Emotional Influence on the Belief System
Authors: Nicolas M. Beleski, Gustavo A. G. Lugo
Abstract:
Robotic agents are taking on more, and increasingly important, roles in society. In order to make these robots and agents more autonomous and efficient, their systems have grown considerably complex and convoluted. This growth in complexity has led researchers to investigate ways to explain the AI behavior behind these systems, in search of more trustworthy interactions. A current problem in explainable AI is understanding the inner workings of the logic inference process and how to conduct a sensitivity analysis of the process of valuation and alteration of beliefs. In a social human-robot interaction (HRI) setup, theory of mind is crucial to easing the intentionality gap; to achieve it, we should be able to infer over observed human behaviors, such as cases of cognitive dissonance. One specific case inspired by human cognition is the role emotions play in our belief system and the effects caused when observed behavior does not match the expected outcome. In such scenarios, emotions can make a person wrongly assume the antecedent P for an observed consequent Q and, as a result, incorrectly assert that P is true. This form of cognitive dissonance, where an unproven cause is taken as truth, induces changes in the belief base that can directly affect future decisions and actions. If we aim to be inspired by human thought in order to apply levels of theory of mind to these artificial agents, we must find the conditions to replicate these observable cognitive mechanisms. To this end, a computational architecture is proposed to model the modulating effect emotions have on the belief system, on the logic inference process, and consequently on the decision making of an agent. To validate the model, an experiment based on the prisoner's dilemma is currently under development.
The hypothesis to be tested involves two main points: how emotions, modeled as internal argument-strength modulators, can alter inference outcomes, and how explainable outcomes can be produced under specific forms of cognitive dissonance.
Keywords: cognitive architecture, cognitive dissonance, explainable AI, sensitivity analysis, theory of mind
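As a toy illustration of the mechanism described, not the proposed architecture itself, emotion-modulated abduction could be sketched as an argument-strength modulator applied before an acceptance threshold: observing Q raises belief in P, and sufficient emotional arousal can push a weakly supported antecedent past the threshold. All names, formulas, and values here are hypothetical.

```python
def infer(support, arousal, threshold=0.7):
    """Toy emotion-modulated inference (hypothetical model, for illustration).

    support   : evidential strength of antecedent P given observed Q (0..1)
    arousal   : emotional modulation factor (0 = calm)
    Returns (accepted, modulated_strength). Cognitive dissonance of the
    kind described above corresponds to acceptance driven by arousal
    when the evidential support alone would fall below the threshold.
    """
    modulated = min(1.0, support * (1.0 + arousal))
    return modulated >= threshold, modulated
```

In this toy, a sensitivity analysis amounts to sweeping the arousal parameter and recording where the acceptance decision flips.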
Procedia PDF Downloads 132
8748 Isolation and Transplantation of Hepatocytes in an Experimental Model
Authors: Inas Raafat, Azza El Bassiouny, Waldemar L. Olszewsky, Nagui E. Mikhail, Mona Nossier, Nora E. I. El-Bassiouni, Mona Zoheiry, Houda Abou Taleb, Noha Abd El-Aal, Ali Baioumy, Shimaa Attia
Abstract:
Background: Orthotopic liver transplantation is an established treatment for patients with severe acute and end-stage chronic liver disease. The shortage of donor organs continues to be the rate-limiting factor for liver transplantation throughout the world. Hepatocyte transplantation is a promising treatment for several liver diseases and can also be used as a "bridge" to liver transplantation in cases of liver failure. Aim of the work: This study was designed to develop a highly efficient protocol for the isolation and transplantation of hepatocytes in an experimental Lewis rat model, to provide satisfactory guidelines for future application in humans. Materials and Methods: Hepatocytes were isolated from the liver by the double perfusion technique, and bone marrow cells were isolated by centrifugation of the shafts of the tibia and femur of donor Lewis rats. Recipient rats were subjected to a sub-lethal dose of irradiation 2 days before transplantation. In a laparotomy operation, the spleen was injected with freshly isolated hepatocytes, and bone marrow cells were injected intravenously. The animals were sacrificed 45 days later, and splenic sections were prepared and stained with H&E, PAS, AFP, and Prox1. Results: The data obtained from this study showed that the double perfusion technique is successful in the separation of hepatocytes with respect to cell number and viability. The method used for bone marrow cell separation also gave excellent results regarding cell number and viability. Intrasplenic engraftment of hepatocytes and liver tissue formation within the splenic tissue were found in 70% of cases. Hematoxylin and eosin stained splenic sections from 7 rats showed sheets and clusters of cells among the splenic tissues. Periodic acid-Schiff stained splenic sections from 7 rats showed clusters of hepatocytes with intensely stained pink cytoplasmic granules denoting the presence of glycogen.
Splenic sections from 7 rats stained with anti-α-fetoprotein antibody showed brownish cytoplasmic staining of the hepatocytes, denoting positive expression of AFP. Splenic sections from 7 rats stained with anti-Prox1 showed brownish nuclear staining of the hepatocytes, denoting positive expression of the Prox1 gene in these cells. Positive expression of the Prox1 gene was also detected in lymphocyte aggregations in the spleens. Conclusions: Isolation of liver cells by the double perfusion technique using collagenase buffer is a reliable method with a very satisfactory yield in terms of cell number and viability. The intrasplenic route of transplantation of freshly isolated liver cells in an immunocompromised model gave good results regarding cell engraftment and tissue formation. Further studies are needed to assess the function of the engrafted hepatocytes by measuring prothrombin time and serum albumin and bilirubin levels.
Keywords: Lewis rats, hepatocytes, BMCs, transplantation, AFP, Prox1
Procedia PDF Downloads 317
8747 Liposome Loaded Polysaccharide Based Hydrogels: Promising Delayed Release Biomaterials
Authors: J. Desbrieres, M. Popa, C. Peptu, S. Bacaita
Abstract:
Because of their favorable properties (non-toxicity, biodegradability, mucoadhesivity, etc.), polysaccharides have been studied as biomaterials and as pharmaceutical excipients in drug formulations. These formulations may be produced in a wide variety of forms, including hydrogels, hydrogel-based particles (or capsules), films, etc. In such formulations, the polysaccharide-based materials are able to provide local delivery of loaded therapeutic agents, but this delivery can be rapid and not easily controllable in time due, in particular, to the burst effect. This leads to a loss of drug efficiency and lifetime. To overcome the consequences of the burst effect, systems involving liposomes incorporated into polysaccharide hydrogels appear as promising materials in tissue engineering, regenerative medicine, and drug loading systems. Liposomes are spherical self-closed structures composed of curved lipid bilayers, which enclose part of the surrounding solvent within their structure. Their simplicity of production, their biocompatibility, their size and composition similar to those of cells, the possibility of size adjustment for specific applications, and their ability to load hydrophilic and/or hydrophobic drugs make them a revolutionary tool in nanomedicine and the biomedical domain. Drug delivery systems were developed as hydrogels containing chitosan or carboxymethylcellulose (CMC) as the polysaccharide and gelatin (GEL) as the polypeptide, with phosphatidylcholine or phosphatidylcholine/cholesterol liposomes able to accurately control this delivery, without any burst effect. Hydrogels based on CMC were covalently crosslinked using glutaraldehyde, whereas chitosan-based hydrogels were doubly crosslinked (ionically, using sodium tripolyphosphate or sodium sulphate, and covalently, using glutaraldehyde). It has been proven that the liposome integrity is well protected during the crosslinking procedure that forms the film network. Calcein was used as a model active substance for the delivery experiments.
Multilamellar vesicles (MLVs) and small unilamellar vesicles (SUVs) were prepared and compared. The liposomes are well distributed throughout the whole area of the film, and the vesicle distribution is equivalent (for both types of liposomes evaluated) on the film surface as well as deeper (100 microns) within the film matrix. An obvious decrease of the burst effect was observed in the presence of liposomes, as well as a uniform increase of calcein release that continues even at long time scales: the liposomes act as an extra barrier to calcein release. Systems containing MLVs release higher amounts of calcein than systems containing SUVs, although these liposomes are more stable in the matrix and diffuse with difficulty. This difference comes from the higher quantity of calcein present within the MLVs, in relation to their size. Modeling of the release kinetics curves was performed, and the release of hydrophilic drugs may be described by a multi-scale mechanism characterized by four distinct phases, each of them described by a different kinetics model (Higuchi equation, Korsmeyer-Peppas model, etc.). Knowledge of such models will be a very interesting tool for designing new formulations for tissue engineering, regenerative medicine, and drug delivery systems.
Keywords: controlled and delayed release, hydrogels, liposomes, polysaccharides
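For a phase described by the Korsmeyer-Peppas law, Mt/M∞ = k·tⁿ, the parameters can be fitted by linear least squares in log-log space (the law is usually applied to the first ~60% of release). The sketch below is a generic illustration run on synthetic data, not the study's calcein release curves.

```python
import math

def korsmeyer_peppas_fit(t, frac):
    """Fit M_t/M_inf = k * t**n by linear regression of log(frac) on log(t).

    Returns (k, n). For thin films, n <= 0.5 is commonly read as Fickian
    diffusion and larger n as anomalous transport; interpret only within
    the model's validity range (early fractional release).
    """
    xs = [math.log(ti) for ti in t]
    ys = [math.log(fi) for fi in frac]
    npts = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    n = (npts * sxy - sx * sy) / (npts * sxx - sx * sx)   # slope = exponent n
    k = math.exp((sy - n * sx) / npts)                    # intercept -> k
    return k, n
```

Fitting each of the four phases separately with its own model (Higuchi, Korsmeyer-Peppas, etc.) is what characterizes the multi-scale mechanism described above.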
Procedia PDF Downloads 226
8746 Using Linear Logistic Regression to Evaluation the Patient and System Delay and Effective Factors in Mortality of Patients with Acute Myocardial Infarction
Authors: Firouz Amani, Adalat Hoseinian, Sajjad Hakimian
Abstract:
Background: Death due to myocardial infarction (MI) often occurs during the first hours after the onset of symptoms, so timely arrival at the hospital for the necessary treatment can be effective in decreasing the mortality rate. The aim of this study was to investigate the impact of various factors on the mortality of MI patients using linear logistic regression. Materials and Methods: In this case-control study, all patients with acute MI who were referred to the Ardabil city hospital were studied. All patients who died were considered the case group (n=27), and 27 matched patients without acute MI were selected as a control group. Data were collected for all patients in both groups with the same checklist and then analyzed with SPSS version 24 using statistical methods. We used a linear logistic regression model to determine the factors affecting the mortality of MI patients. Results: The mean age of patients in the case group was significantly higher than in the control group (75.1±11.7 vs. 63.1±11.6, p=0.001). A history of non-cardiac diseases was significantly more frequent in the case group, at 44.4%, than in the control group, at 7.4% (p=0.002). The proportion of performed PCIs in the case group, 40.7%, was significantly lower than in the control group, 74.1% (p=0.013). The time between hospital admission and PCI in the case group, 110.9 min, was significantly longer than in the control group, 56 min (p=0.001). The mean delay from the onset of symptoms to hospital admission (patient delay) and the mean delay from hospital admission to treatment (system delay) were similar between the two groups. Using the logistic regression model, we found that a history of non-cardiac diseases (OR=283) and the number of performed PCIs (OR=24.5) had a significant impact on the mortality of MI patients compared to the other factors.
Conclusion: The results of this study showed that, of all the studied factors, the number of performed PCIs, a history of non-cardiac illness, and the interval between the onset of symptoms and the performed PCI have a significant relation with the mortality of MI patients; the other factors were not significant. Further studies with larger samples, investigating other factors such as smoking, weather, etc., are recommended.
Keywords: acute MI, mortality, heart failure, arrhythmia
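As an illustration of the effect measure reported, a crude (unadjusted) odds ratio with a Woolf 95% confidence interval can be computed from a 2×2 table. The counts used in the test are reconstructed from the reported percentages of non-cardinal disease history (44.4% of 27 cases ≈ 12, 7.4% of 27 controls ≈ 2); this crude OR is not the adjusted OR=283 produced by the multivariable logistic regression model.

```python
import math

def odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    """Crude odds ratio from a 2x2 table with a Woolf (log-normal) 95% CI.

    OR = (a*d)/(b*c); SE(log OR) = sqrt(1/a + 1/b + 1/c + 1/d).
    This is the unadjusted, single-factor estimate, unlike the
    adjusted ORs from a multivariable logistic regression.
    """
    or_ = (exposed_cases * unexposed_controls) / (unexposed_cases * exposed_controls)
    se = math.sqrt(1 / exposed_cases + 1 / unexposed_cases
                   + 1 / exposed_controls + 1 / unexposed_controls)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)
```

With such small cell counts the confidence interval is very wide, which is one reason larger samples are recommended.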
Procedia PDF Downloads 122
8745 Predicting College Students’ Happiness During COVID-19 Pandemic; Be optimistic and Well in College!
Authors: Michiko Iwasaki, Jane M. Endres, Julia Y. Richards, Andrew Futterman
Abstract:
The present study aimed to examine college students’ happiness during the COVID-19 pandemic. Using online survey data from 96 college students in the U.S., a regression analysis was conducted to predict college students’ happiness. The results indicated that a four-predictor model (optimism, college students’ subjective well-being, coronavirus stress, and spirituality) explained 57.9% of the variance in students’ subjective happiness, F(4,77)=26.428, p<.001, R²=.579, 95% CI [.41, .66]. The study suggests the importance of learned optimism among college students.
Keywords: COVID-19, optimism, spirituality, well-being
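The variance-explained figure reported (R² = .579) is the coefficient of determination of the fitted regression. A minimal sketch of its computation from observed and model-predicted scores follows; the data in the test are placeholders, not the survey data.

```python
def r_squared(y, y_hat):
    """Coefficient of determination: 1 - SS_res / SS_tot.

    y     : observed outcome values (e.g. subjective happiness scores)
    y_hat : values predicted by the fitted regression model
    R^2 is the share of outcome variance explained by the predictors.
    """
    ybar = sum(y) / len(y)
    ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, y_hat))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot
```

Predicting every case at the mean gives R² = 0, and perfect prediction gives R² = 1; the reported .579 sits well toward the explanatory end for survey data.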
Procedia PDF Downloads 226