2520 Numerical Tools for Designing Multilayer Viscoelastic Damping Devices
Authors: Mohammed Saleh Rezk, Reza Kashani
Abstract:
Auxiliary damping has gained popularity in recent years, especially in structures such as mid- and high-rise buildings. Distributed damping systems (typically viscous and viscoelastic) or reactive damping systems (such as tuned mass dampers) are the two types of damping choices for such structures. Distributed VE dampers are normally configured as braces or damping panels, which are engaged through relatively small movements between the structural members when the structure sways under wind or earthquake loading. In addition to being used as stand-alone dampers in distributed damping applications, VE dampers can also be incorporated into the suspension element of tuned mass dampers (TMDs). In this study, analytical and numerical tools for the modeling and design of multilayer viscoelastic damping devices, to be used in damping the vibration of large structures, are developed. Considering the limitations of analytical models for the synthesis and analysis of realistic, large, multilayer VE dampers, the emphasis of the study has been on numerical modeling using the finite element method. To verify the finite element models, a two-layer VE damper using ½ inch synthetic viscoelastic urethane polymer was built and tested, and the measured parameters were compared with the numerically predicted ones. The numerically predicted and experimentally evaluated damping and stiffness of the test VE damper were in very good agreement. The effectiveness of VE dampers in adding auxiliary damping to large structures is demonstrated numerically by chevron-bracing one such damper into the model of a massive frame subject to an abrupt lateral load. A comparison of the responses of the frame to the aforementioned load, without and with the VE damper, clearly shows the efficacy of the damper in lowering the extent of frame vibration.
Keywords: viscoelastic, damper, distributed damping, tuned mass damper
Procedia PDF Downloads 111
2519 Computer-Aided Depression Screening: A Literature Review on Optimal Methodologies for Mental Health Screening
Authors: Michelle Nighswander
Abstract:
Suicide can be a tragic response to mental illness. It is difficult for people to disclose or discuss suicidal impulses. The stigma surrounding mental health can create a reluctance to seek help for mental illness. Patients may feel pressure to exhibit a socially desirable demeanor rather than reveal these issues, especially if they sense their healthcare provider is pressed for time or if they do not have an extensive history with that provider. Overcoming these barriers can be challenging. Although there are several validated depression and suicide risk instruments, the varying processes used to administer these tools may impact the truthfulness of the responses. A literature review was conducted to find evidence of the impact of the environment on the accuracy of depression screening. Many investigations do not describe the environment, and fewer studies use a comparison design. However, three studies demonstrated that computerized self-reporting might be more likely to elicit truthful and accurate responses, due to increased privacy when responding compared to a face-to-face interview. These studies showed that patients reported positive reactions to computerized screening for other stigmatizing health conditions, such as alcohol use during pregnancy. Computerized self-screening for depression offers the possibility of more privacy and patient reflection, which could then send a targeted message of risk to the healthcare provider. This could potentially increase accuracy while also increasing time efficiency for the clinic. Considering the persistent effects of mental health stigma, how these screening questions are posed can impact patients' responses. This literature review analyzes trends in depression screening methodologies, the impact of setting on the results, and how this may assist in overcoming one barrier caused by stigma.
Keywords: computerized self-report, depression, mental health stigma, suicide risk
Procedia PDF Downloads 134
2518 2D Ferromagnetism in Van der Waals Bonded Fe₃GeTe₂
Authors: Ankita Tiwari, Jyoti Saini, Subhasis Ghosh
Abstract:
For many years, researchers have been fascinated by the question of how material properties evolve as dimensionality is lowered. Early on, it was shown that the presence of a significant magnetic anisotropy might compensate for the lack of long-range (LR) magnetic order in a low-dimensional system (d < 3) with continuous symmetry, as proposed by Hohenberg, Mermin and Wagner (HMW). Strong magnetic anisotropy allows an LR magnetic order to stabilize in two dimensions (2D) even in the presence of the strong thermal fluctuations that are otherwise responsible for the absence of Heisenberg ferromagnetism in 2D. Van der Waals (vdW) ferromagnets, including CrI₃, CrTe₂, Cr₂X₂Te₆ (X = Si and Ge) and Fe₃GeTe₂, offer a nearly ideal platform for studying ferromagnetism in 2D. Fe₃GeTe₂ is the subject of extensive investigation due to its tunable magnetic properties, high Curie temperature (Tc ~ 220 K), and perpendicular magnetic anisotropy; these appealing features have driven many applications in spintronics device development. Although it is known that LR interactions are necessary to get around the HMW theorem in 2D, an experimental realization of Heisenberg 2D ferromagnetism remains elusive in condensed matter systems. Here, we show that Fe₃GeTe₂ hosts both localized and delocalized spins, resulting in itinerant and local-moment ferromagnetism. The presence of LR itinerant interactions helps stabilize the Heisenberg ferromagnet in 2D. With the help of Rhodes-Wohlfarth (RW) and generalized RW-based analysis, Fe₃GeTe₂ is shown to be a 2D ferromagnet with itinerant magnetism that can be modulated by an external magnetic field. Hence, the presence of both local-moment and itinerant magnetism has made this system interesting for research in low dimensions. We have also rigorously performed a critical analysis using an improvised method. We show that the variable critical exponents are typical signatures of 2D ferromagnetism in Fe₃GeTe₂.
The spontaneous magnetization exponent β changes the universality class from mean-field to 2D Heisenberg with field. We have also confirmed the range of interaction via renormalization group (RG) theory. According to RG theory, Fe₃GeTe₂ is a 2D ferromagnet with LR interactions.
Keywords: Van der Waals ferromagnet, 2D ferromagnetism, phase transition, itinerant ferromagnetism, long-range order
Procedia PDF Downloads 77
2517 A Psychosocial Impact of the COVID-19 Pandemic Among Frontline Workers and General Populations in Kathmandu
Authors: Nabin Prasad Joshi
Abstract:
A new variant of the coronavirus family, found in the Wuhan city market of China, has caused serious harm to human beings. After the WHO declared COVID-19 a pandemic, people adopted preventive measures against the infectious disease according to WHO guidelines, including social distancing, isolation, quarantine, lockdown, sanitation, and masking. During this time, the researcher observed the difficulties of cultivating the new normal among people in Nepal. People have perceived the coronavirus differently; general populations and frontline workers hold different perceptions of it. The researcher therefore measured the psychosocial impact of the COVID-19 pandemic on frontline workers and general populations in the Kathmandu valley. The total number of sample units for this research is 82, comprising 52 from the general population and 30 frontline workers, selected through convenience sampling and purposive sampling, respectively. This research is based on a descriptive and exploratory design. The Nepali version of the DASS-21 was the main data collection tool for measuring depression, anxiety, and stress; a psychosocial checklist, key-informant interviews, and a case study were also conducted. Quantitative data were analyzed with the help of Excel, and qualitative data through thematic analysis. The study has shown that the occurrence of psychosocial issues among frontline workers is greater than in general populations. It was found that informants with higher education status had greater psychosocial issues in comparison to those with low education status. In the context of a pandemic, family and friends' support can function as a protective factor when present at adequate levels.
Keywords: anxiety, depression, isolation, lockdown
Procedia PDF Downloads 82
2516 Anti-Phosphorylcholine T Cell Dependent Antibody
Authors: M. M. Rahman, A. Liu, A. Frostegard, J. Frostegard
Abstract:
The human immune system plays an essential role in cardiovascular disease (CVD) and atherosclerosis. Our earlier studies showed that major immunocompetent cells, including T cells, are activated by the phosphorylcholine epitope. Further, we have determined for the first time in a clinical cohort that antibodies against phosphorylcholine (anti-PC) are negatively and independently associated with the development of atherosclerosis and thus with a low risk of cardiovascular diseases. It is still unknown whether activated T cells play a role in anti-PC production. Here we aim to clarify the role of T cells in anti-PC production. B cells alone, or together with CD3, CD4 or CD8 T cells, were cultured in polystyrene plates to examine anti-PC IgM production. In addition to the mixed B cell and CD3 T cell culture, B cells with CD3 T cells were also cultured in transwell co-culture plates. Further, B cells alone and mixed B and CD3 T cell cultures, with or without anti-HLA 2 antibody, were cultured for 6 days. Anti-PC IgM was detected by ELISA in independent experiments. More than 8-fold higher levels of anti-PC IgM were detected by ELISA in mixed B and CD3 T cell cultures in comparison to B cells alone. After the co-culture of B and CD3 T cells in transwell plates, there were no increased antibody levels, indicating that B and T cells need to interact to augment anti-PC IgM production. Furthermore, anti-PC IgM was abolished by anti-HLA 2 blocking antibody in the mixed B and CD3 T cell culture. In addition, the lack of increased anti-PC IgM in the mixed B and CD8 T cell culture, and the increased levels of anti-PC in the mixed B and CD4 T cell culture, support the role of helper T cells in anti-PC IgM production. Atherosclerosis is a major cause of cardiovascular diseases, and anti-PC IgM is a protection marker for atherosclerosis development. Understanding the mechanism involved in anti-PC IgM regulation could play an important role in strategies to raise anti-PC IgM.
Studies suggest that anti-PC is a T cell-independent antibody, but our study shows a major role of T cells in anti-PC IgM production. Activation of helper T cells by immunization could be a possible mechanism for raising anti-PC levels.
Keywords: anti-PC, atherosclerosis, cardiovascular diseases, phosphorylcholine
Procedia PDF Downloads 345
2515 An Analysis on Aid for Migrants: A Descriptive Analysis on Official Development Assistance During the Migration Crisis
Authors: Elena Masi, Adolfo Morrone
Abstract:
Migration has recently become a mainstream development sector and is currently at the forefront of institutional and civil society contexts. However, no consensus exists on how the link between migration and development operates, that is, how development is related to migration and how migration can promote development. On one hand, Official Development Assistance is recognized to be one of the levers of development. On the other hand, the debate is focusing on what should be the scope of aid programs targeting migrant groups and, in general, the migration process. This paper provides a descriptive analysis of how development aid for migration was allocated in the recent past, focusing on the actions that were funded and implemented by the international donor community. In the absence of an internationally shared methodology for defining the boundaries of development aid on migration, the analysis is based on lexical hypotheses applied to the title or to the short description of initiatives funded by several Organization for Economic Co-operation and Development (OECD) countries. Moreover, the research describes and quantifies aid flows for each country according to different criteria. The terms migrant and refugee are used to identify the projects in accordance with the most internationally agreed definitions, and only actions in countries of transit or of origin are considered eligible, thus excluding the amounts spent on refugees in donor countries. The results show that the percentage of projects targeting migrants, in terms of amount, followed a growing trend from 2009 to 2016 in several European countries and is positively correlated with the flows of migrants. Distinguishing between programs targeting migrants and programs targeting refugees, some specific national features emerge more clearly. A focus is devoted to actions targeting the root causes of migration, showing an inter-sectoral approach in international aid allocation.
The analysis gives some tentative solutions to the lack of consensus on language around migration and development aid, and emphasizes the need for an internationally agreed criterion for identifying programs targeting both migrants and refugees, to make action more transparent and to develop effective strategies at the global level.
Keywords: migration, official development assistance, ODA, refugees, time series
Procedia PDF Downloads 135
2514 Enhancing Single Channel Minimum Quantity Lubrication through Bypass Controlled Design for Deep Hole Drilling with Small Diameter Tool
Authors: Yongrong Li, Ralf Domroes
Abstract:
Due to significant energy savings, enablement of higher machining speeds, and environmentally friendly features, Minimum Quantity Lubrication (MQL) has been used efficiently for many machining processes. However, deep hole drilling with a small tool diameter (D < 5 mm) and a long tool (length L > 25xD) has always been a bottleneck for a single channel MQL system. Single channel MQL, based on the Venturi principle, suffers from an insufficient oil quantity caused by the dropped pressure difference during the deep hole drilling process. In this paper, a system concept based on a bypass design is explored for its ability to dynamically reach the required pressure difference between the air inlet and the inside of the aerosol generator, so that the volume of oil demanded by deep hole drilling can be generated and delivered to the tool tips. The system concept has been investigated in static and dynamic laboratory testing. In the static test, the oil volumes with and without bypass control were measured, showing an oil quantity increase of up to 1000%. A spray pattern test demonstrated the differences in aerosol particle size, aerosol distribution and reaction time between the single channel and the bypass controlled single channel MQL systems. A dynamic trial machining test of deep hole drilling (drill tool D = 4.5 mm, L = 40xD) was carried out with the proposed system on a difficult-to-machine material, AlSi7Mg. The tool wear along 100 meters of drilling was tracked and analyzed. The result shows that single channel MQL with a bypass control can overcome the limitation and enhance deep hole drilling with a small tool. The optimized combination of inlet air pressure and bypass control results in high quality oil delivery to the tool tips with a uniform and continuous aerosol flow.
Keywords: deep hole drilling, green production, Minimum Quantity Lubrication (MQL), near dry machining
Procedia PDF Downloads 209
2513 Bone Mineral Density in Long-Living Patients with Coronary Artery Disease
Authors: Svetlana V. Topolyanskaya, Tatyana A. Eliseeva, Olga N. Vakulenko, Leonid I. Dvoretski
Abstract:
Introduction: Limited data are available on osteoporosis in centenarians. Therefore, we evaluated bone mineral density in long-living patients with coronary artery disease (CAD). Methods: 202 patients hospitalized with CAD were enrolled in this cross-sectional study. The patients' age ranged from 90 to 101 years. The majority of study participants (64.4%) were women. The main exclusion criteria were any disease or medication that can lead to secondary osteoporosis. Bone mineral density (BMD) was measured by dual-energy X-ray absorptiometry. Results: Normal lumbar spine BMD was observed in 40.9%, osteoporosis in 26.9%, and osteopenia in 32.2% of patients. Normal proximal femur BMD values were observed in 21.3%, osteoporosis in 39.9%, and osteopenia in 38.8% of patients. Normal femoral neck BMD was registered in only 10.4% of patients; osteoporosis was observed in 60.4% and osteopenia in 29.2%. A significant positive correlation was found between all BMD values and the body mass index of patients (p < 0.001). A positive correlation was registered between BMD values and serum uric acid (p = 0.0005). The likelihood of normal BMD values with hyperuricemia increased 3.8 times compared to patients with normal uric acid, who more often have osteoporosis (Odds Ratio = 3.84; p = 0.009). A positive correlation between triglyceride levels and T-score (p = 0.02), but a negative correlation between BMD and HDL-cholesterol (p = 0.02), were revealed. A negative correlation between frailty severity and BMD values (p = 0.01) was found. A positive correlation between BMD values and the functional abilities of patients, assessed using the Barthel index (r = 0.44; p = 0.000002) and the IADL scale (r = 0.36; p = 0.00008), was registered. Fractures in history were observed in 27.6% of patients. Conclusions: The study results indicate some features of BMD in long-livers.
In the study group, significant relationships were found between bone mineral density on the one hand and patients' functional abilities on the other. It is advisable to further study the state of bone tissue in long-livers in a larger sample of patients.
Keywords: osteoporosis, bone mineral density, centenarians, coronary artery disease
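The odds ratio reported above (OR = 3.84 for normal BMD given hyperuricemia) comes from a 2x2 cross-tabulation. A minimal sketch of the computation follows; the cell counts are hypothetical, chosen only so that the ratio works out to 3.84, since the abstract does not report the underlying table:

```python
def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table:
    a = exposed with outcome, b = exposed without outcome,
    c = unexposed with outcome, d = unexposed without outcome."""
    return (a * d) / (b * c)

# Hypothetical counts (NOT the study's data): hyperuricemia vs. normal
# uric acid, cross-tabulated against normal vs. abnormal BMD.
print(odds_ratio(24, 10, 25, 40))  # -> 3.84
```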
Procedia PDF Downloads 148
2512 Psychometric Properties of the EQ-5D-3L and EQ-5D-5L Instruments for Health Related Quality of Life Measurement in Indonesian Population
Authors: Dwi Endarti, Susi A. Kristina, Rizki Noorizzati, Akbar E. Nugraha, Fera Maharani, Kika A. Putri, Asninda H. Azizah, Sausanzahra Angganisaputri, Yunisa Yustikarini
Abstract:
Cost utility analysis is the most recommended pharmacoeconomic method since it allows wide comparison of cost-effectiveness results from different interventions. The method uses the outcome of quality-adjusted life years (QALY) or disability-adjusted life years (DALY). Measurement of QALY requires data on utility and life years gained. Utility is measured with an instrument for quality of life measurement such as the EQ-5D. Recently, the EQ-5D has become available in two versions, the EQ-5D-3L and the EQ-5D-5L. This study aimed to compare the EQ-5D-3L and EQ-5D-5L to examine the most suitable version for the Indonesian population. This study was an observational study employing a cross-sectional approach. Data on quality of life measured with the EQ-5D-3L and EQ-5D-5L were collected from several population groups: respondents with chronic diseases, respondents with acute diseases, and respondents from the general population (without illness) in Yogyakarta Municipality, Indonesia. Convenience samples of hypertension patients (83), diabetes mellitus patients (80), osteoarthritis patients (47), acute respiratory tract infection patients (81), cephalgia patients (43), dyspepsia patients (42), and respondents from the general population (293) were recruited in this study. Responses on the 3L and 5L versions of the EQ-5D were compared by examining psychometric properties including agreement, internal consistency, ceiling effect, and convergent validity. Based on these psychometric property tests, the EQ-5D-5L tended to have better psychometric properties than the EQ-5D-3L. Future health-related quality of life (HRQOL) measurements for pharmacoeconomic studies in Indonesia should therefore apply the EQ-5D-5L.
Keywords: EQ-5D, health-related quality of life, Indonesian population, psychometric properties
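One of the psychometric properties compared above, the ceiling effect, is simply the share of respondents reporting the best possible health state (coded "11111" in both EQ-5D versions). A minimal sketch with made-up response profiles (the study's actual responses are not reproduced here):

```python
def ceiling_effect(states, full_health="11111"):
    """Proportion of respondents reporting the best possible EQ-5D state."""
    return sum(s == full_health for s in states) / len(states)

# Hypothetical responses: the 5L version typically shows a lower ceiling,
# because its five severity levels discriminate mild problems better.
eq5d_3l = ["11111"] * 7 + ["11211", "12111", "11112"]
eq5d_5l = ["11111"] * 5 + ["11211", "21111", "11121", "11112", "12111"]
print(ceiling_effect(eq5d_3l), ceiling_effect(eq5d_5l))  # -> 0.7 0.5
```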
Procedia PDF Downloads 483
2511 Investigating the Dynamic Plantar Pressure Distribution in Individuals with Multiple Sclerosis
Authors: Hilal Keklicek, Baris Cetin, Yeliz Salci, Ayla Fil, Umut Altinkaynak, Kadriye Armutlu
Abstract:
Objectives and Goals: Spasticity is a common symptom characterized by a velocity-dependent increase in tonic stretch reflexes (muscle tone) in patients with multiple sclerosis (MS). Hypertonic muscles affect normal plantigrade contact by disturbing the accommodation of the foot to the ground while walking. It is important to know the differences between healthy and neurologic foot features for the management of spasticity-related deformities and/or the determination of rehabilitation purposes and contents. This study was planned with the aim of investigating the dynamic plantar pressure distribution in individuals with MS and determining the differences from healthy individuals (HI). Methods: Fifty-five individuals with MS (108 feet with spasticity according to the Modified Ashworth Scale) and 20 HI (40 feet) were the participants of the study. A dynamic pedobarograph was utilized for the evaluation of dynamic loading parameters. Participants were instructed to walk at their self-selected speed seven times to eliminate the learning effect. The parameters were divided into two categories, maximum loading pressure (N/cm2) and time of maximum pressure (ms), collected from the heel medial, heel lateral, midfoot, and the heads of the first, second, third, fourth and fifth metatarsal bones. Results: There were differences between the groups in the maximum loading pressure of the heel medial (p < .001), heel lateral (p < .001), midfoot (p = .041) and fifth metatarsal areas (p = .036). There were also differences between the groups in the time of maximum pressure of all metatarsal areas, the midfoot, the heel medial and the heel lateral (p < .001) in favor of HI. Conclusions: The study provided basic data about foot pressure distribution in individuals with MS. The results primarily showed that spasticity of the lower extremity muscles disrupted posteromedial foot loading.
Secondarily, the results showed that spasticity leads to inappropriate timing during load transfer from the hindfoot to the forefoot.
Keywords: multiple sclerosis, plantar pressure distribution, gait, norm values
Procedia PDF Downloads 322
2510 Off-Line Text-Independent Arabic Writer Identification Using Optimum Codebooks
Authors: Ahmed Abdullah Ahmed
Abstract:
The task of recognizing the writer of a handwritten text has been an attractive research problem in the document analysis and recognition community, with applications in handwriting forensics, paleography, document examination and handwriting recognition. This research presents an automatic method for writer recognition from digitized images of unconstrained writings. Although a great effort has been made by previous studies to come up with various methods, their performances, especially in terms of accuracy, fall short, and room for improvement is still wide open. The proposed technique employs optimal codebook-based writer characterization, where each writing sample is represented by a set of features computed from two codebooks, beginning and ending. Unlike most of the classical codebook-based approaches, which segment the writing into graphemes, this study is based on fragmenting particular areas of writing, namely the beginning and ending strokes. The proposed method starts with contour detection to extract significant information from the handwriting; curve fragmentation is then employed to divide the beginning and ending zones of the handwriting into small fragments. Similar fragments of beginning strokes are grouped together to create the beginning cluster, and similarly, the ending strokes are grouped to create the ending cluster. These two clusters lead to the development of two codebooks (beginning and ending) by choosing the center of every group of similar fragments. Writings under study are then represented by computing the probability of occurrence of codebook patterns, and this probability distribution is used to characterize each writer. Two writings are compared by computing the distance between their respective probability distributions. The evaluations were carried out on the standard ICFHR dataset of 206 writers, using the beginning and ending codebooks separately.
Finally, the ending codebook achieved the highest identification rate of 98.23%, which is the best result so far on the ICFHR dataset.
Keywords: off-line text-independent writer identification, feature extraction, codebook, fragments
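The writer characterization step described above (probability of occurrence of codebook patterns, compared via a distance between distributions) can be sketched as follows. The fragment labels are illustrative, and the Manhattan distance is an assumption, since the abstract does not name the metric used:

```python
from collections import Counter

def fragment_histogram(fragment_labels, codebook):
    """Probability of occurrence of each codebook pattern in one writing sample."""
    counts = Counter(fragment_labels)
    n = len(fragment_labels)
    return [counts.get(pattern, 0) / n for pattern in codebook]

def manhattan(p, q):
    """Distance between two writers' probability distributions (assumed metric)."""
    return sum(abs(a - b) for a, b in zip(p, q))

codebook = ["c0", "c1", "c2"]        # cluster centers of similar stroke fragments
writer_a = ["c0", "c0", "c1", "c0"]  # fragments extracted from one writer's sample
writer_b = ["c2", "c2", "c1", "c2"]  # fragments from another writer's sample
h_a = fragment_histogram(writer_a, codebook)
h_b = fragment_histogram(writer_b, codebook)
print(manhattan(h_a, h_b))  # larger distance -> likely different writers
```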
Procedia PDF Downloads 514
2509 Feasibility Study of Plant Design with Biomass Direct Chemical Looping Combustion for Power Generation
Authors: Reza Tirsadi Librawan, Tara Vergita Rakhma
Abstract:
The increasing demand for energy and concern over global warming are intertwined issues of critical importance. With the pressing need for clean, efficient and cost-effective energy conversion processes, an alternative clean energy source is needed. Biomass is one of the preferable options because it is clean and renewable. The efficiency of biomass conversion is constrained by biomass's relatively low energy density and high moisture content. This study presents the Biomass Direct Chemical Looping Combustion (BDCLC) process, an alternative bio-based process that has the potential to convert biomass through thermal cracking to produce electricity and CO2. The BDCLC process, using iron-based oxygen carriers, has been developed as a biomass conversion process with in-situ CO2 capture. The BDCLC system cycles oxygen carriers between two reactors, a reducer reactor and a combustor reactor, in order to convert the fuel for electric power generation. The reducer reactor features a unique design: a gas-solid counter-current moving bed configuration that achieves the reduction of Fe2O3 particles to a mixture of Fe and FeO while converting the fuel into CO2 and steam. The combustor reactor is a fluidized bed that oxidizes the reduced particles back to Fe2O3 with air. The oxidation of iron is an exothermic reaction, and the heat can be recovered for electricity generation. The plant design objective is to obtain 5 MW of electricity with reactor conditions of 900 °C and 2 atm for the reducer and 1200 °C and 16 atm for the combustor. We conduct process simulation and analysis to illustrate the individual reactor performance and the overall mass and energy management scheme of the BDCLC process, developed in Aspen Plus software. Process simulation is then performed based on the reactor performance data obtained in a multistage model.
Keywords: biomass, CO2 capture, direct chemical looping combustion, power generation
Procedia PDF Downloads 512
2508 A Bayesian Approach for Analyzing Academic Article Structure
Authors: Jia-Lien Hsu, Chiung-Wen Chang
Abstract:
Research articles may follow a simple and succinct structure of organizational patterns, called moves. For example, considering extended abstracts, we observe that an extended abstract usually consists of five moves: Background, Aim, Method, Results, and Conclusion. As another example, when publishing articles in PubMed, authors are encouraged to provide a structured abstract, which is an abstract with distinct and labeled sections (e.g., Introduction, Methods, Results, Discussions) for rapid comprehension. This paper introduces a method for the computational analysis of move structures (i.e., Background-Purpose-Method-Result-Conclusion) in the abstracts and introductions of research documents, in place of a manual, time-consuming and labor-intensive analysis process. In our approach, sentences in a given abstract and introduction are automatically analyzed and labeled with a specific move (i.e., B-P-M-R-C in this paper) to reveal their rhetorical status. As a result, it is expected that this automatic analytical tool for move structures will help non-native speakers or novice writers become aware of appropriate move structures and internalize the relevant knowledge to improve their writing. In this paper, we propose a Bayesian approach to determine move tags for research articles. The approach consists of two phases, a training phase and a testing phase. In the training phase, we build a Bayesian model based on a couple of given initial patterns and a corpus, a subset of CiteSeerX. In the beginning, the prior probability of the Bayesian model relies solely on the initial patterns. Subsequently, with respect to the corpus, we process each document one by one: extract features, determine tags, and update the Bayesian model iteratively. In the testing phase, we compare our results with tags manually assigned by experts.
In our experiments, the accuracy of the proposed approach reaches a promising 56%.
Keywords: academic English writing, assisted writing, move tag analysis, Bayesian approach
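A Bayesian sentence tagger of the kind described above can be sketched as a multinomial naive Bayes classifier with Laplace smoothing. The toy training sentences and move tags below are purely illustrative, not the paper's corpus, feature set, or update scheme:

```python
import math
from collections import Counter, defaultdict

class MoveTagger:
    """Naive Bayes tagger assigning a move tag (e.g., B/P/M/R/C) to a sentence."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)  # per-tag word frequencies
        self.tag_counts = Counter()              # tag priors
        self.vocab = set()

    def train(self, labeled_sentences):
        for words, tag in labeled_sentences:
            self.tag_counts[tag] += 1
            self.word_counts[tag].update(words)
            self.vocab.update(words)

    def predict(self, words):
        best, best_lp = None, -math.inf
        total = sum(self.tag_counts.values())
        v = len(self.vocab)
        for tag in self.tag_counts:
            lp = math.log(self.tag_counts[tag] / total)       # prior
            denom = sum(self.word_counts[tag].values()) + v   # Laplace smoothing
            for w in words:
                lp += math.log((self.word_counts[tag][w] + 1) / denom)
            if lp > best_lp:
                best, best_lp = tag, lp
        return best

tagger = MoveTagger()
tagger.train([
    ("the aim of this study".split(), "P"),
    ("we propose a new method".split(), "M"),
    ("results show high accuracy".split(), "R"),
])
print(tagger.predict("results show".split()))  # -> R
```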
Procedia PDF Downloads 334
2507 Comparison of Clinical Profiles of Patients Seen in a Women and Children Protection Unit in a Local Government Hospital in Makati, Philippines Before and During the COVID-19 Pandemic Between January 2018 to February 2020 and March 2020 to December 2021
Authors: Margaret Denise P. Del Rosario, Geraldine Alcantara
Abstract:
Background: The declaration of the COVID-19 pandemic has impacted hospital visits for child abuse cases, with fewer consults but more severe injuries. Objective: The study aims to identify the clinical profiles of patients seen in the Ospital ng Makati Women and Children Protection Unit before and during the pandemic. Design: A cross-sectional analytic study design through a review of records that underwent quantitative analysis. Results: 264 cases pre-pandemic and 208 cases during the pandemic were reviewed. The most reported cases were neglect, comprising 47% of the pre-pandemic cases and 68% of cases during the pandemic, with supervisory neglect most commonly reported. An equal distribution between males and females was seen among victims and alleged perpetrators. Both victims and alleged perpetrators during the pandemic were significantly younger than in the pre-pandemic period. Children belonging to larger family groups were commonly encountered, with most of them being the eldest among their siblings. Alleged perpetrators were mostly secondary graduates in both time periods. A significantly larger share of cases during the pandemic occurred at home. More patients required hospitalization during the pandemic period, at 37% compared to 23% of admissions prior to the pandemic. Furthermore, there was a three-fold increase in injuries sustained during the pandemic that required intensive care. Conclusion: The study reflects an increased severity of abuse-related injuries during the pandemic compared to pre-pandemic times. A significant increase in injuries requiring intensive care was also seen despite fewer reported cases.
Keywords: child abuse, COVID-19, violence against children, WCPU, neglect
Procedia PDF Downloads 58
2506 Approaching the Spatial Multi-Objective Land Use Planning Problems at Mountain Areas by a Hybrid Meta-Heuristic Optimization Technique
Authors: Konstantinos Tolidis
Abstract:
The mountains are amongst the most fragile environments in the world. The world's mountain areas cover 24% of the Earth's land surface and are home to 12% of the global population; a further 14% of the global population is estimated to live in their vicinity. As urbanization continues to increase in the world, mountains are also key centers for recreation and tourism, and their attraction is often heightened by their remarkably high levels of biodiversity. Because the features of mountain areas vary spatially (degree of development, human geography, socio-economic reality, relations of dependency and interaction with other areas and regions), spatial planning in these areas is a crucial process for preserving the natural, cultural and human environment and is one of the major processes of an integrated spatial policy. This research focuses on the spatial decision problem of land use allocation optimization, which is a common planning problem in mountain areas. Such decisions must be made not only on what to do and how much to do, but also on where to do it, adding a whole extra class of decision variables to the problem when spatial optimization is considered. The utility of optimization as a normative tool for spatial problems is widely recognized. However, it is very difficult for planners to quantify the weights of the objectives, especially when these are related to mountain areas. Furthermore, land use allocation optimization problems in mountain areas must be addressed by taking into account not only the general development objectives but also the spatial objectives (e.g., compactness, compatibility and accessibility). Therefore, the main objective of this research was to approach the land use allocation problem with a hybrid meta-heuristic optimization technique tailored to the spatial characteristics of mountain areas.
The results indicate that the proposed methodological approach is very promising and useful both for generating land use alternatives for further consideration in land use allocation decision-making and for supporting spatial management plans in mountain areas.
Keywords: multiobjective land use allocation, mountain areas, spatial planning, spatial decision making, meta-heuristic methods
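The abstract above does not detail its hybrid meta-heuristic, so the following is only a toy sketch of the general idea: a hill-climbing loop allocating land uses on a grid while optimizing a single illustrative spatial objective (compactness, counted as adjacent same-use cell pairs). The grid size, number of uses, and objective are all assumptions for illustration.

```python
import random

# Toy land-use allocation sketch (illustrative only; not the paper's method).
SIZE, USES, STEPS = 10, 3, 20000
random.seed(1)
grid = [[random.randrange(USES) for _ in range(SIZE)] for _ in range(SIZE)]

def compactness(g):
    """Count horizontally/vertically adjacent cell pairs sharing a land use."""
    score = 0
    for r in range(SIZE):
        for c in range(SIZE):
            if r + 1 < SIZE and g[r][c] == g[r + 1][c]:
                score += 1
            if c + 1 < SIZE and g[r][c] == g[r][c + 1]:
                score += 1
    return score

best = compactness(grid)
for _ in range(STEPS):
    r, c = random.randrange(SIZE), random.randrange(SIZE)
    old = grid[r][c]
    grid[r][c] = random.randrange(USES)   # propose a new use for one cell
    new = compactness(grid)
    if new >= best:                       # keep improvements (and ties)
        best = new
    else:
        grid[r][c] = old                  # revert worsening moves
print(best)  # well above the ~60 expected for a random 10x10, 3-use grid
```

A real multi-objective formulation would add development objectives and compatibility/accessibility terms to the score, which is where meta-heuristics such as simulated annealing or genetic algorithms become necessary.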
Procedia PDF Downloads 351
2505 Comparison of Radiation Dosage and Image Quality: Digital Breast Tomosynthesis vs. Full-Field Digital Mammography
Authors: Okhee Woo
Abstract:
Purpose: With increasing concern over individual radiation exposure doses, studies analyzing radiation dosage in breast imaging modalities are required. The aim of this study is to compare radiation dosage and image quality between digital breast tomosynthesis (DBT) and full-field digital mammography (FFDM). Methods and Materials: 303 patients (mean age 52.1 years) who underwent both DBT and FFDM were retrospectively reviewed. Radiation dosage data were obtained with the radiation dose scoring and monitoring program Radimetrics (Bayer HealthCare, Whippany, NJ). Entrance dose and mean glandular doses in each breast were obtained for both imaging modalities. To compare the image quality of DBT with two-dimensional synthesized mammogram (2DSM) and FFDM, lesion clarity was scored on a 5-point scale and the better modality of the two was selected. Interobserver performance was compared with kappa values, and diagnostic accuracy was compared using the McNemar test. The parameters of radiation dosage (entrance dose, mean glandular dose) and image quality were compared between the two modalities using the paired t-test and the Wilcoxon rank sum test. Results: For entrance dose and mean glandular doses for each breast, DBT had lower values compared with FFDM (p-value < 0.0001). Diagnostic accuracy did not differ significantly, but the lesion clarity score was higher for DBT with 2DSM, and DBT was chosen as the better modality compared with FFDM. Conclusion: DBT showed a lower radiation entrance dose and lower mean glandular doses to both breasts compared with FFDM. DBT with 2DSM also had better image quality than FFDM with similar diagnostic accuracy, suggesting that DBT has the potential to be performed as an alternative to FFDM.
Keywords: radiation dose, DBT, digital mammography, image quality
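The paired comparison described in the abstract can be sketched as follows; the dose values here are synthetic stand-ins (the study's patient data are not reproduced), but the tests are the ones named: a paired t-test and a Wilcoxon test on per-patient dose pairs.

```python
import numpy as np
from scipy import stats

# Hypothetical per-patient dose readings (mGy) under the two modalities;
# values are illustrative, not the study's data.
rng = np.random.default_rng(0)
ffdm_dose = rng.normal(loc=1.8, scale=0.2, size=100)
dbt_dose = ffdm_dose - rng.normal(loc=0.3, scale=0.05, size=100)

# Paired t-test and Wilcoxon signed-rank test, as used in the abstract
# to compare entrance / mean glandular doses between modalities.
t_stat, t_p = stats.ttest_rel(ffdm_dose, dbt_dose)
w_stat, w_p = stats.wilcoxon(ffdm_dose, dbt_dose)
print(t_p < 0.0001, w_p < 0.0001)  # a consistent dose gap yields p < 0.0001
```

Note that both tests act on the same paired samples; the Wilcoxon test serves as the non-parametric check when normality of the dose differences is doubtful.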
2504 A Support Vector Machine Learning Prediction Model of Evapotranspiration Using Real-Time Sensor Node Data
Authors: Waqas Ahmed Khan Afridi, Subhas Chandra Mukhopadhyay, Bandita Mainali
Abstract:
The research paper presents a unique approach to evapotranspiration (ET) prediction using a Support Vector Machine (SVM) learning algorithm. The study leverages real-time sensor node data to develop an accurate and adaptable prediction model, addressing the inherent challenges of traditional ET estimation methods. The integration of the SVM algorithm with real-time sensor node data offers great potential to improve spatial and temporal resolution in ET predictions. In the model development, key input features are measured and computed using mathematical formulations such as Penman-Monteith (FAO56) and the soil water balance (SWB), which include soil-environmental parameters such as solar radiation (Rs), air temperature (T), atmospheric pressure (P), relative humidity (RH), wind speed (u2), rain (R), deep percolation (DP), soil temperature (ST), and change in soil moisture (∆SM). The one-year field data are split in various proportions into training, test, and validation sets. Kernel functions with tuned hyperparameters were used to train and improve the accuracy of the prediction model over multiple iterations. The paper also outlines the existing methods and machine learning techniques for determining evapotranspiration, data collection and preprocessing, model construction, and evaluation metrics, highlighting the significance of SVM in advancing the field of ET prediction. The results demonstrate the robustness and high predictability of the developed model on the basis of performance evaluation metrics (R2, RMSE, MAE). The effectiveness of the proposed model in capturing complex relationships within soil and environmental parameters provides insights into its potential applications for water resource management and hydrological ecosystems.
Keywords: evapotranspiration, FAO56, KNIME, machine learning, RStudio, SVM, sensors
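The training workflow sketched in the abstract (feature inputs, train/test split, kernel tuning over multiple iterations, R2 evaluation) can be illustrated as below. This is a minimal sketch assuming scikit-learn, with synthetic features standing in for the sensor-node data; the paper itself reports KNIME/RStudio workflows, and the feature names and target function here are assumptions.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Hypothetical sensor-node features (e.g. Rs, T, RH, u2, ST) and a
# synthetic ET-like target; stand-ins for the real field data.
rng = np.random.default_rng(42)
X = rng.uniform(0, 1, size=(365, 5))
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] - X[:, 2] + 0.1 * rng.standard_normal(365)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Grid search over kernels and hyperparameters, mirroring the paper's
# iterative kernel tuning; 5-fold CV plays the role of the validation set.
grid = GridSearchCV(
    make_pipeline(StandardScaler(), SVR()),
    param_grid={"svr__kernel": ["rbf", "linear"], "svr__C": [1, 10, 100]},
    cv=5, scoring="r2")
grid.fit(X_train, y_train)
test_r2 = grid.score(X_test, y_test)  # held-out R2, one of the paper's metrics
```

Feature scaling before the SVR matters because kernel distances are sensitive to feature magnitudes; the pipeline keeps the scaler inside the cross-validation loop to avoid leakage.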
2503 Reduction of Plutonium Production in Heavy Water Research Reactor: A Feasibility Study through Neutronic Analysis Using MCNPX2.6 and CINDER90 Codes
Authors: H. Shamoradifar, B. Teimuri, P. Parvaresh, S. Mohammadi
Abstract:
One of the main characteristics of heavy water moderated reactors is their high production of plutonium. This article demonstrates the possibility of reducing plutonium and other actinides in a Heavy Water Research Reactor. Among the many ways of reducing plutonium production in a heavy water reactor, this research focused on changing the fuel from natural uranium to mixed thorium-uranium fuel. The main fissile nucleus in thorium-uranium fuels is U-233, which is produced after neutron absorption by Th-232, so thorium-uranium fuels have some known advantages over uranium fuels. Accordingly, four thorium-uranium fuels with different composition ratios were chosen for the simulations: a) 10% UO2-90% ThO2 (enrichment 20%); b) 15% UO2-85% ThO2 (enrichment 10%); c) 30% UO2-70% ThO2 (enrichment 5%); d) 35% UO2-65% ThO2 (enrichment 3.7%). Natural uranium oxide (UO2) is considered the reference fuel; in other words, all of the calculated data are compared with the corresponding data for uranium fuel. Neutronic parameters were calculated and used as the comparison parameters. All calculations were performed by a Monte Carlo (MCNPX2.6) steady-state reaction rate calculation linked to a deterministic depletion calculation (CINDER90). The computational data obtained showed that thorium-uranium fuels with the four different fissile composition ratios can satisfy the safety and operating requirements for a Heavy Water Research Reactor. Furthermore, thorium-uranium fuels have very good proliferation resistance and consume less fissile material than uranium fuels over the same reactor operation time. Using mixed thorium-uranium fuels reduced the long-lived α-emitting, highly radiotoxic wastes and the radiotoxicity level of the spent fuel.
Keywords: Heavy Water Reactor, Burn up, Minor Actinides, Neutronic Calculation
2502 Plasma Ion Implantation Study: A Comparison between Tungsten and Tantalum as Plasma Facing Components
Authors: Tahreem Yousaf, Michael P. Bradley, Jerzy A. Szpunar
Abstract:
Currently, nuclear fusion is considered one of the most favorable options for future energy generation, due both to its abundant fuel and its lack of emissions. For fusion power reactors, a major problem will be a suitable material choice for the Plasma Facing Components (PFCs) which will constitute the reactor first wall. Tungsten (W) has advantages as a PFC material because of its high melting point, low vapour pressure, high thermal conductivity and low retention of hydrogen isotopes. However, several adverse effects such as embrittlement, melting and morphological evolution have been observed in W when it is bombarded by low-energy, high-fluence helium (He) and deuterium (D) ions under conditions simulating those adjacent to a fusion plasma. Recently, tantalum (Ta) has also been investigated as a PFC material and shows better resistance to nanostructure 'fuzz' formation than W under simulated fusion plasma conditions, but D ion retention has been found to be higher in Ta than in W. Preparatory to plasma-based ion implantation studies, the effect of D and He ion impact on W and Ta is predicted using the Stopping and Range of Ions in Matter (SRIM) code. SRIM provided theoretical results for the projected range, ion concentration (at. %) and displacement damage (dpa) in W and Ta. The projected ranges for W under irradiation by He and D ions with an energy of 3 keV and 1× fluence are determined to be 75 Å and 135 Å, and for Ta 85 Å and 155 Å, respectively. For both W and Ta samples, the maximum implanted peak for He is predicted at ~5.3 at. % at 12 nm, and for D ions the concentration peak is located near 3.1 at. % at 25 nm. For the same parameters, the displacement damage for He ions at 5 nm is ~0.65 dpa in W and ~0.35 dpa in Ta. For D ions, the displacement damage is ~0.20 dpa at 8 nm in W and ~0.175 dpa at 7 nm in Ta. The mean implantation depth is the same for W and Ta, i.e. ~40 nm for He ions and ~70 nm for D ions.
From these results, we conclude that retention of D is higher than that of He ions, but the damage is lower in Ta compared with W. Further investigation of W and Ta is still in progress.
Keywords: helium and deuterium ion impact, plasma facing components, SRIM simulation, tungsten, tantalum
2501 The Suitability of Agile Practices in Healthcare Industry with Regard to Healthcare Regulations
Authors: Mahmood Alsaadi, Alexei Lisitsa
Abstract:
Nowadays, medical devices rely completely on software, whether as whole software or as embedded software; therefore, organizations that develop medical device software can benefit from adopting agile practices. Using agile practices in healthcare software development would bring benefits such as producing a high-quality product at low cost and in a short period. However, medical device software development companies have faced challenges in adopting agile practices, due to the gaps that exist between agile practices and the requirements of healthcare regulations, such as documentation, traceability, and formality. This research paper conducts a study to investigate the adoption rate of agile practices in medical device software development, and extracts and outlines the requirements of healthcare regulations such as the Food and Drug Administration (FDA), the Health Insurance Portability and Accountability Act (HIPAA), and the Medical Device Directive (MDD) that affect the software development life cycle directly or indirectly. Moreover, this paper evaluates the suitability of using agile practices in healthcare industries by analyzing the most popular agile practices, such as eXtreme Programming (XP), Scrum, and Feature-Driven Development (FDD), from a healthcare industry point of view and in comparison with the requirements of healthcare regulations. Finally, the authors propose an agile mixture model that consists of different practices from different agile methods. As a result, the adoption rate of agile practices in healthcare industries is still low, and agile practices should be enhanced with regard to the requirements of healthcare regulations in order to be used in healthcare software development organizations. Therefore, the proposed agile mixture model may assist in minimizing the gaps existing between healthcare regulations and agile practices and increase the adoption rate in the healthcare industry.
As this paper is part of an ongoing project, an evaluation of the agile mixture model will be conducted in the near future.
Keywords: adoption of agile, agile gaps, agile mixture model, agile practices, healthcare regulations
2500 Rumination Time and Reticuloruminal Temperature around Calving in Eutocic and Dystocic Dairy Cows
Authors: Levente Kovács, Fruzsina Luca Kézér, Ottó Szenci
Abstract:
Prediction of the onset of calving and recognition of difficulties at calving have great importance in decreasing neonatal losses and reducing the risk of health problems in the early postpartum period. In this study, changes in rumination time, reticuloruminal pH and temperature were investigated in eutocic (EUT, n = 10) and dystocic (DYS, n = 8) dairy cows around parturition. Rumination time was continuously recorded using an acoustic biotelemetry system, whereas reticuloruminal pH and temperature were recorded using an indwelling, wireless data-transmitting system. The recording period lasted from 3 d before calving until 7 days in milk. For the comparison of rumination time and reticuloruminal characteristics between groups, the time to return to baseline (the time interval required to return to baseline from the delivery of the calf) and the area under the curve (AUC, for both the prepartum and postpartum periods) were calculated for each parameter. Rumination time decreased from baseline 28 h before calving in both EUT and DYS cows (P = 0.023 and P = 0.017, respectively). From 20 h before calving it decreased further, reaching 32.4 ± 2.3 and 13.2 ± 2.0 min/4 h between 8 and 4 h before delivery in EUT and DYS cows, respectively, and then dropped below 10 and 5 min during the last 4 h before calving (P = 0.003 and P = 0.008, respectively). By 12 h after delivery, rumination time had reached 42.6 ± 2.7 and 51.0 ± 3.1 min/4 h in DYS and EUT dams, respectively; however, AUC and time to return to baseline suggested lower rumination activity in DYS cows than in EUT dams over the 168-h postpartum observational period (P = 0.012 and P = 0.002, respectively). Reticuloruminal pH decreased from baseline 56 h before calving in both EUT and DYS cows (P = 0.012 and P = 0.016, respectively), but did not differ between groups before delivery.
In DYS cows, reticuloruminal temperature decreased from baseline 32 h before calving by 0.23 ± 0.02 °C (P = 0.012), whereas in EUT cows such a decrease was found only 20 h before delivery (0.48 ± 0.05 °C, P < 0.01). The AUC of reticuloruminal temperature calculated for the prepartum period was greater in EUT cows than in DYS cows (P = 0.042). During the first 4 h after calving, it decreased from 39.7 ± 0.1 to 39.0 ± 0.1 °C and from 39.8 ± 0.1 to 38.8 ± 0.1 °C in EUT and DYS cows, respectively (P < 0.01 for both groups), and returned to baseline levels 35.4 ± 3.4 and 37.8 ± 4.2 h after calving in EUT and DYS cows, respectively. Based on our results, continuous monitoring of changes in rumination time and reticuloruminal temperature seems promising for the early detection of cows at higher risk of dystocia. The depressed postpartum rumination time of DYS cows highlights the importance of monitoring cows experiencing difficulties at calving.
Keywords: reticuloruminal pH, reticuloruminal temperature, rumination time, dairy cows, dystocia
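The two summary statistics used throughout this abstract — area under the curve and time to return to baseline — can be sketched as below. The curves here are hypothetical shapes (chosen only so that the DYS group recovers more slowly), not the study's measurements; the 4-h sampling grid and the tolerance are assumptions.

```python
import numpy as np

# Hypothetical postpartum rumination-time curves (min per 4-h block),
# sampled every 4 h over the 168-h observation window; illustrative only.
t = np.arange(0, 172, 4, dtype=float)
eut = 50.0 - 10.0 * np.exp(-t / 24.0)   # eutocic cows recover faster
dys = 50.0 - 25.0 * np.exp(-t / 48.0)   # dystocic cows stay depressed longer

def auc(y, t):
    """Area under the curve by the trapezoidal rule."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(t) / 2.0))

def time_to_return(y, t, baseline, tol=2.0):
    """First sampled time at which the series is back within tol of baseline."""
    idx = np.argmax(y >= baseline - tol)
    return float(t[idx])

print(auc(eut, t) > auc(dys, t))   # lower AUC -> depressed rumination (DYS)
print(time_to_return(dys, t, 50.0) > time_to_return(eut, t, 50.0))
```

A lower AUC and a longer time to return to baseline both flag the depressed postpartum rumination that the study associates with dystocia.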
2499 A Comparative Analysis of Clustering Approaches for Understanding Patterns in Health Insurance Uptake: Evidence from Sociodemographic Kenyan Data
Authors: Nelson Kimeli Kemboi Yego, Juma Kasozi, Joseph Nkruzinza, Francis Kipkogei
Abstract:
The study investigated the low uptake of health insurance in Kenya despite efforts to achieve universal health coverage through various health insurance schemes. Unsupervised machine learning techniques were employed to identify patterns in health insurance uptake based on sociodemographic factors among Kenyan households. The aim was to identify key demographic groups that are underinsured and to provide insights for the development of effective policies and outreach programs. Using the 2021 FinAccess Survey, the study clustered Kenyan households based on their health insurance uptake and sociodemographic features to reveal patterns in health insurance uptake across the country. The effectiveness of k-prototypes clustering, hierarchical clustering, and agglomerative hierarchical clustering based on sociodemographic factors was compared. The k-prototypes approach was found to be the most effective at uncovering distinct and well-separated clusters in the Kenyan sociodemographic data related to health insurance uptake, based on the silhouette, Calinski-Harabasz, Davies-Bouldin, and Rand indices; hence, it was used to uncover the patterns in uptake. The results of the analysis indicate that inclusivity in health insurance is strongly related to affordability. The findings suggest that targeted policy interventions and outreach programs are necessary to increase health insurance uptake in Kenya, with the ultimate goal of achieving universal health coverage. The study provides important insights for policymakers and stakeholders in the health insurance sector to address the low uptake of health insurance and to ensure that healthcare services are accessible and affordable to all Kenyans, regardless of their sociodemographic status.
The study highlights the potential of unsupervised machine learning techniques to provide insights into complex health policy issues and improve decision-making in the health sector.
Keywords: health insurance, unsupervised learning, clustering algorithms, machine learning
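The model-comparison step described above — scoring competing clusterings with internal validation indices — can be sketched as follows. The data are synthetic (the FinAccess microdata are not reproduced here), and k-prototypes itself, which handles mixed categorical/numeric data, lives in the third-party `kmodes` package; k-means serves only as a numeric placeholder in this sketch.

```python
from sklearn.cluster import AgglomerativeClustering, KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import (calinski_harabasz_score, davies_bouldin_score,
                             silhouette_score)

# Synthetic stand-in for the sociodemographic features: four well-separated
# groups so the indices have something clear to measure.
X, _ = make_blobs(n_samples=600,
                  centers=[(-6, -6), (-6, 6), (6, -6), (6, 6)],
                  cluster_std=1.0, random_state=7)

silhouettes = {}
for name, model in [("kmeans", KMeans(n_clusters=4, n_init=10, random_state=7)),
                    ("agglomerative", AgglomerativeClustering(n_clusters=4))]:
    labels = model.fit_predict(X)
    silhouettes[name] = silhouette_score(X, labels)       # higher is better
    print(name,
          round(silhouettes[name], 2),
          round(calinski_harabasz_score(X, labels), 1),   # higher is better
          round(davies_bouldin_score(X, labels), 2))      # lower is better
```

The study's selection logic follows the same pattern: whichever method scores best across the indices (here silhouette, Calinski-Harabasz, and Davies-Bouldin) is retained for the substantive analysis.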
2498 Ultrasound-Assisted Sol – Gel Synthesis of Nano-Boehmite for Biomedical Purposes
Authors: Olga Shapovalova, Vladimir Vinogradov
Abstract:
Among the many different sol – gel matrices, only alumina can be successfully parenterally injected into the human body. This is not surprising, because boehmite (aluminium oxyhydroxide) is the metal oxide approved by the FDA and EMA for intravenous and intramuscular administration, and it has also long been used as an adjuvant in the production of many modern vaccines. In our earlier study, it was shown that the denaturation temperature of enzymes entrapped in a sol-gel boehmite matrix increases by 30 – 60 °С while the initial activity is preserved. This makes such matrices attractive for the long-term storage of unstable drugs. In the current work, we present an ultrasound-assisted sol-gel synthesis of nano-boehmite. This method provides a bio-friendly, very stable, highly homogeneous alumina sol using only water and aluminium isopropoxide as a precursor. Many parameters of the synthesis were studied in detail: the duration of ultrasound treatment, ultrasound frequency, surface area, pore and nanoparticle size, zeta potential, and others. Here we investigated the stability of the colloidal sols and the textural properties of the final composites as a function of ultrasonic treatment time, which ranged between 30 and 180 minutes. The surface area, average pore diameter, and total pore volume of the final composites were measured with a Quantachrome Nova 1200 surface area and pore size analyzer. It was shown that matrices with an ultrasonic treatment time of 90 minutes have the largest surface area, 431 ± 24 m2/g. On the other hand, such matrices are less stable than the samples with an ultrasonic treatment time of 120 minutes, which have a surface area of 390 ± 21 m2/g. It was also shown that stable sols form only after 120 minutes of ultrasonic treatment; otherwise, a white precipitate of boehmite is formed.
We conclude that the optimal ultrasonic treatment time is 120 minutes.
Keywords: boehmite matrix, stabilisation, ultrasound-assisted sol-gel synthesis
2497 A Study Problem and Needs Compare the Held of the Garment Industries in Nonthaburi and Bangkok Area
Authors: Thepnarintra Praphanphat
Abstract:
The purposes of this study were to investigate the garment industry's condition, problems, and need for assistance. The population of the study consisted of 504 managers or managing directors of finished-apparel garment establishments holding permission of the Department of Industrial Works, Ministry of Industry, as of January 1, 2012. The sample size, determined with the Taro Yamane formula at the 95% confidence level with ±5% deviation, was 224 managers. Questionnaires were used to collect the data. Percentage, frequency, arithmetic mean, standard deviation, t-test, ANOVA, and LSD were used to analyze the data. It was found that most establishments were large, had operated in the form of a limited company for more than 15 years, and mostly produced garments for working women. All investment was made by Thai people. The products were made to order and distributed domestically and internationally. The total sales of the years 2010, 2011, and 2012 were almost the same. With respect to the problems of operating the business, the study indicated that, as a whole, by aspects, and by items, they were at a high level. The comparison of the level of problems of operating a garment business as classified by general condition showed that problems occurring in businesses of different sizes were, as a whole, not different. When individual aspects were considered, the level of problems relating to production differed: medium establishments had more production problems than small and large ones. According to the by-item analysis, five problems were found to differ, namely those concerning employees, machine maintenance, the number of designers, and price competition; such problems were at a higher level in medium establishments than in small and large establishments. Regarding business age, the examination yielded no differences as a whole, by aspects, or by items.
The statistical significance level of this study was set at .05.
Keywords: garment industry, garment, fashion, competitive enhancement project
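The sample-size figure reported in the abstract can be checked directly with the Taro Yamane formula n = N / (1 + N·e²), where N is the population and e the allowed margin of error; rounding up gives the reported 224 out of 504.

```python
import math

def yamane_sample_size(population: int, margin_of_error: float) -> int:
    """Taro Yamane sample-size formula, rounded up to a whole respondent."""
    n = population / (1 + population * margin_of_error ** 2)
    return math.ceil(n)

# 504 garment establishments at the 95% confidence level (e = 0.05):
# 504 / (1 + 504 * 0.0025) = 504 / 2.26 ≈ 223.01 -> 224 managers
print(yamane_sample_size(504, 0.05))  # -> 224
```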
2496 Comparison of E-learning and Face-to-Face Learning Models Through the Early Design Stage in Architectural Design Education
Authors: Gülay Dalgıç, Gildis Tachir
Abstract:
Architectural design studios are the ambience in which architectural design is realized as a palpable product in architectural education. In the design studios, the information the architect candidates will use in the design process, the methods of approaching the design problem, the solution proposals, etc., are set up together with the studio coordinators. The architectural design process, on the other hand, is complex and uncertain. Candidate architects work in a process that starts with abstract and ill-defined problems. This process starts with the generation of alternative solutions with the help of representation tools, continues with the selection of the appropriate/satisfactory solution from these alternatives, and then ends with the creation of an acceptable design/result product. In the studio ambience, where many design and thought relationships are evaluated, the most important step is the early design phase. In the early design phase, the first steps of converting the information are taken, and the converted information is used in the constitution of the first design decisions. This phase, which positively affects the progress of the design process and the constitution of the final product, is more complex and fuzzy than the other phases of the design process. In this context, the aim of the study is to investigate the effects of the face-to-face learning model and the e-learning model on the early design phase. In the study, the early design phase was defined through literature research. The data for the defined early design phase criteria were obtained with feedback graphics created for architect candidates who performed e-learning in the first year of architectural education and continued their education with the face-to-face learning model. The findings were analyzed with a common graphics program.
It is thought that this research will contribute to the establishment of a contemporary architectural design education model by reflecting the evaluation of the data and results on architectural education.
Keywords: education modeling, architecture education, design education, design process
2495 Environmentally Friendly KOH and NH4OH-KOH Pulping of Rice Straw
Authors: Omid Ghaffarzadeh Mollabashi, Sara Khorshidi, Hossein Kermanian Seyed, Majid Zabihzadeh
Abstract:
The main problem hindering the intensive use of non-wood raw materials in the papermaking industry is the environmental pollution caused by black liquor. As a matter of fact, the black liquor from non-wood pulping is discharged to the environment due to the lack of recovery. Traditionally, NaOH pulping produces Na-based black liquor that may increase soil erosion and reduce soil permeability. By substituting KOH/NH4OH for NaOH as the cooking liquor, K and N can act as soil fertilizers while offering an environmentally acceptable disposal alternative. For this purpose, rice straw samples were pulped under the following conditions. Constant factors were: straw weight, 100 grams (oven-dry basis); liquor-to-straw ratio, 7:1; and maximum temperature, 170 and 180 ºC. Variable factors for the KOH cooks were: KOH dosage of 14, 17, and 20% on oven-dry straw, and times at maximum temperature of 60 and 90 minutes. For the KOH-NH4OH cooks, KOH dosages of 5 and 10% and NH4OH dosages of 25 and 35%, both based on oven-dry straw, were applied, with a time at maximum temperature of 90 minutes. The yields of the KOH and KOH-NH4OH pulp samples ranged from 37.28 to 48.62 and 45.63 to 48.08 percent, respectively. In addition, the kappa numbers ranged from 21.91 to 29.85 and 55.15 to 56.25, respectively. In comparison with soda, soda-AQ, cold soda, kraft, EDA (dissolving), and diethylene glycol (dissolving) pulps, the burst and tensile indices of the KOH pulp were higher under similar cooking conditions. With the exception of soda pulps, the tear index of the KOH pulp exceeded that of all compared treatments. Therefore, it can be concluded that KOH pulping is an appropriate choice for making paper from rice straw. Moreover, compared with KOH-NH4OH pulping, the KOH method is the more appropriate choice because of its better pulping results.
Keywords: environmentally friendly process, rice straw, NH4OH-KOH pulping, pulp properties
2494 The Come and Goes: How Does ‘Citywalk’ Influence Everyday Inhabitation and Urban Revitalization in a Chinese Atmospheric Community
Authors: Xiangxiang Chen
Abstract:
This paper explores a recently trending online activity in metropolitan China. Originating from Jane Jacobs's walking tour, 'citywalk' has gradually developed into a wanghong (social media trending) activity, contributing to a revitalized mode of urbanism in post-modernized China. Previous researchers have dug into walking patterns in everyday cities, but few have looked into the short-trip activities conducted by local residents and people nearby. Although some Chinese researchers have focused on the wanghong economy and the related wanghong urbanism, they have linked it to 'check-in' activities but not to the 'citywalk', which connects several spots for 'checking in' and usually takes place in a historic and cultural neighborhood. Besides, many research articles have focused on gentrification, but few have explored a gentrification pattern that differs from that in developed countries. This research uses short semi-structured interviews, ranging from 3 to 5 minutes, combined with a comparison model, to find out why people go on the 'citywalk', how they feel about it, and the economic development it influences. The research location was Foshan's most historic area, the Chuihong neighborhood, situated in the metropolitan area of Guangdong province. The paper finds that social media in China has heavily influenced urban revitalization, leading to a new kind of gentrification mode. This suggests that the government should give historical and cultural neighborhoods enough freedom to develop independently. This paper aims to provide urban revitalization strategies to build a 'citywalk'-friendly and aesthetically attractive neighborhood in China.
Keywords: tourism development, urban revitalization, social media, wanghong urbanism, city walk, China
2493 Cost Effective Microfabrication Technique for Lab on Chip (LOC) Devices Using Epoxy Polymers
Authors: Charmi Chande, Ravindra Phadke
Abstract:
Microfluidic devices are fabricated using multiple fabrication methods. Photolithography is one of the common methods, wherein SU-8 is widely used for making the master, which in turn is used to make the working chip by soft lithography. The high-aspect-ratio features of SU-8 make it suitable for micro moulds for injection moulding and hot embossing, and for moulds to form polydimethylsiloxane (PDMS) structures for bioMEMS (microelectromechanical systems) applications. However, its high cost, difficulty of procurement, and the need for a clean room restrict the use of this polymer, especially in developing countries and small research labs. 'Bisphenol-A'-based polymers mixed with a curing agent are used in various industries, such as paints and coatings, adhesives, electrical systems and electronics, and industrial tooling and composites. We present the novel use of 'Bisphenol-A'-based polymers in fabricating microchannels for Lab On Chip (LOC) devices. The present paper describes a prototype for the production of microfluidic chips using a range of 'Bisphenol-A'-based polymers, viz. GY 250, ATUL B11, DER 331, and DER 330, mixed with cationic photoinitiators. All steps of chip production were carried out using an inexpensive approach with low-cost chemicals and equipment, which even excludes the need for a clean room. The chips produced with all of the above-mentioned polymers were validated with respect to channel height, and the chip giving the least height was selected for further experimentation. The lowest height achieved was 7 micrometers, with GY 250. The cost of the fabricated master was $0.20 and of the working chip $0.22. The best working chip was used for morphological identification and profiling of microorganisms from environmental samples such as soil, marine water, and saltwater pan sites.
The current chip can be adapted for various microbiological screening experiments, such as biochemistry-based microbial identification and the study of uncultivable microorganisms at the single-cell/community level.
Keywords: bisphenol–A based epoxy, cationic photoinitiators, microfabrication, photolithography
2492 Role of Selenium and Vitamin E in Occupational Exposure to Heavy Metals (Mercury, Lead and Cadmium): Impact of Working in Lamp Factory
Authors: Tarek Elnimr, Rabab El-kelany
Abstract:
Heavy metals are environmental contaminants that may pose long-term health risks. Unfortunately, the implementation of preventive measures has generally been delayed, causing important negative effects on exposed populations. The objective of this study was to determine whether co-consumption of nutritional supplements such as selenium and vitamin E would treat the hazardous effects of exposure to mercury, lead, and cadmium. 108 workers (60 males and 48 females) were the subjects of this study; their ages ranged from 19 to 63 years (M = 29.5 ± 10.12). They had been working in a lamp factory for 0.5 to 40 years (M = 5.3 ± 8.8). Twenty control subjects matched for age and gender were used for comparison. All workers underwent neuropsychiatric evaluation. The General Health Questionnaire (GHQ-28) revealed that 44.4% were complaining of anxiety, 52.7% of depression, 41.6% of social dysfunction, and 22.2% of somatic symptoms. Cognitive tests revealed that long-term memory was not affected significantly compared with controls, while short-term memory and perceptual ability were affected significantly. Blood metal levels were measured by inductively coupled plasma optical emission spectrometry (ICP-OES), which showed that the mean blood mercury, lead, and cadmium concentrations before treatment were 1.6 mg/l, 0.39 mg/l, and 1.7 µg/l, decreasing significantly after treatment to 1.2 mg/l, 0.29 mg/l, and 1.3 µg/l, respectively. Anti-oxidative enzymes (paraoxonase and catalase) and the lipid peroxidation product malondialdehyde were measured before and after treatment with selenium and vitamin E and showed significant improvement. It can be concluded that co-consumption of selenium and vitamin E produces a significant decrease in mercury, lead, and cadmium levels in blood.
Keywords: mercury, lead, cadmium, neuropsychiatric impairment, selenium, vitamin E
Procedia PDF Downloads 348
2491 The Impact of Temporal Impairment on Quality of Experience (QoE) in Video Streaming: A No Reference (NR) Subjective and Objective Study
Authors: Muhammad Arslan Usman, Muhammad Rehan Usman, Soo Young Shin
Abstract:
Live video streaming is one of the most widely used services among end users, yet it poses a major quality challenge for network operators. The only way to provide excellent Quality of Experience (QoE) to end users is continuous monitoring of the live video stream. For this purpose, several objective algorithms are available that monitor the quality of the video in a live stream. Subjective tests play a very important role in fine-tuning the results of objective algorithms: since human perception is considered the most reliable source for assessing the quality of a video stream, subjective tests are conducted in order to develop more reliable objective algorithms. Temporal impairments in a live video stream can have a negative impact on end users. In this paper we conduct subjective evaluation tests on a set of video sequences containing a temporal impairment known as frame freezing. Frame freezing can arise from transmission errors as well as hardware faults, which result in the loss of video frames at the receiving end of a transmission system. We performed tests both on videos containing a single freezing event and on videos containing multiple freezing events, and recorded the subjective results for all videos in order to compare the available No Reference (NR) objective algorithms. Finally, we report the performance of the NR algorithms used for objective evaluation and identify the one that performs best. The outcome of this study shows the importance of QoE and its effect on human perception; the subjective evaluation results can serve to validate objective algorithms.
Keywords: objective evaluation, subjective evaluation, quality of experience (QoE), video quality assessment (VQA)
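As an aside, a No Reference detector for the frame-freezing impairment described above can be built from nothing more than inter-frame differences: a freeze event appears as a run of consecutive frames whose pixel-wise difference is (near) zero. The sketch below is purely illustrative and is not the authors' algorithm; the function name `detect_freezes` and the threshold value are assumptions for the example, and real detectors must also account for legitimately static scenes.

```python
import numpy as np

def detect_freezes(frames, diff_threshold=1e-3):
    """Return (start, end) index ranges of candidate freeze events,
    i.e. runs of consecutive frames that are near-identical."""
    freezes = []
    start = None
    for i in range(1, len(frames)):
        # Mean absolute difference between consecutive frames;
        # ~0 means the frame was repeated (a candidate freeze).
        mad = np.mean(np.abs(frames[i].astype(float) -
                             frames[i - 1].astype(float)))
        if mad < diff_threshold:
            if start is None:
                start = i - 1  # the freeze begins at the repeated frame
        else:
            if start is not None:
                freezes.append((start, i - 1))
                start = None
    if start is not None:  # freeze runs to the end of the sequence
        freezes.append((start, len(frames) - 1))
    return freezes
```

With this per-event output, both the single-freeze and multiple-freeze test conditions in the study could be characterized by the number and duration of the detected runs.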
Procedia PDF Downloads 604