Search results for: hand function

1049 Minding the Gap: Consumer Contracts in the Age of Online Information Flow

Authors: Samuel I. Becher, Tal Z. Zarsky

Abstract:

The digital world has become part of our DNA. The ways in which e-commerce, human behavior, and law interact and affect one another are changing rapidly and significantly. Among other things, the internet equips consumers with a variety of platforms to share information in a volume we could not have imagined before. As part of this development, online information flows allow consumers to learn about businesses and their contracts quickly and efficiently. Consumers can become informed by the impressions that other, experienced consumers share and spread. In other words, consumers may familiarize themselves with the contents of contracts through the experiences other consumers have had. Online and offline, the relationship between consumers and businesses is most frequently governed by consumer standard form contracts. For decades, such contracts have been assumed to be one-sided and biased against consumers. Consumer law seeks to alleviate this bias and empower consumers. Legislatures, consumer organizations, scholars, and judges are constantly looking for clever ways to protect consumers from unscrupulous firms and unfair behaviors. While consumer-business relationships are theoretically administered by standardized contracts, firms do not always follow these contracts in practice. At times, there is a significant disparity between what the written contract stipulates and what consumers experience de facto. That is, there is a crucial gap ("the Gap") between how firms draft their contracts on the one hand, and how firms actually treat consumers on the other. Interestingly, the Gap is frequently manifested as deviation from the written contract in favor of consumers. In other words, firms often exercise a lenient approach in spite of the stringent written contracts they draft. This essay examines whether, counter-intuitively, policy makers should add firms' leniency to the growing list of suspicious firm behaviors. At first glance, firms should be allowed, if not encouraged, to exercise leniency. Many legal regimes are looking for ways to cope with unfair contract terms in consumer contracts. Naturally, therefore, consumer law should enable, if not encourage, firms' lenient practices. Firms' willingness to deviate from their strict contracts in order to benefit consumers seems like a sensible approach that, on its face, should not be second-guessed. However, at times online tools, firms' behaviors, and human psychology result in a toxic mix. Beneficial and helpful online information should be treated with due caution, as it may occasionally have surprising and harmful qualities. In this essay, we illustrate that technological changes turn the Gap into a key component in consumers' understanding, or misunderstanding, of consumer contracts. In short, a Gap may distort consumers' perception and undermine rational decision-making. Consequently, this essay explores whether, counter-intuitively, consumer law should sanction firms that create a Gap and use it. It examines when firms' leniency should be considered manipulative or exercised in bad faith. It then investigates whether firms should be allowed to enforce the written contract even if they have deliberately and consistently deviated from it.

Keywords: consumer contracts, consumer protection, information flow, law and economics, law and technology, paper deal v firms' behavior

Procedia PDF Downloads 180
1048 Randomized Trial of Tian Jiu Therapy in San Fu Days for Patients with Chronic Asthma

Authors: Libing Zhu, Waichung Chen, Kwaicing Lo, Lei Li

Abstract:

Background: Tian Jiu Therapy (a medicinal vesiculation therapy based on traditional Chinese medicine theory) in the San Fu Days (the three hottest days of the year, calculated according to the ancient Chinese calendar) is widely used by patients with chronic asthma in China, although from the perspective of modern medicine there is insufficient evidence regarding its effectiveness and safety. We investigated the efficacy and safety of Tian Jiu Therapy compared with placebo in patients with chronic asthma. Methods: Patients with chronic asthma were randomly assigned to a Tian Jiu treatment group (n=165) or a placebo control group (n=158). Registered Chinese Medicine practitioners in the Orthopedics-Traumatology, Acupuncture, and Tui-na Clinical Centre for Teaching and Research, School of Chinese Medicine, The University of Hong Kong, administered Tian Jiu Therapy or placebo treatment three times over two months. Patients completed questionnaires and lung function tests before treatment and at 3, 6, 9, and 11 months after treatment. The primary outcomes were the number of asthma-related sub-healthy symptoms and the percentage of patients reporting each of twenty-three symptoms. Results: In total, 451 patients were recruited; 111 refused or did not attend their appointments, and 17 did not meet the inclusion criteria. Consequently, 323 eligible patients were enrolled. There was no difference between the Tian Jiu Therapy group and the placebo control group at the end of all treatments in either primary or secondary outcomes. However, compared with placebo, Tian Jiu Therapy significantly reduced the percentage of participants who were prone to being woken by asthma symptoms, from 27% to 14%, at the 2nd follow-up (P < 0.05). Similarly, Tian Jiu Therapy significantly reduced the proportion of participants who had the symptoms of runny nose and sneezing before onset, from 18% to 8%, at the 2nd follow-up (P < 0.05). Additionally, Tian Jiu Therapy significantly reduced asthma severity: the proportion of participants who did not require any intervention during an asthma attack increased from 6% to 15% at the 1st follow-up and from 0% to 7% at the 3rd follow-up (P < 0.05). Improvements also occurred in the Tian Jiu Therapy group in the proportion of participants with spontaneous sweating at the 3rd follow-up and with diarrhea after intake of oily food at the 4th follow-up (P < 0.05). Conclusion: When added to a regimen of foundational therapy for participants with chronic asthma, Tian Jiu Therapy further reduced the need for medications to control asthma, improved participants' quality of life, and significantly reduced asthma severity. Moreover, this benefit appears to accumulate over time, which is in accordance with the TCM theory that 'winter disease is cured in summer'.
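The between-group contrasts above (e.g., night waking from asthma symptoms falling from 27% to 14% at the 2nd follow-up) are comparisons of proportions. A minimal sketch of such a test is shown below; the counts are illustrative assumptions, not the trial's raw data.

```python
# Hypothetical two-proportion z-test mirroring the kind of comparison reported above.
from statsmodels.stats.proportion import proportions_ztest

count = [45, 22]   # assumed number of participants woken by asthma symptoms (placebo vs. Tian Jiu)
nobs = [158, 165]  # group sizes as randomized

stat, p_value = proportions_ztest(count, nobs)
print(f"z = {stat:.2f}, p = {p_value:.4f}")  # p < 0.05 would mirror the reported significance
```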

Keywords: asthma, Tian Jiu Therapy, San Fu Days, traditional Chinese medicine, clinical trial

Procedia PDF Downloads 294
1047 Evaluation Method for Fouling Risk Using Quartz Crystal Microbalance

Authors: Natsuki Kishizawa, Keiko Nakano, Hussam Organji, Amer Shaiban, Mohammad Albeirutty

Abstract:

One of the most important tasks in operating desalination plants using the reverse osmosis (RO) method is preventing RO membrane fouling caused by foulants found in seawater. Optimal design of the pre-treatment process of an RO plant enables the reduction of foulants. Therefore, a quantitative evaluation of the fouling risk in the pre-treated water that is fed to the RO stage is required for optimal design. Some water quality measures, such as the silt density index (SDI) and total organic carbon (TOC), have conventionally been applied for such evaluations. However, these methods have not been effective in some situations for evaluating the fouling risk of RO feed water. Furthermore, if the method can be applied to an inline monitoring system for the fouling risk of RO feed water, stable plant management will be possible through alerts and appropriate control of the pre-treatment process. The purpose of this study is to develop a method to evaluate the fouling risk of RO feed water. We applied a quartz crystal microbalance (QCM) to measure the amount of foulants found in seawater, using a sensor whose surface is coated with a polyamide thin film, the main material of an RO membrane. The increase in the weight of the sensor after sample water has passed over it for a certain length of time directly indicates the fouling risk of the sample. We termed these values the 'fouling potential' (FP). The characteristics of the method are that it measures the very small amount of substances in seawater in a short time (< 2 h) and from a small volume of sample water (< 50 mL). In laboratory-scale tests using RO cell filtration units, the FP obtained by the method showed a higher correlation with the pressure increase caused by RO fouling than SDI or TOC. Then, to establish this correlation in an actual bench-scale RO membrane module, and to confirm the feasibility of the monitoring system as a control tool for the pre-treatment process, we started a long-term test at an experimental desalination site by the Red Sea in Jeddah, Kingdom of Saudi Arabia. Implementing inline equipment for the method made it possible to measure the FP intermittently (4 times per day) and automatically. Moreover, over two 3-month operations, the RO operating pressure was compared among feed water samples of different qualities. A pressure increase through the RO membrane module was observed in the high-FP RO unit, in which the feed water was treated by a cartridge filter only. On the other hand, no pressure increase was observed in the low-FP RO unit, in which the feed water was treated by an ultrafilter during the operation. Therefore, the correlation was established in an actual-scale RO membrane over two runs with two types of feed water. The results suggest that the FP method enables the evaluation of the fouling risk of RO feed water.
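The comparison between FP and the classical indices rests on how strongly each one correlates with the observed pressure increase across the RO module. A minimal sketch of that correlation check follows; the FP and pressure values are placeholders, not measurements from the laboratory or Jeddah tests.

```python
# Correlating an assumed set of fouling-potential (FP) readings with assumed RO pressure increases.
import numpy as np
from scipy.stats import pearsonr

fp = np.array([0.8, 1.5, 2.1, 3.0, 4.2, 5.5])        # hypothetical FP values for feed-water samples
dp = np.array([0.05, 0.09, 0.15, 0.22, 0.31, 0.40])  # hypothetical pressure increases across the RO module (bar)

r, p = pearsonr(fp, dp)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
# A higher r for FP than for SDI or TOC would support FP as the better predictor of fouling risk.
```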

Keywords: fouling, monitoring, QCM, water quality

Procedia PDF Downloads 200
1046 Parallelization of Random Accessible Progressive Streaming of Compressed 3D Models over Web

Authors: Aayushi Somani, Siba P. Samal

Abstract:

Three-dimensional (3D) meshes are data structures that store the geometric information of an object or scene, generally in the form of vertices and edges. Current laser scanning and other geometric data acquisition technologies provide high-resolution sampling, which leads to high-resolution meshes. While high-resolution meshes give better-quality rendering and are therefore used often, the processing as well as the storage of 3D meshes is currently resource-intensive. At the same time, web applications for data processing have become ubiquitous owing to their accessibility. For 3D meshes, the advancement of 3D web technologies such as WebGL and WebVR has enabled high-fidelity rendering of huge meshes. However, there exists a gap in the ability to stream huge meshes to native client and browser applications due to high network latency. There is also an inherent delay in loading WebGL pages containing large and complex models. The focus of our work is to identify the challenges faced when such meshes are streamed to and processed on hand-held devices with their limited resources. One of the solutions conventionally used in the graphics community to alleviate resource limitations is mesh compression. Our work takes a two-step approach to random accessible progressive compression and its parallel implementation. The first step partitions the original mesh into multiple sub-meshes; we then invoke data parallelism on these sub-meshes for their compression. The subsequent threaded decompression logic is implemented inside the web browser engine by modifying the WebGL implementation in the open-source Chromium engine. This concept can be used to completely revolutionize the way e-commerce and virtual reality technology work for consumer electronic devices. Objects can be compressed on the server and transmitted over the network; progressive decompression can then be performed on the client device and the result rendered. The multiple views currently used on e-commerce sites for viewing the same product from different angles can be replaced by a single progressive model for a smoother user experience. The approach can also be used in WebVR for widely used activities such as virtual reality shopping, watching movies, and playing games. Our experiments and comparisons with existing techniques show encouraging results in terms of latency (the compressed size is ~10-15% of the original mesh), processing time (a 20-22% increase over the serial implementation), and the quality of the user experience in the web browser.
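A minimal sketch of the two-step idea described above (partition the mesh into sub-meshes, then compress the sub-meshes in parallel) is given below in Python rather than the authors' Chromium/C++ code. The partitioner and the zlib byte-level compressor are stand-ins for the real random accessible progressive compression pipeline.

```python
# Partition a toy mesh into sub-meshes and compress them with data parallelism.
import zlib
from multiprocessing import Pool

def partition_mesh(vertices, n_parts):
    """Split the vertex list into roughly equal contiguous chunks (placeholder partitioner)."""
    size = max(1, len(vertices) // n_parts)
    return [vertices[i:i + size] for i in range(0, len(vertices), size)]

def compress_submesh(submesh):
    """Serialize one sub-mesh and compress it (zlib stands in for progressive compression)."""
    payload = b"".join(f"{x:.4f},{y:.4f},{z:.4f};".encode() for x, y, z in submesh)
    return zlib.compress(payload)

if __name__ == "__main__":
    mesh = [(i * 0.1, i * 0.2, i * 0.3) for i in range(10000)]   # toy (x, y, z) vertices
    submeshes = partition_mesh(mesh, n_parts=4)
    with Pool(processes=4) as pool:                              # data parallelism over sub-meshes
        blobs = pool.map(compress_submesh, submeshes)
    ratio = sum(len(b) for b in blobs) / (len(mesh) * 3 * 8)     # vs. raw float64 storage
    print(f"compressed to {ratio:.1%} of raw size across {len(blobs)} sub-meshes")
```

On the client side, the corresponding step is decompressing each sub-mesh on its own worker thread as it arrives, which is what the modified Chromium WebGL path described above provides.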

Keywords: 3D compression, 3D mesh, 3D web, chromium, client-server architecture, e-commerce, level of details, parallelization, progressive compression, WebGL, WebVR

Procedia PDF Downloads 153
1045 Driver of Migration and Appropriate Policy Concern Considering the Southwest Coastal Part of Bangladesh

Authors: Aminul Haque, Quazi Zahangir Hossain, Dilshad Sharmin Chowdhury

Abstract:

Human migration is a growing concern around the world, and recurrent disasters and climate change impacts have a great influence on migration. Bangladesh is one of the most disaster-prone countries and has a high susceptibility to stress migration driven by recurrent disasters and climate change. The study was conducted to investigate the factors that have a strong influence on current migration and on the changing patterns of life and livelihood in the southwest coastal part of Bangladesh. Moreover, the study also examined the relationship between disasters and migration and the appropriate policy concerns. To explore this relationship, both qualitative and quantitative methods were applied through a household-level questionnaire survey, with a simple random sampling technique used in the sampling process, along with different secondary data sources for understanding policy concerns and practices. The study explores the most influential driver of migration and its relationship with social, economic, and environmental drivers. The study indicates that the environmental driver has the greater effect on the intention of permanent migration (t=1.481, p-value=0.000) at the 1 percent significance level. A significant number of respondents stated that abrupt patterns of cyclones, floods, salinity intrusion, and rainfall are the most significant environmental drivers behind the decision on permanent migration. The study also found that temporary migration has increased two-fold compared to the last ten (10) years. It also appears from the study that environmental factors have great implications for the changing pattern of occupations in the study area; about 76% of respondents reported having changed their mode of livelihood compared to their traditional practices. The study reveals that migration has the foremost impact on children and women, increasing hardship and creating critical social security concerns. The route of permanent migration is indeed not smooth; these migrations are creating urban pressure and conflict in the Chittagong Hill Tracts of Bangladesh. The study notes that existing policy offers no safeguards for stress migrants and no measures for safe migration and resettlement, considering only emergency response and shelter. The majority (98%) of people believe that migration should not be an adaptation strategy; contrary to this, a younger group of respondents believes that safe migration could be an adaptation strategy that would bring positive results compared to the other resilience strategies. On the other hand, a significant number of respondents stated that appropriate policy measures could serve as an adaptation strategy for forming a resilient community and reducing migration through meaningful livelihood options with appropriate protection measures.

Keywords: environmental driver, livelihood, migration, resilience

Procedia PDF Downloads 247
1044 Different Stages for the Creation of Electric Arc Plasma through Slow Rate Current Injection to Single Exploding Wire, by Simulation and Experiment

Authors: Ali Kadivar, Kaveh Niayesh

Abstract:

This work simulates the voltage drop across and the resistance of exploding copper wires of diameters 25, 40, and 100 µm, surrounded by 1 bar of nitrogen and exposed to a 150 A current, before plasma formation. The absorption of electrical energy in an exploding wire is greatly diminished once the plasma is formed. This study shows the importance of considering radiation and heat conductivity for the accuracy of the circuit simulations. The radiation of the dense plasma formed on the wire surface is modeled with the Net Emission Coefficient (NEC) and combined with heat conductivity through the PLASIMO® software. A time-transient code for analyzing wire explosions driven by a slow current rise rate is developed. It solves a circuit equation coupled with one-dimensional (1D) equations for the copper electrical conductivity as a function of its physical state and NEC radiation. First, the initial voltage drop over the copper wire, the current, and the temperature distribution at the time of expansion are derived. Experiments have demonstrated that wires remain rather uniform lengthwise during the explosion and can therefore be modeled with 1D simulations. Data from the first stage are then used as the initial conditions of the second stage, in which a simplified 1D model for high-Mach-number flows is adopted to describe the expansion of the core. The current is carried by the vaporized wire material before it is dispersed in the nitrogen by the shock wave. In the third stage, the streamer threshold is estimated using a three-dimensional model of the test bench. The electrical breakdown voltage is calculated without solving a full-blown plasma model by integrating Townsend growth coefficients (TdGC) along electric field lines. The BOLSIG⁺ and LAPLACE databases are used to calculate the TdGC at different mixture ratios of nitrogen/copper vapor. The simulations show that both radiation and heat conductivity should be considered for an adequate description of the wire resistance, and that gaseous discharges start at lower voltages than expected due to ultraviolet radiation and the explosion shocks, which may have ionized the nitrogen.
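The third-stage breakdown estimate amounts to integrating an effective ionization coefficient along each field line and comparing the result with a critical value (the Meek/streamer criterion). A minimal sketch follows; the alpha(E) fit and the field profile are generic placeholders, not the BOLSIG⁺/LAPLACE data or the 3D field solution used in the study.

```python
# Integrate an assumed effective Townsend coefficient along a discretized field line (Meek criterion, K ~ 18).
import numpy as np

def alpha_eff(E, p=760.0, A=15.0, B=365.0):
    """Effective ionization coefficient [1/cm], classic A*p*exp(-B*p/E) form (assumed fit; p in Torr, E in V/cm)."""
    return A * p * np.exp(-B * p / np.maximum(E, 1e-6))

def meek_integral(E_along_line, ds):
    """Sum alpha_eff over the field line samples, spaced ds apart (cm)."""
    return np.sum(alpha_eff(E_along_line) * ds)

s = np.linspace(0.0, 1.0, 200)          # distance from the wire surface, cm (illustrative)
E = 60e3 * np.exp(-3.0 * s)             # V/cm, illustrative decaying field profile
K = meek_integral(E, ds=s[1] - s[0])
print(f"Meek integral = {K:.1f} -> {'streamer onset' if K >= 18 else 'below threshold'}")
```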

Keywords: exploding wire, Townsend breakdown mechanism, streamer, metal vapor, shock waves

Procedia PDF Downloads 71
1043 Transformation in Palliative Care Delivery in Surgery

Authors: W. L. Tsang, H. Y. Li, S. L. Wong, T. Y. Kwok, S. C. Yuen, S. S. Kwok, P. S. Ko, S. Y. Lau

Abstract:

Introduction: Palliative care is without doubt necessary in surgery. When one looks at studies of what patients with life-threatening illness want and compares them to what these patients experience in surgical units, the gap is huge. Surgical nurses, as patient advocates, should engage with patients and families sooner rather than later in their illness trajectories to consider how to manage the illness, not just their capacity to survive. Objective: This work aims to fill the service gap of palliative care in surgery by producing a quality-driven, evidence-based yet straightforward clinical practice guide based on a focused strategy. Methodology: In line with the Guide to Good Nursing Practice: End-of-Life Care recommended by the Nursing Council of Hong Kong and the strategic goal of improving the quality of palliative care proposed in the HA Strategic Plan 2017-2022, multiple phases of work were undertaken from July 2015 to December 2017. A pragmatic clinical practice guide for surgical patients facing life-threatening conditions was developed based on assessments of surgical nurses' knowledge of and attitudes towards end-of-life care. Key domains, including preparation for bereavement, nursing care for imminently dying patients, and care at the dying scene, were crystallized according to the results of the assessments and the palliative care checklist formulated by the UCH Palliative Care Team. After a year of rollout, its content was refined through analyses of implementation in routine practice and consensus opinions from frontline nurses. Results and Outcomes: This clinical practice guide inspires surgical nurses with the art of care to provide for patients' comfort, function, and longevity. It provides practical directions and assists nurses in mastering the skills of advance care planning and learning how to be clear with patients, families, and themselves about the realities of the disease picture. Through its implementation, patients and families are included in the decision process, and their wishes are honored. The delivery of explicit and high-quality palliative care maintains good nurse-patient relations and enhances the satisfaction of patients and families with hospital care. Conclusion: Surgical nursing has always been up to the unique challenges of its era. This clinical practice guide has become an island of credibility for our nurses as they traverse the often stormy waters of life-limiting illness.

Keywords: palliative care delivery, palliative care in surgery, hospice care, end-of-life care

Procedia PDF Downloads 234
1042 Global Experiences in Dealing with Biological Epidemics with an Emphasis on COVID-19 Disease: Approaches and Strategies

Authors: Marziye Hadian, Alireza Jabbari

Abstract:

Background: The World Health Organization has identified COVID-19 as a public health emergency and is urging governments to stop the transmission of the virus by adopting appropriate policies. In this regard, authorities have taken different approaches to break the chain of transmission or control the spread of the disease. The questions we now face include: What are these approaches? What tools should be used to implement each preventive protocol? And what is the impact of each approach? Objective: The aim of this study was to determine the approaches to biological epidemics and the related prevention tools, with an emphasis on the COVID-19 disease. Data sources: Databases including ISI Web of Science, PubMed, Scopus, Science Direct, Ovid, and ProQuest were employed for data extraction. Furthermore, authentic sources such as the WHO website, the published reports of relevant countries, and the Worldometer website were evaluated for grey literature. The time frame of the study was from 1 December 2019 to 30 May 2020. Methods: The present study was a systematic review of publications related to prevention strategies for the COVID-19 disease. The study was carried out based on the PRISMA guidelines, using CASP for articles and AACODS for grey literature. Results: The study findings showed that, in order to confront the COVID-19 epidemic, there are in general three approaches ('mitigation', 'active control', and 'suppression') and four strategies ('quarantine', 'isolation', 'social distancing', and 'lockdown'), in both individual and social dimensions, for dealing with epidemics. The selection and implementation of each approach requires specific strategies and has different effects when it comes to controlling and inhibiting the disease. Key finding: One possible approach to controlling the disease is to change individual behavior and lifestyle. In addition to the prevention strategies, the use of masks, the observance of personal hygiene principles such as regular hand washing and keeping contaminated hands away from the face, and the observance of public health principles such as sneezing and coughing etiquette and the safe disposal of personal protective equipment must be strictly observed. Although these measures have not been included in the category of prevention tools, they have a great impact on controlling an epidemic, especially the new coronavirus epidemic. Conclusion: Although the use of different approaches to control and inhibit biological epidemics depends on numerous variables, global experience suggests that some of these approaches are ineffective. Drawing on previous experience around the world, along with the current experiences of countries, can be very helpful in choosing the appropriate approach for each country in accordance with its characteristics and can reduce possible costs at the national and international levels.

Keywords: novel coronavirus, COVID-19, approaches, prevention tools, prevention strategies

Procedia PDF Downloads 113
1041 HIV-1 Nef Mediates Host Invasion by Differential Expression of Alpha-Enolase

Authors: Reshu Saxena, R. K. Tripathi

Abstract:

HIV-1 transmission and spread involve significant host-virus interaction. Potential targets for the prevention of HIV-1 lie at the mucosal barriers. Thus, a better understanding of how HIV-1 infects target cells at such sites and leads to their invasion is required, with a prime focus on the host determinants regulating HIV-1 spread. HIV-1 Nef is important for viral infectivity and pathogenicity. It promotes HIV-1 replication and facilitates immune evasion by interacting with various host factors and altering cellular pathways via multiple protein-protein interactions. In this study, nef was sequenced from HIV-1 patients and showed specific mutations revealing sequence variability in nef. To explore the differences in Nef functionality based on sequence variability, we studied the effects of HIV-1 Nef in the human SupT1 T cell line and the THP-1 monocyte-macrophage cell line through a proteomics approach. 2D gel electrophoresis of control and Nef-transfected SupT1 cells demonstrated several differentially expressed proteins, with significant modulation of alpha-enolase. Through further studies, the effects of Nef on alpha-enolase regulation were found to be cell lineage-specific, being stimulatory in macrophages/monocytes, inhibitory in T cells, and without effect in HEK-293 cells. Cell migration and invasion studies were employed to determine the biological function affected by the Nef-mediated regulation of alpha-enolase. Cell invasion was enhanced in THP-1 cells but inhibited in SupT1 cells by wild-type nef. In addition, the modulation of enolase and cell invasion remained unaffected by a unique nef variant. These results indicated that the regulation of alpha-enolase expression and of the invasive property of host cells by Nef is sequence-specific, suggesting the involvement of a particular motif of Nef. To precisely determine this site, we designed a heptapeptide including the suggested alpha-enolase-regulating sequence of nef, and a nef mutant with a deletion of this site. Macrophages/monocytes, being the major cells affected by HIV-1 at mucosal barriers, were particularly investigated with the nef mutant and the peptide. Both the nef mutant and the heptapeptide led to inhibition of the enhanced enolase expression and increased invasiveness in THP-1 cells. Together, these findings suggest a possible mechanism of host invasion by HIV-1 through Nef-mediated regulation of alpha-enolase and identify a potential therapeutic target for HIV-1 entry at mucosal barriers.

Keywords: HIV-1 Nef, nef variants, host-virus interaction, tissue invasion

Procedia PDF Downloads 392
1040 Importance of Different Spatial Parameters in Water Quality Analysis within Intensive Agricultural Area

Authors: Marina Bubalo, Davor Romić, Stjepan Husnjak, Helena Bakić

Abstract:

Even though European Council Directive 91/676/EEC, known as the Nitrates Directive, was adopted in 1991, the issue of water quality preservation in areas of intensive agricultural production still persists all over Europe. High nitrate nitrogen concentrations in surface water and groundwater originating from diffuse sources are one of the most important environmental problems in modern intensive agriculture. The fate of nitrogen in soil, surface water, and groundwater in agricultural areas is mostly affected by anthropogenic activity (i.e., agricultural practice) and by hydrological and climatological conditions. The aim of this study was to identify the impact of land use, soil type, soil vulnerability to pollutant percolation, and natural aquifer vulnerability on nitrate occurrence in surface water and groundwater within an intensive agricultural area. The study was set in Varaždin County (northern Croatia), which is under the significant influence of the large rivers Drava and Mura; as a result, the entire area is dominated by alluvial soil with a shallow active profile, mainly on a gravel base. The negative agricultural impact on water quality in this area is evident; therefore, half of the county is part of the delineated nitrate vulnerable zones (NVZ). Data on water quality were collected from 7 surface water and 8 groundwater monitoring stations in the county. Also, a recent study of the area involved a detailed inventory of agricultural production and fertilizer use, with the aim of producing a new agricultural land use database as one of the dominant parameters. The analysis of this database, done using ArcGIS 10.1, showed that 52.7% of the total county area is agricultural land and that 59.2% of the agricultural land is used for intensive agricultural production. On the other hand, 56% of the soil within the county is classified as vulnerable to pollutant percolation. The situation is similar for natural aquifer vulnerability; the northern part of the county ranges from high to very high aquifer vulnerability. Statistical analysis of the water quality data was done using SPSS 13.0. Cluster analysis grouped both surface water and groundwater stations into two groups according to nitrate nitrogen concentrations. Mean nitrate nitrogen concentrations in surface water range from 4.2 to 5.5 mg/l in group 1 and from 24 to 42 mg/l in group 2. The results are similar, but evidently higher, for the groundwater samples; mean nitrate nitrogen concentrations range from 3.9 to 17 mg/l in group 1 and from 36 to 96 mg/l in group 2. ANOVA confirmed the statistical significance of the grouping of stations. The previously listed parameters (land use, soil type, etc.) were then used in a factorial correspondence analysis (FCA) to detect the importance of each parameter for local water quality. Since these parameters mostly cannot be altered, there is an obvious necessity for more precise and better-adapted land management under such conditions.
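A minimal sketch of the station grouping step is given below, with made-up mean concentrations; k-means stands in here for the cluster analysis performed in SPSS, purely to illustrate how stations separate into a low-nitrate and a high-nitrate group.

```python
# Group hypothetical monitoring stations into two clusters by mean nitrate-nitrogen concentration.
import numpy as np
from sklearn.cluster import KMeans

stations = np.array([[3.9], [6.5], [12.0], [17.0], [36.0], [55.0], [78.0], [96.0]])  # assumed mg/l means

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(stations)
for group in np.unique(labels):
    vals = stations[labels == group].ravel()
    print(f"group {group}: {vals.min():.1f}-{vals.max():.1f} mg/l")
```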

Keywords: agricultural area, nitrate, factorial correspondence analysis, water quality

Procedia PDF Downloads 245
1039 Relation Between Traffic Mix and Traffic Accidents in a Mixed Industrial Urban Area

Authors: Michelle Eliane Hernández-García, Angélica Lozano

Abstract:

Traffic accident studies usually consider the relation between factors such as the type of vehicle, its operation, and the road infrastructure. Traffic accidents can be explained by different factors, which have greater or lesser relevance. Two zones are studied: a mixed industrial zone and its extended zone. The first zone has mainly residential (57%) and industrial (23%) land uses. Trucks travel mainly on the roads where the industries are located. Four sensors give information about traffic and speed on the main roads. The extended zone (which includes the first zone) has mainly residential (47%) and mixed residential (43%) land use, and just 3% industrial use. Its traffic mix is composed mainly of non-trucks. Thirty-nine traffic and speed sensors are located on the main roads. The traffic mix in a mixed land-use zone could be related to traffic accidents. To understand this relation, it is necessary to identify the elements of the traffic mix that are linked to traffic accidents. Models that attempt to explain which factors are related to traffic accidents have faced multiple methodological problems in obtaining robust databases. Poisson regression models are used to explain the accidents. The objective of the Poisson analysis is to estimate a coefficient vector that provides an estimate of the natural logarithm of the mean number of accidents per period; this estimate is obtained by standard maximum likelihood procedures. For the estimation of the relation between traffic accidents and the traffic mix, the database comprises eight variables, with 17,520 observations and six vectors. In the model, the dependent variable is the occurrence or non-occurrence of accidents, and the vectors that seek to explain it correspond to the vehicle classes C1, C2, C3, C4, C5, and C6, standing respectively for cars, microbuses and vans, buses, unitary trucks (2 to 6 axles), articulated trucks (3 to 6 axles), and bi-articulated trucks (5 to 9 axles); in addition, there is a vector for the average speed of the traffic mix. A Poisson model is applied, using a logarithmic link function and a Poisson family. For the first zone, the Poisson model shows a positive relation between traffic accidents and C6, average speed, C3, C2, and C1 (in decreasing order). The analysis of the coefficients shows a strong relation with bi-articulated trucks and buses (C6 and C3), indicating an important participation of freight trucks. For the expanded zone, the Poisson model shows a positive relation between traffic accidents and average speed, bi-articulated trucks (C6), and microbuses and vans (C2). The coefficients obtained in both Poisson models show a stronger relation between freight trucks and traffic accidents in the first, industrial zone than in the expanded zone.
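A minimal sketch of a Poisson regression of this form is shown below on synthetic data; the column names mirror the vehicle classes described above, but the counts, speeds, and accident numbers are fabricated for illustration only.

```python
# Poisson GLM (log link) relating accident counts to vehicle-class volumes C1-C6 and mean speed.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000  # hourly observations (placeholder)
df = pd.DataFrame({c: rng.poisson(lam, n) for c, lam in
                   zip(["C1", "C2", "C3", "C4", "C5", "C6"], [400, 60, 30, 20, 15, 5])})
df["speed"] = rng.normal(45, 10, n)
# synthetic accident counts, loosely driven by bi-articulated trucks and speed
mu = np.exp(-4 + 0.02 * df["C6"] + 0.01 * df["speed"])
df["accidents"] = rng.poisson(mu)

model = smf.glm("accidents ~ C1 + C2 + C3 + C4 + C5 + C6 + speed",
                data=df, family=sm.families.Poisson()).fit()
print(model.summary())  # the sign and size of each coefficient play the role described in the abstract
```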

Keywords: freight transport, industrial zone, traffic accidents, traffic mix, trucks

Procedia PDF Downloads 121
1038 Examination of How Do Smart Watches Influence the Market of Luxury Watches with Particular Regard of the Buying-Reasons

Authors: Christopher Benedikt Jakob

Abstract:

In our current society, there is no need to look at a wristwatch to know the exact time. Smartphones, the clock in the car, or the computer clock inform us about the time too. Yet for hundreds of years, luxury watches have held a fascination for human beings. Consumers buy watches that cost thousands of euros, although they could buy much cheaper watches that also fulfill the function of indicating the correct time. This shows that functional value plays only a minor role in the buying reasons for luxury watches. In recent years, people have had an increasing demand to track data such as their daily walking distance or their sleep. Smart watches enable consumers to obtain such data. There is a trend for people to optimise parts of their social life, and thus they get the impression that they are able to optimise themselves as human beings. With the help of smart watches, they are able to optimise parts of their productivity and realise their targets at the same time. These smart watches are also offered as luxury models, and the question is: how will customers of traditional luxury watches react? This study therefore intends to answer the question of why people are willing to spend enormous amounts of money on luxury watches. The self-expression model, the relationship basis model, the functional benefit representation model, and means-end theory are chosen as an appropriate methodology to find reasons why human beings purchase specific luxury watches and luxury smart watches. This evaluative approach further discusses these strategies concerning, for example, whether consumers buy luxury watches or smart watches to express the current self or the ideal self, and whether human beings make decisions based on expected results. The research critically evaluates the view that relationships are compared on the basis of their advantages. Luxury brands offer socio-emotional advantages, such as the social function of identification, and the strong brand personality of luxury watches and luxury smart watches helps customers to structure and retrieve brand awareness, which simplifies the process of decision-making. One of the goals is to identify whether customers know why they like specific luxury watches and dislike others, although they are produced in the same country and cost comparable prices. It is evident that the market for luxury watches, and especially for luxury smart watches, is changing far faster than it has in the past. Therefore, the research examines the parameters changing the market in detail.

Keywords: buying-behaviour, brand management, consumer, luxury watch, smart watch

Procedia PDF Downloads 187
1037 Decentralized Forest Policy for Natural Sal (Shorea robusta) Forests Management in the Terai Region of Nepal

Authors: Medani Prasad Rijal

Abstract:

The study outlines the impacts of decentralized forest policy on natural Sal (Shorea robusta) forests in the Terai region of Nepal. The government has implemented a community forestry program to manage forest resources and improve the livelihoods of local people collectively. Forest management authority, such as the rights to conserve, manage, develop, and use forest resources, was shifted to local communities; however, ownership of the forestland was retained by the government. Local communities made decisions on the harvesting, distribution, and sale of forest products, fixing the prices independently. The local communities set low values on forest products and distributed them among user households in the name of collective decision-making. This low valuation is devaluing the worth of forest products. Therefore, the study hypothesized that decision-making capacities are equally as important as decentralized policy and program formulation. To accomplish the study, individual- to group-level discussions and questionnaire surveys were conducted with executive committee members and user households. The study revealed that the local institution, the Community Forest User Group (CFUG) committee, normally made decisions on a consensus basis. Considering the access and purchasing capacity of user households with poor economic backgrounds, a low pricing mechanism for forest products has been practiced, even though Sal timber is far more expensive in the local market. The local communities believed that the low pricing mechanism made products accessible to all user households, from poor to better off. However, the analysis of forest product distribution contradicted this assumption, as most of the Sal timber, the most valuable forest product of the community forest, was purchased only by a limited number of households in better economic conditions. Since the Terai region is socio-economically heterogeneous, better-off households always have a higher purchasing capacity and a greater possibility of capturing timber benefits because of the low price mechanism. On the other hand, the minimal pricing of forest products contributes little to community fund collection and, consequently, provides little support for carrying out poverty alleviation activities for poor people. The local communities have fixed the Sal timber price at around one-third of the normal market price, which is itself strong evidence of forest product devaluation. Finally, the study concluded that capacity building of local executives, as the decision-makers for natural Sal forests, is as indispensable as policy and program formulation for effective decentralized forest management. A unilateral decentralized forest policy may devalue forest products rather than devolve power to local communities and empower them.

Keywords: community forestry program, decentralized forest policy, Nepal, Sal forests, Terai

Procedia PDF Downloads 319
1032 Integration of ICF Walls as Diurnal Solar Thermal Storage with Microchannel Solar Assisted Heat Pump for Space Heating and Domestic Hot Water Production

Authors: Mohammad Emamjome Kashan, Alan S. Fung

Abstract:

In Canada, more than 32% of the total energy demand is related to the building sector. Therefore, there is a great opportunity for greenhouse gas (GHG) reduction by integrating solar collectors to provide the building heating load and domestic hot water (DHW). Despite the cold winter weather, Canada has a good number of sunny and clear days that can be exploited for diurnal solar thermal energy storage. Due to the energy mismatch between the building heating load and the availability of solar irradiation, relatively big storage tanks are usually needed to store solar thermal energy during the daytime and use it at night. On the other hand, water tanks occupy considerable space, and especially in big cities, space is relatively expensive. This project investigates the possibility of using a specific building construction material (ICF, Insulated Concrete Form) as diurnal solar thermal energy storage integrated with a heat pump and a microchannel solar thermal (MCST) collector. Little of the literature has studied the application of a building's pre-existing walls as active solar thermal energy storage, which would be a feasible and industrialized solution to the solar thermal mismatch. By using ICF walls integrated into the building envelope instead of big storage tanks, excess solar energy can be stored in the concrete of the ICF wall, which has EPS insulation layers on both sides to retain the thermal energy. In this study, two solar-based systems are designed and simulated in the Transient Systems Simulation Program (TRNSYS) to compare the benefits of ICF wall thermal storage with the system without ICF walls. The heating load and DHW of a Canadian single-family house located in London, Ontario, are provided by the solar-based systems. The proposed system integrates the MCST collector, a water-to-water heat pump, a preheat tank, the main tank, fan coils (to deliver the building heating load), and ICF walls. During the day, excess solar energy is stored in the ICF walls (charging cycle). Thermal energy can be recovered from the ICF walls when the preheat tank temperature drops below the ICF wall temperature (discharging process), to increase the COP of the heat pump. The evaporator of the heat pump is coupled with the preheat tank. The warm water provided by the heat pump is stored in the second tank. Fan coil units are connected to this tank to provide the building heating load. DHW is also provided from the main tank. It is found that the system with ICF walls, with an average solar fraction of 82%-88%, can cover the whole heating demand plus DHW for nine months and has a 10-15% higher average solar fraction than the system without ICF walls. A sensitivity analysis of the different parameters influencing the solar fraction is discussed in detail.
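The solar fraction quoted above is the share of the combined heating and DHW load that is met by the solar loop rather than by auxiliary energy. A minimal sketch of that bookkeeping is shown below; the monthly figures are invented for illustration and are not the TRNSYS results.

```python
# Compute monthly and annual solar fractions from assumed load and auxiliary-energy totals.
monthly_load_kwh = [2100, 1800, 1500, 900, 400, 250, 250, 300, 600, 1100, 1600, 2000]  # heating + DHW
monthly_aux_kwh = [450, 320, 220, 90, 10, 0, 0, 0, 40, 130, 260, 410]                  # non-solar input

solar_fractions = [(load - aux) / load for load, aux in zip(monthly_load_kwh, monthly_aux_kwh)]
annual_sf = 1 - sum(monthly_aux_kwh) / sum(monthly_load_kwh)

for month, sf in enumerate(solar_fractions, start=1):
    print(f"month {month:2d}: SF = {sf:.0%}")
print(f"annual solar fraction = {annual_sf:.0%}")   # lands in the 82-88% band with these assumed numbers
```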

Keywords: net-zero building, renewable energy, solar thermal storage, microchannel solar thermal collector

Procedia PDF Downloads 108
1035 Research on the Planning Spatial Mode of China's Overseas Industrial Park

Authors: Sidong Zhao, Xingping Wang

Abstract:

Recently, the government of China has provided strong support for the development of overseas industrial parks. The global distribution of China's overseas industrial parks has gradually moved from 'sparks of fire' to 'prairie fires'. This support and distribution have promoted the development of overseas industrial parks into a strategy for constructing China's new open economic system and into a typical representative of the 'Chinese wisdom' and 'China's plans' that China has contributed to globalization in the new era under the Belt and Road Initiative. As industrial parks are the basis of 'work/employment', a basic function of a city (Athens Charter), planning for the development of industrial parks has become a long-term focus of urban planning. Based on research on the planning and an analysis of the present development of some typical Chinese overseas industrial parks, we found some interesting patterns. First, a large number of China's overseas industrial parks are located in less developed countries. These industrial parks have become significant drivers of the development of the host cities and even the surrounding regions, especially in terms of investment, employment, and tax paid locally; consequently, the planning and development of overseas industrial parks have received extensive attention. Second, there are problems in a small part of the overseas parks, such as park planning that does not follow the planning of the host city and a lack of implementation of the park planning. These problems have led to difficulties in implementing the plans and in the sustainable development of the parks. Third, a unique pattern of spatial development has formed. In the dimension of regional spatial distribution, there are five characteristics: along the coast, along rivers, along main traffic lines and hubs, around central urban areas, and along regional economic connections. In the dimension of the spatial relationship between the industrial park and the city, there is a growing and evolving trend from 'separation' to 'integration' to 'union'. In the dimension of the spatial mode of the industrial parks, there are different patterns of development, such as the specialized industrial park, the complex industrial park, the characteristic town, and the new industrial urban area. From the perspective of these development trends and spatial modes, the future planning of China's overseas industrial parks should emphasize the idea of 'building a city based on the industrial park'. In other words, the development of China's overseas industrial parks should move from being 'driven by policy' to being 'driven by the functions of the city', accelerating the formation of a system of China's overseas industrial parks and the integration of the industrial parks with their cities.

Keywords: overseas industrial park, spatial mode, planning, China

Procedia PDF Downloads 181
1034 Changes in Attitudes of State Towards Orthodox Church: Greek Case after Eurozone Crisis in Alexis Tsipras Era

Authors: Zeynep Selin Balci, Altug Gunal

Abstract:

Religion has always had an effect on the policies of states. Where religion plays a central role in defining identity, especially in the process of becoming an independent state, the bond between religious authority and the state cannot easily be broken. As Greece's independence from the Ottoman Empire was acquired at the same time as the creation of its own church, the Church of Greece, which declared its independence from the Greek Orthodox Patriarchate in Istanbul, the new church became an important part of Greek national identity. As the Church has the ability to influence Greeks, its rituals, public appearances, and practices are used to provide support to the state. Although there have sometimes been controversies between church and state, the church has always been an integral part of the state, as demonstrated by the payment of priests' salaries from the state payroll and by priests effectively being civil servants. European Union membership, on the other hand, has had a changing impact on this relationship. This impact started to become more visible in 2000, when the then government decided to remove the religion field from identity cards. The Church's reaction was to rally people by recalling their religious identity, followed by a redefinition of the content of nationality, which inspired nationalist fronts. After 2015, when the leftist coalition Syriza and its self-described atheist leader came to power, the situation for nationalists and the Church became more tangled, in addition to the economic crisis that had started in 2010 and evolved into the Eurozone crisis, affecting not only Greece but also other member states. Although the church did not have direct confrontations with the government, the fact that Tsipras refused to take the oath on the Bible created tensions, because it was not acceptable for a state whose Constitution begins 'in the name of the Holy, Consubstantial and Indivisible Trinity'. Moreover, the austerity measures adopted to overcome the economic crisis, which affected the everyday life of citizens in terms of both prices and salaries, did not greatly harm the church's economic situation. Considering that the church is the second biggest landowner after the state and pays no taxes, the fact that it was exempt from austerity measures showed the government the necessity of finding a way to make the church contribute to the solution of the crisis. In 2018, when the government agreed with the head of the church to remove priests from the government payroll, which would automatically end their civil servant status, it created tensions both within the church and in society. As a result of the elections held in July 2019, Tsipras did not have the chance to implement the decision, as he left office. In light of this, this study aims to analyze the position of the church in the economic crisis and its effects during the Tsipras term. To understand this sufficiently, the study also looks at the historical turning points of the Church's influence in the eyes of Greeks.

Keywords: Eurozone crisis, Greece, Orthodox Church, Tsipras

Procedia PDF Downloads 109
1033 Morphological and Chemical Characterization of the Surface of Orthopedic Implant Materials

Authors: Bertalan Jillek, Péter Szabó, Judit Kopniczky, István Szabó, Balázs Patczai, Kinga Turzó

Abstract:

Hip and knee prostheses are among the most frequently used medical implants and can significantly improve patients' quality of life. The long-term success and biointegration of these prostheses depend on several factors, such as the bulk and surface characteristics, construction, and biocompatibility of the material. The applied surgical technique and the general health condition and quality of life of the patient are also determining factors. Medical devices used in orthopedic surgery have different surfaces depending on their function inside the human body. The surface roughness of these implants determines their interaction with the surrounding tissues. Numerous modifications have been applied in recent decades to improve specific properties of implants. Our goal was to compare the surface characteristics of typical implant materials used in orthopedic surgery and traumatology. The morphological and chemical structures of discs (Sanatmetal Ltd, Hungary) of the following materials were examined: Vortex plate anodized titanium, cemented THR (total hip replacement) stem high-nitrogen REX steel (SS), uncemented THR stem and cup titanium (Ti) alloy with titanium plasma spray coating (TPS), cemented cup and uncemented acetabular liner HXL and UHMWPE, and TKR (total knee replacement) femoral component CoCrMo alloy. Visualization and elemental analysis were performed by scanning electron microscopy (SEM) and energy dispersive spectroscopy (EDS). Surface roughness was determined by atomic force microscopy (AFM) and profilometry. SEM and AFM revealed the morphological and roughness features of the examined materials. TPS Ti presented the highest Ra value (25 ± 2 μm), followed by the CoCrMo alloy (535 ± 19 nm), Ti (227 ± 15 nm), and stainless steel (170 ± 11 nm). The roughness of the HXL and UHMWPE surfaces was in the same range, 147 ± 13 nm and 144 ± 15 nm, respectively. EDS confirmed the typical elements of the investigated prosthesis materials: Vortex plate Ti (Ti, O, P); TPS Ti (Ti, O, Al); SS (Fe, Cr, Ni, C); CoCrMo (Co, Cr, Mo); HXL (C, Al, Ni); and UHMWPE (C, Al). The results indicate that the surfaces of the prosthesis materials have significantly different features and that the applied investigation methods are suitable for their characterization. Contact angle measurements and in vitro cell culture testing are further planned to assess their surface energy characteristics and biocompatibility.
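The Ra values compared above are arithmetic average roughnesses: the mean absolute deviation of the surface height profile from its mean line. A minimal sketch of that calculation on a synthetic profile is given below; the profile is not an AFM trace of the actual specimens.

```python
# Compute the arithmetic average roughness Ra of a synthetic height profile.
import numpy as np

rng = np.random.default_rng(1)
profile_nm = rng.normal(0.0, 200.0, 5000)        # assumed heights along one scan line, in nm

mean_line = profile_nm.mean()
Ra = np.mean(np.abs(profile_nm - mean_line))     # Ra = mean |z - z_mean|
print(f"Ra = {Ra:.0f} nm")                       # same order of magnitude as the Ti and SS surfaces above
```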

Keywords: morphology, PE, roughness, titanium

Procedia PDF Downloads 111
1032 Neurotoxic Effects Assessment of Metformin in Danio rerio

Authors: Gustavo Axel Elizalde-Velázquez

Abstract:

Metformin is the first line of oral therapy to treat type II diabetes and is also employed as a treatment for other indications, such as polycystic ovary syndrome, cancer, and COVID-19. Recent data suggest it is the aspirin of the 21st century due to its antioxidant and anti-aging effects. However, an increasing number of recent articles indicate that its long-term consumption generates mitochondrial impairment. To date, it is known that metformin increases the biogenesis of Alzheimer's amyloid peptides by up-regulating BACE1 transcription, but further information related to brain damage after its consumption is missing. Bearing in mind the above, this work aimed to establish whether or not chronic exposure to metformin may alter swimming behavior and induce neurotoxicity in Danio rerio adults. For this purpose, 250 adult Danio rerio were assigned to six tanks of 50 L capacity. Four of the six systems contained 50 fish, while the remaining two had 25 fish (≈1 male:1 female ratio). Each system with 50 fish was allocated one of the three metformin treatment concentrations (1, 20, or 40 μg/L), with one system as the control treatment. The systems with 25 fish, on the other hand, were used as positive controls for acetylcholinesterase inhibition (10 μg/L of atrazine) and oxidative stress (3 μg/L of atrazine). After four months of exposure, a mean of 32 fish (S.D. ± 2) per MET treatment group survived; these fish were used for the evaluation of behavior with the Novel Tank test. After the behavioral assessment, we collected the blood and brains of the fish from all treatment groups. For blood collection, fish were anesthetized with an MS-222 solution (150 mg/L), while for brain collection, fish were euthanized using the hypothermic shock method (2–4 °C). Blood was employed to determine CASP3 activity and the percentage of apoptotic cells with the TUNEL assay, and brains were used to evaluate acetylcholinesterase activity, oxidative damage, and gene expression. After chronic exposure, MET-exposed fish exhibited less swimming activity compared to control fish. Moreover, compared with the control group, MET significantly inhibited the activity of AChE and induced oxidative damage in the brains of the fish. Concerning gene expression, MET significantly upregulated the expression of Nrf1, Nrf2, BAX, p53, BACE1, APP, and PSEN1, and downregulated CASP3 and CASP9. Although MET did not cause overexpression of the CASP3 gene, we saw a meaningful rise in the activity of this enzyme in the blood of fish exposed to MET compared to the control group, which we then confirmed by a high number of apoptotic cells in the TUNEL assay. To the best of our understanding, this is the first study that delivers evidence of oxidative impairment, apoptosis, AChE alteration, and overexpression of β-amyloid-related genes in the brain of fish exposed to metformin.

Keywords: AChE inhibition, CASP3 activity, Novel Tank test, oxidative damage, TUNEL assay

Procedia PDF Downloads 70
1031 Comparison of Two Methods of Cryopreservation of Testicular Tissue from Prepubertal Lambs

Authors: Rensson Homero Celiz Ygnacio, Marco Aurélio Schiavo Novaes, Lucy Vanessa Sulca Ñaupas, Ana Paula Ribeiro Rodrigues

Abstract:

The cryopreservation of testicular tissue emerges as an alternative for preserving the reproductive potential of individuals who cannot yet produce sperm but who will undergo treatments that may affect their fertility (e.g., chemotherapy). Therefore, the present work aims to compare two cryopreservation methods (slow freezing and vitrification) in testicular tissue of prepubertal lambs. To obtain the testicular tissue, the animals were castrated, and the testicles were collected immediately in a physiological solution supplemented with antibiotics. In the laboratory, the testes were split into small pieces. The testicular fragments measured 3×3×1 mm³ and were placed in a dish containing Minimum Essential Medium (MEM-HEPES). The fragments were distributed randomly into non-cryopreserved (fresh control), slow-freezing (SF), and vitrification groups. For the SF procedure, two fragments from a given male were placed in a 2.0 mL cryogenic vial containing 1.0 mL MEM-HEPES supplemented with 20% fetal bovine serum (FBS) and 20% dimethyl sulfoxide (DMSO). The tubes were placed in a Mr. Frosty™ freezing container with isopropyl alcohol and transferred to a -80 °C freezer for overnight storage. On the next day, each tube was plunged into liquid nitrogen (LN). For vitrification, the ovarian tissue cryosystem (OTC) device was used. Testicular fragments were placed in the OTC device and exposed for four minutes to the first vitrification solution (VS1), composed of MEM-HEPES supplemented with 10 mg/mL bovine serum albumin (BSA), 0.25 M sucrose, 10% ethylene glycol (EG), 10% DMSO, and 150 μM alpha-lipoic acid. VS1 was then discarded, and the fragments were submerged in a second vitrification solution (VS2) with the same composition as VS1 but with 20% EG and 20% DMSO. VS2 was then discarded, and each OTC device containing up to four testicular fragments was closed and immersed in LN. After the storage period, the fragments were removed from the LN, kept at room temperature for one minute, and then immersed in a 37 °C water bath for 30 s. Samples were warmed by sequential immersion in solutions of MEM-HEPES supplemented with 3 mg/mL BSA and decreasing concentrations of sucrose. Hematoxylin-eosin staining was used to analyze the tissue architecture. A score scale from 0 to 3 was used, with a score of 0 representing normal morphology and a score of 3 representing severe alteration. The histomorphological evaluation of the testicular tissue showed that, regarding nuclear alterations (distinctness of nucleoli and condensation of nuclei), there were no differences between slow freezing and the control, whereas vitrification caused greater damage (p < 0.05). On the other hand, regarding epithelial alterations, slow freezing showed scores statistically equal to the control for variables such as retraction of the basement membrane, formation of gaps, and organization of the peritubular cells. The results of the study demonstrate that cryopreservation using the slow freezing method is an excellent tool for the preservation of prepubertal testicular tissue.

Keywords: cryopreservation, slow freezing, vitrification, testicular tissue, lambs

Procedia PDF Downloads 154
1030 The Relationship between Basic Human Needs and Opportunity Based on Social Progress Index

Authors: Ebru Ozgur Guler, Huseyin Guler, Sera Sanli

Abstract:

The Social Progress Index (SPI), whose foundations were laid at the World Economic Forum, is an index that aims to form a systematic basis for guiding strategies for inclusive growth, which requires achieving both economic and social progress. This research aims to determine the relations between the “Basic Human Needs” (BHN) dimension (comprising the four variables ‘Nutrition and Basic Medical Care’, ‘Water and Sanitation’, ‘Shelter’, and ‘Personal Safety’) and the “Opportunity” (OPT) dimension (composed of the ‘Personal Rights’, ‘Personal Freedom and Choice’, ‘Tolerance and Inclusion’, and ‘Access to Advanced Education’ components) of the 2016 SPI for the 138 countries listed on the website of the Social Progress Imperative, by carrying out canonical correlation analysis (CCA), a data reduction technique that operates by maximizing the correlation between two variable sets. In the interpretation of results, the first pair of canonical variates, which corresponds to the highest canonical correlation, has been taken into account. The first canonical correlation coefficient has been found to be 0.880, indicating a strong relationship between the BHN and OPT variable sets. The Wilks' Lambda statistic revealed an overall effect of 0.809 for the full model, which is statistically significant (p-value of 0.000). According to the standardized canonical coefficients, the largest contribution to the BHN set of variables comes from the ‘shelter’ variable, while the most effective variable in the OPT set is ‘access to advanced education’. Findings based on canonical loadings confirm these results with respect to the contributions to the first canonical variates. When canonical cross loadings (structure coefficients) are examined for the first pair of canonical variates, the largest contributions are again provided by the ‘shelter’ and ‘access to advanced education’ variables. Since the structure coefficients were found to be negative for all variables, all OPT variables are positively related to all BHN variables. When canonical communality coefficients, which are the sums of the squared structure coefficients across all interpretable functions, are taken as the basis, the ‘personal rights’ and ‘tolerance and inclusion’ variables can be said not to be useful in the model, with coefficients of 0.318721 and 0.341722, respectively. On the other hand, while the redundancy index for the BHN set has been found to be 0.615, the OPT set has a lower redundancy index of 0.475. High redundancy implies high predictive ability. The proportion of the total variation in the BHN set of variables that is explained by all of the opposite canonical variates has been calculated as 63%, and the proportion of the total variation in the OPT set that is explained by all of the canonical variates in the BHN set has been determined as 50.4%, with a large part of this proportion belonging to the first pair. The results suggest that there is a strong and statistically significant relationship between BHN and OPT, which is largely accounted for by ‘shelter’ and ‘access to advanced education’.
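
For readers who wish to reproduce this kind of analysis, the sketch below shows how canonical correlations, structure loadings, and a first-pair redundancy index could be computed in Python. It is a minimal illustration on synthetic placeholder data, not the authors' code; the random arrays standing in for the BHN and OPT indicator scores and all variable names are assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# Hypothetical arrays: rows are countries, columns are indicator scores.
# X = BHN set (nutrition, water/sanitation, shelter, personal safety)
# Y = OPT set (rights, freedom, tolerance, advanced education)
rng = np.random.default_rng(0)
X = rng.normal(size=(138, 4))
Y = rng.normal(size=(138, 4))

# Standardize both sets so that loadings are comparable.
Xs = (X - X.mean(0)) / X.std(0, ddof=1)
Ys = (Y - Y.mean(0)) / Y.std(0, ddof=1)

cca = CCA(n_components=4, scale=False).fit(Xs, Ys)
U, V = cca.transform(Xs, Ys)             # canonical variates

# Canonical correlations for each pair of variates.
can_corr = [np.corrcoef(U[:, i], V[:, i])[0, 1] for i in range(4)]

# Structure (canonical) loadings: correlations of the original variables
# with their own canonical variates.
x_loadings = np.array([[np.corrcoef(Xs[:, j], U[:, i])[0, 1] for i in range(4)]
                       for j in range(Xs.shape[1])])
y_loadings = np.array([[np.corrcoef(Ys[:, j], V[:, i])[0, 1] for i in range(4)]
                       for j in range(Ys.shape[1])])

# Redundancy index for the X (BHN) set from the first canonical pair:
# mean squared X loading on variate 1 times the squared canonical correlation.
redundancy_x1 = (x_loadings[:, 0] ** 2).mean() * can_corr[0] ** 2
print(can_corr[0], redundancy_x1)
```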

Keywords: canonical communality coefficient, canonical correlation analysis, redundancy index, social progress index

Procedia PDF Downloads 203
1029 Identifying Diabetic Retinopathy Complication by Predictive Techniques in Indian Type 2 Diabetes Mellitus Patients

Authors: Faiz N. K. Yusufi, Aquil Ahmed, Jamal Ahmad

Abstract:

Predicting the risk of diabetic retinopathy (DR) in Indian type 2 diabetes patients is immensely necessary. India is the second largest country after China in terms of the number of diabetic patients, yet, to the best of our knowledge, not a single risk score for complications has ever been investigated. Diabetic retinopathy is a serious complication and a leading cause of visual impairment across countries. Any type or form of DR has been taken as the event of interest, be it mild, background, or grade I, II, III, and IV DR. A sample was determined and randomly collected from the Rajiv Gandhi Centre for Diabetes and Endocrinology, J.N.M.C., A.M.U., Aligarh, India. Collected variables include patients' data such as sex, age, height, weight, body mass index (BMI), blood sugar fasting (BSF), postprandial sugar (PP), glycosylated haemoglobin (HbA1c), diastolic blood pressure (DBP), systolic blood pressure (SBP), smoking, alcohol habits, total cholesterol (TC), triglycerides (TG), high density lipoprotein (HDL), low density lipoprotein (LDL), very low density lipoprotein (VLDL), physical activity, duration of diabetes, diet control, history of antihypertensive drug treatment, family history of diabetes, waist circumference, hip circumference, medications, central obesity, and history of DR. Cox proportional hazards regression is used to design risk scores for the prediction of retinopathy. Model calibration and discrimination are assessed with the Hosmer-Lemeshow test and the area under the receiver operating characteristic (ROC) curve. Overfitting and underfitting of the model are checked by applying regularization techniques, and the best method is selected among ridge, lasso, and elastic net regression. The optimal cut-off point is chosen by Youden's index. The five-year probability of DR is predicted by both the survival function and a two-state Markov chain model, and the better technique is identified. The risk scores developed can be applied by doctors and by patients themselves for self-evaluation. Furthermore, the five-year probabilities can also be used to forecast and monitor the condition of patients. This provides immense benefit in the real application of DR prediction in T2DM.
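
The modelling pipeline described above can be outlined in Python with the lifelines and scikit-learn libraries. The snippet below is an illustrative sketch only, using simulated data and a small subset of hypothetical covariate names; it is not the authors' implementation, and the penalizer value merely stands in for whichever regularization (ridge, lasso, or elastic net) is finally selected.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.metrics import roc_curve

# Hypothetical dataframe: one row per patient, with follow-up time in years,
# a DR event indicator, and a few of the clinical covariates named in the abstract.
rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "time": rng.exponential(6, n),        # years of follow-up
    "dr": rng.integers(0, 2, n),          # 1 = developed any DR
    "hba1c": rng.normal(8, 1.5, n),
    "duration": rng.normal(10, 5, n),     # diabetes duration
    "sbp": rng.normal(135, 15, n),
})

cph = CoxPHFitter(penalizer=0.1)          # ridge-style penalty as a regularization example
cph.fit(df, duration_col="time", event_col="dr")

# Linear predictor (risk score) for each patient.
risk = cph.predict_partial_hazard(df)

# Five-year DR probability from the fitted survival function: 1 - S(5 | x).
surv5 = cph.predict_survival_function(df, times=[5.0]).T.iloc[:, 0]
prob5 = 1.0 - surv5

# Optimal cut-off on the risk score via Youden's index (sensitivity + specificity - 1).
fpr, tpr, thr = roc_curve(df["dr"], risk)
cutoff = thr[np.argmax(tpr - fpr)]
print(cph.concordance_index_, cutoff)
```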

Keywords: Cox proportional hazard regression, diabetic retinopathy, ROC curve, type 2 diabetes mellitus

Procedia PDF Downloads 166
1028 Effects of Renin Angiotensin Pathway Inhibition on Efficacy of Anti-PD-1/PD-L1 Treatment in Metastatic Cancer

Authors: Philip Friedlander, John Rutledge, Jason Suh

Abstract:

Inhibition of programmed death-1 (PD-1) or its ligand PD-L1 confers therapeutic efficacy in a wide range of solid tumor malignancies. Primary or acquired resistance can develop through activation of immunosuppressive immune cells such as tumor-associated macrophages. The renin angiotensin system (RAS) systemically regulates fluid and sodium hemodynamics, but its components are also expressed on, and regulate the activity of, immune cells, particularly those of myeloid lineage. We hypothesized that inhibition of RAS would improve the efficacy of PD-1/PD-L1 treatment. A retrospective analysis was performed through a chart review of patients with solid metastatic malignancies treated with a PD-1/PD-L1 inhibitor between 1/2013 and 6/2019 at Valley Hospital, a community hospital in New Jersey, USA. Efficacy was determined by medical oncologist documentation of clinical benefit in visit notes and by the duration of time on immunotherapy treatment. The primary endpoint was the determination of efficacy differences in patients treated with an inhibitor of RAS (ACE inhibitor, ACEi, or angiotensin receptor blocker, ARB) compared to patients not treated with these inhibitors. To control for broader antihypertensive effects, efficacy as a function of treatment with beta blockers was assessed. 173 patients treated with PD-1/PD-L1 inhibitors were identified, of whom 52 were also treated with an ACEi or ARB. Chi-square testing revealed a statistically significant relationship between being on an ACEi or ARB and efficacy of PD-1/PD-L1 therapy (p = 0.001). No statistically significant relationship was seen between patients taking or not taking beta blocker antihypertensives (p = 0.33). Kaplan-Meier analysis showed a statistically significant improvement in the duration of therapy favoring patients concomitantly treated with an ACEi or ARB compared to patients not exposed to antihypertensives and to those treated with beta blockers. Logistic regression analysis revealed that age, gender, and cancer type did not have significant effects on the odds of experiencing clinical benefit (p = 0.74, p = 0.75, and p = 0.81, respectively). We conclude that retrospective analysis of the treatment of patients with solid metastatic tumors with anti-PD-1/PD-L1 in a community setting demonstrates greater clinical benefit in the context of concomitant ACEi or ARB inhibition, irrespective of gender or age. These data support the development of prospective assessment through randomized clinical trials.
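
A hedged sketch of the statistical workflow (chi-square test, Kaplan-Meier curves by RAS-inhibitor exposure, and a log-rank comparison of duration of therapy) is given below. The chart-review dataframe and its column names are hypothetical; the snippet only illustrates the kind of analysis reported, not the actual study code.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical chart-review table: one row per patient on anti-PD-1/PD-L1 therapy.
rng = np.random.default_rng(2)
n = 173
df = pd.DataFrame({
    "ras_inhibitor": rng.integers(0, 2, n),      # 1 = concomitant ACEi/ARB
    "clinical_benefit": rng.integers(0, 2, n),   # oncologist-documented benefit
    "months_on_therapy": rng.exponential(8, n),
    "stopped": rng.integers(0, 2, n),            # 1 = therapy discontinued
})

# Chi-square test: ACEi/ARB exposure vs. documented clinical benefit.
table = pd.crosstab(df["ras_inhibitor"], df["clinical_benefit"])
chi2, p, dof, _ = chi2_contingency(table)

# Kaplan-Meier curves for duration of therapy by RAS-inhibitor exposure (plotting omitted).
kmf = KaplanMeierFitter()
for label, grp in df.groupby("ras_inhibitor"):
    kmf.fit(grp["months_on_therapy"], event_observed=grp["stopped"],
            label=f"ACEi/ARB={label}")

# Log-rank comparison of the two duration-of-therapy curves.
a, b = df[df.ras_inhibitor == 1], df[df.ras_inhibitor == 0]
lr = logrank_test(a["months_on_therapy"], b["months_on_therapy"],
                  event_observed_A=a["stopped"], event_observed_B=b["stopped"])
print(p, lr.p_value)
```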

Keywords: angiotensin, cancer, immunotherapy, PD-1, efficacy

Procedia PDF Downloads 55
1027 Proposal of a Rectenna Built by Using Paper as a Dielectric Substrate for Electromagnetic Energy Harvesting

Authors: Ursula D. C. Resende, Yan G. Santos, Lucas M. de O. Andrade

Abstract:

The recent and fast development of the internet, wireless and telecommunication technologies, and low-power electronic devices has led to an expressive amount of electromagnetic energy available in the environment and to the expansion of smart-application technologies. These applications are used in Internet of Things devices and in 4G and 5G solutions, and their main feature is the use of wireless sensors. Although these sensors are low-power loads, supplying them efficiently and reliably without relying on traditional batteries remains a major challenge. Radio-frequency-based energy harvesting is especially suitable for powering wireless sensors by using a rectenna, since it can be completely integrated into the distributed hosting sensor structure, reducing its cost, maintenance, and environmental impact. A rectenna is a device composed of an antenna and a rectifier circuit. The antenna function is to collect as much radio-frequency radiation as possible and transfer it to the rectifier, a nonlinear circuit that converts the very low input radio-frequency energy into a direct-current voltage. In this work, a set of rectennas mounted on a paper substrate, which can be used for the inner coating of buildings while simultaneously harvesting electromagnetic energy from the environment, is proposed. Each proposed individual rectenna is composed of a 2.45 GHz patch antenna and a voltage doubler rectifier circuit, built on the same paper substrate. The antenna contains a rectangular radiator element and a microstrip transmission line that were designed and optimized using the CST simulation software in order to obtain values of the S11 parameter below -10 dB at 2.45 GHz. In order to increase the amount of harvested power, eight individual rectennas incorporating metamaterial cells were connected in parallel, forming a system denominated the Electromagnetic Wall (EW). To evaluate the EW performance, it was positioned at a variable distance from an internet router and fed a 27 kΩ resistive load. The results obtained showed that if more than one rectenna is associated in parallel, a power level sufficient to feed very low consumption sensors can be achieved. The 0.12 m² EW proposed in this work was able to harvest 0.6 mW from the environment. It was also observed that the use of metamaterial structures provides an expressive growth in the amount of electromagnetic energy harvested, which was increased from 0.2 mW to 0.6 mW.
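
As a rough illustration of how the 2.45 GHz patch dimensions could be estimated before CST optimization, the snippet below applies the standard transmission-line-model design equations for a rectangular microstrip patch. The paper-substrate permittivity and thickness used here are assumed, illustrative values, not those of the prototype.

```python
import math

def patch_dimensions(f_hz, eps_r, h_m):
    """Transmission-line-model estimate of a rectangular patch antenna."""
    c = 299_792_458.0
    w = c / (2 * f_hz) * math.sqrt(2 / (eps_r + 1))           # patch width
    eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 * (1 + 12 * h_m / w) ** -0.5
    dl = 0.412 * h_m * ((eps_eff + 0.3) * (w / h_m + 0.264)) / \
         ((eps_eff - 0.258) * (w / h_m + 0.8))                # fringing-field length extension
    l = c / (2 * f_hz * math.sqrt(eps_eff)) - 2 * dl          # patch length
    return w, l

# Assumed (illustrative) paper-substrate properties: eps_r ≈ 3.2, thickness 0.5 mm.
w, l = patch_dimensions(2.45e9, 3.2, 0.5e-3)
print(f"width ≈ {w*1000:.1f} mm, length ≈ {l*1000:.1f} mm")
```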

Keywords: electromagnetic energy harvesting, metamaterial, rectenna, rectifier circuit

Procedia PDF Downloads 143
1026 Realistic Modeling of the Preclinical Small Animal Using Commercial Software

Authors: Su Chul Han, Seungwoo Park

Abstract:

With the increasing incidence of cancer, the technology and modalities of radiotherapy have advanced, and the importance of preclinical models in cancer research is growing. Furthermore, small animal dosimetry is an essential part of evaluating the relationship between the absorbed dose in a preclinical small animal and the biological effect in a preclinical study. In this study, we carried out realistic modeling of a preclinical small animal phantom that makes it possible to verify the irradiated dose using commercial software. The small animal phantom was modeled from the 4D Digital Mouse whole-body (Moby) phantom. To manipulate the Moby phantom in commercial software (Mimics, Materialise, Leuven, Belgium), we converted it to DICOM CT image files using Matlab; the two-dimensional CT images were then converted into a three-dimensional image that could be segmented and cropped in the sagittal, coronal, and axial views. The CT images of the small animal were modeled as follows. Based on the profile line value, thresholding was carried out to create a mask connecting all regions within the same threshold range. Using this thresholding method, the image was segmented into three parts (bone, body tissue, and lung); to separate neighboring pixels between lung and body tissue, the region-growing function of the Mimics software was used. A 3D object was then obtained by 3D calculation on the segmented images. The generated 3D object was smoothed by a remeshing operation with a smoothing factor of 0.4 and 5 iterations. The edge mode was selected to perform triangle reduction, with a tolerance of 0.1 mm, an edge angle of 15 degrees, and 5 iterations. The processed 3D object was converted to an STL file for output on a 3D printer. We modified the 3D small animal file using 3-Matic Research (Materialise, Leuven, Belgium) to make space for radiation dosimetry chips. We thus acquired a 3D object of a realistic small animal phantom. The width of the small animal phantom was 2.631 cm, its thickness was 2.361 cm, and its length was 10.817 cm. The Mimics software provided efficient 3D object generation and convenient conversion to STL files. The development of a small preclinical animal phantom would increase the reliability of absorbed dose verification in small animals for preclinical studies.
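
The thresholding and region-growing steps performed in Mimics can be approximated with open-source tooling; the sketch below shows one way to do so with numpy and scipy on a synthetic CT volume. The Hounsfield-unit ranges, seed placement, and array sizes are assumptions for illustration and do not reproduce the authors' Mimics settings.

```python
import numpy as np
from scipy import ndimage

# Hypothetical CT volume in Hounsfield units (z, y, x); in practice this would be
# the Moby-derived DICOM stack, here random data just to make the sketch runnable.
rng = np.random.default_rng(3)
ct = rng.normal(0, 300, size=(64, 128, 128))

# Step 1: thresholding masks for the three compartments (illustrative HU ranges,
# not the values used by the authors).
bone_mask = ct > 300
lung_mask = ct < -400
body_mask = (ct >= -400) & (ct <= 300)

# Step 2: region growing approximated by keeping only the connected component that
# contains a seed voxel, which separates the lung from neighbouring body tissue.
labels, _ = ndimage.label(lung_mask)
seed = tuple(np.argwhere(lung_mask)[0])   # assumed seed placed inside the lung mask
lung_region = labels == labels[seed]

# Step 3: light smoothing of the final mask before surface extraction / STL export.
lung_smooth = ndimage.binary_closing(lung_region, iterations=2)
print(bone_mask.sum(), lung_region.sum(), lung_smooth.sum())
```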

Keywords: mimics, preclinical small animal, segmentation, 3D printer

Procedia PDF Downloads 350
1025 Anticancer Activity of Milk Fat Rich in Conjugated Linoleic Acid Against Ehrlich Ascites Carcinoma Cells in Female Swiss Albino Mice

Authors: Diea Gamal Abo El-Hassan, Salwa Ahmed Aly, Abdelrahman Mahmoud Abdelgwad

Abstract:

The major conjugated linoleic acid (CLA) isomers have anticancer effects, especially against breast cancer cells, inhibiting cell growth and inducing cell death. CLA also has several health benefits in vivo, including antiatherogenesis, antiobesity, and modulation of immune function. The present study aimed to assess the safety and anticancer effects of milk fat CLA against in vivo Ehrlich ascites carcinoma (EAC) in female Swiss albino mice. This was based on an acute toxicity study, assessment of tumor growth, the life span of EAC-bearing hosts, and simultaneous alterations in the hematological, biochemical, and histopathological profiles. Materials and Methods: One hundred and fifty adult female mice were equally divided into five groups. Groups (1-2) were normal controls, and Groups (3-5) were tumor transplanted mice (TTM) inoculated intraperitoneally with EAC cells (2×10⁶/0.2 mL). Group (3) was the TTM positive control. Group (4) comprised TTM fed orally on a balanced diet supplemented with milk fat CLA (40 mg CLA/kg body weight). Group (5) comprised TTM fed orally on a balanced diet supplemented with the same level of CLA starting 28 days before tumor cell inoculation. Blood samples and specimens from the liver and kidney were collected from each group. The effect of milk fat CLA on tumor growth, the life span of TTM, and simultaneous alterations in the hematological, biochemical, and histopathological profiles were examined. Results: For CLA-treated TTM, a significant decrease in tumor weight, ascitic volume, and viable Ehrlich cells, accompanied by an increase in life span, was observed. Hematological and biochemical profiles reverted to more or less normal levels, and histopathology showed minimal effects. Conclusion: The present study proved the safety and anticancer efficacy of milk fat CLA and provides a scientific basis for its medicinal use as an anticancer agent, attributable to the additive or synergistic effects of its isomers.

Keywords: anticancer activity, conjugated linoleic acid, Ehrlich ascites carcinoma, % increase in life span, mean survival time, tumor transplanted mice

Procedia PDF Downloads 73
1024 Simultaneous Optimization of Design and Maintenance through a Hybrid Process Using Genetic Algorithms

Authors: O. Adjoul, A. Feugier, K. Benfriha, A. Aoussat

Abstract:

In general, issues related to design and maintenance are considered independently. However, the decisions made in these two areas influence each other. Design for maintenance is considered an opportunity to optimize the life cycle cost of a product, particularly in the nuclear or aeronautical field, where maintenance expenses represent more than 60% of life cycle costs. The design of large-scale systems starts with the product architecture, a choice of components in terms of cost, reliability, weight, and other attributes corresponding to the specifications. On the other hand, the design must take maintenance into account by improving, in particular, real-time monitoring of equipment through the integration of new technologies such as connected sensors and intelligent actuators. We noticed that the different approaches used in Design For Maintenance (DFM) methods are limited to the simultaneous characterization of the reliability and maintainability of a multi-component system. This article proposes a DFM method that assists designers in proposing dynamic maintenance for multi-component industrial systems. The term "dynamic" refers to the ability to integrate available monitoring data to adapt the maintenance decision in real time. The goal is to maximize the availability of the system at a given life cycle cost. This paper presents an approach for the simultaneous optimization of the design and maintenance of multi-component systems. Here the design is characterized by four decision variables for each component (reliability level, maintainability level, redundancy level, and level of monitoring data), while the maintenance is characterized by two decision variables (the dates of the maintenance stops and the maintenance operations to be performed on the system during these stops). The DFM model helps designers choose technical solutions for large-scale industrial products, i.e., complex multi-component industrial systems with long life cycles, such as trains and aircraft. The method is based on a two-level hybrid algorithm for the simultaneous optimization of design and maintenance, using genetic algorithms. The first level selects a design solution for a given system, considering the life cycle cost and the reliability. The second level determines a dynamic and optimal maintenance plan to be deployed for that design solution. This level is based on the Maintenance Free Operating Period (MFOP) concept, which takes into account decision criteria such as total reliability, maintenance cost, and maintenance time. Depending on the life cycle duration, the desired availability, and the desired business model (sales or rental), this tool provides visibility of the overall costs and the optimal product architecture.
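
To make the optimization scheme concrete, the sketch below implements only the outer (design) level as a plain genetic algorithm over the four per-component decision variables, using a toy life-cycle-cost surrogate in place of the authors' reliability and maintenance models; the inner dynamic-maintenance level is not reproduced, and all numeric constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
N_COMP, POP, GENS = 5, 40, 60   # components, population size, generations

def lifecycle_cost(design):
    """Toy surrogate: higher reliability/redundancy/monitoring raise acquisition
    cost but reduce the expected maintenance cost (not the authors' model)."""
    rel, maint, red, mon = design.reshape(N_COMP, 4).T
    acquisition = (2 * rel + maint + 1.5 * red + mon).sum()
    expected_downtime = (1.0 / (0.2 + rel * red)).sum() / (1 + 0.3 * mon.mean())
    maintenance = 4.0 * expected_downtime / (0.5 + maint.mean())
    return acquisition + maintenance

def genetic_algorithm():
    # Each individual encodes 4 decision variables in [0, 1] per component.
    pop = rng.random((POP, N_COMP * 4))
    for _ in range(GENS):
        cost = np.array([lifecycle_cost(ind) for ind in pop])
        parents = pop[np.argsort(cost)[:POP // 2]]            # elitist selection
        # Uniform crossover between random parent pairs.
        idx = rng.integers(0, len(parents), (POP - len(parents), 2))
        mask = rng.random((len(idx), N_COMP * 4)) < 0.5
        children = np.where(mask, parents[idx[:, 0]], parents[idx[:, 1]])
        # Gaussian mutation, clipped to the valid range.
        children += rng.normal(0, 0.05, children.shape)
        pop = np.clip(np.vstack([parents, children]), 0, 1)
    best = pop[np.argmin([lifecycle_cost(ind) for ind in pop])]
    return best, lifecycle_cost(best)

best_design, best_cost = genetic_algorithm()
print(best_cost)
```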

Keywords: availability, design for maintenance (DFM), dynamic maintenance, life cycle cost (LCC), maintenance free operating period (MFOP), simultaneous optimization

Procedia PDF Downloads 98
1023 Treating Voxels as Words: Word-to-Vector Methods for fMRI Meta-Analyses

Authors: Matthew Baucum

Abstract:

With the increasing popularity of fMRI as an experimental method, psychology and neuroscience can greatly benefit from advanced techniques for summarizing and synthesizing large amounts of data from brain imaging studies. One promising avenue is automated meta-analyses, in which natural language processing methods are used to identify the brain regions consistently associated with certain semantic concepts (e.g., “social”, “reward”) across large corpora of studies. This study builds on this approach by demonstrating how, in fMRI meta-analyses, individual voxels can be treated as vectors in a semantic space and evaluated for their “proximity” to terms of interest. In this technique, a low-dimensional semantic space is built from brain imaging study texts, allowing words in each text to be represented as vectors (where words that frequently appear together are near each other in the semantic space). Consequently, each voxel in a brain mask can be represented as a normalized vector sum of all of the words in the studies that showed activation in that voxel. The entire brain mask can then be visualized in terms of each voxel’s proximity to a given term of interest (e.g., “vision”, “decision making”) or collection of terms (e.g., “theory of mind”, “social”, “agent”), as measured by the cosine similarity between the voxel’s vector and the term vector (or the average of multiple term vectors). Analysis can also proceed in the opposite direction, allowing word cloud visualizations of the nearest semantic neighbors for a given brain region. This approach allows for continuous, fine-grained metrics of voxel-term associations, and relies on state-of-the-art “open vocabulary” methods that go beyond mere word-counts. An analysis of over 11,000 neuroimaging studies from an existing meta-analytic fMRI database demonstrates that this technique can be used to recover known neural bases for multiple psychological functions, suggesting this method’s utility for efficient, high-level meta-analyses of localized brain function. While automated text analytic methods are no replacement for deliberate, manual meta-analyses, they seem to show promise for the efficient aggregation of large bodies of scientific knowledge, at least on a relatively general level.
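
A minimal sketch of the voxel-as-vector idea is shown below using gensim word embeddings on a toy corpus: each voxel vector is the normalized sum of the word vectors from the studies activating it, and proximity to a term (or to the average of a term collection) is the cosine similarity. The corpus, the study-to-voxel mapping, and the embedding dimensionality are illustrative assumptions, not the meta-analytic database used in the study.

```python
import numpy as np
from gensim.models import Word2Vec

# Tiny illustrative corpus standing in for the texts of neuroimaging studies;
# each inner list is the tokenized text of one study.
corpus = [
    ["reward", "value", "striatum", "decision", "making"],
    ["social", "theory", "of", "mind", "agent", "mentalizing"],
    ["vision", "faces", "object", "recognition"],
]
model = Word2Vec(corpus, vector_size=50, window=5, min_count=1, seed=0)

def unit(v):
    return v / (np.linalg.norm(v) + 1e-12)

# A voxel's vector: normalized sum of the word vectors from all studies that
# reported activation in that voxel (here, studies 0 and 1 for an example voxel).
studies_activating_voxel = [0, 1]
voxel_vec = unit(sum(model.wv[w] for i in studies_activating_voxel for w in corpus[i]))

# Proximity of the voxel to a single term, or to the average of a term collection.
term_vec = unit(model.wv["social"])
collection_vec = unit(np.mean([model.wv[w] for w in ["theory", "mind", "social", "agent"]], axis=0))
print(float(voxel_vec @ term_vec), float(voxel_vec @ collection_vec))
```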

Keywords: FMRI, machine learning, meta-analysis, text analysis

Procedia PDF Downloads 431
1022 Predicting Wealth Status of Households Using Ensemble Machine Learning Algorithms

Authors: Habtamu Ayenew Asegie

Abstract:

Wealth, as opposed to income or consumption, implies a more stable and permanent status. Due to natural and human-made difficulties, household economies can be diminished and their well-being endangered. Hence, governments and humanitarian agencies devote considerable resources to poverty and malnutrition reduction efforts. One key factor in the effectiveness of such efforts is the accuracy with which low-income or poor populations can be identified. As a result, this study aims to predict a household’s wealth status using ensemble machine learning (ML) algorithms. In this study, the design science research methodology (DSRM) is employed, and four ML algorithms, Random Forest (RF), Adaptive Boosting (AdaBoost), Light Gradient Boosted Machine (LightGBM), and Extreme Gradient Boosting (XGBoost), have been used to train models. The Ethiopian Demographic and Health Survey (EDHS) dataset was accessed for this purpose from the Central Statistical Agency (CSA) database. Various data pre-processing techniques were employed, and the model training was conducted using the scikit-learn Python library. Model evaluation was executed using metrics such as accuracy, precision, recall, F1-score, the area under the receiver operating characteristic curve (AUC-ROC), and subjective evaluations by domain experts. An optimal subset of hyper-parameters for the algorithms was selected through the grid search function for the best prediction. The RF model performed better than the rest of the algorithms, achieving an accuracy of 96.06%, and is better suited as a solution model for our purpose. Following RF, the LightGBM, XGBoost, and AdaBoost algorithms achieved accuracies of 91.53%, 88.44%, and 58.55%, respectively. The findings suggest that features such as ‘Age of household head’, ‘Total children ever born’ in a family, ‘Main roof material’ of the house, ‘Region’ of residence, whether a household uses ‘Electricity’ or not, and ‘Type of toilet facility’ of a household are determinant factors and should be focal points for economic policymakers. The determinant risk factors, extracted rules, and designed artifact achieved an 82.28% score in the domain experts’ evaluation. Overall, the study shows that ML techniques are effective in predicting the wealth status of households.
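
The training and evaluation loop described above can be outlined with scikit-learn as in the sketch below. Synthetic binary-labelled data stand in for the EDHS records (the real task may involve more than two wealth classes), and the hyper-parameter grid is an illustrative assumption rather than the one used in the study.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

# Synthetic stand-in for the EDHS features (the survey data are not reproduced here).
X, y = make_classification(n_samples=2000, n_features=15, n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Grid search over a small, assumed hyper-parameter subset for the random forest.
grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    {"n_estimators": [200, 400], "max_depth": [None, 10]},
    scoring="accuracy", cv=5,
)
grid.fit(X_tr, y_tr)
rf = grid.best_estimator_

pred = rf.predict(X_te)
proba = rf.predict_proba(X_te)[:, 1]
print("RF accuracy:", accuracy_score(y_te, pred),
      "F1:", f1_score(y_te, pred),
      "AUC-ROC:", roc_auc_score(y_te, proba))

# The same pattern applies to the other ensembles, e.g. AdaBoost
# (LightGBM and XGBoost come from their own packages).
ada = AdaBoostClassifier(random_state=0).fit(X_tr, y_tr)
print("AdaBoost accuracy:", accuracy_score(y_te, ada.predict(X_te)))
```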

Keywords: ensemble machine learning, households wealth status, predictive model, wealth status prediction

Procedia PDF Downloads 23
1021 Optimization of Maintenance of PV Module Arrays Based on Asset Management Strategies: Case of Study

Authors: L. Alejandro Cárdenas, Fernando Herrera, David Nova, Juan Ballesteros

Abstract:

This paper presents a methodology to optimize the maintenance of grid-connected photovoltaic systems, considering the cleaning and module replacement periods, based on an asset management strategy. The methodology is based on the analysis of the energy production of the PV plant, the energy feed-in tariff, and the cost of cleaning and replacing the PV modules, with the overall revenue received being the optimization variable. The methodology is evaluated in a case study of a 5.6 kWp solar PV plant located on the Bogotá campus of the Universidad Nacional de Colombia. The asset management strategy implemented consists of assessing the PV modules through visual inspection, energy performance analysis, pollution, and degradation. In the visual inspection of the plant, the general condition of the modules and the structure is assessed, identifying dust deposition, visible fractures, and water accumulation on the bottom. The energy performance analysis is performed with the energy production reported by the monitoring systems, compared with the values estimated in simulation. The pollution analysis is performed using the soiling rate due to dust accumulation, which can be modelled as a black box with an exponential function dependent on historical pollution values. The soiling rate is calculated with data collected from the energy generated during two years at a photovoltaic plant on the campus of the Universidad Nacional de Colombia. Additionally, the alternative of assessing the temperature degradation of the PV modules is evaluated by estimating the cell temperature from parameters such as ambient temperature and wind speed. The medium-term energy decrease of the PV modules is assessed within the asset management strategy by calculating a health index to determine the replacement period of the modules due to degradation. This study proposes a tool for decision making related to the maintenance of photovoltaic systems, which is relevant given the projected increase in the installation of solar photovoltaic systems in power systems, associated with the commitments made in the Paris Agreement for the reduction of CO2 emissions. In the Colombian context, it is estimated that by 2030, 12% of the installed power capacity will be solar PV.
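
As an illustration of how the cleaning period could be optimized against revenue, the sketch below maximizes the average daily revenue of one cleaning cycle under a simple exponential soiling model. The yield, tariff, cleaning cost, and soiling decay rate are assumed placeholder values, not the case-study parameters.

```python
import numpy as np

# Illustrative parameters (not the case-study values): daily reference yield,
# feed-in tariff, cleaning cost, and an exponential soiling model
# eta(t) = exp(-k * t) for the fraction of energy retained t days after cleaning.
E_DAY = 22.0        # kWh/day for a 5.6 kWp plant under clean conditions (assumed)
TARIFF = 0.15       # currency units per kWh (assumed)
CLEAN_COST = 25.0   # cost per cleaning (assumed)
K = 0.002           # soiling decay rate per day (assumed)

def net_revenue_per_day(period_days):
    """Average daily revenue over one cleaning cycle, net of the cleaning cost."""
    t = np.arange(period_days)
    energy = E_DAY * np.exp(-K * t)          # soiling losses accumulate until cleaning
    return (TARIFF * energy.sum() - CLEAN_COST) / period_days

periods = np.arange(7, 365)
best = periods[np.argmax([net_revenue_per_day(p) for p in periods])]
print("optimal cleaning period ≈", best, "days")
```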

Keywords: asset management, PV module, optimization, maintenance

Procedia PDF Downloads 21
1020 Nitrate Photoremoval in Water Using Nanocatalysts Based on Ag / Pt over TiO2

Authors: Ana M. Antolín, Sandra Contreras, Francesc Medina, Didier Tichit

Abstract:

Introduction: High levels of nitrates (> 50 ppm NO3-) in drinking water pose a potential risk to human health. In recent years, nitrate concentrations in groundwater have been rising in the EU and other countries. Conventional catalytic nitrate reduction processes into N2 and H2O lead to some toxic intermediates and by-products, such as NO2-, NH4+, and NOx gases. Alternatively, photocatalytic nitrate removal using solar irradiation and heterogeneous catalysts is a very promising and eco-friendly technique; it has scarcely been explored, and more research on highly efficient catalysts is still needed. In this work, different nanocatalysts supported on Aeroxide Titania P25 (P25) have been prepared, varying: the Ag loading (0.5-4 wt.%); the Pt loading (2, 4 wt.%); the Pt precursor (H2PtCl6/K2PtCl6); and the impregnation order of both metals. Pt was chosen in order to increase the selectivity to N2 and decrease that to NO2-. Catalysts were characterized by nitrogen physisorption, X-ray diffraction, UV-visible spectroscopy, TEM, and X-ray photoelectron spectroscopy. The aim was to determine the influence of the composition and the preparation method of the catalysts on the conversion and selectivity of the nitrate reduction, as well as to gain an overall better understanding of the process. Nanocatalyst synthesis: For the preparation of the mono- and bimetallic catalysts, drop-wise wetness impregnation of the precursors (AgNO3, H2PtCl6, K2PtCl6) followed by a reduction step (NaBH4) was used to obtain the metal colloids. Results and conclusions: Denitration experiments were performed in a 350 mL PTFE batch reactor under inert standard operational conditions, ultraviolet irradiation (λ = 254 nm (UV-C); λ = 365 nm (UV-A)), and in the presence/absence of hydrogen gas as a reducing agent, in contrast to most studies, which use oxalic or formic acid. Samples were analyzed by ionic chromatography. Blank experiments using, respectively, P25 (dark conditions), hydrogen only, and UV irradiation without hydrogen demonstrated a clear influence of the presence of hydrogen on nitrate reduction; they also demonstrated that UV irradiation increased the selectivity to N2. Interestingly, the best activity was obtained under ultraviolet lamps, especially at the wavelength closer to visible light (λ = 365 nm) and in the presence of H2. Among the monometallic catalysts, 2% Ag/P25 led to the highest NO3- conversion; however, nitrite quantities still have to be diminished. On the other hand, practically no nitrate conversion was observed with the monometallic Pt/P25 catalysts. Therefore, a loading of 2% Ag was chosen for the bimetallic catalysts. Regarding the bimetallic catalysts, it is observed that the metal impregnation order, amount, and Pt precursor strongly affect the results. Higher selectivity to the desirable N2 gas is obtained when Pt is added first, especially with K2PtCl6 as the Pt precursor. This suggests that when Pt is added second, it covers the Ag particles, which are the most active in this reaction. It can be concluded that Ag enables the nitrate reduction step to nitrite, and Pt the nitrite reduction step toward the desirable N2 gas.

Keywords: heterogeneous catalysis, hydrogenation, nanocatalyst, nitrate removal, photocatalysis

Procedia PDF Downloads 251