Search results for: exchange index
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4920

930 Orange Leaves and Rice Straw on Methane Emission and Milk Production in Murciano-Granadina Dairy Goat Diet

Authors: Tamara Romero, Manuel Romero-Huelva, Jose V. Segarra, Jose Castro, Carlos Fernandez

Abstract:

Much of the material resulting from food processing and manufacturing ends up as waste, most of which is burned, dumped into landfills or used as compost, leading to wasted resources and environmental problems due to unsuitable disposal. Using residues of the crop and food processing industries to feed livestock has the advantage of obviating the need for costly waste management programs. The main residues generated by citrus cultivation and rice cropping are pruning waste and rice straw, respectively. Within Spain, the Valencian Community is one of the world's oldest citrus and rice production areas. The objective of this experiment was to determine the effects of including orange leaves and rice straw as ingredients in the concentrate diet of goats on milk production and methane (CH₄) emissions. Ten Murciano-Granadina dairy goats (45 kg of body weight, on average) in mid-lactation were selected for a crossover design experiment in which each goat received both treatments in two periods. Both groups were fed 1.7 kg of a pelleted mixed ration; one group (n = 5) received a control ration (C) and the other group (n = 5) a ration including orange leaves and rice straw (OR). The forage was alfalfa hay and was the same for the two groups (1 kg of alfalfa was offered per goat per day). The diets were formulated to meet the requirements of caprine livestock during lactation. The goats were allocated to individual metabolism cages. After 14 days of adaptation, feed intake and milk yield were recorded daily over a 5-day period. Physico-chemical parameters and somatic cell count in milk samples were determined. Gas exchange measurements were then recorded individually with an open-circuit indirect calorimetry system using a head box. The data were analyzed with a mixed model, with diet and digestibility as fixed effects and goat as a random effect. No differences were found for dry matter intake (2.23 kg/d, on average). Milk yield was higher for the C diet than for OR (2.3 vs. 2.1 kg/goat per day, respectively), whereas milk fat content was greater for OR than for C (6.5 vs. 5.5%, respectively). The cheese extract was also greater for OR than for C (10.7 vs. 9.6%). Goats fed the OR diet produced significantly less CH₄ than those fed the C diet (27 vs. 30 g/d, respectively). These preliminary results (LIFE Project LOWCARBON FEED LIFE/CCM/ES/000088) suggest that the use of these waste by-products was effective in reducing CH₄ emission without a detrimental effect on milk yield.
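
The sketch below illustrates the kind of mixed-model analysis described above (diet as a fixed effect, goat as a random effect); the data frame is synthetic and only shows the model structure, not the study data.

```python
# Minimal sketch of a crossover mixed-model analysis (diet fixed, goat random).
# All numbers below are synthetic placeholders, not the study's measurements.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
goats = np.repeat(np.arange(1, 11), 2)                  # 10 goats, 2 periods each
diet = np.tile(["C", "OR"], 10)                         # both diets per goat
goat_effect = rng.normal(0, 1.0, 10).repeat(2)          # random goat intercepts
ch4 = 30 - 3 * (diet == "OR") + goat_effect + rng.normal(0, 1.0, 20)  # g CH4/day

data = pd.DataFrame({"goat": goats, "diet": diet, "ch4": ch4})
model = smf.mixedlm("ch4 ~ diet", data, groups=data["goat"])
print(model.fit().summary())
```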

Keywords: agricultural waste, goat, milk production, methane emission

Procedia PDF Downloads 127
929 Measurement of Radon Exhalation Rate, Natural Radioactivity, and Radiation Hazard Assessment in Soil Samples from the Surrounding Area of Kasimpur Thermal Power Plant Kasimpur (U. P.), India

Authors: Anil Sharma, Ajay Kumar Mahur, R. G. Sonkawade, A. C. Sharma, R. Prasad

Abstract:

In coal-fired thermal power stations, large amounts of fly ash are produced by the burning of coal. The fly ash is dispersed by air and may be deposited on the soil of the region surrounding the power plant. Coal contains elevated levels of natural radionuclides, so fly ash may increase the radioactivity of the soil around the plant. Radon atoms entering the pore space from the mineral grains are transported by diffusion and advection through this space until they in turn decay or are released into the atmosphere. In the present study, soil samples were collected from the region around the Kasimpur Thermal Power Plant, Kasimpur, Aligarh (U.P.). Radon activity and radon surface and mass exhalation rates were measured by the “sealed can technique” using LR-115 type II nuclear track detectors. Radon activities vary from 92.9 to 556.8 Bq m⁻³ with a mean value of 279.8 Bq m⁻³. Surface exhalation rates (EX) in these samples vary from 33.4 to 200.2 mBq m⁻² h⁻¹ with an average value of 100.5 mBq m⁻² h⁻¹, whereas mass exhalation rates (EM) vary from 1.2 to 7.7 mBq kg⁻¹ h⁻¹ with an average value of 3.8 mBq kg⁻¹ h⁻¹. Activity concentrations of radionuclides were measured in these samples using a low-level NaI (Tl) based gamma ray spectrometer. Activity concentrations of ²²⁶Ra, ²³²Th and ⁴⁰K vary from 12 to 49 Bq kg⁻¹, 24 to 49 Bq kg⁻¹ and 135 to 546 Bq kg⁻¹, with overall mean values of 30.3 Bq kg⁻¹, 38.5 Bq kg⁻¹ and 317.8 Bq kg⁻¹, respectively. The radium equivalent activity varies from 80.0 to 143.7 Bq kg⁻¹ with an average value of 109.7 Bq kg⁻¹. The absorbed dose rate varies from 36.1 to 66.4 nGy h⁻¹ with an average value of 50.4 nGy h⁻¹, and the corresponding outdoor annual effective dose varies from 0.044 to 0.081 mSv with an average value of 0.061 mSv. Values of the external and internal hazard indices Hex and Hin vary from 0.21 to 0.38 and from 0.27 to 0.50, with average values of 0.29 and 0.37, respectively. The results will be discussed in light of various factors.
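
The radiological descriptors quoted above follow the commonly used (UNSCEAR-style) formulas; the sketch below evaluates them at the mean activity concentrations reported in the abstract, assuming those standard coefficients.

```python
# Standard radiological indices (radium equivalent, absorbed dose rate, outdoor
# annual effective dose, hazard indices), evaluated at the reported mean values.
def radiological_indices(c_ra, c_th, c_k):
    """c_ra, c_th, c_k: 226Ra, 232Th, 40K activity concentrations in Bq/kg."""
    ra_eq = c_ra + 1.43 * c_th + 0.077 * c_k            # radium equivalent, Bq/kg
    dose = 0.462 * c_ra + 0.604 * c_th + 0.0417 * c_k   # absorbed dose rate, nGy/h
    # outdoor annual effective dose, mSv (8760 h/y, 0.2 occupancy, 0.7 Sv/Gy)
    aed = dose * 8760 * 0.2 * 0.7 * 1e-6
    h_ex = c_ra / 370 + c_th / 259 + c_k / 4810          # external hazard index
    h_in = c_ra / 185 + c_th / 259 + c_k / 4810          # internal hazard index
    return ra_eq, dose, aed, h_ex, h_in

print(radiological_indices(30.3, 38.5, 317.8))
# ~ (109.8, 50.5, 0.062, 0.30, 0.38), close to the averages reported above
```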

Keywords: natural radioactivity, radium equivalent activity, absorbed dose rate, gamma ray spectroscopy

Procedia PDF Downloads 344
928 Synthesis of a Hybrid of PEG-b-PCL and G1-PEA Dendrimer Based Six-Armed Star Polymer for Nano Delivery of Vancomycin

Authors: Calvin A. Omolo, Rahul S. Kalhapure, Mahantesh Jadhav, Sanjeev Rambharose, Chunderika Mocktar, Thirumala Govender

Abstract:

Treatment of infections is compromised by the limitations of conventional dosage forms and by drug resistance. Nanocarrier systems are a strategy to overcome these challenges and improve therapy. Thus, the development of novel materials for drug delivery via nanocarriers is essential. The aim of the study was to synthesize a multi-arm polymer (6-mPEPEA) for enhanced activity of vancomycin (VM) against susceptible and methicillin-resistant Staphylococcus aureus (MRSA). The synthesis steps of the star polymer followed reported procedures. The synthesized 6-mPEPEA was characterized by FTIR, ¹H and ¹³C NMR, and MTT assays. VM-loaded micelles were prepared from 6-mPEPEA and characterized for size, polydispersity index (PI) and surface charge (ZP) by dynamic light scattering, morphology by TEM, drug loading by UV spectrophotometry, drug release by the dialysis bag method, and in vitro and in vivo efficacy against sensitive and resistant S. aureus. 6-mPEPEA was synthesized, and its structure was confirmed. MTT assays confirmed its nontoxic nature, with high cell viability (77%-85%). Unimolecular spherical micelles were prepared. Size, PI, and ZP were 52.48 ± 2.6 nm, 0.103 ± 0.047 and -7.3 ± 1.3 mV, respectively, and drug loading was 62.24 ± 3.8%. There was 91% drug release from VM-6-mPEPEA after 72 hours. In vitro antibacterial testing revealed that VM-6-mPEPEA had 8- and 16-fold greater activity against S. aureus and MRSA, respectively, compared to bare VM. Further investigations using flow cytometry showed that VM-6-mPEPEA had a 99.5% killing rate of MRSA at the MIC. In vivo antibacterial testing revealed that treatment with VM-6-mPEPEA produced a 190-fold and a 15-fold reduction in MRSA load relative to the untreated and VM-treated groups, respectively. These findings confirm the potential of 6-mPEPEA as a promising biodegradable nanocarrier for antibiotic delivery to improve the treatment of bacterial infections.

Keywords: biosafe, MRSA, nanocarrier, resistance, unimolecular-micelles

Procedia PDF Downloads 166
927 Moving Target Defense against Various Attack Models in Time Sensitive Networks

Authors: Johannes Günther

Abstract:

Time Sensitive Networking (TSN), standardized in the IEEE 802.1 family, has received increasing attention in the context of mission-critical systems. Such mission-critical systems, e.g., in the automotive, aviation, industrial, and smart-factory domains, are responsible for coordinating complex functionality in real time. In many of these contexts, reliable data exchange fulfilling hard time constraints and quality of service (QoS) conditions is of critical importance. TSN standards are able to provide guarantees for deterministic communication behaviour, in contrast to common best-effort approaches. Therefore, the superior QoS guarantees of TSN may aid the development of new technologies that rely on low latencies and specific bandwidth demands being fulfilled. TSN extends existing Ethernet protocols with numerous standards, providing means for synchronization, management, and overall real-time-focused capabilities. These additional QoS guarantees, as well as management mechanisms, lead to an increased attack surface for potential malicious attackers. As TSN guarantees certain deadlines for priority traffic, an attacker may degrade the QoS by delaying a packet beyond its deadline or even execute a denial of service (DoS) attack if the delays lead to packets being dropped. However, thus far, security concerns have not played a major role in the design of such standards. Thus, while TSN does provide valuable additional characteristics to existing common Ethernet protocols, it introduces new attack vectors on networks and allows for a range of potential attacks. One answer to these security risks is to deploy defense mechanisms according to a moving target defense (MTD) strategy. The core idea relies on reducing the attacker's knowledge about the network. Typically, mission-critical systems suffer from an asymmetric disadvantage: DoS or QoS-degradation attacks may be preceded by long periods of reconnaissance, during which the attacker may learn about the network topology, its characteristics, traffic patterns, priorities, bandwidth demands, periodic characteristics of links and switches, and so on. Here, we implemented and tested several MTD-like defense strategies against different attacker models of varying capabilities and budgets, as well as against collaborative attacks by multiple attackers within a network, all within the context of TSN networks. We modelled the networks and tested our defense strategies on an OMNET++ testbench, with networks of different sizes and topologies, ranging from a couple of dozen hosts and switches to significantly larger set-ups.
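
Purely as an illustration of the MTD principle described above, and not of the defense strategies or OMNET++ models used in the study, the following sketch periodically re-randomizes a hypothetical mapping of traffic streams to transmission offsets so that an attacker's reconnaissance results become stale.

```python
# Illustrative-only sketch of the core MTD idea: periodically reshuffle a
# hypothetical mapping of TSN streams to transmission offsets so that
# reconnaissance data goes stale. Stream names and parameters are made up.
import random

def reshuffle_offsets(streams, cycle_time_us, rng):
    """Assign each stream a new random offset within the transmission cycle."""
    return {s: rng.randrange(0, cycle_time_us) for s in streams}

rng = random.Random(42)
streams = ["cam_front", "lidar", "control", "diagnostics"]
schedule = reshuffle_offsets(streams, cycle_time_us=1000, rng=rng)

for epoch in range(3):                      # re-randomize every defense epoch
    print(f"epoch {epoch}: {schedule}")
    schedule = reshuffle_offsets(streams, cycle_time_us=1000, rng=rng)
```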

Keywords: network security, time sensitive networking, moving target defense, cyber security

Procedia PDF Downloads 55
926 PLGA Nanoparticles Entrapping dual anti-TB drugs of Amikacin and Moxifloxacin as a Potential Host-Directed Therapy for Multidrug Resistant Tuberculosis

Authors: Sharif Abdelghany

Abstract:

Polymeric nanoparticles have been widely investigated as a controlled-release drug delivery platform for the treatment of tuberculosis (TB). Such nanoparticles are also readily internalised by macrophages, leading to high intracellular drug concentrations. In this study, two anti-TB drugs, amikacin and moxifloxacin, were encapsulated into PLGA nanoparticles. The novelty of this work lies in: (1) the efficient encapsulation of two hydrophilic second-line anti-TB drugs, and (2) intramacrophage delivery of this synergistic combination, potentially for rapid treatment of multidrug-resistant TB (MDR-TB). Two water-oil-water (w/o/w) emulsion strategies were employed in this study: (1) alginate-coated PLGA nanoparticles, and (2) alginate-entrapped PLGA nanoparticles. The average particle size and polydispersity index (PDI) of the alginate-coated PLGA nanoparticles were found to be unfavourably high, with values of 640 ± 32 nm and 0.63 ± 0.09, respectively. In contrast, the alginate-entrapped PLGA nanoparticles were within the desirable particle size range of 282-315 nm with a PDI of 0.08-0.16, and were therefore chosen for subsequent studies. Alginate-entrapped PLGA nanoparticles yielded a drug loading of over 10 µg/mg powder for amikacin and more than 5 µg/mg for moxifloxacin, with entrapment efficiencies of approximately 25-31% for moxifloxacin and 51-59% for amikacin. To study macrophage uptake efficiency, the alginate-entrapped nanoparticle formulation was loaded with acridine orange as a marker, added to THP-1-derived macrophages and viewed under confocal microscopy. The particles were readily internalised by the macrophages and were highly concentrated in the nuclear region. Furthermore, the anti-mycobacterial activity of the drug-loaded particles was evaluated using M. tuberculosis-infected macrophages, which revealed a significant (4-log) reduction in viable bacterial count compared to the untreated group. In conclusion, the amikacin-moxifloxacin alginate-entrapped PLGA nanoparticles are promising for further in vivo studies.

Keywords: moxifloxacin and amikacin, nanoparticles, multidrug resistant TB, PLGA

Procedia PDF Downloads 348
925 Development of a Predictive Model to Prevent Financial Crisis

Authors: Tengqin Han

Abstract:

Delinquency has long been a crucial factor in economics. Commonly seen in credit card and mortgage lending, it played a central role in causing the most recent financial crisis, in 2008. In each case, a delinquency is a sign that the borrower is unable to pay off the debt, and it may ultimately lead to a loss of property. Individually, one case of delinquency seems unimportant compared to the entire credit system. In China, an emerging economy, national and economic strength have grown rapidly, and the gross domestic product (GDP) growth rate has remained as high as 8% over the past decades. However, potential risks exist behind the appearance of prosperity, and among them the credit system is the most significant. Because mortgages involve long terms and large balances, it is critical to monitor risk during the performance period. In this project, data on about 300,000 mortgage accounts are analyzed in order to develop a model that predicts the probability of delinquency. Through univariate analysis the data are cleaned, and through bivariate analysis the variables with strong predictive power are detected. The project is divided into two parts. In the first part, the 2005 analysis data are split into two parts, 60% for model development and 40% for in-time model validation. The KS of model development is 31, and the KS for in-time validation is 31, indicating that the model is stable. In addition, the model is further validated out-of-time using 40% of the 2006 data, with a KS of 33, indicating that the model remains stable and robust. In the second part, the model is improved by the addition of macroeconomic indexes, including GDP, consumer price index, unemployment rate, inflation rate, etc. The data from 2005 to 2010 are used for model development and validation. Compared with the base model (without macroeconomic variables), KS increases from 41 to 44, indicating that the macroeconomic variables can be used to improve the separation power of the model and make the prediction more accurate.
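
The KS statistic referred to above measures the maximum separation between the cumulative score distributions of delinquent and non-delinquent accounts; the sketch below computes it on synthetic scores, not on the project data.

```python
# Minimal sketch of the KS (Kolmogorov-Smirnov) separation statistic used for
# model validation above; the scores are synthetic, not the mortgage data.
import numpy as np

def ks_statistic(scores_bad, scores_good):
    """Maximum gap between the two empirical score CDFs, scaled to 0-100."""
    thresholds = np.sort(np.concatenate([scores_bad, scores_good]))
    cdf_bad = np.searchsorted(np.sort(scores_bad), thresholds, side="right") / len(scores_bad)
    cdf_good = np.searchsorted(np.sort(scores_good), thresholds, side="right") / len(scores_good)
    return 100 * np.max(np.abs(cdf_bad - cdf_good))

rng = np.random.default_rng(0)
scores_bad = rng.normal(0.4, 0.15, 2000)    # model scores, delinquent accounts
scores_good = rng.normal(0.6, 0.15, 8000)   # model scores, current accounts
print(round(ks_statistic(scores_bad, scores_good), 1))
```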

Keywords: delinquency, mortgage, model development, model validation

Procedia PDF Downloads 208
924 Counter-Terrorism and De-Radicalization as Soft Strategies in Combating Terrorism in Indonesia: A Critical Review

Authors: Tjipta Lesmana

Abstract:

Terrorist attacks quickly penetrated Indonesia following the downfall of the Soeharto regime in May 1998. The reform era was officially proclaimed, and Indonesia turned from an 'authoritarian state' into a 'heaven state'. For the first time since 1966, the country experienced full-scale freedom of expression, including freedom of the press, and strong acknowledgement of human rights practice. Some religious extremists who had previously fled to neighbouring countries to escape the security apparatus secretly returned home. They quickly consolidated power to pursue their long-held aspiration and dream of establishing a 'Shariah Indonesia', an Indonesia based on Khilafah ideology. The first Bali bombings, which shocked the world community, occurred on 12 October 2002 in the famous tourist district of Kuta on the Indonesian island of Bali, killing 202 people (including 88 Australians, 38 Indonesians, and people of more than 20 other nationalities). In the capital, Jakarta, successive bombings struck the Marriott hotel, the Australian Embassy, the residence of the Philippine Ambassador and the stock exchange. A 'drunken' Indonesia was far from ready to combat sudden, massive, nationwide terrorist attacks. Police Detachment 88 (Densus 88), the Indonesian counter-terrorism squad, was quickly formed following the 2002 Bali bombings. A provisional anti-terrorism act was also enacted immediately, owing to the urgent need to fight terrorism. Some of the Bali bombing perpetrators were executed after being sentenced by the courts. But a series of terrorist suicide attacks and a second round of Bali bombings again shocked the world community. The terrorism network is undoubtedly spreading nationwide, and suspicion is high that it has close connections with Al Qaeda groups. Even 'Afghanistan alumni' and 'Syria alumni' returned to Indonesia to back up local mujahidin in their fight to topple Indonesia's constitutional government and set up an Islamic state (Khilafah). Supported by massive aid from friendly nations, especially Australia and the United States, Indonesia launched large-scale operations to crush terrorism rooted in various radical groups such as JAD, JAS, and JAADI. Enormous energy, money, and lives were dedicated to the effort; terrorism, however, remains persistently entrenched. High-ranking officials from Detachment 88 and military intelligence believe that terrorism is still one of Indonesia's deadliest enemies.

Keywords: counter-radicalization, de-radicalization, Khilafah, Union State, Al Qaeda, ISIS

Procedia PDF Downloads 160
923 Studies of Heavy Metal Ions Removal Efficiency in the Presence of Anionic Surfactant Using Ion Exchangers

Authors: Anna Wolowicz, Katarzyna Staszak, Zbigniew Hubicki

Abstract:

Heavy metal ions and surfactants are widely used throughout the world owing to their useful properties, and the consequence of such widespread use is their large-scale production. The increasing demand for surfactants and heavy metal ions also results in large amounts of wastewater discharged into the environment from the mining, metal plating, pharmaceutical, cosmetic, fertilizer, paper, pesticide, electronics, pigment, petroleum refining, autocatalyst, fiber, food and polymer industries, among others. Heavy metal ions are non-biodegradable in the environment, capable of accumulating in living organisms and organs, and toxic and carcinogenic. Not only heavy metal ions but also surfactants affect the purity of water and soils. Some surfactants are also toxic, harmful and dangerous because they penetrate surface waters, causing foaming and blocking the diffusion of oxygen from the atmosphere, and they act as emulsifiers of hydrophobic substances, increasing the solubility of many dangerous pollutants. Among surfactants, the anionic ones dominate, and their share in global surfactant production is around 50-60%. Due to the negative impact of heavy metals and surfactants on aquatic ecosystems and living organisms, removal and monitoring of their concentration in the environment is extremely important. Removal of surfactants and heavy metal ions can be achieved by different biological and physicochemical methods, among which adsorption and ion exchange play a significant role. The aim of this study was the removal of heavy metal ions from aqueous solutions using different types of ion exchangers in the presence of anionic surfactants. Preliminary studies of copper(II), nickel(II), zinc(II) and cobalt(II) removal from acidic solutions using ion exchangers (Lewatit MonoPlus TP 220, Lewatit MonoPlus SR 7, Purolite A 400 TL, Purolite A 830, Purolite S 984, Dowex PSR 2, Dowex PSR 3, Lewatit AF-5) allowed selection of the most effective ones for the above-mentioned sorbates and subsequent checking of their removal efficiency in the presence of anionic surfactants. Lewatit MonoPlus TP 220, a chelating ion exchanger, showed the highest sorption capacities for copper(II) ions among the ion exchangers under discussion, e.g., 9.98 mg/g (0.1 M HCl) and 9.12 mg/g (6 M HCl). Moreover, cobalt(II) removal efficiency was highest in 0.1 M HCl, also using Lewatit MonoPlus TP 220 (6.9 mg/g), similar to zinc(II) (9.1 mg/g) and nickel(II) (6.2 mg/g). Sodium dodecyl sulphate (SDS) was used as the anionic surfactant, and its parameters were determined: viscosity η = 1.13 ± 0.01 mPa·s, density ρ = 999.76 mg/cm³, and critical micelle concentration CMC = 2.26 g/cm³. Studies of copper(II) removal from acidic solutions in the presence of SDS at different concentrations showed a negligible effect on copper(II) removal efficiency. The sorption capacity for Cu(II) from a 0.1 M acidic solution with an initial concentration of 500 mg/L was 46.8 mg/g, compared with 45.3 mg/g (0.1 mg SDS/L), 47.1 mg/g (0.5 mg SDS/L) and 46.6 mg/g (1 mg SDS/L) in the presence of SDS.
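
For reference, sorption capacities such as those quoted above (mg of metal per g of ion exchanger) are conventionally computed from the initial and equilibrium concentrations in a batch experiment; the inputs below are hypothetical, not the study's measurements.

```python
# Sketch of the conventional batch sorption-capacity calculation behind the
# mg/g values quoted above; the inputs are hypothetical, not the study data.
def sorption_capacity(c0_mg_l, ce_mg_l, volume_l, mass_g):
    """q = (C0 - Ce) * V / m, in mg of metal per g of ion exchanger."""
    return (c0_mg_l - ce_mg_l) * volume_l / mass_g

# e.g., 500 mg/L initial, 32 mg/L at equilibrium, 0.05 L solution, 0.5 g resin
print(sorption_capacity(500, 32, 0.05, 0.5))   # -> 46.8 mg/g
```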

Keywords: anionic surfactant, heavy metal ions, ion exchanger, removal

Procedia PDF Downloads 125
922 Vibro-Acoustic Modulation for Crack Detection in Windmill Blades

Authors: Abdullah Alnutayfat, Alexander Sutin

Abstract:

One of the most important renewable energy resources is wind energy, which can be produced by wind turbines. The blades of a wind turbine are exposed to the pressures of a harsh environment, which creates a significant issue for the wind power industry in terms of maintenance cost and blade failure. One of the reliable methods for blade inspection is vibroacoustic structural health monitoring (SHM), which examines information obtained from the structural vibrations of the blade. However, conventional vibroacoustic SHM techniques are based on comparing the structural vibration of intact and damaged structures, which places a practical limit on their use. Methods for nonlinear vibroacoustic SHM are more sensitive to damage and cracking and do not need to be compared to data from the intact structure. This paper presents the Vibro-Acoustic Modulation (VAM) method, based on the modulation of a high-frequency probe wave by the low-frequency load (pump wave) produced by blade rotation. The blade rotation alternates the bending stress due to gravity, leading to crack size variations and variations in the blade resonance frequency. This method can be used with the classical SHM vibration method, in which the blade is excited by piezoceramic actuator patches bonded to the blade and the vibration response is received by another piezoceramic sensor. The VAM modification of this method analyzes the spectra of the detected signal and its sideband components. We model VAM with a simple mechanical oscillator whose parameters (resonance frequency and damping) vary due to the low-frequency blade rotation. This model uses blade vibration parameters and the influence of cracks on the blade resonance properties from previous research papers to predict the modulation index (MI).
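
As a rough, self-contained illustration of the modulation index idea (not the oscillator model from the paper), the sketch below synthesizes a probe tone whose amplitude is weakly modulated at the pump frequency and estimates MI as the sideband-to-carrier amplitude ratio of its spectrum; all frequencies and the modulation depth are assumed values.

```python
# Rough illustration only: a probe tone amplitude-modulated at the pump
# (rotation) frequency, with MI estimated from sideband/carrier amplitudes.
import numpy as np

fs = 10_000.0                  # sampling rate, Hz
t = np.arange(0, 2.0, 1 / fs)  # 2 s of signal
f_probe, f_pump = 1000.0, 5.0  # hypothetical probe and pump frequencies, Hz
mi_true = 0.05                 # assumed modulation depth

signal = (1 + mi_true * np.cos(2 * np.pi * f_pump * t)) * np.cos(2 * np.pi * f_probe * t)

spec = np.abs(np.fft.rfft(signal)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1 / fs)

def amp_at(f):                 # spectral amplitude at the bin nearest to f
    return spec[np.argmin(np.abs(freqs - f))]

carrier = amp_at(f_probe)
sidebands = amp_at(f_probe - f_pump) + amp_at(f_probe + f_pump)
print("estimated MI:", sidebands / carrier)   # ~0.05
```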

Keywords: wind turbine blades, damaged detection, vibro-acoustic structural health monitoring, vibro-acoustic modulation

Procedia PDF Downloads 67
921 Optimism, Skepticism, and Uncertainty: A Qualitative Study on the Knowledge and Perceived Impact of the Affordable Care Act among Adult Patients Seeking Care in a Free Clinic

Authors: Mike Wei, Mario Cedillo, Jiahui Lin, Carol Lorraine Storey-Johnson, Carla Boutin-Foster

Abstract:

Purpose: The extent to which health insurance enrollment succeeds under the Affordable Care Act (ACA) rests heavily on the ability to reach the uninsured and motivate them to enroll. We sought to identify perceptions about the ACA among uninsured patients at a free clinic in New York City. Background: The ACA holds tremendous promise for reducing the number of uninsured Americans. As of April 2014, nearly 8 million people had signed up for health insurance through the Health Insurance Marketplace. Despite this early success, future and continued enrollment rests heavily on the degree of public awareness. Reaching eligible individuals and increasing their awareness and understanding remains a fundamental challenge to realizing the full potential of the ACA. Reaching out to uninsured patients who seek care through safety-net facilities such as free clinics may provide important avenues for reaching potential enrollees. This project focuses on the experience at the free clinic at Weill Cornell Medical College, the Weill Cornell Community Clinic (WCCC), and seeks to understand perceptions about the ACA among its patient population. Methods: This was a cross-sectional study of all patients who visited the WCCC from July 2013 to May 2014. Patients who provided informed consent at their visit and completed a semi-structured questionnaire were included (N=62). The questionnaire comprised questions about demographic characteristics and open-ended questions about knowledge and perceptions of the impact of the ACA. Descriptive statistics were used to characterize the population demographics, and qualitative coding techniques were used for open-ended items. Results: Approximately one third of the patients surveyed had never had health insurance. Of the remaining 65%, 20% had lost their insurance within the past year. Only 55% had heard about the ACA, and only 10% knew about the Health Benefits Exchange. Among those who had heard about the ACA, sentiments were tinged with optimistic misperceptions, such as “it will be free health care for all.” While optimistic, most of the responses focused on the economic implications of the ACA. Conclusions: These findings reveal the immense amount of misconception and lack of understanding with regard to the ACA. As such, the study highlights the need to educate and address the concerns of those who remain skeptical or uncertain about the implications of the ACA.

Keywords: Affordable Care Act, demographics, free clinics, underserved

Procedia PDF Downloads 370
920 Advancing Trustworthy Human-robot Collaboration: Challenges and Opportunities in Diverse European Industrial Settings

Authors: Margarida Porfírio Tomás, Paula Pereira, José Manuel Palma Oliveira

Abstract:

The decline in employment rates across sectors like industry and construction is exacerbated by an aging workforce. This has far-reaching implications for the economy, including skills gaps, labour shortages, productivity challenges due to physical limitations, and workplace safety concerns. To sustain the workforce and pension systems, technology plays a pivotal role. Robots provide valuable support to human workers, and effective human-robot interaction is essential. FORTIS, a Horizon project, aims to address these challenges by creating a comprehensive Human-Robot Interaction (HRI) solution. This solution focuses on multi-modal communication and multi-aspect interaction, with a primary goal of maintaining a human-centric approach. By meeting the needs of both human workers and robots, FORTIS aims to facilitate efficient and safe collaboration. The project encompasses three key activities: 1) A Human-Centric Approach involving data collection, annotation, understanding human behavioural cognition, and contextual human-robot information exchange. 2) A Robotic-Centric Focus addressing the unique requirements of robots during the perception and evaluation of human behaviour. 3) Ensuring Human-Robot Trustworthiness through measures such as human-robot digital twins, safety protocols, and resource allocation. Factor Social, a project partner, will analyse psycho-physiological signals that influence human factors, particularly in hazardous working conditions. The analysis will be conducted using a combination of case studies, structured interviews, questionnaires, and a comprehensive literature review. However, the adoption of novel technologies, particularly those involving human-robot interaction, often faces hurdles related to acceptance. To address this challenge, FORTIS will draw upon insights from Social Sciences and Humanities (SSH), including risk perception and technology acceptance models. Throughout its lifecycle, FORTIS will uphold a human-centric approach, leveraging SSH methodologies to inform the design and development of solutions. This project received funding from the European Union's Horizon 2020/Horizon Europe research and innovation program under grant agreement No 101135707 (FORTIS).

Keywords: skills gaps, productivity challenges, workplace safety, human-robot interaction, human-centric approach, social sciences and humanities, risk perception

Procedia PDF Downloads 32
919 Soluble CD36 and Cardiovascular Risk in Middle-Aged Subjects

Authors: Mohammad Alkhatatbeh, Nehad Ayoub, Nizar Mhaidat, Nesreen Saadeh, Lisa Lincz

Abstract:

CD36 is involved in the development of atherosclerosis by enhancing macrophage endocytosis of oxidized low-density lipoproteins and foam cell formation. Soluble CD36 (sCD36) has been found to be elevated in type 2 diabetic patients and has been proposed as a marker of insulin resistance and atherosclerosis. In young subjects, sCD36 was associated with cardiovascular risk factors including obesity and hypertriglyceridemia. This study was conducted to further investigate the relationship between plasma sCD36 and cardiovascular risk factors among middle-aged patients with metabolic syndrome (MetS) and healthy controls. sCD36 concentrations were determined by enzyme-linked immunosorbent assay (ELISA) for 41 patients with MetS and 36 healthy controls; data for other variables were obtained from patients' medical records. sCD36 concentrations were relatively low compared to most other studies and were not significantly different between the MetS group and controls (P-value = 0.17). sCD36 was also not correlated with age, body mass index, glucose, lipid profile, serum electrolytes or blood counts, and it was not significantly different between subjects with obesity, hyperglycemia, dyslipidemia, hypertension or cardiovascular disease and those without these abnormalities (P-value > 0.05). The inconsistency between the results reported in this study and other studies may be unique to the study population or may result from the lack of a reliable standardized method for determining absolute sCD36 concentrations. Further investigations are required to assess CD36 tissue expression in the study population and to assess the accuracy of the various commercially available sCD36 ELISA kits. If sCD36 is to be used as a biomarker, the availability of a standardized, simple sCD36 ELISA that could be performed in any basic laboratory would be preferable to the specialized flow cytometry methods that detect CD36+ microparticles.

Keywords: metabolic syndrome, CD36, cardiovascular risk, obesity, type 2 diabetes mellitus

Procedia PDF Downloads 251
918 Developing Index of Democratic Institutions' Vulnerability

Authors: Kamil Jonski

Abstract:

Last year vividly demonstrated that populism and political instability can endanger democratic institutions in countries regarded as democratic transition champions (Poland) or cornerstones of the liberal order (UK, US). So-called 'illiberal democracy' is winning the hearts and minds of voters keen to believe that strongman rule is a viable alternative to the perceived decay of Western values and institutions. These developments pose a serious threat to democratic institutions (including the rule of law), which have proven critical for both personal freedom and economic development. Although scholars have proposed some structural explanations of the illiberal wave (notably focusing on inequality, stagnant incomes and the drawbacks of globalization), these seem to have little predictive value. Indeed, events like Trump's victory, Brexit or the Polish shift towards populist nationalism always came as a surprise. Intriguingly, in the case of the US election, simple rules like the 'Bread and Peace' model gauged the prospects of Trump's victory better than pundits and pollsters. This paper attempts to compile a set of indicators to gauge various democracies' vulnerability to populism, instability and the pursuit of 'illiberal' projects. Among them, it identifies the gap between the consensus assessment of institutional performance (as measured by WGI indicators) and citizens' subjective assessment (survey-based confidence in institutions). Plotting these variables against each other reveals three clusters of countries: 'predictable' (good institutions and high confidence, or poor institutions and low confidence), 'blind' (poor institutions, high confidence, e.g. Uzbekistan or Azerbaijan) and 'disillusioned' (good institutions, low confidence, e.g. Spain, Chile, Poland and the US). It seems that this clustering, carried out separately for various institutions (such as the legislature, executive and courts) and blended with economic indicators like inequality and living standards (using PCA), offers a reasonably good watchlist of countries that should 'expect the unexpected'.
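
As an illustrative sketch of the clustering idea described above (not the paper's indicator set), the code below blends an institutional-performance score, survey-based confidence and an inequality measure via PCA and groups countries into three clusters; all values are synthetic and the column choices are assumptions.

```python
# Illustrative sketch only: cluster countries by institutional quality
# (WGI-style score), survey-based confidence and inequality, blended via PCA.
# All data below are synthetic, not the indicators used in the paper.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n = 60
wgi = rng.uniform(-2.5, 2.5, n)          # institutional performance score
confidence = rng.uniform(0, 1, n)        # share expressing confidence
gini = rng.uniform(0.25, 0.55, n)        # inequality indicator

X = StandardScaler().fit_transform(np.column_stack([wgi, confidence, gini]))
components = PCA(n_components=2).fit_transform(X)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(components)
print(np.bincount(labels))               # sizes of the three clusters
```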

Keywords: illiberal democracy, populism, political instability, political risk measurement

Procedia PDF Downloads 189
917 Factors That Influence Choice of Walking Mode in Work Trips: Case Study of Rasht, Iran

Authors: Nima Safaei, Arezoo Masoud, Babak Safaei

Abstract:

In recent years, there has been a growing emphasis on the role of urban planning in walkability and on the effects of individual and socioeconomic factors on the physical activity levels of city dwellers. Although a considerable number of studies have been conducted on walkability and on identifying the factors affecting walking mode choice in developed countries, to the best of our knowledge the literature lacks studies of the factors affecting the choice of walking in developing countries. Because of the high importance of the health of human societies, and in order to provide insights and incentives for reducing traffic during rush hours, many researchers and policy makers in the field of transportation planning have devoted much attention to walkability studies and have tried to improve the factors that influence the choice of walking in city neighborhoods. In this study, walkability factors that have proven to have a significant impact on the choice of walking mode are studied jointly for work trips. The data for the study were collected from employees at their workplaces by trained surveyors using questionnaires; the statistical population consists of 117 employed people who commuted daily between home and work in the city of Rasht, Iran, at the beginning of spring 2015. The results, obtained through linear regression modeling, show that people who have no freedom in choosing their living location and who must be present at their workplaces at fixed hours have lower levels of walking. Additionally, unlike some previous studies conducted in developed countries, the Body Mass Index (BMI) and income level of employees do not have a significant effect on walking levels in work trips.
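
A minimal sketch of the kind of linear regression model described above is given below; the variable names and data are hypothetical placeholders, not the survey data from Rasht.

```python
# Minimal sketch of a linear regression of walking level on candidate factors;
# variable names and values are hypothetical, not the actual survey data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 117
df = pd.DataFrame({
    "walk_minutes": rng.normal(25, 10, n),      # daily walking in work trips
    "bmi": rng.normal(25, 4, n),
    "income": rng.normal(3, 1, n),              # income level (arbitrary units)
    "fixed_hours": rng.integers(0, 2, n),       # must be at work at fixed hours
    "chose_location": rng.integers(0, 2, n),    # free choice of living location
})

model = smf.ols("walk_minutes ~ bmi + income + fixed_hours + chose_location", df).fit()
print(model.summary())
```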

Keywords: BMI, linear regression, transportation, walking, work trips

Procedia PDF Downloads 173
916 Experimental and Theoretical Studies: Biochemical Properties of Honey on Type 2 Diabetes

Authors: Said Ghalem

Abstract:

Honey is primarily composed of the sugars glucose and fructose; depending on the honey, either fructose or glucose predominates. The higher the fructose concentration, the lower the glycemic index (GI). Accordingly, the insulin response shows a decrease in the amount of insulin secreted for honeys with higher fructose content. Honey is also a compound that can reduce blood lipids. Several animal studies, whose findings remain to be confirmed in humans, have shown that honey can have interesting effects when combined with other molecules: combined with Metformin (a medicine taken by diabetics), it shows benefits against diabetes and preserves tissue; combined with ginger, it increases antioxidant activity and thus helps avoid neurologic and neuropathic complications. Molecular modeling techniques are widely used in chemistry, biology, and the pharmaceutical industry, and most currently existing drugs target enzymes. Inhibition of DPP-4 is an important approach in the treatment of type 2 diabetes. For the inhibition of DPP-4, we chose the following molecules, which are involved in the management of type 2 diabetes and were added to honey: Linagliptin (BI1356), Sitagliptin (Januvia), Vildagliptin, Saxagliptin, Alogliptin, and Metformin (Glucophage). For this, we used the Molecular Operating Environment software. A Wistar rat study was initiated in our laboratory with a well-established protocol; the animals were sacrificed according to international standards and with respect for the animal. The theoretical approach predicts the mode of interaction of a ligand with its target. Honey can have interesting effects when combined with other molecules: it preserves tissue, increases antioxidant activity, and thus helps avoid neurologic, neuropathic or macrovascular complications. Examination of the organs, especially the kidneys of the Wistar rats, and of the renal function parameters leads us to conclude that the damage caused by diabetes is only slightly perceptible compared with that observed without the addition of honey with a high fructose concentration.

Keywords: honey, molecular modeling, DPP4 enzyme, metformin

Procedia PDF Downloads 79
915 Calculating Asphaltenes Precipitation Onset Pressure by Using Cardanol as Precipitation Inhibitor: A Strategy to Increment the Oil Well Production

Authors: Camilo A. Guerrero-Martin, Erik Montes Paez, Marcia C. K. Oliveira, Jonathan Campos, Elizabete F. Lucas

Abstract:

Asphaltene precipitation is considered a formation damage problem that can reduce the oil recovery factor. It fouls piping and surface installations, causes serious flow assurance complications and reduces oil well production. Therefore, researchers have shown an interest in chemical treatments to control this phenomenon. The aim of this paper is to assess the asphaltene precipitation onset of crude oils in the presence of cardanol by titrating the crude with n-heptane. Moreover, based on the results obtained at atmospheric pressure, the asphaltene precipitation onset pressures were calculated to predict asphaltene precipitation in the reservoir, using differential liberation and refractive index data for the oils. The influence of cardanol concentration on asphaltene stabilization was studied for three Brazilian crude oil samples (with similar API densities). Formulations of cardanol in toluene were prepared at 0, 3, 5, 10 and 15 m/m% and added to the crude at a 2:98 ratio. The petroleum samples were characterized by API density, elemental analysis and differential liberation tests. The asphaltene precipitation onset (APO) was determined by titrating with n-heptane and monitoring with near-infrared (NIR) spectroscopy. UV-Vis spectroscopy experiments were also carried out to assess the precipitated asphaltene content. The asphaltene precipitation envelopes (APE) were also determined by numerical simulation (Multiflash). In addition, suitable artificial lift systems (ALS) for the oils were selected based on the downhole well profile and a screening methodology. Finally, the oil flow rates were modelled by NODAL production system analysis in the PIPESIM software. The results of this study show that the asphaltene precipitation onsets of the crude oils were 2.2, 2.3 and 6.0 mL of n-heptane/g of oil. Cardanol was an effective inhibitor of asphaltene precipitation for the crude oils used in this study, since it displaced the precipitation pressure of the oil to lower values. This indicates that cardanol can increase the productivity of oil wells.

Keywords: asphaltenes, NODAL analysis production system, precipitation pressure onset, inhibitory molecule

Procedia PDF Downloads 158
914 Analysis of Landscape Pattern Evolution in Banan District, Chongqing, Based on GIS and FRAGSTATS

Authors: Wenyang Wan

Abstract:

The study of urban land use and landscape pattern is a current hotspot in the fields of planning, design and ecology, and it is of great significance for building the overall humanistic ecosystem of the city and optimizing the urban spatial structure. Banan District, as the main part of the eastern eco-city planning of Chongqing Municipality, is a new high ground for highlighting the ecological characteristics of Chongqing, realizing effective transformation of ecological value, and promoting the integrated development of urban and rural areas. The analytical methods of the land use transfer matrix (GIS) and landscape pattern indices (FRAGSTATS) were used to study the characteristics and laws of the evolution of the land use landscape pattern in Banan District from 2000 to 2020, which provides a reference for Banan District in alleviating its ecological landscape contradictions. The results show that: ① Banan District is rich in land use types, and cultivated land still accounted for 57.15% of the total landscape area in 2020, giving it an absolute advantage in the land use structure of Banan District; ② from 2000 to 2020, land use conversion in Banan District followed the order cropland > woodland > grassland > shrubland > built-up land > water bodies > wetlands, with the conversion of cropland to built-up land being the largest; ③ from 2000 to 2020, the landscape elements of Banan District were distributed in a balanced way and the landscape types were rich and diverse, but, owing to human disturbance, the shapes of the landscape elements tended to become irregular, the dominant patches were scattered, and patch connectivity was poor. It is recommended that, in future regional ecological construction, the layout be rationally optimized, the relationships between landscape components be coordinated, connectivity between landscape patches be strengthened, and landscape fragmentation be reduced.
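
The land use transfer matrix mentioned above is essentially a transition (cross-tabulation) matrix between two classified land use maps; the sketch below builds one from two tiny made-up rasters, not from the Banan District data.

```python
# Illustrative sketch of a land use transfer (transition) matrix computed from
# two classified rasters; the tiny arrays below are made up, not the study maps.
import numpy as np
import pandas as pd

classes = ["cropland", "woodland", "built-up"]
lu_2000 = np.array([[0, 0, 1],
                    [0, 1, 1],
                    [2, 0, 0]])          # class code per pixel in 2000
lu_2020 = np.array([[0, 2, 1],
                    [2, 1, 1],
                    [2, 0, 0]])          # class code per pixel in 2020

transfer = pd.crosstab(
    pd.Series(lu_2000.ravel(), name="2000"),
    pd.Series(lu_2020.ravel(), name="2020"),
)
transfer.index = [classes[i] for i in transfer.index]
transfer.columns = [classes[i] for i in transfer.columns]
print(transfer)   # rows: class in 2000, columns: class in 2020, values: pixels
```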

Keywords: land use transfer, landscape pattern evolution, GIS and FRAGSTATS, Banan District

Procedia PDF Downloads 64
913 A Conceptual Study for Investigating the Creation of Energy and Understanding the Properties of Nothing

Authors: Mahmoud Reza Hosseini

Abstract:

The universe is in a continuous expansion process, resulting in the reduction of its density and temperature. Also, by extrapolating back from its current state, the universe at its early times can be studied; this is known as the big bang theory. According to this theory, moments after creation, the universe was an extremely hot and dense environment. However, its rapid expansion due to nuclear fusion led to a reduction in its temperature and density. This is evidenced by the cosmic microwave background and the large-scale structure of the universe. However, extrapolating back further from this early state reaches a singularity, which cannot be explained by modern physics, and the big bang theory is no longer valid. In addition, one would expect a nonuniform energy distribution across the universe from a sudden expansion. However, highly accurate measurements reveal an equal temperature mapping across the universe, which contradicts the big bang principles. To resolve this issue, it is believed that cosmic inflation occurred at the very early stages of the birth of the universe. According to the cosmic inflation theory, the elements which formed the universe underwent a phase of exponential growth due to the existence of a large cosmological constant. The inflation phase allows the uniform distribution of energy, so that an equal maximum temperature can be achieved across the early universe. Also, the evidence of quantum fluctuations from this stage provides a means of studying the types of imperfections the universe would begin with. Although well-established theories such as cosmic inflation and the big bang together provide a comprehensive picture of the early universe and how it evolved into its current state, they are unable to address the singularity paradox at the time of the universe's creation. Therefore, a practical model capable of describing how the universe was initiated is needed. This research series aims at addressing the singularity issue by introducing a state of energy called a "neutral state," possessing an energy level that is referred to as the "base energy." The governing principles of the base energy are discussed in detail in the second paper of the series, "A Conceptual Study for Addressing the Singularity of the Emerging Universe." To establish a complete picture, the origin of the base energy should be identified and studied. In this research paper, the mechanism which led to the emergence of this neutral state and its corresponding base energy is proposed. In addition, the effect of the base energy on the space-time fabric is discussed. Finally, the possible role of the base energy in quantization and energy exchange is investigated. The proposed concept in this research series thus provides a road map for enhancing our understanding of the universe's creation from nothing and its evolution, and discusses the possibility of the base energy being one of the main building blocks of this universe.

Keywords: big bang, cosmic inflation, birth of universe, energy creation, universe evolution

Procedia PDF Downloads 76
912 A Simple and Empirical Refraction Correction Method for UAV-Based Shallow-Water Photogrammetry

Authors: I GD Yudha Partama, A. Kanno, Y. Akamatsu, R. Inui, M. Goto, M. Sekine

Abstract:

Aerial photogrammetry of shallow-water bottoms has the potential to be an efficient high-resolution survey technique for shallow-water topography, thanks to the advent of convenient UAVs and automatic image processing techniques (Structure-from-Motion (SfM) and Multi-View Stereo (MVS)). However, it suffers from a systematic overestimation of the bottom elevation due to light refraction at the air-water interface. In this study, we present an empirical method to correct for the effect of refraction after the usual SfM-MVS processing, using common software. The presented method utilizes the empirical relation between the measured true depth and the estimated apparent depth to generate an empirical correction factor, which is then used to convert the apparent water depth into a refraction-corrected (real-scale) water depth. To examine its effectiveness, we applied the method to two river sites and compared the RMS errors in the corrected bottom elevations with those obtained by three existing methods. The results show that the presented method is more effective than two of the existing methods: the method that applies no correction factor and the method that uses the refractive index of water (1.34) as the correction factor. In comparison with the remaining existing method, which adds an offset term after calculating the correction factor, the presented method performs well at Site 2 but worse at Site 1. However, we found this linear regression method to be unstable when the training data used for calibration are limited. It also suffers from a large negative bias in the correction factor when the estimated apparent water depth is affected by noise, according to our numerical experiment. Overall, the accuracy of the refraction correction method depends on various factors such as the location, image acquisition, and GPS measurement conditions, and the most effective method can be selected by statistical model selection (e.g. leave-one-out cross-validation).
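
The sketch below illustrates the general idea of an empirical, multiplicative correction factor fitted between apparent (SfM-MVS) and measured true depths, checked with leave-one-out cross-validation; the depth values are synthetic and the single-factor form is an assumption, not the exact formulation of the paper.

```python
# Sketch of an empirical refraction correction: fit one multiplicative factor k
# relating apparent depth to true depth and check it with leave-one-out CV.
# Depths below are synthetic, not the river-site measurements.
import numpy as np

apparent = np.array([0.30, 0.45, 0.60, 0.80, 1.00, 1.20])   # m, from SfM-MVS
true = 1.34 * apparent + np.random.default_rng(3).normal(0, 0.02, apparent.size)

def fit_factor(x, y):
    """Least-squares factor k minimizing ||y - k*x||, i.e. k = sum(xy)/sum(x^2)."""
    return np.dot(x, y) / np.dot(x, x)

k = fit_factor(apparent, true)
print("correction factor:", round(k, 3))

# leave-one-out cross-validation of the corrected depths
errors = []
for i in range(len(apparent)):
    mask = np.arange(len(apparent)) != i
    k_i = fit_factor(apparent[mask], true[mask])
    errors.append(true[i] - k_i * apparent[i])
print("LOOCV RMS error (m):", round(float(np.sqrt(np.mean(np.square(errors)))), 3))
```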

Keywords: bottom elevation, MVS, river, SfM

Procedia PDF Downloads 291
911 Evaluation of the Impact of Community Based Disaster Risk Management Applied In Landslide Prone Area; Reference to Badulla District

Authors: S. B. D. Samarasinghe, Malini Herath

Abstract:

Participatory planning is a very important process for decision-making and for choosing the best alternatives for community welfare, the development of society, and interaction between the community and professionals. People's involvement is considered the key principle of participatory planning. Participatory planning is now used in many fields; it is not limited to planning itself but extends to disaster management, poverty reduction, housing, etc. In the past, disaster management was practised as a top-down approach, but because it raised many issues it was converted to a bottom-up approach. There are several approaches that can aid disaster management. Community-Based Disaster Risk Management (CBDRM) is a very successful participatory approach to risk management that is often applied successfully in other disaster-prone countries. In the local context, CBDRM has been applied to prevent diseases as well as disasters such as landslides, tsunamis and floods. Three years ago, Sri Lanka initiated the CBDRM approach to minimize landslide vulnerability. Hence, this study focuses mainly on the impact of CBDRM approaches on landslide hazards and on identifying their successes and failures from the perspectives of both the implementing parties and the community. The research is carried out using a qualitative method combined with a descriptive research approach. A framework of success factors was prepared through a literature review. Case studies were selected from landslide CBDRM programs implemented by the Disaster Management Center and the National Building Research Organization in Badulla, and their processes were evaluated. Data collection was done through interviews and informal discussions, and the responses were then quantified using the Relative Effectiveness Index. The resulting numerical values were used to rank program effectiveness and the associated successes, failures and influencing factors. The results show several failures on the part of both the implementing parties and the community; overcoming these factors can pave the way for better conduct of future CBDRM programs.

Keywords: community-based disaster risk management, disaster management, preparedness, landslide

Procedia PDF Downloads 123
910 Reduction of Plants Biodiversity in Hyrcanian Forest by Coal Mining Activities

Authors: Mahsa Tavakoli, Seyed Mohammad Hojjati, Yahya Kooch

Abstract:

Coal mining is an important industrial activity, but it may cause damage to the environment. To the best of the authors' knowledge, the effect of traditional coal mining activities on plant biodiversity has not been investigated in the Hyrcanian forests. Therefore, in this study, the effect of coal mining activities on vegetation and tree diversity was investigated in the Hyrcanian forest, northern Iran. After field visits and selection of the mine, 16 plots (20×20 m²) were established systematically-randomly (on a 60×60 m grid) in an area of 4 ha (200×200 m², with the mine entrance at the center). An adjacent area not affected by the mining activity was considered as the control area. In each plot, tree data such as the number and type of species were recorded, and the biodiversity of the vegetation cover was assessed in five 1 m² square sub-plots within each plot. PAST software and Ecological Methodology were used to calculate biodiversity indices. The values of the Shannon-Wiener and Simpson diversity indices for tree cover in the control area (1.04±0.34 and 0.62±0.20) were significantly higher than in the mining area (0.78±0.27 and 0.45±0.14), and the evenness indices for tree cover in the mining area were significantly lower than in the control area. The values of the Shannon-Wiener and Simpson diversity indices for vegetation cover in the control area (1.37±0.06 and 0.69±0.02) were significantly higher than in the mining area (1.02±0.13 and 0.50±0.07), and the evenness index in the control area was significantly higher than in the mining area. Plant communities are a good indicator of changes at a site; studying changes in vegetation biodiversity and plant dynamics in degraded land can provide the information necessary for forest management and reforestation of these areas.
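
For reference, the diversity and evenness indices quoted above are computed from species abundances as in the sketch below (Shannon-Wiener H', Simpson diversity expressed here as 1 - D, and Pielou evenness); the species counts are made up for illustration.

```python
# Sketch of the diversity indices used above; the counts are illustrative only.
import numpy as np

def diversity_indices(counts):
    counts = np.asarray(counts, dtype=float)
    p = counts / counts.sum()
    p = p[p > 0]
    shannon = -np.sum(p * np.log(p))        # Shannon-Wiener H'
    simpson = 1.0 - np.sum(p ** 2)          # Simpson diversity as 1 - D
    evenness = shannon / np.log(len(p))     # Pielou evenness J
    return shannon, simpson, evenness

print(diversity_indices([34, 20, 8, 3]))    # individuals counted per species
```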

Keywords: vegetation biodiversity, species composition, traditional coal mining, Caspian forest

Procedia PDF Downloads 164
909 Minimal Invasive Esophagectomy for Esophageal Cancer: An Institutional Review From a Dedicated Centre of Pakistan

Authors: Nighat Bakhtiar, Ali Raza Khan, Shahid Khan Khattak, Aamir Ali Syed

Abstract:

Introduction: Chemoradiation followed by resection has been the standard therapy for resectable (cT1-4aN0-3M0) esophageal carcinoma. The optimal surgical approach remains a matter of debate. Therefore, the purpose of this study was to share our experience of minimally invasive esophagectomy with respect to morbidity, mortality and oncological quality, and to report the surgical outcomes after minimally invasive esophagectomy at Shaukat Khanum Hospital, Lahore. Objective: To review an institutional experience of the surgical outcomes of minimally invasive esophagectomy for esophageal cancer. Methodology: This retrospective study was performed, after ethical approval, at Shaukat Khanum Memorial Cancer Hospital and Research Centre (SKMCH&RC), Pakistan. Patients who underwent minimally invasive esophagectomy for esophageal cancer from March 2018 to March 2023 were selected. Data were collected through the hospital information system (HIS) electronic database of SKMCH&RC and described using the mean and median with minimum and maximum values for quantitative variables; for categorical variables, the number of observations and percentages were reported. Results: A total of 621 patients were included in the study; the mean age was 39 years (range 18-58 years), and the mean body mass index was 21.2 ± 4.1. Neoadjuvant chemoradiotherapy was given to all patients. The mean operative time was 210.36 ± 64.51 minutes, and the mean blood loss was 121 milliliters. There was one death within 90 days, and the mean postoperative hospital stay was 6.58 ± 4.64 days. The anastomotic leak rate was 4.2%, and chyle leak was observed in 12 patients. Conclusion: The minimally invasive technique is a safe approach for esophageal cancer, with minimal complications and fast recovery.

Keywords: minimal invasive, esophagectomy, laparoscopic, cancer

Procedia PDF Downloads 50
908 Delivery of Ginseng Extract Containing Phytosome Loaded Microsphere System: A Preclinical Approach for Treatment of Neuropathic Pain in Rodent Model

Authors: Nitin Kumar

Abstract:

Purpose: The current research work focuses mainly on developing a delivery system for ginseng extract (GE) that will improve its neuroprotective potential by enhancing the bioavailability (BA) of ginsenoside Rb1. For greater enhancement of oral bioavailability (OBA) and pharmacological properties, the performance of the drug carrier can be strengthened by using a phytosome-loaded microsphere (PM) delivery system. Methods: To prepare the different phytosome complexes (F1, F2, and F3), an aqueous extract of ginseng roots (GR) was reacted with phospholipids in different ratios. Based on the outcomes, the spray-dried F3 formulation was chosen for preparing the phytosome powder (PP), PM, and extract microspheres (EM). PM was made by loading F3 into a gum arabic (GA) and maltodextrin polymer mixture, whereas EM was prepared by adding the extract directly into the same polymer mixture. The PP, PM, and EM formulations were assessed for their neuroprotective effect (NPE) and pharmacokinetic (PK) properties. Results: The F3 formulation gave enhanced entrapment efficiency (EE, 50.61%) along with good homogeneity of spherical particles (particle size 42.58 ± 1.4 nm) and the lowest polydispersity index (PDI, 0.193 ± 0.01). The dissolution study of PM revealed sustained release (up to 24 h) of ginsenoside Rb1 (GRb1). A significantly (p < 0.05) greater antioxidant (AO) potential of PM was observed, as shown by the reduction in the lipid peroxidase level and the rise in the glutathione superoxide dismutase (SOD) and catalase levels. PM also showed greater neuroprotective potential, exhibiting a significant (p < 0.05) increase in the nociceptive threshold together with a reduction in nerve damage. The PK studies showed a noteworthy enhancement in the relative BA of GRb1 (157.94%) with the PM formulation. Conclusion: The PM system is a promising and feasible strategy for enhancing the delivery of GE for the effective treatment of neuropathic pain.

Keywords: ginseng, neuropathic, phytosome, pain

Procedia PDF Downloads 174
907 Role of Intralesional Tranexamic Acid in Comparison of Oral Tranexamic Acid in the Treatment of Melasma

Authors: Lubna Khondker

Abstract:

Background: Melasma is a common pigmentary dermatosis manifested by hyperpigmented macules or patches on the face, occurring most commonly in females as an acquired disorder of the melanogenesis process. Although several treatments are currently used, it remains a great challenge due to its recurrence and refractory nature. It was recently reported that tranexamic acid (TA), a plasmin inhibitor, is an effective treatment for melasma. Objective: This study aims to compare the efficacy and side effects of intralesional injection of tranexamic acid with oral tranexamic acid in the treatment of melasma. Methods: A clinical trial was conducted in the Department of Dermatology and Venereology, Bangabandhu Sheikh Mujib Medical University, over a period of 4 years. A total of 100 patients with melasma who did not respond to topical therapy were included in the study and divided into group A and group B. After informed consent, group A patients were administered an intralesional injection (10 mg/ml) of tranexamic acid weekly for 6 weeks, and group B patients were treated with oral tranexamic acid 250 mg 12 hourly for 12 weeks. The severity and extent of pigmentation were assessed by the modified melasma area severity index (MASI). The response to treatment was assessed by MASI at 4 weeks, 8 weeks, and 12 weeks after stopping treatment. Results: The MASI scores at baseline, 4 weeks, 8 weeks, and 12 weeks were 18.23±1.22, 6.14±3.26, 3.21±2.14, and 2.11±2.01 in group A and 17.87±1.12, 11.21±6.25, 6.57±4.26, and 6.41±4.17 in group B, respectively. The mean MASI was significantly lower in group A than in group B at the 4th, 8th, and 12th weeks. Among group A patients, 56% rated excellent (>75% reduction), 32% good (50-75% reduction), 8% moderate (25-50% reduction), and only 4% unsatisfactory (<25% reduction); among group B patients, 14% rated excellent, 28% good, 36% moderate, and 22% unsatisfactory. Overall improvement was 96% in group A and 78% in group B. Side effects were negligible, and all patients tolerated the treatment well. Conclusion: Based on our results, intralesional tranexamic acid (10 mg/ml) is more effective and safer than oral tranexamic acid in the treatment of melasma.
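
The outcome grading used here (excellent >75% reduction, good 50-75%, moderate 25-50%, unsatisfactory <25%) follows directly from the percentage fall in the MASI score. A minimal sketch of that calculation is given below; the baseline and follow-up scores are illustrative, and the handling of exact boundary values is an assumption since the abstract does not state it.

```python
def masi_reduction(baseline: float, follow_up: float) -> float:
    """Percentage reduction in the modified MASI score from baseline."""
    return (baseline - follow_up) / baseline * 100

def grade_outcome(reduction_pct: float) -> str:
    """Map a percentage MASI reduction to the outcome categories used in the study.
    Boundary handling (>=) is an assumption; the abstract does not specify it."""
    if reduction_pct > 75:
        return "excellent"
    if reduction_pct >= 50:
        return "good"
    if reduction_pct >= 25:
        return "moderate"
    return "unsatisfactory"

# Illustrative example close to the group A means reported in the abstract.
reduction = masi_reduction(baseline=18.23, follow_up=2.11)
print(f"{reduction:.1f}% reduction -> {grade_outcome(reduction)}")
```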

Keywords: intralesional tranexamic acid, melasma, oral tranexamic acid, MASI score

Procedia PDF Downloads 40
906 Numerical Investigation of Phase Change Materials (PCM) Solidification in a Finned Rectangular Heat Exchanger

Authors: Mounir Baccar, Imen Jmal

Abstract:

Because of the rise in energy costs, thermal storage systems designed for the heating and cooling of buildings are becoming increasingly important. Energy storage can not only reduce the time or rate mismatch between energy supply and demand but also plays an important role in energy conservation. One of the most attractive storage techniques is Latent Heat Thermal Energy Storage (LHTES) with Phase Change Materials (PCM), owing to its high energy storage density and isothermal storage process. This paper presents a numerical study of the solidification of a PCM (paraffin RT27) in a rectangular thermal storage exchanger for air conditioning systems, taking into account the presence of natural convection. The continuity, momentum, and thermal energy equations are solved by the finite volume method. The main objective of this numerical approach is to study the effect of natural convection on the PCM solidification time and the impact of the number of fins on heat transfer enhancement. It also investigates the temporal evolution of PCM solidification, as well as the longitudinal profiles of the HTF circulating in the duct. The present research considers two cases: the first treats the solidification of PCM in a PCM-air heat exchanger without fins, while the second focuses on the solidification of PCM in a heat exchanger of the same type with the addition of fins (3 fins, 5 fins, and 9 fins). Without fins, stratification of the PCM from colder to hotter regions during the heat transfer process was noted. This behavior prevents the formation of thermo-convective cells in the PCM region and thus makes heat transfer almost purely conductive. In the presence of fins, energy extraction from the PCM to the airflow occurs at a faster rate, which contributes to reducing the discharging time and increasing the outlet air (HTF) temperature. However, for a large number of fins (9 fins), the enhancement of the solidification process is not significant because the confinement of the liquid PCM spaces restricts the development of thermo-convective flow. Hence, the effect of natural convection is not very significant for a high number of fins. In the optimum case, with 3 fins, the temperature increase of the HTF exceeds approximately 10°C during the first 30 minutes. As solidification progresses from the surfaces of the PCM container and propagates toward the central liquid phase, an insulating layer is created in the vicinity of the container surfaces and the fins, causing a low heat exchange rate between the PCM and the air. As the solid PCM layer gets thicker, a progressive regression of the velocity field is induced in the liquid phase, leading to the inhibition of the heat extraction process. After about 2 hours, 68% of the PCM had become solid, and heat transfer was almost entirely dominated by conduction.
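
For reference, the governing equations mentioned in the abstract (continuity, momentum, and thermal energy, solved by the finite volume method) are commonly written for PCM solidification with natural convection in an enthalpy-type formulation such as the one sketched below. This is a generic formulation under the Boussinesq approximation, given here only as an assumed illustration, not necessarily the exact set used by the authors.

```latex
% Generic governing equations for PCM solidification with natural convection
% (Boussinesq approximation, enthalpy method); illustrative, not the authors' exact model.
\begin{align}
  \nabla \cdot \mathbf{u} &= 0, \\
  \rho \left( \frac{\partial \mathbf{u}}{\partial t}
      + (\mathbf{u} \cdot \nabla)\mathbf{u} \right)
    &= -\nabla p + \mu \nabla^{2}\mathbf{u}
      + \rho \, \mathbf{g}\, \beta \,(T - T_{\mathrm{ref}}) + \mathbf{S}, \\
  \rho c_{p} \left( \frac{\partial T}{\partial t}
      + \mathbf{u} \cdot \nabla T \right)
    &= \nabla \cdot (k \nabla T)
      - \rho L \frac{\partial f_{l}}{\partial t},
\end{align}
% where f_l is the liquid fraction, L the latent heat, and S a Darcy-type source
% term that damps the velocity in the solidified (mushy) region.
```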

Keywords: heat transfer enhancement, front solidification, PCM, natural convection

Procedia PDF Downloads 171
905 Curriculum Check in Industrial Design, Based on Knowledge Management in Iran Universities

Authors: Maryam Mostafaee, Hassan Sadeghi Naeini, Sara Mostowfi

Abstract:

Today, knowledge management (KM) plays an important role in organizations. Fundamentally, knowledge management is about making the best use of the workforce in an organization to advance the goals and demands of that organization. The purpose of knowledge management is not only to manage the existing documentation, information, and data across an organization; its most important part is to identify and control the key elements of that information and data, so that the information employees need reaches them at the right time and from a genuine source, bringing out the best performance and results and thus maximizing the performance of the organization. Many definitions of the subject have been published. Management is the discipline of repeatedly applying accurate knowledge to shape the organization and take full advantage of it, so that its goals and targets can be reached by employees and users. According to the Collins dictionary, knowledge is the facts, emotions, or experiences known by a person or group of people; according to the Merriam-Webster dictionary, management is the act or skill of controlling and making decisions about a business, department, sports team, etc.; according to the Oxford dictionary, knowledge management is the efficient handling of information and resources within a commercial organization, and industrial design is the art or process of designing manufactured products ('the scale is a beautiful work of industrial design'). When knowledge management is implemented in universities, the discovery and creation of new knowledge is facilitated and procedures for knowledge exchange between different units are established. University officials and employees who understand the importance of knowledge for the university's success will make greater efforts to prevent errors. In this study, the relevant factors and trends and their management in the university are explored. This research analyzed how Iranian universities currently behave with respect to the use of knowledge management, considering: 1. the discovery of knowledge management in Iranian universities, 2. the transfer of existing knowledge between faculties and units, 3. the participation of employees in acquiring, using, and transferring knowledge, 4. the accessibility of valid sources, and 5. research on the relevant factors and correct processes in the university. Examples of the benefits examined include: enabling better and faster decision-making, making it easy to find relevant information and resources, reusing ideas, documents, and expertise, and avoiding redundant effort. Conclusion: the effectiveness of knowledge management in the field of industrial design was found to be low. Based on checklists completed by education officials and professors in the universities, and on the calculated coefficient of effectiveness, knowledge management has not yet found its proper place.

Keywords: knowledge management, industrial design, educational curriculum, learning performance

Procedia PDF Downloads 352
904 Determining the Extent and Direction of Relief Transformations Caused by Ski Run Construction Using LIDAR Data

Authors: Joanna Fidelus-Orzechowska, Dominika Wronska-Walach, Jaroslaw Cebulski

Abstract:

Mountain areas are very often exposed to numerous transformations connected with the development of tourist infrastructure. In the Polish mountains ski tourism is very popular, so agricultural areas are often transformed into tourist areas. The construction of new ski runs can change the direction and rate of slope development. The main aim of this research was to determine the geomorphological and hydrological changes within slopes caused by ski run construction. The study was conducted in the Remiaszów catchment in the Inner Polish Carpathians (southern Poland). The mean elevation of the catchment is 859 m a.s.l. and the maximum is 946 m a.s.l. The surface area of the catchment is 1.16 km2, of which 16.8% is occupied by the two studied ski runs, constructed in 2014 and 2015. In order to determine the relief transformations connected with the new ski runs, high resolution LIDAR data were analyzed. The general relief changes in the studied catchment were determined on the basis of ALS (Airborne Laser Scanning) data obtained before (2013) and after (2016) ski run construction. From the two sets of ALS data a digital elevation model of differences (DoD) was created, which made it possible to quantify the relief changes in the entire studied catchment. Additionally, cross and longitudinal profiles were calculated within the slopes where the new ski runs were built. Detailed data on relief changes within selected test surfaces were obtained from TLS (Terrestrial Laser Scanning). Hydrological changes within the analyzed catchment were determined from the convergence and divergence index. The study shows that the construction of the new ski runs caused significant geomorphological and hydrological changes in the entire studied catchment; however, the most important changes were identified within the ski slopes. After the construction of the ski runs, the catchment surface was lowered by about 0.02 m on average. Hydrological changes in the studied catchment mainly consisted of the interruption of surface runoff pathways and changes in runoff direction and geometry.
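
The DoD workflow described above amounts to subtracting the pre-construction DEM from the post-construction DEM on a common grid. A minimal sketch is shown below using rasterio and numpy; the file names are placeholders, and the two rasters are assumed to share the same grid, extent, and nodata convention.

```python
import numpy as np
import rasterio

# Placeholder file names; both DEMs are assumed to be on the same grid and extent.
with rasterio.open("dem_2013_before.tif") as before, \
     rasterio.open("dem_2016_after.tif") as after:
    dem_before = before.read(1, masked=True).astype("float64")
    dem_after = after.read(1, masked=True).astype("float64")
    profile = after.profile

# DEM of Difference: positive values = deposition/fill, negative = erosion/cut.
dod = dem_after - dem_before
print(f"Mean elevation change over the catchment: {dod.mean():.3f} m")

# Write the DoD raster for further analysis (e.g. per-ski-run statistics).
profile.update(dtype="float64")
with rasterio.open("dod_2013_2016.tif", "w", **profile) as dst:
    dst.write(dod.filled(np.nan), 1)
```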

Keywords: hydrological changes, mountain areas, relief transformations, ski run construction

Procedia PDF Downloads 132
903 The Association of Slope Failure and Lineament Density along the Ranau-Tambunan Road, Sabah, Malaysia

Authors: Norbert Simon, Rodeano Roslee, Abdul Ghani Rafek, Goh Thian Lai, Azimah Hussein, Lee Khai Ern

Abstract:

The 54 km stretch of the Ranau-Tambunan (RTM) road in Sabah is subjected to slope failures almost every year. This study focuses on identifying sections of the road that are susceptible to failure based on temporal landslide density and lineament density analyses. In addition, the rock slopes in several sections of the road were assessed using the geological strength index (GSI) technique. The analysis involved 148 landslides recorded in 1978, 1994, 2009, and 2011. The landslides were digitized as points, and the point density was calculated for every 1 km2 of the road. The lineaments of the area were interpreted from the Landsat 7 15 m panchromatic band, and the lineament density was then calculated for every 1 km2 of the area using the same technique as the slope failure density calculation. The landslide and lineament densities were classified into three classes indicating the level of susceptibility (low, moderate, high). Subsequently, the two density maps were overlaid to produce the final susceptibility map. The coincidence of the high susceptibility classes from both maps signifies a high potential for slope failure at those locations in the future. The final susceptibility map indicates that 22 sections of the road are highly susceptible. Seven rock slopes were assessed along the RTM road using the GSI technique. The assessment found that the rock slopes along this road are highly fractured and weathered and can be classified into the fair to poor categories. The poor condition of the rock slopes can be attributed to the high lineament density present in the study area. Six of the rock slopes are located in the high susceptibility zones. A detailed investigation of the 22 highly susceptible sections of the RTM road should be conducted, given their higher susceptibility to failure, in order to prevent untoward incidents to road users in the future.
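
The density-overlay logic described above (landslide points per 1 km2, lineament density per 1 km2, reclassification into low/moderate/high, and combination of the two high classes) can be sketched as a simple grid operation. The example below uses numpy with invented coordinates and arbitrary class breaks purely to illustrate the workflow; it is not the authors' GIS implementation.

```python
import numpy as np

# Invented landslide point coordinates (metres) within a small test area.
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 5000, 148), rng.uniform(0, 5000, 148)

# Count landslide points per 1 km x 1 km cell (density per km2).
cell = 1000.0
bins = np.arange(0, 5001, cell)
landslide_density, _, _ = np.histogram2d(x, y, bins=[bins, bins])

# Hypothetical lineament density grid (km of lineament per km2) on the same cells.
lineament_density = rng.uniform(0, 3, landslide_density.shape)

def classify(grid, breaks):
    """Reclassify a density grid into 1=low, 2=moderate, 3=high using the given breaks."""
    return np.digitize(grid, breaks) + 1

landslide_class = classify(landslide_density, breaks=[1, 3])      # arbitrary breaks
lineament_class = classify(lineament_density, breaks=[1.0, 2.0])  # arbitrary breaks

# Cells where both densities fall in the high class are flagged as highly susceptible.
high_susceptibility = (landslide_class == 3) & (lineament_class == 3)
print(f"Highly susceptible 1-km2 cells: {int(high_susceptibility.sum())}")
```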

Keywords: GSI, landslide, landslide density, landslide susceptibility, lineament density

Procedia PDF Downloads 382
902 Development of a Model for Predicting Radiological Risks in Interventional Cardiology

Authors: Stefaan Carpentier, Aya Al Masri, Fabrice Leroy, Thibault Julien, Safoin Aktaou, Malorie Martin, Fouad Maaloul

Abstract:

Introduction: During an Interventional Radiology (IR) procedure, the patient's skin dose may become high enough for burns, necrosis, and ulceration to appear. In order to prevent these deterministic effects, predicting the patient's peak skin dose is important for improving the post-operative care given to the patient. The objective of this study is to estimate the patient dose before the intervention for Chronic Total Occlusion (CTO) procedures by selecting relevant clinical indicators. Materials and methods: 103 procedures were performed in the Interventional Cardiology (IC) department using a Siemens Artis Zee image intensifier that provides the air kerma of each IC exam. The peak skin dose (PSD) was measured for each procedure using radiochromic films. Patient parameters such as sex, age, weight, and height were recorded. The complexity index (J-CTO score), specific to each intervention, was determined by the cardiologist. A correlation analysis applied to these indicators made it possible to quantify their influence on the dose. A predictive model of the dose was created using multiple linear regression. Results: Of the 103 patients involved in the study, 5 were excluded for clinical reasons and 2 because the radiochromic films were placed outside the exposure field; 96 2D dose maps were finally used. The influencing factors with the highest correlation with the PSD are the patient's diameter and the J-CTO score, and the predictive model is based on these parameters. The comparison between estimated and measured skin doses shows an average difference of 0.85 ± 0.55 Gy for doses of less than 6 Gy, whereas the mean difference between air kerma and PSD is 1.66 ± 1.16 Gy. Conclusion: Using the developed method, a first estimate of the patient's skin dose is available before the start of the procedure, which helps the cardiologist in carrying out the intervention. This estimate is more accurate than that provided by the air kerma alone.
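
The predictive model described above is a multiple linear regression of the measured peak skin dose on the patient diameter and the J-CTO score. The sketch below shows that fitting step with scikit-learn on invented values; the data and resulting coefficients are illustrative only and do not reproduce the authors' model.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Invented training data: patient diameter (cm), J-CTO score (0-5), measured PSD (Gy).
X = np.array([[24, 1], [28, 2], [31, 3], [26, 2], [33, 4], [29, 1], [35, 5], [27, 3]])
psd_gy = np.array([1.2, 2.1, 3.4, 1.8, 4.6, 1.9, 5.8, 2.9])

model = LinearRegression().fit(X, psd_gy)
print("Intercept:", model.intercept_)
print("Coefficients (diameter, J-CTO):", model.coef_)

# Pre-procedure estimate for a new patient (diameter 30 cm, J-CTO score 2).
estimated_psd = model.predict(np.array([[30, 2]]))[0]
print(f"Estimated peak skin dose: {estimated_psd:.2f} Gy")
```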

Keywords: chronic total occlusion procedures, clinical experimentation, interventional radiology, patient's peak skin dose

Procedia PDF Downloads 120
901 Sea Surface Temperature and Climatic Variables as Drivers of North Pacific Albacore Tuna Thunnus Alalunga Time Series

Authors: Ashneel Ajay Singh, Naoki Suzuki, Kazumi Sakuramoto, Swastika Roshni, Paras Nath, Alok Kalla

Abstract:

Albacore tuna (Thunnus alalunga) is one of the commercially important tuna species in the North Pacific region. Despite the long history of albacore fisheries in the Pacific, its ecological characteristics are not sufficiently understood. The effects of a changing climate on numerous commercially and ecologically important fish species, including albacore tuna, have been documented over the past decades. The objective of this study was to explore and elucidate the relationship of environmental variables with the stock parameters of albacore tuna. The relationships of North Pacific albacore tuna recruitment (R), spawning stock biomass (SSB), and recruits per spawning biomass (RPS) from 1970 to 2012 with the environmental factors of sea surface temperature (SST), the Pacific Decadal Oscillation (PDO), the El Niño Southern Oscillation (ENSO), and the Pacific warm pool index (PWI) were examined. SST and PDO were used as independent variables together with SSB to construct stock reproduction models for R and RPS, as they showed the most significant relationships with the dependent variables; ENSO and PWI were excluded due to collinearity with SST and PDO. Model selection was based on R2 values, the Akaike Information Criterion (AIC), and significant parameter estimates at p < 0.05. Models with the single independent variables SST, PDO, ENSO, and PWI were also constructed to assess their individual effects on albacore R and RPS. The results indicate that SST and PDO produced the most significant models for reproducing the North Pacific albacore tuna R and RPS time series, with SST having the highest impact on albacore R and RPS among the single-variable models. It is important for fishery managers and decision makers to incorporate these findings into their albacore tuna management plans for the North Pacific region.
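
Model construction as described (R or RPS regressed on SSB together with SST and PDO, with candidates compared by R2, AIC, and parameter significance at p < 0.05) can be sketched with statsmodels. The data below are invented placeholders; the actual stock assessment and environmental time series are not reproduced here.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Invented annual time series (placeholders for 1970-2012), for illustration only.
rng = np.random.default_rng(1)
n = 43
data = pd.DataFrame({
    "ssb": rng.uniform(50, 150, n),   # spawning stock biomass (thousand t)
    "sst": rng.uniform(17, 21, n),    # sea surface temperature (deg C)
    "pdo": rng.normal(0, 1, n),       # Pacific Decadal Oscillation index
})
data["recruitment"] = 0.5 * data["ssb"] + 8 * data["sst"] - 10 * data["pdo"] + rng.normal(0, 15, n)

def fit(cols):
    """Fit an ordinary least squares model of recruitment on the given predictors."""
    X = sm.add_constant(data[cols])
    return sm.OLS(data["recruitment"], X).fit()

# Candidate models; the full model includes SSB, SST, and PDO.
candidates = {"SSB+SST+PDO": ["ssb", "sst", "pdo"], "SSB+SST": ["ssb", "sst"], "SSB+PDO": ["ssb", "pdo"]}
for name, cols in candidates.items():
    res = fit(cols)
    print(f"{name}: R2={res.rsquared:.3f}, AIC={res.aic:.1f}, p-values={res.pvalues.round(3).to_dict()}")
```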

Keywords: albacore tuna, El Niño Southern Oscillation, Pacific Decadal Oscillation, sea surface temperature

Procedia PDF Downloads 214