Search results for: radial energy distribution
1875 Optimization of Municipal Solid Waste Management in Peshawar Using Mathematical Modelling and GIS with Focus on Incineration
Authors: Usman Jilani, Ibad Khurram, Irshad Hussain
Abstract:
Environmentally sustainable waste management is a challenging task as it involves multiple and diverse economic, environmental, technical and regulatory issues. Municipal Solid Waste Management (MSWM) is even more challenging in developing countries like Pakistan due to lack of awareness, technology and human resources, insufficient funding, and inefficient collection and transport mechanisms, resulting in the absence of a comprehensive waste management system. This work presents an overview of current MSWM practices in Peshawar, the provincial capital of Khyber Pakhtunkhwa, Pakistan, and proposes a better and more sustainable integrated solid waste management system with an incineration (waste-to-energy) option. Diverting waste to incineration would generate revenue while minimizing landfill requirements and negative impacts on the environment. The proposed optimized solution, utilizing scientific techniques (mathematical modeling, optimization algorithms and GIS) as decision support tools, enhances technical and institutional efficiency and leads towards a more sustainable waste management system by incorporating: improved collection mechanisms through optimized transportation/routing, and resource recovery through incineration together with selection of the most feasible sites for transfer stations, landfills and the incineration plant. These proposed methods shift the linear waste management system towards a cyclic system and can also be used as a decision support tool by the WSSP (Water and Sanitation Services Peshawar), the agency responsible for MSWM in Peshawar. Keywords: municipal solid waste management, incineration, mathematical modeling, optimization, GIS, Peshawar
Procedia PDF Downloads 377
1874 A Study on the Effect of COD to Sulphate Ratio on Performance of Lab Scale Upflow Anaerobic Sludge Blanket Reactor
Authors: Neeraj Sahu, Ahmad Saadiq
Abstract:
Anaerobic sulphate reduction has the potential to be an effective and economically viable alternative to conventional treatment methods for the treatment of sulphate-rich wastewater. However, a major challenge in anaerobic sulphate reduction is the diversion of a fraction of organic carbon towards methane production, along with some minor problems such as odour, corrosion, and an increase in effluent chemical oxygen demand. High-rate anaerobic technology has encouraged researchers to extend its application to the treatment of complex wastewaters with relatively low cost and energy consumption compared to physicochemical methods. Therefore, the aim of this study was to investigate the effects of the COD/SO₄²⁻ ratio on the performance of a lab scale UASB reactor. A lab-scale upflow anaerobic sludge blanket (UASB) reactor was operated for 170 days. The first 60 days were used for start-up and acclimation under methanogenesis and sulphidogenesis at a COD/SO₄²⁻ ratio of 18; the reactor was then operated at COD/SO₄²⁻ ratios of 12, 8, 4 and 1 to evaluate the effects of the presence of sulphate on reactor performance. The reactor achieved maximum COD removal efficiency and biogas evolution at the end of acclimation (control). This phase lasted 53 days with 89.5% efficiency and a biogas production of 0.6 L/d at an organic loading rate (OLR) of 1.0 kg COD/m³·d while treating synthetic wastewater in a reactor with an effective volume of 2.8 L. When the COD/SO₄²⁻ ratio changed from 12 to 1, a slight decrease in COD removal efficiency (76.8–87.4%) was observed and biogas production decreased from 0.58 to 0.32 L/d, while the sulphate removal efficiency increased from 42.5% to 72.7%. Keywords: anaerobic, chemical oxygen demand, organic loading rate, sulphate, up-flow anaerobic sludge blanket reactor
Procedia PDF Downloads 219
1873 Electrochemical Synthesis of Copper Nanoparticles
Authors: Juan Patricio Ibáñez, Exequiel López
Abstract:
A method for synthesizing copper nanoparticles through an electrochemical approach is proposed, employing surfactants to stabilize the size of the newly formed nanoparticles. The electrolyte was made up of a matrix of H₂SO₄ (190 g/L) containing Cu²⁺ (from 3.2 to 9.5 g/L), sodium dodecyl sulfate (SDS, from 0.5 to 1.0 g/L) and Tween 80 (from 0 to 7.5 mL/L). Tween 80 was used in a 1:1 molar ratio with SDS. A glass cell placed in a thermostatic water bath to maintain the system temperature was used, with cathodic copper as the anode and 316-L stainless steel as the cathode. The process was controlled through the initial copper concentration in the electrolyte and the applied current density. Copper nanoparticles of electrolytic purity, exhibiting a spherical morphology of varying sizes with low dispersion, were successfully produced, contingent upon the chemical composition of the electrolyte and the current density. The minimum size achieved was 3.0 nm ± 0.9 nm, with an average standard deviation of 2.2 nm throughout the entire process. The deposited copper mass ranged from 0.394 g to 1.848 g per hour (over an area of 25 cm²), accompanied by an average Faradaic efficiency of 30.8% and an average specific energy consumption of 4.4 kWh/kg. The chemical analysis of the product employed X-ray powder diffraction (XRD), while physical characteristics such as size and morphology were assessed using atomic force microscopy (AFM). The initial concentration of copper and the current density were identified as the variables defining the size and dispersion of the nanoparticles, as they serve as reactants in the cathodic half-reaction. The presence of surfactants stabilizes the nanoparticle size as their molecules adsorb onto the nanoparticle surface, forming a thick barrier that prevents mass transfer with the exterior and halts further growth. Keywords: copper nanopowder, electrochemical synthesis, current density, surfactant stabilizer
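The Faradaic efficiency and specific energy consumption reported above follow directly from Faraday's law of electrolysis. The sketch below illustrates the calculation for Cu²⁺ reduction (z = 2); the current, cell voltage and deposition time are hypothetical placeholders, not figures from the study.

```python
# Minimal sketch: Faradaic efficiency and specific energy consumption
# for electrochemical Cu deposition. Input values are hypothetical.
F = 96485.0          # Faraday constant, C/mol
M_CU = 63.55         # molar mass of copper, g/mol
Z = 2                # electrons transferred per Cu2+ ion

def faradaic_efficiency(m_deposited_g, current_a, time_s):
    """Ratio of actual deposited mass to the theoretical mass from Faraday's law."""
    m_theoretical = M_CU * current_a * time_s / (Z * F)
    return m_deposited_g / m_theoretical

def specific_energy_kwh_per_kg(cell_voltage_v, current_a, time_s, m_deposited_g):
    """Electrical energy consumed per kilogram of deposited copper."""
    energy_kwh = cell_voltage_v * current_a * time_s / 3.6e6
    return energy_kwh / (m_deposited_g / 1000.0)

# Example with placeholder operating values (one hour of deposition)
eff = faradaic_efficiency(m_deposited_g=1.0, current_a=2.5, time_s=3600)
sec = specific_energy_kwh_per_kg(cell_voltage_v=2.0, current_a=2.5, time_s=3600, m_deposited_g=1.0)
print(f"Faradaic efficiency: {eff:.2%}, specific energy: {sec:.2f} kWh/kg")
```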
Procedia PDF Downloads 63
1872 Evaluation of the Dry Compressive Strength of Refractory Bricks Developed from Local Kaolin
Authors: Olanrewaju Rotimi Bodede, Akinlabi Oyetunji
Abstract:
Modeling the dry compressive strength of sodium silicate bonded kaolin refractory bricks was studied. The materials used for this research work included refractory clay obtained from the Ijero-Ekiti kaolin deposit at coordinates 7º 49´N, 5º 5´E and sodium silicate obtained from the open market in Lagos at coordinates 6°27′11″N, 3°23′45″E, all in the South Western part of Nigeria. The mineralogical composition of the kaolin clay was determined using an Energy Dispersive X-Ray Fluorescence Spectrometer (ED-XRF). The clay samples were crushed and sieved using a laboratory pulveriser, ball mill and sieve shaker, respectively, to obtain 100 μm diameter particles. A manual pipe extruder of dimensions 30 mm diameter by 43.30 mm height was used to prepare the samples with varying percentage volumes of sodium silicate (5%, 7.5%, 10%, 12.5%, 15%, 17.5%, 20% and 22.5%), while kaolin and water were kept at 50% and 5% respectively, for the compressive test. The samples were left to dry in the open laboratory atmosphere for 24 hours to remove moisture and were then fired in an electrically powered muffle furnace. Firing was done at the following temperatures: 700ºC, 750ºC, 800ºC, 850ºC, 900ºC, 950ºC, 1000ºC and 1100ºC. A compressive strength test was carried out on the dried samples using a Testometric Universal Testing Machine (TUTM) equipped with a computer and printer; optimum compression of 4.41 kN/mm² was obtained at 12.5% sodium silicate. The experimental results were modeled with MATLAB and Origin packages using polynomial regression equations that predicted the estimated values for dry compressive strength and were later validated with Pearson’s rank correlation coefficient, giving a very high positive correlation value of 0.97. Keywords: dry compressive strength, kaolin, modeling, sodium silicate
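The modeling step described above, a polynomial regression of dry compressive strength against binder content validated with a correlation coefficient, can be sketched in a few lines. The abstract reports using MATLAB and Origin; the Python version below is only an illustrative equivalent, and the data arrays are hypothetical placeholders, not the study's measurements.

```python
import numpy as np
from scipy import stats

# Hypothetical placeholder data: % sodium silicate vs. dry compressive strength
silicate_pct = np.array([5.0, 7.5, 10.0, 12.5, 15.0, 17.5, 20.0, 22.5])
strength = np.array([2.1, 3.0, 3.9, 4.4, 4.1, 3.6, 3.2, 2.8])   # illustrative values

# Fit a 2nd-order polynomial regression (strength as a function of binder content)
coeffs = np.polyfit(silicate_pct, strength, deg=2)
model = np.poly1d(coeffs)
predicted = model(silicate_pct)

# Validate the fit with a correlation coefficient between measured and predicted values
r, p_value = stats.pearsonr(strength, predicted)
print(f"fitted polynomial coefficients: {coeffs}")
print(f"correlation between measured and predicted strength: r = {r:.2f}")
```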
Procedia PDF Downloads 455
1871 Cross-Validation of the Data Obtained for ω-6 Linoleic and ω-3 α-Linolenic Acids Concentration of Hemp Oil Using Jackknife and Bootstrap Resampling
Authors: Vibha Devi, Shabina Khanam
Abstract:
Hemp (Cannabis sativa) possesses a rich content of ω-6 linoleic and ω-3 α-linolenic essential fatty acids in a ratio of 3:1, which is a rare and highly desired ratio that enhances the quality of hemp oil. These components are beneficial for cell development and body growth, strengthen the immune system, possess anti-inflammatory action, lower the risk of heart problems owing to their anti-clotting property, and serve as a remedy for arthritis and various disorders. The present study employs a supercritical fluid extraction (SFE) approach on hemp seed at various conditions of the parameters temperature (40-80 °C), pressure (200-350 bar), flow rate (5-15 g/min), particle size (0.430-1.015 mm) and amount of co-solvent (0-10% of solvent flow rate) through a central composite design (CCD). The CCD suggested 32 sets of experiments, which were carried out. As the SFE process includes a large number of variables, the present study recommends the application of resampling techniques for cross-validation of the obtained data. Cross-validation refits the model on each resample to obtain information regarding the error, variability, deviation, etc. Bootstrap and jackknife are the most popular resampling techniques, which create a large number of datasets through resampling from the original dataset and analyze these data to check the validity of the obtained data. Jackknife resampling is based on eliminating one observation from the original sample of size N without replacement. For jackknife resampling, the sample size is 31 (eliminating one observation), and this is repeated 32 times. Bootstrap is the frequently used statistical approach for estimating the sampling distribution of an estimator by resampling with replacement from the original sample. For bootstrap resampling, the sample size is 32, and this was repeated 100 times. The estimands for these resampling techniques were the mean, standard deviation, variation coefficient and standard error of the mean. For the ω-6 linoleic acid concentration, the mean value was approx. 58.5 for both resampling methods, which is the average (central value) of the sample means of all data points. Similarly, for the ω-3 α-linolenic acid concentration, a mean of 22.5 was observed through both resamplings. Variance exhibits the spread of the data around its mean; a greater variance indicates a larger range of output data, which is 18 for ω-6 linoleic acid (ranging from 48.85 to 63.66%) and 6 for ω-3 α-linolenic acid (ranging from 16.71 to 26.2%). Further, the low standard deviation (approx. 1%), low standard error of the mean (< 0.8) and low variation coefficient (< 0.2) reflect the accuracy of the sample for prediction. All estimator values of the variation coefficient, standard deviation and standard error of the mean were found within the 95% confidence interval. Keywords: resampling, supercritical fluid extraction, hemp oil, cross-validation
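A minimal sketch of the two resampling schemes described above (leave-one-out jackknife over the 32 experimental runs, and 100 bootstrap resamples drawn with replacement), computing the same estimands: mean, standard deviation, variation coefficient and standard error of the mean. The data array is a random placeholder standing in for the measured fatty acid concentrations.

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder for the 32 measured concentrations from the CCD experiments
data = rng.normal(loc=58.5, scale=3.5, size=32)

def estimands(sample):
    """Mean, standard deviation, variation coefficient and standard error of the mean."""
    mean = sample.mean()
    std = sample.std(ddof=1)
    return mean, std, std / mean, std / np.sqrt(sample.size)

# Jackknife: drop one observation at a time (sample size 31, repeated 32 times)
jackknife = np.array([estimands(np.delete(data, i)) for i in range(data.size)])

# Bootstrap: resample 32 values with replacement, repeated 100 times
bootstrap = np.array([estimands(rng.choice(data, size=data.size, replace=True))
                      for _ in range(100)])

for name, results in (("jackknife", jackknife), ("bootstrap", bootstrap)):
    mean, std, cv, sem = results.mean(axis=0)
    print(f"{name}: mean={mean:.2f}, std={std:.2f}, CV={cv:.3f}, SEM={sem:.3f}")
```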
Procedia PDF Downloads 141
1870 Simulation of Complex-Shaped Particle Breakage with a Bonded Particle Model Using the Discrete Element Method
Authors: Felix Platzer, Eric Fimbinger
Abstract:
In Discrete Element Method (DEM) simulations, the breakage behavior of particles can be simulated based on different principles. In the case of large, complex-shaped particles that show various breakage patterns depending on the scenario leading to the failure and often only break locally instead of fracturing completely, some of these principles do not lead to realistic results. The reason for this is that in such cases, the methods in question, such as the Particle Replacement Method (PRM) or Voronoi Fracture, replace the initial particle (that is intended to break) with several sub-particles when certain breakage criteria are reached, such as exceeding the fracture energy. That is why those methods are commonly used for the simulation of materials that fracture completely instead of breaking locally. That being the case, when simulating local failure, it is advisable to pre-build the initial particle from sub-particles that are bonded together. The dimensions of these sub-particles consequently define the minimum size of the fracture results. This structure of bonded sub-particles enables the initial particle to break at the location of the highest local loads – due to the failure of the bonds in those areas – with several sub-particle clusters being the result of the fracture, which can again also break locally. In this project, different methods for the generation and calibration of complex-shaped particle conglomerates using bonded particle modeling (BPM) to enable more realistic fracture behavior were evaluated based on the example of filter cake. The method that proved suitable for this purpose, and which furthermore allows efficient and realistic simulation of the breakage behavior of complex-shaped particles applicable to industrial-sized simulations, is presented in this paper. Keywords: bonded particle model, DEM, filter cake, particle breakage
Procedia PDF Downloads 211
1869 HRCT of the Chest and the Role of Artificial Intelligence in the Evaluation of Patients with COVID-19
Authors: Parisa Mansour
Abstract:
Introduction: Early diagnosis of coronavirus disease (COVID-19) is extremely important to isolate and treat patients in time, thus preventing the spread of the disease, improving prognosis and reducing mortality. High-resolution computed tomography (HRCT) chest imaging and artificial intelligence (AI)-based analysis of HRCT chest images can play a central role in the management of patients with COVID-19. Objective: To investigate different chest HRCT findings in different stages of COVID-19 pneumonia and to evaluate the potential role of artificial intelligence in the quantitative assessment of lung parenchymal involvement in COVID-19 pneumonia. Materials and Methods: This retrospective observational study was conducted between May 1, 2020 and August 13, 2020. The study included 2169 patients with COVID-19 who underwent chest HRCT. HRCT images were assessed for the presence and distribution of lesions such as ground glass opacity (GGO) and consolidation, and for any special patterns such as septal thickening, inverted halo sign, etc. Chest HRCT findings were recorded at different stages of the disease (early: < 5 days, intermediate: 6-10 days, and late stage: > 10 days). A CT severity score (CTSS) was calculated based on the extent of lung involvement on HRCT, which was then correlated with clinical disease severity. An artificial intelligence based "CT pneumonia analysis" algorithm was used to quantify the extent of pulmonary involvement by calculating the percentage of pulmonary opacity (PO) and the percentage of high opacity (PHO). Depending on the type of variables, statistical tests such as chi-square, analysis of variance (ANOVA) and post hoc tests were applied where appropriate. Results: Radiological findings were observed on chest HRCT in 1438 patients. A typical pattern of COVID-19 pneumonia, i.e., bilateral peripheral GGO with or without consolidation, was observed in 846 patients. About 294 asymptomatic patients were radiologically positive. Chest HRCT in the early stages of the disease mostly showed GGO. The late stage was indicated by features such as reticular changes, thickening and the presence of fibrous bands. Approximately 91.3% of cases with a CTSS ≤ 7 were asymptomatic or clinically mild, while 81.2% of cases with a score ≥ 15 were clinically severe. Mean PO and PHO (30.1 ± 28.0 and 8.4 ± 10.4, respectively) were significantly higher in the clinically severe categories. Conclusion: Because COVID-19 pneumonia progresses rapidly, radiologists and physicians should become familiar with typical chest CT findings in order to manage patients early, ultimately improving prognosis and reducing mortality. Artificial intelligence can be a valuable tool in managing patients with COVID-19. Keywords: chest, HRCT, covid-19, artificial intelligence, chest HRCT
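The CT severity score thresholds quoted above (≤ 7 versus ≥ 15) are consistent with the widely used 25-point scheme in which each of the five lung lobes is scored 0-5 according to its percentage involvement. The abstract does not state its exact rubric, so the sketch below is only an assumed illustration of that common scoring scheme, not the study's own definition.

```python
# Assumed illustration of a 25-point CT severity score (CTSS):
# each of the 5 lobes scored 0-5 by percentage of involvement.
def lobe_score(involvement_pct: float) -> int:
    if involvement_pct == 0:
        return 0
    if involvement_pct < 5:
        return 1
    if involvement_pct <= 25:
        return 2
    if involvement_pct <= 50:
        return 3
    if involvement_pct <= 75:
        return 4
    return 5

def ct_severity_score(lobe_involvement):
    """Sum of per-lobe scores for the five lobes (0-25)."""
    return sum(lobe_score(p) for p in lobe_involvement)

# Hypothetical patient: per-lobe involvement percentages from the AI segmentation
score = ct_severity_score([10, 30, 0, 60, 20])
print(score)  # 2 + 3 + 0 + 4 + 2 = 11
```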
Procedia PDF Downloads 65
1868 Influence of the Cooking Technique on the Iodine Content of Frozen Hake
Authors: F. Deng, R. Sanchez, A. Beltran, S. Maestre
Abstract:
The high nutritional value associated with seafood is related to the presence of essential trace elements. Moreover, seafood is considered an important source of energy, proteins, and long-chain polyunsaturated fatty acids. Generally, seafood is consumed cooked; consequently, its nutritional value could be degraded. Seafood such as fish, shellfish, and seaweed can be considered one of the main iodine sources. Deficient or excessive consumption of iodine can cause dysfunction and pathologies related to the thyroid gland. The main objective of this work is to evaluate the stability of iodine in hake (Merluccius) subjected to different culinary techniques. The culinary processes considered were boiling, steaming, microwave cooking, baking, cooking en papillote (a twisted cover with the shape of a sweet wrapper), and coating with a batter of flour and deep-frying. The determination of iodine was carried out by Inductively Coupled Plasma Mass Spectrometry (ICP-MS). Regarding sample handling strategies, liquid-liquid extraction has been demonstrated to be a powerful pre-concentration and clean-up approach for trace metal analysis by ICP techniques. Extraction with tetramethylammonium hydroxide (TMAH reagent) was used as the sample preparation method in this work. Based on the results, it can be concluded that the iodine content was degraded by the cooking processes. The greatest degradation was observed for the boiling and microwave cooking processes, where the content of iodine in hake decreased by up to 60% and 52%, respectively. However, if the boiling cooking liquid is preserved, the loss generated during cooking is reduced. The iodine content was preserved only when the fish was cooked following the en papillote process. Keywords: cooking process, ICP-MS, iodine, hake
Procedia PDF Downloads 142
1867 A Crowdsourced Homeless Data Collection System and Its Econometric Analysis: Strengthening Inclusive Public Administration Policies
Authors: Praniil Nagaraj
Abstract:
This paper proposes a method to collect homeless data using crowdsourcing and presents an approach to analyze the data, demonstrating its potential to strengthen existing and future policies aimed at promoting socio-economic equilibrium. This paper's contributions can be categorized into three main areas. Firstly, a unique method for collecting homeless data is introduced, utilizing a user-friendly smartphone app (currently available for Android). The app enables the general public to quickly record information about homeless individuals, including the number of people and details about their living conditions. The collected data, including date, time, and location, is anonymized and securely transmitted to the cloud. It is anticipated that an increasing number of users motivated to contribute to society will adopt the app, thus expanding the data collection efforts. Duplicate data is addressed through simple classification methods, and historical data is utilized to fill in missing information. The second contribution of this paper is the description of data analysis techniques applied to the collected data. By combining this new data with existing information, statistical regression analysis is employed to gain insights into various aspects, such as distinguishing between unsheltered and sheltered homeless populations, as well as examining their correlation with factors like unemployment rates, housing affordability, and labor demand. Initial data is collected in San Francisco, while pre-existing information is drawn from three cities: San Francisco, New York City, and Washington D.C., facilitating the simulations. The third contribution focuses on demonstrating the practical implications of the data processing results. The challenges faced by key stakeholders, including charitable organizations and local city governments, are taken into consideration. Two case studies are presented as examples. The first case study explores improving the efficiency of food and necessities distribution, as well as medical assistance, driven by charitable organizations. The second case study examines the correlation between micro-geographic budget expenditure by local city governments and homeless information to justify budget allocation and expenditures. The ultimate objective of this endeavor is to enable the continuous enhancement of the quality of life for the underprivileged. It is hoped that through increased crowdsourcing of data from the public, the Generosity Curve and the Need Curve will intersect, leading to a better world for all. Keywords: crowdsourcing, homelessness, socio-economic policies, statistical analysis
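A minimal sketch of the kind of regression analysis described above, relating a homeless count to candidate socio-economic factors. The variable names and data are hypothetical placeholders; the paper's actual model specification is not reproduced here.

```python
import numpy as np

# Hypothetical placeholder data: one row per city-month observation
unemployment = np.array([3.2, 4.1, 5.0, 4.5, 6.2, 5.8])        # %
housing_cost_index = np.array([180.0, 195.0, 210.0, 205.0, 230.0, 240.0])
homeless_count = np.array([4100.0, 4500.0, 5200.0, 4900.0, 6100.0, 6300.0])

# Ordinary least squares: homeless_count ~ 1 + unemployment + housing_cost_index
X = np.column_stack([np.ones_like(unemployment), unemployment, housing_cost_index])
coef, residuals, rank, _ = np.linalg.lstsq(X, homeless_count, rcond=None)
intercept, b_unemp, b_housing = coef
print(f"intercept={intercept:.1f}, unemployment coef={b_unemp:.1f}, housing coef={b_housing:.1f}")
```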
Procedia PDF Downloads 48
1866 Design and Evaluation of a Fully-Automated Fluidized Bed Dryer for Complete Drying of Paddy
Authors: R. J. Pontawe, R. C. Martinez, N. T. Asuncion, R. V. Villacorte
Abstract:
Drying of high moisture paddy remains a major problem in the Philippines, especially during inclement weather conditions. To alleviate the problem, mechanical dryers such as flat bed and recirculating batch-type dryers are used. However, drying to 14% (wet basis) final moisture content takes a long time (10-12 hours) and is tedious, which is not ideal for handling high moisture paddy. A fully-automated pilot-scale fluidized bed drying system with a capacity of 500 kilograms per hour was evaluated using high moisture paddy. The developed fluidized bed dryer was evaluated using four drying temperatures and two variations in fluidization time at constant airflow, static pressure and tempering period. Complete drying of paddy with ≥28% (w.b.) initial MC was attained after 2 passes of fluidized-bed drying at 2 minutes of exposure to a 70 °C drying temperature and 4.9 m/s superficial air velocity, followed by a 60 min ambient air tempering period (30 min without ventilation and 30 min with air ventilation), for a total drying time of 2.07 h. Around 82% of the normal mechanical drying time was saved at the 70 °C drying temperature. The drying cost was calculated to be P0.63 per kilogram of wet paddy. Specific heat energy consumption was only 2.84 MJ/kg of water removed. The Head Rice Yield recovery of the dried paddy passed the Philippine Agricultural Engineering Standards. Sensory evaluation showed that the color and taste of the samples dried in the fluidized bed dryer were comparable to air dried paddy. The optimum drying parameters for the fluidized bed dryer were a 70 °C drying temperature, 2 min fluidization time, 4.9 m/s superficial air velocity, 10.16 cm grain depth and a 60 min ambient air tempering period. Keywords: drying, fluidized bed dryer, head rice yield, paddy
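The figure of 2.84 MJ per kilogram of water removed reported above can be related to a simple wet-basis moisture balance. The sketch below shows that balance; the batch mass is a hypothetical value, not a measurement from the study.

```python
# Wet-basis moisture balance for a drying batch (hypothetical batch mass).
def water_removed_kg(wet_mass_kg, mc_initial_wb, mc_final_wb):
    """Mass of water evaporated when drying from one wet-basis MC to another."""
    dry_matter = wet_mass_kg * (1.0 - mc_initial_wb)
    final_mass = dry_matter / (1.0 - mc_final_wb)
    return wet_mass_kg - final_mass

batch_mass = 500.0                      # kg of wet paddy (dryer capacity per hour)
removed = water_removed_kg(batch_mass, mc_initial_wb=0.28, mc_final_wb=0.14)
heat_input_mj = removed * 2.84          # using the reported specific energy figure
print(f"water removed: {removed:.1f} kg, heat input: {heat_input_mj:.0f} MJ")
```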
Procedia PDF Downloads 327
1865 Study of the Non-isothermal Crystallization Kinetics of Polypropylene Homopolymer/Impact Copolymer Composites
Authors: Pixiang Wang, Shaoyang Liu, Yucheng Peng
Abstract:
Polypropylene (PP) is an essential material for numerous applications in different industrial sectors, including packaging, construction, and automotive. Because the application of homopolypropylene (HPP) is limited by its relatively low impact strength and high embrittlement temperature, various types of impact copolymer PP (ICPP) that incorporate elastomers/rubbers into HPP to increase impact strength have been successfully commercialized. The crystallization kinetics of an isotactic HPP, an ICPP, and their composites were studied in this work to better understand the behavior of the composites. The Avrami-Jeziorny model was used to describe the crystallization process. For most samples, the Avrami exponent, n, was greater than 3, indicating that the crystals grew in three dimensions with spherical geometry. However, the n value could drop below 3 when the ICPP content was 80 wt.% or higher and the cooling rate was 7.5°C/min or lower, implying that the crystals could grow in two dimensions and some lamella structures could be formed under those conditions. The nucleation activity increased with increasing ICPP content, demonstrating that the rubber phase in the ICPP acted as a nucleation agent and facilitated the nucleation process. The decrease in crystallization rate after the ICPP content exceeded 60 wt.% might be caused by the excessive amount of crystal nuclei induced by the high ICPP content, which caused strong crystal-crystal interactions and limited the crystal growth space. The nucleation activity and the n value showed high correlations with the mechanical and thermal properties of the materials. The quantitative study of the kinetics of crystallization in this work could be a helpful reference for manufacturing ICPP and HPP/ICPP mixtures. Keywords: polypropylene, crystallization kinetics, Avrami-Jeziorny model, crystallization activation energy, nucleation activity
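A minimal sketch of the Avrami-Jeziorny analysis referred to above: the Avrami exponent n and rate constant Zt are obtained from a linear fit of ln[-ln(1 - X(t))] versus ln t, and Jeziorny's correction divides log Zt by the cooling rate. The relative crystallinity data and cooling rate below are hypothetical placeholders, not the measured DSC values.

```python
import numpy as np

# Hypothetical relative crystallinity X(t) at times t (min) during non-isothermal cooling
t = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
X = np.array([0.05, 0.28, 0.60, 0.84, 0.95, 0.99])

# Avrami model: X(t) = 1 - exp(-Zt * t**n)  =>  ln(-ln(1 - X)) = ln(Zt) + n * ln(t)
y = np.log(-np.log(1.0 - X))
n, ln_Zt = np.polyfit(np.log(t), y, deg=1)   # slope = n, intercept = ln(Zt)
Zt = np.exp(ln_Zt)

# Jeziorny correction for a constant cooling rate phi (°C/min)
phi = 7.5
log_Zc = np.log10(Zt) / phi
print(f"Avrami exponent n = {n:.2f}, Zt = {Zt:.3f}, log10(Zc) = {log_Zc:.3f}")
```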
Procedia PDF Downloads 88
1864 Fluid–Structure Interaction Modeling of Wind Turbines
Authors: Andre F. A. Cyrino
Abstract:
Since technological advances focus on the efficient extraction of energy from wind, and therefore on the design of wind turbine structures, this work aims to study the fluid-structure interaction of an idealized wind turbine. The blade was studied as a beam attached to a cylindrical hub with its rotation axis aligned with the air flow that passes through the rotor. Using the calculus of variations and the finite difference method, the blade is represented by a discrete number of nodes and the aerodynamic forces are evaluated at these nodes. The study presented here was implemented in Matlab and performs a numerical simulation of a simplified windmill model containing a hub and three blades modeled as Euler-Bernoulli beams under small strains and constant, uniform wind. The mathematical approach is based on Hamilton's Extended Principle, with the aerodynamic loads applied at the nodes considering the local relative wind speed, angle of attack and aerodynamic lift and drag coefficients. Due to the wide range of angles of attack at which a wind turbine blade operates, the airfoil used in the model was the NREL SERI S809, for which equations for Cl and Cd as functions of the angle of attack were obtained based on a NASA study. Three-dimensional flow effects were not taken into account, nor was torsion of the beam, which only bends. The results showed the dynamic response of the system in terms of displacement and rotational speed as the turbine reached its final speed. Although the results were not compared to real windmills or more complete models, the resulting values were consistent with the size of the system and the wind speed. Keywords: blade aerodynamics, fluid–structure interaction, wind turbine aerodynamics, wind turbine blade
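A minimal sketch of the per-node aerodynamic load evaluation described above: each blade node sees a relative wind combining the free-stream speed and the local rotational speed, from which the angle of attack and the lift and drag per unit span follow. The lift and drag coefficient polynomials stand in for the S809 fits and are hypothetical placeholders, as are the numerical blade parameters.

```python
import numpy as np

RHO = 1.225  # air density, kg/m^3

def cl(alpha_deg):   # placeholder for the S809 lift-coefficient fit
    return 0.1 * alpha_deg

def cd(alpha_deg):   # placeholder for the S809 drag-coefficient fit
    return 0.01 + 0.001 * alpha_deg**2

def node_loads(r, wind_speed, omega, chord, twist_deg):
    """Lift and drag per unit span at a blade node at radius r."""
    v_rot = omega * r                                 # tangential speed seen by the node
    v_rel = np.hypot(wind_speed, v_rot)               # relative wind speed
    phi = np.degrees(np.arctan2(wind_speed, v_rot))   # inflow angle
    alpha = phi - twist_deg                           # local angle of attack
    q = 0.5 * RHO * v_rel**2 * chord                  # dynamic pressure times chord
    return q * cl(alpha), q * cd(alpha)

# Hypothetical blade discretization: ten nodes along the span
for r in np.linspace(1.0, 10.0, 10):
    lift, drag = node_loads(r, wind_speed=10.0, omega=2.0, chord=0.8, twist_deg=5.0)
    print(f"r={r:4.1f} m  lift'={lift:8.1f} N/m  drag'={drag:8.1f} N/m")
```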
Procedia PDF Downloads 268
1863 Mental Health Monitoring System as an Effort for Prevention and Handling of Psychological Problems in Students
Authors: Arif Tri Setyanto, Aditya Nanda Priyatama, Nugraha Arif Karyanta, Fadjri Kirana A., Afia Fitriani, Rini Setyowati, Moh.Abdul Hakim
Abstract:
The Basic Health Research Report by the Ministry of Health (2018) shows an increase in the prevalence of mental health disorders in the adolescent and early adult age ranges. Supporting this finding, psychological examination data from the student health service unit at one state university recorded 115 cases of moderate and severe mental health problems in the period 2016-2019. More specifically, the highest number of cases was experienced by clients in the age range of 21-23 years, roughly corresponding to the middle through final semesters of study. The distribution of cases illustrates the range of psychological problems experienced by students: a total of 29%, or 33 students, experienced anxiety disorders, and 25%, or 29 students, experienced problems ranging from mild to severe, alongside other classifications of disorders including adjustment disorders, family problems, academic problems, mood disorders, self-concept disorders, personality disorders, cognitive disorders, and others such as trauma and sexual disorders. Various mental health disorders have a significant impact on the academic life of students, such as low GPA, exceeding the study time limit, dropping out, disruption of social life on campus, and even suicide. Based on literature reviews and best practices from universities in various countries, one of the effective ways to prevent and treat student mental health disorders is to implement a mental health monitoring system in universities. This study uses a participatory action research approach, with a sample of 423 students from a total population of 32,112 students. The scales used in this study are the Beck Depression Inventory (BDI) to measure depression and the Taylor Minnesota Anxiety Scale (TMAS) to measure anxiety levels. This study aims to (1) develop a digital-based monitoring system for students' mental health situations, classifying students as mentally healthy, at risk, or having mental disorders, especially indications of symptoms of depression and anxiety disorders, and (2) implement the mental health monitoring system in universities at the beginning and end of each semester. The results of the analysis show that, among the 423 respondents, the main problems faced were related to coursework, such as the thesis and academic assignments. Based on the scoring and categorization of the Beck Depression Inventory (BDI), 191 students experienced symptoms of depression: a total of 24.35%, or 103 students, experienced mild depression, 14.42% (61 students) had moderate depression, and 6.38% (27 students) experienced severe or extreme depression. Furthermore, as many as 80.38% (340 students) experienced anxiety in the high category. This article also reviews the student mental health service system on campus. Keywords: monitoring system, mental health, psychological problems, students
Procedia PDF Downloads 113
1862 STR and SNP Markers of Y-Chromosome Unveil Similarity between the Gene Pool of Kurds and Yezidis
Authors: M. Chukhryaeva, R. Skhalyakho, J. Kagazegeva, E. Pocheshkhova, L. Yepiskopossyan, O. Balanovsky, E. Balanovska
Abstract:
The Middle East has been a crossroads of different populations at different times. The Kurds are of particular interest in this region. Historical sources suggest that the origin of the Kurds is associated with the Medes. Therefore, it was especially interesting to compare the gene pool of the Kurds with that of other supposed descendants of the Medes, the Tats. The Yezidis are an ethno-confessional group of Kurds. Yezidism as a confessional teaching was formed in the XI-XIII centuries in Iraq. Yezidism has caused reproductive isolation of the Yezidis from neighboring populations for centuries; this isolation also helps to retain the Yezidi caste system. It is unknown how the history of the Yezidis affected their gene pool, because it has never been the object of research. We examined Y-chromosome variation in Yezidi and Kurdish males to understand their gene pools. We collected DNA samples from 90 Yezidi males and 24 Kurdish males together with their pedigrees. We performed Y-STR analysis of 17 loci in the samples collected (Yfiler system from Applied Biosystems) and analysis of 42 Y-SNPs by real-time PCR. We compared our data with published data from other Kurdish groups and from European, Caucasian, and West Asian populations. We found that the gene pool of the Yezidis contains haplogroups common in the Middle East (J-M172(xM67,M12) - 24%, E-M35(xM78) - 9%) and in South Western Asia (R-M124 - 8%), as well as a variant with a wide distribution area, R-M198(xM458) - 9%. The Kurdish gene pool has higher genetic diversity than that of the Yezidis; its dominant haplogroups are R-M198 - 20.3%, E-M35 - 9%, and J-M172 - 9%. Multidimensional scaling also shows that the Kurds and Yezidis are part of the same frontier Asian cluster, which, in addition, includes Armenians, Iranians, Turks, and Greeks. At the same time, the peoples of the Caucasus and Europe form isolated clusters that do not overlap with the Asian clusters. It is noteworthy that the Kurds from our study gravitate towards the Tats, which indicates that these two populations are most likely descendants of the ancient Median population. Multidimensional scaling also reveals similarity between the gene pools of the Yezidis and Kurds and those of Armenians and Iranians. The analysis of Yezidi pedigrees and their STR variability did not reveal a reliable connection between genetic diversity and the caste system, which indicates that the Yezidi caste system is a social division and not a biological one. Thus, we showed that, despite many years of isolation, the gene pool of the Yezidis retained a common layer with the gene pool of the Kurds; these populations share a common spectrum of haplogroups, but the Yezidis have lower genetic diversity than the Kurds. This study received primary support from RSF grant No. 16-36-00122 to MC and grant No. 16-06-00364 to EP. Keywords: gene pool, haplogroup, Kurds, SNP and STR markers, Yezidis
Procedia PDF Downloads 205
1861 Designing Form, Meanings, and Relationships for Future Industrial Products. Case Study Observation of PAD
Authors: Elisabetta Cianfanelli, Margherita Tufarelli, Paolo Pupparo
Abstract:
The dialectical mediation between desires and objects, or between mass production and consumption, continues to evolve over time. This relationship is influenced both by variable geometries of contexts that are distant from the mere design of product form and by aspects rooted in the very definition of industrial design. In particular, the overcoming of macro-areas of innovation in the technological, social, cultural, formal, and morphological spheres, supported by recent theories in critical and speculative design, seems to be moving further and further away from the design of the formal dimension of advanced products. The articulated fabric of theories and practices that feeds the definition of "hyperobjects", no longer objects, describes a common tension in all areas of design and production of industrial products. The latter are increasingly detached from the design of their form and meaning in mass production, thus losing the quality of products capable of social transformation. For years we have been living in a transformative moment as regards the design process in the definition of the industrial product. We are faced with a dichotomy in which there is, on the one hand, a reactionary aversion to the new techniques of industrial production and, on the other hand, a sterile adoption of the techniques of mass production that we can now consider traditional. This ambiguity becomes even more evident when we talk about industrial products and realize that we are moving further and further away from the concept of "form" as the synthesis of a design thought aimed at the aesthetic-emotional component as well as the functional one. The design of forms and their contents, as statutes of social acts, allows us to investigate the tension on mass production that crosses seasons, trends, technicalities, and sterile determinisms. Design culture has always determined the formal qualities of objects as a sum of aesthetic characteristics and functional and structural relationships that define a product as a coherent unit. The contribution proposes a reflection and a series of practical research experiences on the form of advanced products. This form is understood as a kaleidoscope of relationships: the search for an identity, the desire for democratization, and, between these two, the exploration of the aesthetic factor. The study of form also corresponds to the study of production processes, technological innovations, the definition of standards, distribution, advertising, and the vicissitudes of taste and lifestyles. Specifically, we will investigate how the genesis of new forms for new meanings introduces a change in the related innovative production techniques. It therefore becomes fundamental to investigate, through the reflections and case studies presented in the contribution, the new techniques of production and elaboration of product forms, as a new immanent and determining element within the design process. Keywords: industrial design, product advanced design, mass productions, new meanings
Procedia PDF Downloads 123
1860 Bone Mineral Density and Frequency of Low-Trauma Fractures in Ukrainian Women with Metabolic Syndrome
Authors: Vladyslav Povoroznyuk, Larysa Martynyuk, Iryna Syzonenko, Liliya Martynyuk
Abstract:
Osteoporosis is one of the important problems in postmenopausal women due to an increased risk of sudden and unexpected fractures. This study aimed to determine the connection between bone mineral density (BMD) and trabecular bone score (TBS) in Ukrainian women suffering from metabolic syndrome. A total of 566 menopausal women aged 50-79 years participated in the study and were divided into two groups: Group A included 336 women without obesity (BMI ≤ 29.9 kg/m²), and Group B included 230 women with metabolic syndrome (diagnosed according to the IDF criteria, 2005). Dual-energy X-ray absorptiometry was used to measure lumbar spine (L1-L4), femoral neck, total body and forearm BMD and bone quality indexes (the latter according to the Med-Imaps installation). Data were analyzed using Statistical Package 6.0. A significant increase in lumbar spine (L1-L4), femoral neck, total body and ultradistal radius BMD was found in women with metabolic syndrome compared to those without obesity (p < 0.001), both in the whole sample and in the groups aged 50-59 years, 60-69 years, and 70-79 years. TBS was significantly higher in non-obese women compared to metabolic syndrome patients aged 50-59 years and in the general sample (p < 0.05). Analysis showed a significant positive correlation between body mass index (BMI) and BMD at all measured sites, and a significant negative correlation between BMI and TBS (L1-L4). Despite the fact that BMD indexes were significantly higher in women with metabolic syndrome, the frequency of vertebral and non-vertebral fractures did not differ significantly between the groups of patients. Keywords: bone mineral density, trabecular bone score, metabolic syndrome, fracture
Procedia PDF Downloads 285
1859 Copper Price Prediction Model for Various Economic Situations
Authors: Haidy S. Ghali, Engy Serag, A. Samer Ezeldin
Abstract:
Copper is an essential raw material used in the construction industry. During the year 2021 and the first half of 2022, the global market suffered from significant fluctuation in copper raw material prices due to the aftermath of both the COVID-19 pandemic and the Russia-Ukraine war, which exposed its consumers to an unexpected financial risk. To this end, this paper aims to develop two ANN-LSTM price prediction models, using Python, that can forecast the average monthly copper prices traded on the London Metal Exchange; the first model is a multivariate model that forecasts the copper price for the next month, and the second is a univariate model that predicts the copper prices of the upcoming three months. Historical data of average monthly London Metal Exchange copper prices are collected from January 2009 to July 2022, and potential external factors are identified and employed in the multivariate model. These factors lie under three main categories: energy prices and economic indicators of the three major exporting countries of copper, depending on the data availability. Before developing the LSTM models, the collected external parameters are analyzed with respect to the copper prices using correlation and multicollinearity tests in R software; then, the parameters are further screened to select those that influence the copper prices. The two LSTM models are then developed, and the dataset is divided into training, validation, and testing sets. The results show that the performance of the 3-month prediction model is better than that of the 1-month prediction model, but still, both models can act as predicting tools for diverse economic situations. Keywords: copper prices, prediction model, neural network, time series forecasting
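A minimal sketch of a univariate LSTM forecaster of the kind described above, built with the Keras API: monthly prices are windowed into fixed-length input sequences and the network is trained to predict the following value. The window length, layer sizes and synthetic price series are hypothetical placeholders, not the paper's configuration or data.

```python
import numpy as np
from tensorflow import keras

# Hypothetical placeholder series standing in for monthly LME copper prices
prices = np.cumsum(np.random.default_rng(0).normal(0.0, 50.0, 160)) + 7000.0

# Build supervised samples: 12 past months -> next month
window = 12
X = np.array([prices[i:i + window] for i in range(len(prices) - window)])
y = prices[window:]
X = X[..., np.newaxis]          # shape (samples, timesteps, features)

# Small univariate LSTM; the layer sizes are illustrative only
model = keras.Sequential([
    keras.layers.LSTM(32),      # recurrent layer summarizing the input window
    keras.layers.Dense(1),      # next-month price
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[:-24], y[:-24], epochs=20, verbose=0)   # hold out the last 24 months

pred = model.predict(X[-24:], verbose=0).ravel()
print("MAE on held-out months:", np.mean(np.abs(pred - y[-24:])))
```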
Procedia PDF Downloads 115
1858 Maximizing Profit Using Optimal Control by Exploiting the Flexibility in Thermal Power Plants
Authors: Daud Mustafa Minhas, Raja Rehan Khalid, Georg Frey
Abstract:
Next generation power systems are equipped with abundantly available renewable energy resources (RES). During periods of low-cost renewable operation, the price of electricity drops significantly and sometimes even becomes negative. In such periods it would seem advisable not to operate traditional power plants (e.g., coal power plants) in order to reduce losses. In fact, this is not always a cost-effective solution, because these power plants exhibit shutdown and startup costs. Moreover, they require a certain time for shutdown and also need a sufficient pause before starting up again, increasing inefficiency in the whole power network. Hence, there is always a trade-off between avoiding negative electricity prices and the startup costs of power plants. To exploit this trade-off and to increase the profit of a power plant, two main contributions are made: 1) introducing retrofit technology for a state of the art coal power plant; 2) proposing an optimal control strategy for the power plant by exploiting different flexibility features. These flexibility features include improving the ramp rate of the power plant, reducing the startup time and lowering the minimum load. The control strategy is solved as a mixed integer linear program (MILP), ensuring an optimal solution for the profit maximization problem. Extensive comparisons are made considering pre and post-retrofit coal power plants having the same efficiencies under different electricity price scenarios. It is concluded that if the power plant must remain in the market (providing services), more flexibility translates into a direct economic advantage for the plant operator. Keywords: discrete optimization, power plant flexibility, profit maximization, unit commitment model
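A minimal sketch of the kind of MILP profit maximization described above, written with the PuLP modeling library: binary on/off decisions, minimum-load and ramp-rate limits, and a startup cost, with revenue driven by an hourly price series. The numerical parameters and the price vector are hypothetical placeholders, not the paper's model.

```python
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary

prices = [40, -5, -10, 30, 60, 55]   # hypothetical electricity prices, EUR/MWh
T = range(len(prices))
P_MIN, P_MAX = 150.0, 400.0          # MW, minimum load and capacity
RAMP = 120.0                         # MW/h ramp limit
FUEL_COST, STARTUP_COST = 25.0, 5000.0

prob = LpProblem("profit_maximization", LpMaximize)
on = {t: LpVariable(f"on_{t}", cat=LpBinary) for t in T}        # unit committed?
start = {t: LpVariable(f"start_{t}", cat=LpBinary) for t in T}  # startup event?
p = {t: LpVariable(f"p_{t}", lowBound=0) for t in T}            # power output, MW

# Objective: market revenue minus fuel and startup costs
prob += lpSum(prices[t] * p[t] - FUEL_COST * p[t] - STARTUP_COST * start[t] for t in T)

for t in T:
    prob += p[t] <= P_MAX * on[t]          # output only when committed
    prob += p[t] >= P_MIN * on[t]          # minimum stable load
    if t > 0:
        prob += p[t] - p[t - 1] <= RAMP    # ramp-up limit
        prob += p[t - 1] - p[t] <= RAMP    # ramp-down limit
        prob += start[t] >= on[t] - on[t - 1]   # startup indicator

prob.solve()
print([(t, on[t].value(), p[t].value()) for t in T])
```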
Procedia PDF Downloads 144
1857 Green Computing: Awareness and Practice in a University Information Technology Department
Authors: Samson Temitope Obafemi
Abstract:
The pervasiveness of ICTs in today's society paradoxically also calls for green computing. Green computing generally encompasses the study and practice of using Information and Communication Technology (ICT) resources effectively and efficiently without negatively affecting the environment. Since the emergence of this innovation, manufacturers and bodies such as Energy Star and the government of the United States of America have invested many resources in making green design, manufacture, and disposal of ICTs a reality. However, the level of adherence to green use of ICTs among users has been less accounted for, especially in developing ICT-consuming nations. This paper, therefore, focuses on examining the awareness and practice of green computing among academics and students of the Information Technology Department of Durban University of Technology, Durban, South Africa, in the context of green use of ICTs. This was achieved through a survey that involved the use of a questionnaire with four sections: (a) demography of respondents, (b) awareness of green computing, (c) practices of green computing, and (d) attitude towards greener computing. One hundred and fifty (150) questionnaires were distributed; one hundred and twenty-five (125) were completed and collected for data analysis. Out of the one hundred and twenty-five (125) respondents, twenty-five percent (25%) were academics while the remaining seventy-five percent (75%) were students. The results showed a higher level of awareness of green computing among academics when compared to the students. Green computing practices were also shown to be highly adhered to among academics only. However, interestingly, the students were found to be more enthusiastic towards greener computing in the future. The study, therefore, suggests that awareness of green computing should be further strengthened among students from a curriculum point of view in order to improve the greener use of ICTs in universities, especially in developing countries. Keywords: awareness, green computing, green use, information technology
Procedia PDF Downloads 195
1856 Multiscale Cohesive Zone Modeling of Composite Microstructure
Authors: Vincent Iacobellis, Kamran Behdinan
Abstract:
A finite element cohesive zone model is used to predict the temperature dependent material properties of a polyimide matrix composite with a unidirectional carbon fiber arrangement. The cohesive zone parameters were obtained from previous research involving an atomistic-to-continuum multiscale simulation of the fiber-matrix interface using the bridging cell multiscale method. The goal of the research was both to investigate the effect of temperature change on the composite behavior with respect to transverse loading and to validate the use of cohesive parameters obtained from atomistic-to-continuum multiscale modeling to predict fiber-matrix interfacial cracking. From the multiscale model, cohesive zone parameters (i.e., maximum traction and energy of separation) were obtained by modeling the interface between the coarse-grained polyimide matrix and graphite based carbon fiber. The cohesive parameters from this simulation were used in a cohesive zone model of the composite microstructure in order to predict the properties of the macroscale composite with respect to changes in temperature ranging from 21 ˚C to 316 ˚C. Good agreement was found between the microscale RUC model and experimental results for stress-strain response, stiffness, and material strength at low and high temperatures. Examination of the deformation of the composite through localized crack initiation at the fiber-matrix interface also agreed with experimental observations of similar phenomena. Overall, the cohesive zone model was shown to be effective at modeling the composite properties with respect to transverse loading and validated the use of cohesive zone parameters obtained from the multiscale simulation. Keywords: cohesive zone model, fiber-matrix interface, microscale damage, multiscale modeling
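The two cohesive zone parameters named above, a maximum traction and an energy of separation, fully define a bilinear traction-separation law once an initial stiffness is chosen. The sketch below evaluates such a law; the bilinear shape, the stiffness value and the parameter numbers are assumptions for illustration, not the constitutive law used in the paper.

```python
def bilinear_traction(delta, t_max, g_c, k0=1.0e6):
    """Bilinear cohesive law: traction (MPa) as a function of separation (mm).

    t_max : maximum traction (MPa)
    g_c   : energy of separation per unit area (N/mm), area under the curve
    k0    : assumed initial (penalty) stiffness, MPa/mm
    """
    delta_0 = t_max / k0            # separation at damage initiation
    delta_f = 2.0 * g_c / t_max     # separation at complete failure
    if delta <= delta_0:
        return k0 * delta           # linear elastic branch
    if delta < delta_f:             # linear softening branch
        return t_max * (delta_f - delta) / (delta_f - delta_0)
    return 0.0                      # fully debonded

# Hypothetical parameters: 50 MPa maximum traction, 0.1 N/mm energy of separation
for d in (0.0, 2.5e-5, 1e-3, 3e-3, 5e-3):
    print(d, bilinear_traction(d, t_max=50.0, g_c=0.1))
```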
Procedia PDF Downloads 488
1855 Structural Invertibility and Optimal Sensor Node Placement for Error and Input Reconstruction in Dynamic Systems
Authors: Maik Kschischo, Dominik Kahl, Philipp Wendland, Andreas Weber
Abstract:
Understanding and modelling of real-world complex dynamic systems in biology, engineering and other fields is often made difficult by incomplete knowledge about the interactions between system states and by unknown disturbances to the system. In fact, most real-world dynamic networks are open systems receiving unknown inputs from their environment. To understand a system and to estimate the state dynamics, these inputs need to be reconstructed from output measurements. Reconstructing the input of a dynamic system from its measured outputs is an ill-posed problem if only a limited number of states is directly measurable. A first requirement for solving this problem is the invertibility of the input-output map. In our work, we exploit the fact that invertibility of a dynamic system is a structural property, which depends only on the network topology. Therefore, it is possible to check for invertibility using a structural invertibility algorithm which counts the number of node disjoint paths linking inputs and outputs. The algorithm is efficient enough, even for large networks of up to a million nodes. To understand structural features influencing the invertibility of a complex dynamic network, we analyze synthetic and real networks using the structural invertibility algorithm. We find that invertibility largely depends on the degree distribution and that dense random networks are easier to invert than sparse inhomogeneous networks. We show that real networks are often very difficult to invert unless the sensor nodes are carefully chosen. To overcome this problem, we present a sensor node placement algorithm to achieve invertibility with a minimum set of measured states. This greedy algorithm is very fast and is also guaranteed to find an optimal sensor node set if one exists. Our results provide a practical approach to experimental design for open, dynamic systems. Since invertibility is a necessary condition for unknown input observers and data assimilation filters to work, it can be used as a preprocessing step to check whether these input reconstruction algorithms can be successful. If not, we can suggest additional measurements providing sufficient information for input reconstruction. Invertibility is also important for systems design and model building. Dynamic models are always incomplete, and synthetic systems act in an environment where they receive inputs or even attack signals from their exterior. Being able to monitor these inputs is an important design requirement, which can be achieved by our algorithms for invertibility analysis and sensor node placement. Keywords: data-driven dynamic systems, inversion of dynamic systems, observability, experimental design, sensor node placement
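A minimal sketch of the structural check described above: the number of vertex-disjoint paths from the input nodes to the measured output nodes can be obtained with a maximum-flow computation after splitting each node into an "in" and an "out" copy of capacity one. The toy network and node choices are hypothetical; invertibility additionally requires that this count reaches the number of unknown inputs.

```python
import networkx as nx

def num_node_disjoint_paths(G, sources, targets):
    """Count vertex-disjoint paths from a source set to a target set via max flow."""
    H = nx.DiGraph()
    for v in G.nodes:                       # split every node: v_in -> v_out, capacity 1
        H.add_edge((v, "in"), (v, "out"), capacity=1)
    for u, v in G.edges:                    # original edges connect u_out -> v_in
        H.add_edge((u, "out"), (v, "in"), capacity=1)
    for s in sources:                       # super-source feeds the input nodes
        H.add_edge("S", (s, "in"), capacity=1)
    for t in targets:                       # measured nodes drain into the super-sink
        H.add_edge((t, "out"), "T", capacity=1)
    flow_value, _ = nx.maximum_flow(H, "S", "T")
    return flow_value

# Hypothetical network topology with unknown inputs at nodes 1, 2 and sensors at 5, 6
G = nx.DiGraph([(1, 3), (2, 3), (2, 4), (3, 5), (4, 6), (3, 6)])
print(num_node_disjoint_paths(G, sources=[1, 2], targets=[5, 6]))  # expected: 2
```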
Procedia PDF Downloads 152
1854 Exploratory Tests of Crude Bacteriocins from Autochthonous Lactic Acid Bacteria against Food-Borne Pathogens and Spoilage Bacteria
Authors: M. Naimi, M. B. Khaled
Abstract:
The aim of the present work was to test the in vitro inhibition of food pathogens and spoilage bacteria by crude bacteriocins from autochthonous lactic acid bacteria. Thirty autochthonous lactic acid bacteria isolated previously, belonging to the genera Lactobacillus, Carnobacterium, Lactococcus, Vagococcus, Streptococcus, and Pediococcus, were screened by an agar spot test and a well diffusion assay against Gram-positive and Gram-negative harmful bacteria: Bacillus cereus, Bacillus subtilis ATCC 6633, Escherichia coli ATCC 8739, Salmonella typhimurium ATCC 14028, Staphylococcus aureus ATCC 6538, and Pseudomonas aeruginosa, under conditions meant to reduce the effects of lactic acid and hydrogen peroxide in order to select bacteria with high bacteriocinogenic potential. Furthermore, semiquantification of the crude bacteriocins and their heat sensitivity at different temperatures (80, 95, 110°C, and 121°C) were assessed. Another exploratory test concerning the response of St. aureus ATCC 6538 to the presence of crude bacteriocins was also carried out. It was observed by the agar spot test that fifteen candidates were active toward Gram-positive target strains. The secondary screening demonstrated an antagonistic activity directed only against St. aureus ATCC 6538, leading to the selection of five isolates: Lm14, Lm21, Lm23, Lm24, and Lm25, with larger inhibition zones compared to the others. The ANOVA statistical analysis reveals a small variation in repeatability: Lm21: 0.56%, Lm23: 0%, Lm25: 1.67%, Lm14: 1.88%, Lm24: 2.14%. Conversely, slight variation was reported in terms of inhibition diameters: 9.58 ± 0.40, 9.83 ± 0.46, 10.16 ± 0.24, 8.5 ± 0.40 and 10 mm for Lm21, Lm23, Lm25, Lm14 and Lm24 respectively, indicating that the observed potential showed a heterogeneous distribution (BMS = 0.383, WMS = 0.117). The calculated repeatability coefficient was 7.35%. As for the bacteriocin semiquantification, the five samples exhibited production of about 4.16 AU/ml for Lm21, Lm23 and Lm25 and 2.08 AU/ml for Lm14 and Lm24. Concerning heat sensitivity, the crude bacteriocins were fully insensitive to heat inactivation up to 121°C, preserving the same inhibition diameter. As to the kinetics of growth, the µmax showed reductions in pathogen load for Lm21, Lm23, Lm25, Lm14 and Lm24 of about 42.92%, 84.12%, 88.55%, 54.95% and 29.97% in the second trials. Inversely, the growth of this pathogen after five hours displayed differences of 79.45%, 12.64%, 11.82%, 87.88% and 85.66% in the second trials, compared to the control. This study showed potential inhibition of the growth of this food pathogen, suggesting the possibility of improving the hygienic quality of food. Keywords: exploratory test, lactic acid bacteria, crude bacteriocins, spoilage, pathogens
Procedia PDF Downloads 213
1853 Design and Modeling of Light Duty Trencher
Authors: Yegetaneh T. Dejenu, Delesa Kejela, Abdulak Alemu
Abstract:
From the earliest times of humankind, trenches were used for water to flow along and for soldiers to hide in during enemy attacks. Nowadays, due to civilization, the needs of human beings have become endless and living conditions have become sophisticated. The imbalance between needs and resources obliges people to find ways to manage this condition, and the attempt to use scarce resources in an efficient and effective way makes trenching a common practice in all countries of the world. A trencher is a piece of construction equipment used to dig trenches, especially for laying pipes or cables, installing drainage or irrigation, installing fencing, and in preparation for trench warfare. It is a machine that makes a ditch by cutting the soil and is effectively used in agricultural irrigation. The most common types of trencher are the wheel trencher, chain trencher, micro trencher, and portable trencher. In Ethiopia, people have been digging ditches for many purposes, and the tools they use are the pickaxe and shovel, while some use micro excavators. The adverse effects of using traditional tools are that the work is time and energy consuming, less productive, difficult, and requires more manpower. Hence, it is necessary to design and produce a low-priced and simple machine to narrow this gap. Our objective is to design and model a light duty trencher used for trenching the ground or soil to make ditches for agricultural irrigation, ground cabling, ground piping, and drainage systems. The designed machine trenches to a maximum depth of 1 meter, a width of 30 cm, and the required length. The working mechanism is fully hydraulic, and the 12.7 hp engine provides suitable power for the pump, which delivers 23 l/min at 1500 rpm to drive the hydraulic motors and actuators. Keywords: hydraulics, modelling, trenching, ditch
Procedia PDF Downloads 215
1852 A World Map of Seabed Sediment Based on 50 Years of Knowledge
Authors: T. Garlan, I. Gabelotaud, S. Lucas, E. Marchès
Abstract:
Production of a global sedimentological seabed map was initiated in 1995 to provide the necessary tool for searches for aircraft and boats lost at sea, to give sedimentary information for nautical charts, and to provide input data for acoustic propagation modelling. This original approach had already been initiated a century ago, when the French hydrographic service and the University of Nancy produced maps of the distribution of marine sediments off the French coasts and then sediment maps of the continental shelves of Europe and North America. The current map of ocean sediments presented here was initiated from UNESCO's general map of the deep ocean floor. This map was adapted using a unique sediment classification to present all types of sediments: from beaches to the deep seabed and from glacial deposits to tropical sediments. In order to allow good visualization and to be adapted to the different applications, only the granularity of sediments is represented. Published seabed maps are studied and, if they are of interest, the nature of the seabed is extracted from them, the sediment classification is transcribed, and the resulting map is integrated into the world map. Data also come from interpretations of Multibeam Echo Sounder (MES) imagery from large hydrographic surveys of the deep ocean. These allow very high-quality mapping of areas that until then were represented as homogeneous. The third and principal source of data comes from the integration of regional maps produced specifically for this project. These regional maps are produced using all the bathymetric and sedimentary data of a region. This step makes it possible to produce a regional synthesis map, with generalizations made in the case of over-precise data. Eighty-six regional maps of the Atlantic Ocean, the Mediterranean Sea, and the Indian Ocean have been produced and integrated into the world sedimentary map. This work is ongoing and yields a digital version every two years, with the integration of new maps. This article describes the choices made in terms of sediment classification, the scale of source data and the zonation of the variability of quality. This map is the final step in a system comprising the Shom Sedimentary Database, enriched by more than one million point and surface data items, and four series of coastal seabed maps at 1:10,000, 1:50,000, 1:200,000 and 1:1,000,000. This step-by-step approach makes it possible to take into account the progress in knowledge made in the field of seabed characterization during the last decades. Thus, the arrival of new classification systems for the seafloor has improved the recent seabed maps, and the compilation of these new maps with those previously published allows a gradual enrichment of the world sedimentary map. However, there is still a lot of work to do to enhance some regions, which are still based on data acquired more than half a century ago. Keywords: marine sedimentology, seabed map, sediment classification, world ocean
Procedia PDF Downloads 232
1851 Role of Platelet Volume Indices in Diabetes Related Vascular Angiopathies
Authors: Mitakshara Sharma, S. K. Nema, Sanjeev Narang
Abstract:
Diabetes mellitus (DM) is a group of metabolic disorders characterized by metabolic abnormalities, chronic hyperglycaemia, and long-term macrovascular and microvascular complications. The vascular complications are due to platelet hyperactivity and dysfunction, increased inflammation, altered coagulation, and endothelial dysfunction. A large proportion of patients with type II DM suffer from preventable vascular angiopathies, and there is a need to develop risk-factor modifications and interventions to reduce the impact of these complications. The complications are attributed to platelet activation, which is recognised by an increase in platelet volume indices (PVI), including mean platelet volume (MPV) and platelet distribution width (PDW). The current study is a prospective analytical study conducted over 2 years. Of 1100 individuals, 930 fulfilled the inclusion criteria and were segregated into three groups on the basis of glycosylated haemoglobin (HbA1c): (a) diabetic, (b) non-diabetic, and (c) impaired fasting glucose (IFG), with 300 individuals in each of the IFG and non-diabetic groups and 330 individuals in the diabetic group. The diabetic group was further divided into two groups on the basis of the presence or absence of known diabetes-related vascular complications. Samples for HbA1c and PVI were collected using ethylene diamine tetraacetic acid (EDTA) as anticoagulant and processed on a SYSMEX X-800i autoanalyser. The study revealed a gradual increase in PVI from non-diabetics to IFG to diabetics, with PVI markedly increased in diabetic patients. MPV and PDW of diabetics, IFG subjects, and non-diabetics were (17.60 ± 2.04) fl, (11.76 ± 0.73) fl, (9.93 ± 0.64) fl and (19.17 ± 1.48) fl, (15.49 ± 0.67) fl, (10.59 ± 0.67) fl respectively, with a significant p value of 0.00 and significant positive correlations (MPV–HbA1c r = 0.951; PDW–HbA1c r = 0.875). MPV and PDW of subjects with diabetes-related complications were higher than those of subjects without them and were (17.51 ± 0.39) fl & (15.14 ± 1.04) fl and (20.09 ± 0.98) fl & (18.96 ± 0.83) fl respectively, with a significant p value of 0.00. There was a significant positive correlation between PVI and the duration of diabetes across the groups (MPV–HbA1c r = 0.951; PDW–HbA1c r = 0.875). However, a significant negative correlation was found between glycaemic levels and total platelet count (PC–HbA1c r = -0.164). This is a multi-parameter, comprehensive study with an adequately powered design. It can be concluded that PVI are extremely useful and important indicators of impending vascular complications in all patients with deranged glycaemic control. The introduction of automated cell counters has made PVI available as routine parameters. PVI are a useful means of identifying larger and more active platelets, which play an important role in the development of the micro- and macroangiopathic complications of diabetes that lead to mortality and morbidity. PVI can be used as cost-effective markers to predict and prevent impending vascular events in patients with diabetes mellitus, especially in developing countries like India. If incorporated into protocols for the management of diabetes, PVI could revolutionize care and curtail the ever-increasing cost of patient management. Keywords: diabetes, IFG, HbA1C, MPV, PDW, PVI
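The reported associations are Pearson correlations between platelet indices and HbA1c (e.g., MPV–HbA1c r = 0.951). A minimal sketch of how such a coefficient is obtained is given below; the numbers in the arrays are made-up illustrative values, not data from the study.

```python
# Hedged sketch: Pearson correlation between MPV and HbA1c, the kind of
# coefficient reported in the study (e.g., r = 0.951). The data below are
# fabricated placeholders used only to show the computation.
from scipy import stats

hba1c = [5.2, 5.6, 6.1, 6.4, 7.0, 7.8, 8.5, 9.2, 10.1, 11.0]          # %
mpv   = [9.8, 10.1, 11.5, 11.9, 13.2, 14.6, 15.8, 16.5, 17.1, 17.9]   # fl

r, p_value = stats.pearsonr(hba1c, mpv)
print(f"MPV-HbA1c correlation: r = {r:.3f}, p = {p_value:.4f}")
```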
Procedia PDF Downloads 259
1850 An Experimental Investigation on Productivity and Performance of an Improved Design of Basin Type Solar Still
Authors: Mahmoud S. El-Sebaey, Asko Ellman, Ahmed Hegazy, Tarek Ghonim
Abstract:
Due to population growth, the need for drinkable, healthy water has greatly increased. Consequently, and since conventional water sources are limited, researchers have turned to the oceans and seas to obtain fresh drinkable water by thermal distillation. The current work is dedicated to the design and fabrication of a modified solar still model, as well as a conventional solar still for comparison. The modified still is a single-slope, double-basin solar still. It consists of a lower basin, with dimensions of 1000 mm x 1000 mm, which contains the seawater, and a top basin made of 4 mm acrylic, which was temporarily rested on supporting strips permanently fixed to the side walls. Ten equally spaced vertical glass strips of 50 mm height and 3 mm thickness were provided in the upper basin to keep the water stagnant. Window glass of 3 mm thickness, inclined at 23°, was used as the transparent cover at the top of the still. Furthermore, the performance evaluation and comparison of these two models in converting salty seawater into drinkable fresh water are introduced, analyzed, and discussed. The experiments were performed from June to July 2018 at seawater depths of 2, 3, 4 and 5 cm. Additionally, the solar still models were operated simultaneously under the same climatic conditions to analyze the influence of the modifications on the freshwater output. It can be concluded that the modified design of the double-basin single-slope solar still shows the maximum freshwater output at all the water depths tested. The results showed daily productivities of 2.9 and 1.8 dm³/m² day for the modified and conventional solar stills respectively, indicating an increase of about 60% in fresh water production. Keywords: freshwater output, solar still, solar energy, thermal desalination
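The quoted 60% gain follows directly from the two daily yields; a short sketch of the arithmetic, using only the figures stated in the abstract, is shown below.

```python
# Hedged sketch: percentage increase in daily freshwater productivity,
# computed from the yields quoted in the abstract.
modified_yield = 2.9      # dm^3 / (m^2 * day), modified double-basin still
conventional_yield = 1.8  # dm^3 / (m^2 * day), conventional still

increase_pct = (modified_yield - conventional_yield) / conventional_yield * 100
print(f"Increase in productivity: {increase_pct:.0f}%")   # ~61%, quoted as ~60%
```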
Procedia PDF Downloads 136
1849 Evaluation of Environmental Disclosures on Financial Performance of Quoted Industrial Goods Manufacturing Sectors in Nigeria (2011 – 2020)
Authors: C. C. Chima, C. J. M. Anumaka
Abstract:
This study evaluates the effect of environmental disclosures on the financial performance of quoted industrial goods manufacturing sectors in Nigeria. The study employed a quasi-experimental research design to establish the relationship between the environmental disclosure index and financial performance indices (return on assets, ROA; return on equity, ROE; and earnings per share, EPS). A purposive sampling technique was employed to select five (5) industrial goods manufacturing sectors quoted on the Nigerian Stock Exchange. Secondary data covering the 2011 to 2020 financial years were extracted from the annual reports of the study sectors using content analysis. The data were analyzed using SPSS version 23. The panel ordinary least squares (OLS) regression method was employed to estimate the unknown parameters of the study's regression model, after diagnostic and preliminary tests were conducted to ascertain that the data set is reliable and not misleading. The empirical results show an insignificant negative relationship between the environmental disclosure index (EDI) and the performance indices (ROA, ROE, and EPS) of the industrial goods manufacturing sectors in Nigeria. The study recommends that only relevant information which increases the performance indices should appear on the disclosure checklist; that environmental disclosure practices should be country-specific; and that company executives in Nigeria should increase and monitor the level of investment (resources, time, and energy) to ensure that environmental disclosure has a significant impact on financial performance. Keywords: earnings per share, environmental disclosures, return on assets, return on equity
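The estimation step described above is a panel OLS of a performance measure on the disclosure index. A minimal sketch of that kind of regression, on a hypothetical firm-year data frame with entity fixed effects, is given below; the variable names and data are illustrative only, and the study itself used SPSS rather than Python.

```python
# Hedged sketch: panel OLS of return on assets (ROA) on an environmental
# disclosure index (EDI) with firm fixed effects (least-squares dummy
# variables). The data frame is fabricated for illustration.
import pandas as pd
import statsmodels.formula.api as smf

data = pd.DataFrame({
    "firm": ["A", "A", "A", "B", "B", "B", "C", "C", "C"],
    "year": [2018, 2019, 2020] * 3,
    "edi":  [0.35, 0.40, 0.42, 0.20, 0.22, 0.25, 0.55, 0.58, 0.60],
    "roa":  [0.08, 0.07, 0.09, 0.05, 0.04, 0.05, 0.10, 0.11, 0.10],
})

# Fixed-effects specification: ROA_it = a_i + b * EDI_it + e_it
model = smf.ols("roa ~ edi + C(firm)", data=data).fit()
print("EDI coefficient:", model.params["edi"], "p-value:", model.pvalues["edi"])
```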
Procedia PDF Downloads 86
1848 Simulating the Effect of Chlorine on Dynamic of Main Aquatic Species in Urban Lake with a Mini System Dynamic Model
Authors: Zhiqiang Yan, Chen Fan, Beicheng Xia
Abstract:
Urban lakes play an invaluable role in urban water systems, for example in flood control, landscape, recreation, and energy utilization, and have suffered from severe eutrophication over the past few years. To investigate the ecological response of the main aquatic species and of system stability to chlorine interference in shallow urban lakes, a mini system dynamics model, based on the competition and predation among the main aquatic species and the circulation of total phosphorus (TP), was developed. The main species of submerged macrophyte, phytoplankton, zooplankton, and benthos, together with TP in the water and sediment, were simulated as model variables under chlorine interference, whose effect was described by an attenuation equation. The model was validated against data collected in the Lotus Lake in Guangzhou from October 1, 2015 to January 31, 2016. Furthermore, eco-exergy was used to analyze the change in complexity of the shallow urban lake. The results showed that the correlation coefficients between observed and simulated values of all components were significant. Chlorine showed a significant inhibitory effect on Microcystis aeruginosa, Brachionus plicatilis, Diaphanosoma brachyurum Liévin, and Mesocyclops leuckarti (Claus). The outbreak of Spirogyra spp. inhibited the growth of Vallisneria natans (Lour.) Hara and caused a gradual decrease of eco-exergy, reflecting the breakdown of the ecosystem's internal equilibria. It was concluded that the study gives important insight into using chlorine to achieve eutrophication control and into understanding the underlying mechanisms. Keywords: system dynamic model, urban lake, chlorine, eco-exergy
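The abstract describes a small set of coupled state variables (phytoplankton, zooplankton, macrophytes, TP) perturbed by a chlorine term that decays according to an attenuation equation. The sketch below shows the general shape of such a model as coupled ODEs; the equations, parameter values, and the exponential attenuation form are assumptions for illustration, not the calibrated model from the study.

```python
# Hedged sketch: a toy "mini system dynamics" lake model with a decaying
# chlorine pulse inhibiting phytoplankton growth. The structure and the
# parameter values are illustrative assumptions, not the study's model.
import numpy as np
from scipy.integrate import solve_ivp

def chlorine(t, cl0=1.0, k=0.15):
    """Assumed exponential attenuation of the chlorine dose over time."""
    return cl0 * np.exp(-k * t)

def lake(t, y, r_p=0.8, g_z=0.4, m_z=0.1, inhib=0.6, K=10.0):
    phyto, zoo = y
    cl = chlorine(t)
    # Logistic phytoplankton growth, reduced by chlorine; grazing by zooplankton.
    dphyto = r_p * phyto * (1 - phyto / K) * (1 - inhib * cl) - g_z * phyto * zoo
    dzoo = 0.3 * g_z * phyto * zoo - m_z * zoo
    return [dphyto, dzoo]

sol = solve_ivp(lake, (0, 120), [2.0, 0.5], t_eval=np.linspace(0, 120, 241))
print("Final phytoplankton biomass:", round(sol.y[0, -1], 3))
print("Final zooplankton biomass:  ", round(sol.y[1, -1], 3))
```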
Procedia PDF Downloads 209
1847 Computational Investigation of V599 Mutations of BRAF Protein and Its Control over the Therapeutic Outcome under the Malignant Condition
Authors: Mayank, Navneet Kaur, Narinder Singh
Abstract:
The V599 mutations in the BRAF protein are extremely oncogenic and are responsible for countless malignant conditions. Along with the wild type, V599E, V599D, and V599R are the important mutated variants of the BRAF protein. BRAF-inhibitory anticancer agents are continuously being developed, and sorafenib is a BRAF inhibitor in clinical use. The crystal structures of sorafenib bound to the wild type and to the V599 mutant are known and show a similar interaction pattern in both cases. In both cases, the mutated 599th residue is also found not to interact directly with the co-crystallized sorafenib molecule. However, the IC50 value of sorafenib differs markedly between the two, i.e., 22 nmol/L for the wild type and 38 nmol/L for the V599E protein. Molecular docking and MMGBSA binding energy results also revealed a significant difference in the binding pattern of sorafenib in the two cases. Therefore, to explore the role of the distinctively situated 599th residue, we conducted comprehensive computational studies. The molecular dynamics simulations, residue interaction network (RIN) analysis, and residue correlation study revealed the importance of the 599th residue for the therapeutic outcome and the overall dynamics of the BRAF protein. Therefore, although the 599th residue is located far from the ligand-binding cavity of BRAF, it still exerts exceptional control over the overall functional outcome of the protein. The insights obtained here are important and may guide the design of ideal BRAF-inhibitory anticancer molecules. Keywords: BRAF, oncogenic, sorafenib, computational studies
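The residue correlation analysis mentioned above is typically done by building a dynamic cross-correlation matrix over Cα fluctuations extracted from the MD trajectory. A minimal numpy sketch of that computation is shown below; the trajectory array is randomly generated here purely to demonstrate the formula, and the frame and residue counts are arbitrary.

```python
# Hedged sketch: dynamic cross-correlation matrix (DCCM) of C-alpha
# fluctuations, C_ij = <dr_i . dr_j> / sqrt(<|dr_i|^2> <|dr_j|^2>).
# A random trajectory stands in for real MD coordinates.
import numpy as np

rng = np.random.default_rng(0)
n_frames, n_residues = 500, 60                        # arbitrary demo sizes
coords = rng.normal(size=(n_frames, n_residues, 3))   # placeholder C-alpha positions

fluct = coords - coords.mean(axis=0)                  # displacement from mean structure
# <dr_i . dr_j> averaged over frames, for every residue pair
cov = np.einsum("fid,fjd->ij", fluct, fluct) / n_frames
norm = np.sqrt(np.outer(np.diag(cov), np.diag(cov)))
dccm = cov / norm                                     # values in [-1, 1]

# e.g., correlation of one (hypothetical) residue with the first few others
print(dccm.shape, dccm[0, :5].round(2))
```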
Procedia PDF Downloads 116
1846 Impact of Terrorism as an Asymmetrical Threat on the State's Conventional Security Forces
Authors: Igor Pejic
Abstract:
The main focus of this research is on analyzing the correlative links between terrorism as an asymmetrical threat and the consequences it leaves on conventional security forces. The methodology includes qualitative research methods focusing on comparative analysis of books, scientific papers, documents, and other sources in order to deduce, explore, and formulate the results of the research. With the coming of the 21st century and a rising multi-polar world, new threats quickly emerged. The realist approach in international relations holds that relations among nations are in a constant state of anarchy, since there are no definitive rules and the distribution of power varies widely. International relations are further characterized by egoistic and self-oriented human nature, anarchy or the absence of a higher government, concern for security, and a lack of morality. The asymmetry of power is also reflected in countries' security capabilities and their ability to project power. With the coming of the new millennium and the rising multi-polar world order, the asymmetry of power can also be added as an important trait of global society, which has consequently brought new threats. Among various others, terrorism is probably the best-known, most established and most widespread asymmetric threat. In today's global political arena, terrorism is used by state and non-state actors to fulfil their political agendas. Terrorism is used as an all-inclusive tool for regime change, subversion, or revolution. Although the nature of terrorist groups is somewhat inconsistent, terrorism as a security and social phenomenon has one constant, which is reflected in its political dimension. The state's security apparatus, embodied in the form of conventional armed forces, is now becoming fragile, unable to tackle new threats, and to a certain extent outdated. Conventional security forces were designed to defend against or engage an exterior threat that is more or less symmetric and visible. Terrorism as an asymmetrical threat, on the other hand, is part of hybrid, special, or asymmetric warfare, in which specialized units, institutions, or facilities represent the primary pillars of security. In today's global society, terrorism is probably the most acute problem, one that can paralyze entire countries and their political systems. This problem, however, cannot be engaged on an open field of battle; rather, it requires a different approach, in which conventional armed forces cannot be used traditionally and their role must be adjusted. The research tries to shed light on the phenomenon of modern-day terrorism and to demonstrate its correlation with the state's conventional armed forces. States are obliged to adjust their security apparatus to the new reality of global society and to terrorism as an asymmetrical threat, a side-product of an unbalanced world. Keywords: asymmetrical warfare, conventional forces, security, terrorism
Procedia PDF Downloads 264