Search results for: applied pharmacology
1258 A Comparative Study of Optimization Techniques and Models for Forecasting Dengue Fever
Abstract:
Dengue is a serious public health issue that imposes significant annual economic and welfare burdens on nations. However, enhanced optimization techniques and quantitative modeling approaches can predict the incidence of dengue. By advocating a data-driven approach, public health officials can make informed decisions, thereby improving the overall effectiveness of efforts to control sudden disease outbreaks. This study uses environmental data from two U.S. Federal Government agencies: the National Oceanic and Atmospheric Administration and the Centers for Disease Control and Prevention. Based on environmental data describing changes in temperature, precipitation, vegetation, and other factors known to affect dengue incidence, several predictive models are constructed that use different machine learning methods to estimate weekly dengue cases. The first step involves preparing the data, including handling outliers and missing values, to make sure the data is ready for subsequent processing and the creation of an accurate forecasting model. In the second phase, multiple feature selection procedures are applied using various machine learning models and optimization techniques. In the third phase of the research, machine learning models such as the Huber Regressor, Support Vector Machine, Gradient Boosting Regressor (GBR), and Support Vector Regressor (SVR) are compared with several optimization techniques for feature selection, such as Harmony Search and the Genetic Algorithm. In the fourth stage, model performance is evaluated using Mean Square Error (MSE), Mean Absolute Error (MAE), and Root Mean Square Error (RMSE). The goal is to select the optimization strategy that yields the fewest errors, the lowest cost, the greatest productivity, or the best attainable results. Optimization is widely employed in a variety of industries, including engineering, science, management, mathematics, finance, and medicine.
An effective optimization method based on Harmony Search and an integrated Genetic Algorithm is introduced for input feature selection, and it yields a significant improvement in the model's predictive accuracy. The predictive models built on the Huber Regressor perform best for both optimization and prediction.
Keywords: deep learning model, dengue fever, prediction, optimization
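The four-stage pipeline described in this abstract can be sketched in a few lines. The sketch below uses synthetic stand-in data (the NOAA/CDC datasets are not included in this listing) and scikit-learn defaults rather than the authors' tuned configurations; it only illustrates comparing the named regressors on MAE, MSE, and RMSE:

```python
import numpy as np
from sklearn.linear_model import HuberRegressor
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(0)
# Synthetic stand-ins for weekly environmental features and case counts
X = rng.normal(size=(200, 4))  # e.g. temperature, precipitation, vegetation, ...
y = X @ np.array([3.0, -2.0, 1.5, 0.5]) + rng.normal(scale=0.5, size=200)

scores = {}
for name, model in [("Huber", HuberRegressor()),
                    ("GBR", GradientBoostingRegressor(random_state=0)),
                    ("SVR", SVR())]:
    # Simple holdout split: fit on the first 150 weeks, score on the rest
    pred = model.fit(X[:150], y[:150]).predict(X[150:])
    mae = mean_absolute_error(y[150:], pred)
    mse = mean_squared_error(y[150:], pred)
    scores[name] = {"MAE": mae, "MSE": mse, "RMSE": mse ** 0.5}

for name, s in scores.items():
    print(name, {k: round(v, 3) for k, v in s.items()})
```

In a fuller version of this comparison, the holdout split would be replaced by time-series cross-validation, and the feature subset fed to each model would come from the Harmony Search / Genetic Algorithm stage.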
Procedia PDF Downloads 65
1257 Using Optimal Cultivation Strategies for Enhanced Biomass and Lipid Production of an Indigenous Thraustochytrium sp. BM2
Authors: Hsin-Yueh Chang, Pin-Chen Liao, Jo-Shu Chang, Chun-Yen Chen
Abstract:
Biofuel has drawn much attention as a potential substitute for fossil fuels. However, biodiesel from waste oil, oil crops, or other oil sources can satisfy only part of the existing demand for transportation. Being clean, green, and viable for mass production, microalgae are regarded as a biodiesel feedstock that could enable a low-carbon and sustainable society. In particular, Thraustochytrium sp. BM2, an indigenous heterotrophic microalga, can metabolize glycerol to produce lipids. Hence, it is considered a promising microalgae-based oil source for biodiesel production and other applications. The aims of this study were to optimize the culture pH, scale up the process, assess the feasibility of producing microalgal lipid from crude glycerol, and apply operation strategies based on the optimal shake-flask results in a 5 L stirred-tank fermenter to further enhance lipid productivity. Cultivation of Thraustochytrium sp. BM2 without pH control resulted in the highest lipid production of 3944 mg/L and biomass production of 4.85 g/L. Next, when the initial glycerol and corn steep liquor (CSL) amounts were increased five-fold (to 50 g and 62.5 g, respectively), the overall lipid productivity reached 124 mg/L/h. However, when crude glycerol was used as the sole carbon source, its direct addition inhibited culture growth. Therefore, acid and metal salt pretreatment methods were used to purify the crude glycerol. Crude glycerol pretreated with acid and CaCl₂ gave the greatest overall lipid productivity, 131 mg/L/h, when used as a carbon source and proved to be a better substitute for pure glycerol in the Thraustochytrium sp. BM2 cultivation medium. Engineering operation strategies such as fed-batch and semi-batch operation were applied in the cultivation of Thraustochytrium sp. BM2 to improve lipid production.
In the fed-batch cultivation, 132.60 g of biomass and 69.15 g of lipid were harvested. The lipid yield of 0.20 g/g glycerol was the same as in batch cultivation, although the overall lipid productivity was lower, at 107 mg/L/h. In the semi-batch cultivation, the overall lipid productivity reached 158 mg/L/h owing to the shorter cultivation time, and the harvested biomass and lipid reached 232.62 g and 126.61 g, respectively. The lipid yield improved from 0.20 to 0.24 g/g glycerol. In addition, the product costs of the three operation strategies were calculated. The lowest product cost, 12.42 NTD/g lipid, was obtained with the semi-batch operation strategy, a 33% reduction compared with the batch operation strategy.
Keywords: heterotrophic microalga Thraustochytrium sp. BM2, microalgal lipid, crude glycerol, fermentation strategy, biodiesel
Procedia PDF Downloads 148
1256 BSL-2/BSL-3 Laboratory for Diagnosis of Pathogens on the Colombia-Ecuador Border Region: A Post-COVID Commitment to Public Health
Authors: Anderson Rocha-Buelvas, Jaqueline Mena Huertas, Edith Burbano Rosero, Arsenio Hidalgo Troya, Mauricio Casas Cruz
Abstract:
COVID-19 was a disruptive pandemic for the public health and economic systems of entire countries, including Colombia. Nariño Department, in the southwest of the country, is notable for bordering Ecuador and constantly faces demographic movements that affect the spread of infections between the two countries. In Nariño, early routine diagnosis of SARS-CoV-2, which can be handled at BSL-2, has shaped the transmission dynamics of COVID-19. However, new emerging and re-emerging viruses with the biological flexibility of Risk Group 3 agents can take advantage of epidemiological opportunities, creating the need to increase clinical diagnostic capacity, mainly in border regions between countries. The overall objective of this project was to assure the quality of the analytical process in the diagnosis of high-biological-risk pathogens in Nariño by building a laboratory that includes biosafety level (BSL)-2 and BSL-3 containment zones. The delimitation of zones was carried out according to the Verification Tool of the National Health Institute of Colombia and following the standard requirements for the competence of testing and calibration laboratories of the International Organization for Standardization. This is achieved through harmonization of methods and equipment for effective and durable diagnosis of the large-scale spread of highly pathogenic microorganisms, employing negative-pressure containment and UV systems under a finely controlled electrical system, with PCR systems as new diagnostic tools that increase laboratory capacity. Protection in the BSL-3 zones will separate the handling of potentially infectious aerosols within the laboratory from the community and the environment. It will also allow the handling and inactivation of samples with suspected pathogens and the extraction of molecular material from them, enabling research on high-risk pathogens such as SARS-CoV-2, influenza, respiratory syncytial virus, and malaria, among others.
The diagnosis of these pathogens will be articulated across the spectrum of basic, applied, and translational research, with the laboratory expected to receive about 60 samples daily. It is expected that this project will be integrated with the health policies of neighboring countries to increase research capacity.
Keywords: medical laboratory science, SARS-CoV-2, public health surveillance, Colombia
Procedia PDF Downloads 91
1255 Rapid Flood Damage Assessment of Population and Crops Using Remotely Sensed Data
Authors: Urooj Saeed, Sajid Rashid Ahmad, Iqra Khalid, Sahar Mirza, Imtiaz Younas
Abstract:
Pakistan, a flood-prone country, has experienced some of its worst floods in the recent past, causing extensive damage to urban and rural areas through loss of life and damage to infrastructure and agricultural fields. The country's poor flood management system has heightened the risk of damage as floods increase in frequency and magnitude as a consequence of climate change, affecting the national economy directly and indirectly. To meet the needs of flood emergencies, this paper focuses on a remotely-sensed-data-based approach for rapid mapping and monitoring of flood extent and flood damage, so that information can be disseminated quickly from the local to the national level. In this research study, the spatial extent of the flooding caused by the heavy rains of 2014 was mapped using spaceborne data to assess crop damage and the affected population in sixteen districts of Punjab. For this purpose, the Moderate Resolution Imaging Spectroradiometer (MODIS) was used to map the flood extent daily by means of the Normalised Difference Water Index (NDWI). The peak flood extent was integrated with LandScan 2014, a 1 km × 1 km grid-based population dataset, to calculate the affected population in the flood hazard zone. It was estimated that the floods covered an area of 16,870 square kilometers, with 3.0 million people affected. Moreover, to assess the flood damage, Object-Based Image Analysis (OBIA) aided by spectral signatures was applied to a Landsat image to obtain thematic layers of healthy (0.54 million acres) and damaged (0.43 million acres) crops. The study finds that the population of Jhang district (28% of its 2.5 million population) was affected the most, whereas, in terms of crops, Jhang and Muzaffargarh were the most heavily damaged districts in the 2014 Punjab floods.
This study was completed within 24 hours of the peak flood time and demonstrates an effective methodology for rapid assessment of flood damage.
Keywords: flood hazard, space borne data, object based image analysis, rapid damage assessment
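The NDWI-plus-LandScan overlay described in this abstract reduces to a simple raster calculation. The sketch below uses made-up reflectance and population grids (not the study's MODIS or LandScan data) purely to show the mechanics: compute NDWI from green and near-infrared bands, threshold it to a flood mask, and sum the population under the mask:

```python
import numpy as np

# Toy green and NIR reflectance grids standing in for two MODIS bands
green = np.array([[0.30, 0.10],
                  [0.40, 0.05]])
nir   = np.array([[0.10, 0.30],
                  [0.10, 0.20]])

# NDWI = (green - NIR) / (green + NIR); water pixels have NDWI > 0
ndwi = (green - nir) / (green + nir)
flood_mask = ndwi > 0.0

# Overlay the mask on a LandScan-style gridded population layer
population = np.array([[100, 200],
                       [ 50,  25]])
affected = int(population[flood_mask].sum())
print(affected)  # -> 150: people in the two flooded pixels
```

In practice the two rasters must first be resampled to a common grid and projection, and the threshold is tuned per scene rather than fixed at zero.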
Procedia PDF Downloads 328
1254 Comparison of Two Strategies in Thoracoscopic Ablation of Atrial Fibrillation
Authors: Alexander Zotov, Ilkin Osmanov, Emil Sakharov, Oleg Shelest, Aleksander Troitskiy, Robert Khabazov
Abstract:
Objective: Thoracoscopic surgical ablation of atrial fibrillation (AF) can be performed with two technologies: the AtriCure device (bipolar, non-irrigated, non-clamping) and the Medtronic device (bipolar, irrigated, clamping). The study presents a comparative analysis of the clinical outcomes of the two strategies in thoracoscopic ablation of AF using the AtriCure vs. Medtronic devices. Methods: In a two-center study, 123 patients underwent thoracoscopic ablation of AF between 2016 and 2020. Patients were divided into two groups: those treated with the AtriCure device (N=63, group 1) and those treated with the Medtronic device (N=60, group 2). Patients were comparable in age, gender, and initial severity of the condition. Group 1 was 65% male with a median age of 57 years, while group 2 was 75% male with a median age of 60 years. Group 1 included patients with paroxysmal (14.3%), persistent (68.3%), and long-standing persistent (17.5%) AF; in group 2, the corresponding figures were 13.3%, 13.3%, and 73.3%. Median ejection fraction and indexed left atrial volume were 63% and 40.6 ml/m² in group 1, and 56% and 40.5 ml/m² in group 2. In addition, group 1 included 39.7% of patients with chronic heart failure (NYHA Class II) and 4.8% with chronic heart failure (NYHA Class III), versus 45% and 6.7% in group 2. Follow-up consisted of laboratory tests, chest X-ray, ECG, 24-hour Holter monitoring, and cardiopulmonary exercise testing. Duration of freedom from AF, late mortality, and the prevalence of cerebrovascular events were compared between the two groups. Results: Exit block was achieved in all patients. According to the Clavien-Dindo classification of surgical complications, the fraction of adverse events was 14.3% in group 1 and 16.7% in group 2.
The mean follow-up period was 50.4 (31.8; 64.8) months in group 1 and 30.5 (14.1; 37.5) months in group 2 (P=0.0001). In group 1, total freedom from AF was achieved in 73.3% of patients, 25% of whom required additional antiarrhythmic drug (AAD) therapy or catheter ablation (CA); in group 2, the figures were 90% and 18.3%, respectively (for total freedom from AF, P<0.02). At follow-up, the late mortality rate was 4.8% in group 1, with no fatal events in group 2. The prevalence of cerebrovascular events was higher in group 1 than in group 2 (6.7% vs. 1.7%). Conclusions: Despite the relatively shorter follow-up of group 2, the strategy using the Medtronic device showed quite encouraging results. Further research is needed to evaluate the effectiveness of this strategy over the long term.
Keywords: atrial fibrillation, clamping, ablation, thoracoscopic surgery
Procedia PDF Downloads 110
1253 An Exploratory Approach of the Latin American Migrants’ Urban Space Transformation of Antofagasta City, Chile
Authors: Carolina Arriagada, Yasna Contreras
Abstract:
Since the mid-2000s, migratory flows of Latin American migrants to Chile have increased constantly. Two reasons explain why Chile is seen as an attractive destination. On the one hand, traditional centres of migrant attraction such as the United States and Europe have begun to close their borders. On the other hand, Chile exhibits relative economic and political stability, which offers greater job opportunities and a better standard of living compared with the migrants' countries of origin. At the same time, the neoliberal economic model of Chile, developed around extractive production of natural resources, has privatized urban space: the market regulates the growth of fragmented and segregated cities. The vulnerable population is therefore usually located in the periphery and in the marginal areas of urban space, and migrants have begun to occupy these degraded and depressed areas of the city. The problem raised is that the increase in socio-spatial segregation could also be attributed to the migrants' occupation of the marginal urban places of the city. The aim of this investigation is to analyse the migrants' housing strategies, which are transforming the marginal areas of the city. The methodology focuses on the urban experience of the migrants, through observation of spatial practices, ways of living, and the configuration of networks that transform marginal territory. The techniques applied in this study are semi-structured and in-depth interviews. The study reveals that the migrants' housing strategies for living in the marginal areas of the city are built in a paradoxical way. On the one hand, migrants choose proximity to people from their place of origin, maintaining their identity and customs; on the other hand, they choose proximity to their social and family places, generating a sense of belonging.
In conclusion, the evidence shows that international migration under a globalized economic model increases socio-spatial segregation in cities, but the transformation of marginal areas is a fundamental resource in the migrants' integration process. The importance of this research is that it is everybody's responsibility not only to guarantee the right to live in a city without discrimination but also to integrate citizens within the social urban space of the city.
Keywords: migrations, marginal space, resignification, visibility
Procedia PDF Downloads 142
1252 Engineering Topology of Construction Ecology in Urban Environments: Suez Canal Economic Zone
Authors: Moustafa Osman Mohammed
Abstract:
Integrating sustainability outcomes directs attention to construction ecology in the design review of urban environments, so that they comply with the Earth system, which is composed of integral physical, chemical, and biological components. Naturally, the exchange patterns of industrial ecology have consistent and periodic cycles that preserve energy and material flows in the Earth system. When engineering topology affects internal and external processes in system networks, it postulates the valence of the first-level spatial outcome (i.e., project compatibility success); these instrumentalities depend on the second-level outcome (i.e., participant security satisfaction). The construction ecology approach feeds back energy from resource flows between biotic and abiotic components across the Earth's ecosystems. These spatial outcomes provide an innovation, as they entail a wide range of interactions that state, regulate, and feed back "topology" as an "interdisciplinary equilibrium" of ecosystems. The interrelation dynamics of ecosystems perform a process in a certain location within an appropriate time, characterizing their unique structure in "equilibrium patterns" such as the biosphere and collecting a composite structure of many distributed feedback flows. These interdisciplinary systems regulate their dynamics within complex structures, and these dynamic mechanisms regulate physical and chemical properties to enable a gradual and prolonged incremental pattern that develops a stable structure. The engineering topology of construction ecology for integrated sustainability outcomes offers an interesting tool for ecologists and engineers in the simulation paradigm, as an initial form of development structure within compatible computer software. This approach argues from ecology, resource savings, static load design, and financial and other pragmatic reasons, while from an artistic/architectural perspective these are not decisive.
The paper describes an attempt to unify analytic and analogical spatial modeling in developing urban environments as a relational setting, using optimization software, applied as an example of integrated industrial ecology in which the construction process is based on a topology optimization approach.
Keywords: construction ecology, industrial ecology, urban topology, environmental planning
Procedia PDF Downloads 130
1251 The Effect of Realizing Emotional Synchrony with Teachers or Peers on Children’s Linguistic Proficiency: The Case Study of Uji Elementary School
Authors: Reiko Yamamoto
Abstract:
This paper reports on a joint research project in which a researcher in applied linguistics and elementary school teachers in Japan explored new ways to realize emotional synchrony in the classroom in childhood education. The primary purpose of the project was to develop a cross-curriculum of the first language (L1) and second language (L2) based on the concept of plurilingualism. This concept is common in Europe, where can-do statements are used to form the standard of linguistic proficiency in any language; these are attributed to the action-oriented approach of the Common European Framework of Reference for Languages (CEFR). CEFR has a basic tenet of language education: improving communicative competence. Can-do statements are classified into five categories based on this tenet: reading, writing, listening, speaking/interaction, and speaking/speech. The first step of this research was to specify the linguistic proficiency of the children, who are still developing their L1. The elementary school teachers brainstormed and specified the children's linguistic proficiency as the competency needed to synchronize with others – teachers or peers – physically and mentally. The teachers formed original can-do statements of language proficiency based on the idea that emotional synchrony leads to understanding others in communication. The research objective was to determine the effect of language education based on the newly developed curriculum and can-do statements. The participants in the experiment were 72 third-graders at Uji Elementary School, Japan. For the experiment, 17 items were developed from the can-do statements formed by the teachers and divided into the same five categories as those of CEFR, and a can-do checklist consisting of the items was created. The experiment consisted of three steps: first, the students evaluated themselves using the can-do checklist at the beginning of the school year.
Second, one year of instruction was given to the students in Japanese and English classes (six periods a week). Third, the students evaluated themselves using the same can-do checklist at the end of the school year. Statistical analysis showed an enhancement of the students' linguistic proficiency: the average post-check results exceeded the pre-check results in 12 of the 17 items, and significant differences were found in four items, three of which belonged to the same category, speaking/interaction. It is concluded that children can come to understand others' minds through physical and emotional synchrony, and that emotional synchrony in particular is what teachers should aim at in childhood education.
Keywords: elementary school education, emotional synchrony, language proficiency, sympathy with others
Procedia PDF Downloads 168
1250 Risk Assessment on New Bio-Composite Materials Made from Water Resource Recovery
Authors: Arianna Nativio, Zoran Kapelan, Jan Peter van der Hoek
Abstract:
Bio-composite materials are becoming increasingly popular in various applications, such as the automotive industry. Usually, bio-composite materials are made from natural resources recovered from plants; now, a new type of bio-composite material has begun to be produced in the Netherlands. This material is made from resources recovered from drinking water treatment (calcite), wastewater treatment (cellulose), and surface water management (aquatic plants). Surface water, raw drinking water, and wastewater can be contaminated with pathogens and chemical compounds, so it would be valuable to develop a framework to assess, monitor, and control the potential risks. The goal is to define the major risks to human health, material quality, and the environment associated with the production and application of these new materials. This study describes the general risk assessment framework, starting with a qualitative risk assessment. The qualitative risk analysis was carried out using the HAZOP methodology for the hazard identification phase. The HAZOP methodology is logical and structured and able to identify hazards in the first stage of design, when hazards and associated risks are not yet well known. The identified hazards were analyzed to define the potential associated risks, which were then evaluated using qualitative Event Tree Analysis (ETA). ETA is a logical methodology that defines the consequences of a specific hazardous incident by evaluating the failure modes of safety barriers and the dangerous intermediate events that lead to the final scenario (risk). This paper shows the effectiveness of combining the HAZOP and qualitative ETA methodologies for hazard identification and risk mapping. Key risks were then identified, and a quantitative framework was developed based on the types of risks identified, such as quantitative microbial risk assessment (QMRA) and quantitative chemical risk assessment (QCRA).
These two models were applied to assess human health risks due to the presence of pathogens and chemical compounds, such as heavy metals, in the bio-composite materials. Because of such contamination, a bio-composite product might release toxic substances into the environment during its application, leading to a negative environmental impact. Leaching tests are therefore planned to simulate the application of these materials in the environment and to evaluate the potential leaching of inorganic substances, assessing the environmental risk.
Keywords: bio-composite, risk assessment, water reuse, resource recovery
Procedia PDF Downloads 109
1249 Reconceptualizing “Best Practices” in Public Sector
Authors: Eftychia Kessopoulou, Styliani Xanthopoulou, Ypatia Theodorakioglou, George Tsiotras, Katerina Gotzamani
Abstract:
Public sector managers frequently herald that implementing best practices as a set of standards may lead to superior organizational performance. However, recent research questions the objectification of best practices, highlighting: a) the inability of public sector organizations to develop innovative administrative practices, as well as b) the adoption of stereotypical, renowned practices inculcated in the public sector by international governance bodies. The process through which organizations construe what a best practice is still remains a black box yet to be investigated, given the trend of continuous change in public sector performance and the burgeoning interest in sharing popular administrative practices put forward by international bodies. This study aims to describe and understand how organizational best practices are constructed by public sector performance management teams, such as benchmarkers, during the benchmarking-mediated performance improvement process, and what mechanisms enable this construction. A critical realist action research methodology is employed, starting from a description of the various views on the nature of best practice when a benchmarking-mediated performance improvement initiative, such as the Common Assessment Framework, is applied. First, we observed the benchmarkers' management of best practices in a public organization, so as to map their theories-in-use. As a second step, we contextualized best administrative practices by reflecting the different perspectives that emerged from the previous stage in the design and implementation of an interview protocol. We used this protocol to conduct 30 semi-structured interviews with “best practice” process owners, in order to examine their experiences and performance needs. Previous research on best practices has shown that the needs and intentions of benchmarkers cannot be detached from the causal mechanisms of the various contexts in which they work.
Such causal mechanisms can be found in: a) process owner capabilities, b) the structural context of the organization, and c) state regulations. Therefore, we developed an interview protocol that was theoretically informed in its first part, to spot causal mechanisms suggested by previous research, and supplemented it with questions regarding the provision of best practice support from the government. Findings of this work include: a) a causal account of the nature of best administrative practices in the Greek public sector that sheds light on their management, b) a description of the various contexts affecting best practice conceptualization, and c) a description of how their interplay changed the organization's best practice management.
Keywords: benchmarking, action research, critical realism, best practices, public sector
Procedia PDF Downloads 127
1248 Forecasting Regional Data Using Spatial VARs
Authors: Taisiia Gorshkova
Abstract:
Since the 1980s, spatial correlation models have been used increasingly often to model regional indicators. An increasingly popular approach is modeling that takes into account spatial relationships between objects belonging to the same economic zone. In the 2000s, a new class of models, spatial vector autoregressions (SpVARs), was developed. The main difference between standard and spatial vector autoregressions is that in the SpVAR, the values of indicators at time t may depend on the values of explanatory variables at the same time t in neighboring regions and on the values of explanatory variables at time t-k in neighboring regions. Thus, the VAR is a special case of the SpVAR in the absence of spatial lags, and the spatial panel data model is a special case of the SpVAR in the absence of time lags. Two specifications of the SpVAR were applied to Russian regional data for 2000-2017. The values of GRP and the regional CPI are used as endogenous variables, with lags of GRP, CPI, and the unemployment rate as explanatory variables. For comparison, a standard VAR without spatial correlation was used as a "naïve" model. In the first specification of the SpVAR, the unemployment rate and the values of the dependent variables, GRP and CPI, in neighboring regions at the same moment t were included in the equations for GRP and CPI, respectively. To account for the values of indicators in neighboring regions, an adjacency weight matrix is used, in which pairs of regions with a common sea or land border are assigned a value of 1 and the rest 0. In the second specification, the values of the dependent variables in neighboring regions at time t were replaced by their values at the previous time t-1.
According to the results obtained, when the inflation and GRP of neighbors are added to the model, both inflation and GRP are significantly affected by their own previous values; inflation is also positively affected by an increase in unemployment in the previous period and negatively affected by an increase in GRP in the previous period, which corresponds to economic theory. GRP is affected by neither the inflation lag nor the unemployment lag. When the model takes into account lagged values of GRP and inflation in neighboring regions, the results of the inflation modeling are practically unchanged: all indicators except the unemployment lag are significant at the 5% level. For GRP, in turn, GRP lags in neighboring regions also become significant at the 5% level. The RMSE was calculated for both the spatial and the "naïve" VARs; the minimum RMSE is obtained with the SpVAR with lagged explanatory variables. Thus, according to the results of the study, it can be concluded that SpVARs can accurately model both the actual values of macro indicators (particularly CPI and GRP) and the general situation in the regions.
Keywords: forecasting, regional data, spatial econometrics, vector autoregression
Procedia PDF Downloads 141
1247 Artificial Membrane Comparison for Skin Permeation in Skin PAMPA
Authors: Aurea C. L. Lacerda, Paulo R. H. Moreno, Bruna M. P. Vianna, Cristina H. R. Serra, Airton Martin, André R. Baby, Vladi O. Consiglieri, Telma M. Kaneko
Abstract:
The modified Franz cell is the most widely used model for in vitro permeation studies; however, it still presents some disadvantages. Thus, alternative methods have been developed, such as Skin PAMPA, a bio-artificial membrane system that has been applied to estimate skin penetration of xenobiotics based on a high-throughput permeability model. Skin PAMPA's greatest advantage is that it allows more tests to be carried out quickly and inexpensively. The membrane system mimics the characteristics of the stratum corneum, which is the primary skin barrier. The barrier properties are given by corneocytes embedded in a multilamellar lipid matrix. This layer is the main penetration route through the paracellular permeation pathway and consists of a mixture of cholesterol, ceramides, and fatty acids as the dominant components. However, there is no consensus on the membrane composition. The objective of this work was to compare the performance of different bio-artificial membranes for permeation studies in the Skin PAMPA system. Material and methods: To mimic the lipid composition present in the human stratum corneum, six membranes were developed. The membrane composition was an equimolar mixture of cholesterol, ceramides 1-O-C18:1, C22, and C20, plus fatty acids C20 and C24. The membrane integrity assay was based on the transport of Brilliant Cresyl Blue, which has low permeability, and Lucifer Yellow, which permeates very poorly and should effectively be completely rejected. The membranes were characterized using confocal laser Raman spectroscopy, with a laser stabilized at 785 nm, a 10-second integration time, and 2 accumulations. The behaviour of the membranes in the PAMPA system was statistically evaluated, and all of the compositions showed integrity and permeability.
The confocal Raman spectra obtained in the 800-1200 cm-1 region, which is associated with the C-C stretches of the carbon scaffold of the stratum corneum lipids, showed a similar pattern for all the membranes. The ceramides, long-chain fatty acids, and cholesterol in equimolar ratio made it possible to obtain lipid mixtures with a self-organization capability similar to that occurring in the stratum corneum. Conclusion: The artificial biological membranes studied for Skin PAMPA were shown to be similar to one another and to have properties comparable to the stratum corneum.
Keywords: bio-artificial membranes, comparison, confocal Raman, skin PAMPA
1246 Unlocking Health Insights: Studying Data for Better Care
Authors: Valentina Marutyan
Abstract:
Healthcare data mining is a rapidly developing field at the intersection of technology and medicine that has the potential to change our understanding and approach to providing healthcare. It is the process of examining huge amounts of data to extract useful information that can be applied to improve patient care, treatment effectiveness, and overall healthcare delivery. The field looks for patterns, trends, and correlations in a variety of healthcare datasets, such as electronic health records (EHRs), medical imaging, patient demographics, and treatment histories. To accomplish this, it uses advanced analytical approaches. Predictive analysis using historical patient data is a major area of interest in healthcare data mining. It enables doctors to intervene early to prevent problems or improve outcomes for patients, and it assists in early disease detection and customized treatment planning for every person. Doctors can customize a patient's care by looking at their medical history, genetic profile, and current and previous therapies. In this way, treatments can be more effective and have fewer negative consequences. Beyond helping patients, data mining improves the efficiency of hospitals: it helps them determine the number of beds or doctors they require given the number of patients they expect. In this project, models such as logistic regression, random forests, and neural networks were used for predicting diseases and analyzing medical images. Algorithms such as k-means were used to group patients, and association rule mining identified connections between treatments and patient responses. Time series techniques helped in resource management by predicting patient admissions. These methods improved healthcare decision-making and personalized treatment.
Healthcare data mining must also deal with difficulties such as poor data quality, privacy challenges, managing large and complicated datasets, ensuring the reliability of models, managing biases, limited data sharing, and regulatory compliance. In short, data mining in healthcare helps medical professionals and hospitals make better decisions, treat patients more effectively, and work more efficiently. It ultimately comes down to using data to improve treatment, make better choices, and simplify hospital operations for all patients.
Keywords: data mining, healthcare, big data, large amounts of data
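As a rough illustration of the patient-grouping step mentioned above, the sketch below runs a minimal k-means over two synthetic attributes. The records, the features, and the cluster count are invented for illustration and are not the project's data:

```python
# Minimal sketch of k-means patient grouping. The records, the two features,
# and the cluster count are synthetic illustrations, not the project's data.
import numpy as np

# Synthetic patient records: [age, annual_visits]
patients = np.array([
    [25, 2], [31, 3], [28, 1],     # younger, low-utilization profile
    [72, 14], [68, 12], [75, 16],  # older, high-utilization profile
], dtype=float)

def kmeans(X, k, iters=20):
    centroids = X[:k].copy()  # deterministic init: first k points
    for _ in range(iters):
        # distance of every point to every centroid, then nearest assignment
        d = np.linalg.norm(X[:, None] - centroids[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):  # move each centroid to its cluster mean
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

labels, centroids = kmeans(patients, k=2)
```

On this toy data the two utilization profiles separate into the two clusters; real patient segmentation would use many more features and a validated choice of k.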
1245 Review of the Model-Based Supply Chain Management Research in the Construction Industry
Authors: Aspasia Koutsokosta, Stefanos Katsavounis
Abstract:
This paper reviews the model-based qualitative and quantitative Operations Management research in the context of Construction Supply Chain Management (CSCM). The construction industry has traditionally been blamed for low productivity, cost and time overruns, waste, high fragmentation, and adversarial relationships. It has been slower than other industries to employ the Supply Chain Management (SCM) concept and to develop models that support decision-making and planning. Over the last decade, however, there has been a distinct shift from a project-based to a supply-based approach to construction management. CSCM has emerged as a promising new management tool for construction operations that improves the performance of construction projects in terms of cost, time, and quality. Modeling the Construction Supply Chain (CSC) offers the means to reap the benefits of SCM, make informed decisions, and gain competitive advantage. Different modeling approaches and methodologies have been applied in the multi-disciplinary and heterogeneous research field of CSCM. The literature review reveals that a considerable percentage of CSC modeling consists of conceptual or process models which discuss general management frameworks and do not relate to acknowledged soft OR methods. We particularly focus on the model-based quantitative research and categorize the CSCM models depending on their scope, mathematical formulation, structure, objectives, solution approach, software used, and decision level. Although over the last few years there has clearly been an increase in research papers on quantitative CSC models, we find that the relevant literature is very fragmented, with limited applications of simulation, mathematical programming, and simulation-based optimization. Most applications are project-specific or study only parts of the supply system.
Thus, some complex interdependencies within construction are neglected and the implementation of integrated supply chain management is hindered. We conclude this paper by giving future research directions and emphasizing the need to develop robust mathematical optimization models for the CSC. We stress that CSC modeling needs a multi-dimensional, system-wide and long-term perspective. Finally, prior applications of SCM to other industries have to be taken into account in order to model CSCs, but not without the consequential reform of generic concepts to match the unique characteristics of the construction industry.
Keywords: construction supply chain management, modeling, operations research, optimization, simulation
1244 Influence of Plant Cover and Redistributing Rainfall on Green Roof Retention and Plant Drought Stress
Authors: Lubaina Soni, Claire Farrell, Christopher Szota, Tim D. Fletcher
Abstract:
Green roofs are a promising engineered ecosystem for reducing stormwater runoff and restoring vegetation cover in cities. Plants can contribute to rainfall retention by rapidly depleting water in the substrate; however, this increases the risk of plant drought stress. Green roof configurations therefore need to give plants the opportunity to deplete the substrate efficiently while avoiding severe drought stress. This study used green roof modules placed in a rainout shelter under a simulated six-month rainfall regime in Melbourne, Australia. Rainfall was applied equally to each module with an overhead irrigation system. Aside from rainfall, modules were under natural climatic conditions, including temperature, wind, and radiation. A single species, Ficinia nodosa, was planted in five different treatments with three replicates of each treatment. In this experiment, we tested the impact of three plant cover treatments (0%, 50% and 100%) on rainfall retention and plant drought stress. We also installed two runoff zone treatments covering 50% of the substrate surface on additional modules with 0% and 50% plant cover to determine whether directing rainfall towards plant roots would reduce drought stress without impacting rainfall retention. Retention performance for the simulated rainfall events was measured by quantifying all components of the hydrological balance, along with plant survival on the green roofs. We found that evapotranspiration and rainfall retention were similar for modules with 50% and 100% plant cover. However, modules with 100% plant cover showed significantly higher plant drought stress. Therefore, planting at a lower cover/density reduced plant drought stress without jeopardizing rainfall retention performance. Installing runoff zones marginally reduced evapotranspiration and rainfall retention, but by approximately the same amount for modules with 0% and 50% plant cover.
This indicates that reduced evaporation due to the installation of the runoff zones likely contributed to the reduced evapotranspiration and rainfall retention. Further, runoff occurred faster from modules with runoff zones than from those without, indicating that we created a faster pathway for water to enter and leave the substrate, which also likely contributed to lower overall evapotranspiration and retention. However, despite some loss in retention performance, modules with 50% plant cover and runoff zones showed significantly lower plant drought stress than those without runoff zones. Overall, we suggest that reducing plant cover represents a simple means of optimizing green roof performance, while creating runoff zones may reduce plant drought stress at the cost of reduced rainfall retention.
Keywords: green roof, plant cover, plant drought stress, rainfall retention
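The retention and evapotranspiration components discussed above follow from a simple per-event water balance. The sketch below uses illustrative numbers, not the study's measurements:

```python
# Hedged sketch of the per-event water balance behind the retention and
# evapotranspiration (ET) figures discussed above; all numbers are
# illustrative, not the study's measurements.
def water_balance(rainfall_mm, runoff_mm, storage_change_mm):
    retention = rainfall_mm - runoff_mm                 # water kept on the roof
    et = rainfall_mm - runoff_mm - storage_change_mm    # water returned to air
    retention_pct = 100 * retention / rainfall_mm
    return retention, et, retention_pct

# A 20 mm event: 6 mm runs off, substrate storage rises by 4 mm
ret, et, pct = water_balance(rainfall_mm=20.0, runoff_mm=6.0, storage_change_mm=4.0)
```

Runoff zones that speed water out of the substrate show up in this balance as higher runoff and hence lower retention and ET, which is the trade-off the study reports.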
1243 The Foucaultian Relationship between Power and Knowledge: Genealogy as a Method for Epistemic Resistance
Authors: Jana Soler Libran
Abstract:
The primary aim of this paper is to analyze the relationship between power and knowledge suggested in Michel Foucault's theory. Taking into consideration the role of power in knowledge production, the goal is to evaluate to what extent genealogy can be presented as a practical method for epistemic resistance. To do so, the methodology consists of a review of Foucault's literature on the topic. In this sense, conceptual analysis is applied in order to understand the effect of the double dimension of power on knowledge production. In its negative dimension, power is conceived as an organ of repression, vetoing certain instances of knowledge considered deceitful. By contrast, in its positive dimension, power works as an organ of the production of truth by means of institutionalized discourses. This double dimension of power leads to the first main findings of the present analysis: no truth or knowledge can lie outside power's action, and power is constituted through accepted forms of knowledge. To support these statements, Foucaultian discourse formations are evaluated, presenting external exclusion procedures as paradigmatic practices that demonstrate how power creates and shapes the validity of certain epistemes. Thus, taking into consideration power's mechanisms to produce and reproduce institutionalized truths, this paper accounts for the Foucaultian praxis of genealogy as a method to reveal power's intention, instruments, and effects in the production of knowledge. In this sense, it is suggested that genealogy be considered a practice which, firstly, reveals which instances of knowledge are subjugated to power and, secondly, promotes these peripheral discourses as a form of epistemic resistance. In order to counterbalance these main theses, objections to Foucault's work from Nancy Fraser, Linda Nicholson, Charles Taylor, Richard Rorty, Alvin Goldman, and Karen Barad are discussed.
In essence, the understanding of the Foucaultian relationship between power and knowledge is essential to analyze how contemporary discourses are produced by both traditional institutions and new forms of institutionalized power, such as mass media or social networks. Therefore, Michel Foucault's practice of genealogy is relevant, not only for its philosophical contribution as a method to uncover the effects of power in knowledge production but also because it constitutes a valuable theoretical framework for political theory and sociological studies concerning the formation of societies and individuals in the contemporary world.
Keywords: epistemic resistance, Foucault's genealogy, knowledge, power, truth
1242 Micro-Milling Process Development of Advanced Materials
Authors: M. A. Hafiz, P. T. Matevenga
Abstract:
Micro-level machining of metals is a developing field that has shown itself to be a promising approach to producing features on parts in the range of a few to a few hundred microns with acceptable machining quality. It is known that the mechanics (i.e., the material removal mechanism) of micro-machining and conventional machining differ significantly due to the scaling effects associated with tool geometry, tool material, and work piece material characteristics. Shape memory alloys (SMAs) are metal alloys that display two exceptional properties: pseudoelasticity and the shape memory effect (SME). Nickel-titanium (NiTi) alloys are one such unique family of metal alloys. NiTi alloys are known to be difficult-to-cut materials, particularly with conventional machining techniques, owing to their distinctive properties. Their high ductility, high degree of strain hardening, and unusual stress-strain behaviour are the main properties responsible for their poor machinability in terms of tool wear and work piece quality. The motivation of this research work was to address the challenges and issues of micro-machining combined with those of machining NiTi alloys, which can affect the desired performance level of machining outputs. To explore the significance of a range of cutting conditions on surface roughness and tool wear, machining tests were conducted on NiTi. The influence of different cutting conditions and cutting tools on surface and sub-surface deformation in the work piece was investigated. A design-of-experiments strategy (L9 orthogonal array) was applied to determine the key process variables. The dominant cutting parameters were determined by analysis of variance. The findings showed that feed rate was the dominant factor for surface roughness, whereas depth of cut was the dominant factor for tool wear.
The lowest surface roughness was achieved at a feed rate equal to the cutting edge radius, whereas the lowest flank wear was observed at the lowest depth of cut. Repeated machining trials have yet to be carried out in order to observe tool life, sub-surface deformation, and strain-induced hardening, which are also expected to be amongst the critical issues in micro-machining of NiTi. The machining performance using different cutting fluids and strategies has yet to be studied.
Keywords: nickel titanium, micro-machining, surface roughness, machinability
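The dominant-factor ranking described above can be sketched as a main-effects percent-contribution calculation over an L9 orthogonal array. The factor names and all nine response values below are made up for illustration; they are not the study's measurements:

```python
# Hedged sketch of a Taguchi-style main-effects ranking on an L9 array.
# Factor names and all nine responses are illustrative placeholders,
# not the measurements reported in the study.
import numpy as np

# Standard L9 orthogonal array: 9 runs, 3 factors at 3 levels each
L9 = np.array([[1, 1, 1], [1, 2, 2], [1, 3, 3],
               [2, 1, 2], [2, 2, 3], [2, 3, 1],
               [3, 1, 3], [3, 2, 1], [3, 3, 2]])
factors = ["feed", "depth_of_cut", "cutting_speed"]

# Made-up surface roughness responses (um) for the nine runs
y = np.array([0.40, 0.45, 0.50, 0.75, 0.80, 0.85, 1.10, 1.15, 1.20])

def percent_contribution(design, response):
    """Percent of total variation attributable to each factor's main effect."""
    grand = response.mean()
    ss_total = ((response - grand) ** 2).sum()
    contrib = {}
    for j, name in enumerate(factors):
        ss = sum(
            (design[:, j] == lvl).sum()
            * (response[design[:, j] == lvl].mean() - grand) ** 2
            for lvl in np.unique(design[:, j])
        )
        contrib[name] = 100 * ss / ss_total
    return contrib

contrib = percent_contribution(L9, y)
dominant = max(contrib, key=contrib.get)  # "feed" for these synthetic data
```

Because the L9 array is balanced, each level of each factor appears in exactly three runs, so the level means isolate the main effects.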
1241 Genotyping of Rotaviruses in Pediatric Patients with Gastroenteritis by Using Real-Time Reverse Transcription Polymerase Chain Reaction
Authors: Recep Kesli, Cengiz Demir, Riza Durmaz, Zekiye Bakkaloglu, Aysegul Bukulmez
Abstract:
Objective: Acute diarrheal disease in children is a major cause of morbidity and a leading cause of mortality worldwide, and rotavirus is the most common agent responsible for acute gastroenteritis in developing countries. Among hospitalized children suffering from acute enteric disease, up to 50% of the analyzed specimens were positive for rotavirus. Further molecular surveillance could provide a sound basis for improving the response to epidemic gastroenteritis and could provide the data needed for the introduction of vaccination programmes in the country. The aim of this study was to investigate the prevalence of the viral etiology of gastroenteritis in children aged 0-6 years with acute gastroenteritis and to determine the predominant genotypes of rotaviruses in the province of Afyonkarahisar, Turkey. Methods: An epidemiological study on rotavirus was carried out during 2016. Fecal samples were obtained from 144 rotavirus-positive children aged 0-6 years who presented to the Pediatric Diseases Outpatient Clinic of ANS Research and Practice Hospital, Afyon Kocatepe University, with the complaint of diarrhea. Bacterial agents causing gastroenteritis were excluded by using bacteriological culture methods, and no growth was observed. Rotavirus antigen was examined in stool samples by both an immunochromatographic test (One Step Rotavirus and Adenovirus Combo Test, China) and ELISA (Premier Rotaclone, USA). Rotavirus RNA was detected by using one-step real-time reverse transcription-polymerase chain reaction (RT-PCR). G and P genotypes were determined using RT-PCR with consensus primers for the VP7 and VP4 genes, followed by semi-nested type-specific multiplex PCR. Results: Of the total 144 rotavirus antigen-positive samples, 4 (2.8%) were rejected, 95 (66%) were examined, and 45 (31.2%) have not yet been examined by PCR. Ninety-one (95.8%) of the 95 examined samples were found to be rotavirus positive by RT-PCR.
Rotavirus subgenotype distributions in the G, P and G/P genotype groups were determined as follows: G1: 45%, G2: 27%, G3: 13%, G9: 13%, G4: 1% and G12: 1% for the G genotype; P[4]: 33%, P[8]: 66%, P[10]: 1% for the P genotype; and G1P[8]: 37%, G2P[4]: 21%, G3P[8]: 10%, G4P[8]: 1%, G9P[8]: 8%, G2P[8]: 3% for the G/P genotype. Uncommon genotype combinations accounted for 20% of the G/P genotypes. Conclusions: This study contributes to the global picture of the molecular epidemiology of rotavirus, which will be useful in guiding the selection and application of rotavirus vaccines and effective control and prevention. Determining the diversity and rates of rotavirus genotypes will provide guidelines for developing the most suitable vaccine.
Keywords: gastroenteritis, genotyping, rotavirus, RT-PCR
1240 Delimitation of the Perimeters of Protection of the Wellfield in the City of Adrar, Sahara of Algeria through the Use of Wyssling's Method
Authors: Ferhati Ahmed, Fillali Ahmed, Oulhadj Younsi
Abstract:
This study concerns the delimitation of protection perimeters in the wellfield of the city of Adrar, established around the sites for the collection of water intended for human consumption, with the objective of ensuring the preservation of the resource and reducing the risks of point-source and accidental pollution (the Continental Intercalaire groundwater of the Northern Sahara of Algeria). This wellfield is located in the northeast of the city of Adrar; it covers an area of 132.56 km2 with 21 Drinking Water Supply (DWS) wells, pumping a total flow of approximately 13 Hm3/year. The choice of this wellfield is based on its favorable hydrodynamic characteristics and its location in relation to the agglomeration. The vulnerability of this aquifer to pollution is very high because the aquifer is unconfined and lacks a protective layer. In recent years, several factors that can affect the quality of this precious resource have appeared around the wellfield, including a large domestic waste site and agricultural and industrial activities. Thus, its sustainability requires the implementation of protection perimeters. The objective of this study is to set up three protection perimeters: immediate, close, and remote. The application of the Wyssling method makes it possible to calculate the transfer time (t) of a drop of groundwater located at any point in the aquifer to the abstraction point and thus to define isochrones, which in turn delimit each type of perimeter: 40 days for the close perimeter and 100 days for the remote one. Special restrictions are imposed on all activities depending on the distance from the catchment. The application of this method to the Adrar city wellfield showed that the close and remote protection perimeters occupy areas of 51.14 km2 and 92.9 km2, respectively. The perimeters are delimited by geolocated markers, 40 and 46 markers respectively.
These results show that the areas defined as the "close protection perimeter" are free from activities likely to present a risk to the quality of the water used. On the other hand, in the areas defined as the "remote protection perimeter," there are some agricultural and industrial activities that may present an imminent risk. Rigorous control of these activities and restriction of the types of products applied in industry and agriculture are imperative.
Keywords: Continental Intercalaire, drinking water supply, groundwater, perimeter of protection, Wyssling method
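The transfer-time calculation can be illustrated with a commonly cited form of Wyssling's isochrone approximation for a well in a uniform regional flow field. The formulas below are the textbook version of the method, and all aquifer parameters are invented placeholders, not the Adrar values:

```python
# Hedged sketch of Wyssling's isochrone approximation (commonly cited form).
# All aquifer parameters below are illustrative placeholders, not the values
# for the Adrar wellfield.
import math

def wyssling_isochrone(Q, K, i, b, n_e, t):
    """Up- and down-gradient extents (m) of the t-day isochrone around a well.

    Q: pumping rate (m3/day), K: hydraulic conductivity (m/day),
    i: hydraulic gradient (-), b: saturated thickness (m),
    n_e: effective porosity (-), t: travel time (days).
    """
    v = K * i / n_e                      # effective (seepage) velocity, m/day
    x0 = Q / (2 * math.pi * K * i * b)   # stagnation-point distance, m
    l = v * t                            # distance travelled in time t, m
    s_up = (l + math.sqrt(l * (l + 8 * x0))) / 2
    s_down = (-l + math.sqrt(l * (l + 8 * x0))) / 2
    return s_up, s_down

# 40-day (close) and 100-day (remote) isochrones for an illustrative well
near = wyssling_isochrone(Q=1500, K=20, i=0.002, b=50, n_e=0.15, t=40)
remote = wyssling_isochrone(Q=1500, K=20, i=0.002, b=50, n_e=0.15, t=100)
```

In a study like this one, the union of such isochrones drawn at 40 and 100 days around each of the 21 wells would yield the close and remote perimeters.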
1239 Engaging the World Bank: Good Governance and Human Rights-Based Approaches
Authors: Lottie Lane
Abstract:
It is habitually assumed and stated that the World Bank should engage and comply with international human rights standards. However, the basis for holding the Bank to such standards is unclear. Most advocates of the idea invoke aspects of international law to argue that the Bank has existing obligations to act in compliance with human rights standards. The Bank itself, however, does not appear to accept such arguments, despite having endorsed the importance of human rights for a considerable length of time. A substantial challenge is that under the current international human rights law framework, the World Bank is considered a non-state actor and, as such, has no direct human rights obligations. In the absence of clear legal duties for the Bank, it is necessary to look at the tools available beyond the international human rights framework to encourage the Bank to comply with human rights standards. This article critically examines several bases for arguing that the Bank should comply and engage with human rights through its policies and practices. Drawing on the Bank's own 'good governance' approach as well as the United Nations' 'human rights-based approach' to development, a new basis is suggested. First, the relationship between the World Bank and human rights is examined from three perspectives: (1) the legal position – the status of the World Bank under international human rights law, and whether it can be said to have existing legal human rights obligations; (2) the Bank's own official position – how the Bank envisages its relationship with and role in the protection of human rights; and (3) the relationship between the Bank's policies and practices and human rights, including how its attitudes are reflected in its policies and how the Bank's operations impact human rights enjoyment in practice.
Here, the article focuses on two examples – the (revised) 2016 Environmental and Social Safeguard Policies and the 2012 case study regarding Gambella, Ethiopia. Both examples are widely considered missed opportunities for the Bank to actively engage with human rights. The analysis shows that however much pressure is placed on the Bank to improve its human rights footprint, it is extremely reluctant to do so explicitly, and the legal bases available are insufficient for requiring concrete, ex ante action by the Bank. Instead, the Bank's own 'good governance' approach to development – which it has been advocating since the 1990s – can be relied upon. 'Good governance' has been used and applied by many actors in many contexts and has received numerous different definitions. This article argues that human rights protection can now be considered a crucial component of good governance, at least in the context of development. In doing so, the article explains the relationship and interdependence between the two concepts and provides three rationales for the Bank to take a 'human rights-based approach' to good governance. Ultimately, this article seeks to look beyond international human rights law and take a governance approach in order to provide a convincing basis upon which to argue that the World Bank should comply with human rights standards.
Keywords: World Bank, international human rights law, good governance, human rights-based approach
1238 Late Bronze Age Pigments: Characterization of Mycenaean Pottery with Multi-Analytical Approach
Authors: Elif Doğru, Bülent Kızılduman, Huriye İcil
Abstract:
Throughout history, Cyprus has been involved in various commercial and cultural relationships with different civilizations, owing to its strategic location. Particularly during the Late Bronze Age, Cyprus emerged as a significant region engaged in interactions with the Mycenaeans and other Mediterranean civilizations. Today, findings from archaeological excavations provide valuable insights into Cyprus' cultural history and its connections with other civilizations. Painted Mycenaean ceramics discovered during the excavations at Kaleburnu-Kral Tepesi (Galinaporni-Vasili) in Cyprus, dated to the Late Bronze Age, are considered significant archaeological finds that carry traces of the art and culture of that era and reflect the island's commercial and cultural connections. In light of these findings, archaeometric studies are needed to aid the understanding of the commercial and cultural ties at Kaleburnu-Kral Tepesi. In line with this need, analytical studies have been initiated on the provenance and production techniques of the Mycenaean ceramics discovered in these excavations. In the context of provenance studies, it is argued that understanding the techniques and materials used for the figures and designs applied on Mycenaean ceramics would significantly contribute to a better comprehension of historical contexts. Hence, the adopted approach involves not only the analysis of the ceramic raw material but also the characterization of the pigments on the ceramics as a whole. In light of this, in addition to the studies aimed at determining the provenance and production techniques of the Mycenaean ceramic bodies, the characterization of the pigments used in the decorations of these ceramics has been included in the research scope.
Accordingly, this study aims to characterize the pigments used in the decorations of Mycenaean ceramics discovered at Kaleburnu-Kral Tepesi and dated to the Late Bronze Age. X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FTIR), and scanning electron microscopy with energy-dispersive X-ray spectroscopy (SEM-EDX) have been employed to determine the surface morphology and chemical properties of the Mycenaean pigments. The characterization has been conducted through the combination of multiple analytical methods. By providing scientific data on the types and origins of pigments used during the Late Bronze Age, the characterization of the pigments aims to enhance the scientific perspective on the contributions of the Mycenaean ceramics found in Cyprus to the island's culture.
Keywords: mycenaean, ceramic, provenance, pigment
1237 Barriers and Facilitators for Telehealth Use during Cervical Cancer Screening and Care: A Literature Review
Authors: Reuben Mugisha, Stella Bakibinga
Abstract:
The cervical cancer burden is a global threat, but more so in low-income settings, where more than 85% of mortality cases occur due to a lack of sufficient screening programs. There is consequently a lack of early detection of cancer and precancerous cells among women. Studies show that 3% to 35% of deaths could have been avoided through early screening, depending on prognosis, disease progression, and environmental and lifestyle factors. In this study, a systematic literature review is undertaken to understand the potential barriers and facilitators, as documented in previous studies, to the application of telehealth in cervical cancer screening programs for early detection of cancer and precancerous cells. The study informs future studies, especially those from low-income settings, about lessons learned from previous work and how best to prepare when planning to implement telehealth for cervical cancer screening. It further identifies the knowledge gaps in the research area and makes recommendations. Using a specified selection criterion, 15 different articles are analyzed based on each study's aim, the theory or conceptual framework used, the method applied, the study findings, and the conclusion. Results are then tabulated and presented thematically to better inform readers about the emerging facts on barriers and facilitators to telehealth implementation as documented in the reviewed articles, and how they lead to evidence-informed conclusions relevant to telehealth implementation for cervical cancer screening. Preliminary findings of this study underscore that the use of a low-cost mobile colposcope is an appealing option in cervical cancer screening, particularly when coupled with onsite treatment of suspicious lesions.
These tools relay cervical images to online databases for storage and retrieval, and they permit the integration of connected devices at the point of care to rapidly collect clinical data for further analysis of the prevalence of cervical dysplasia and cervical cancer. The results, however, reveal the need for population sensitization prior to the use of mobile colposcopy among patients, the standardization of mobile colposcopy programs across screening partners, sufficient logistics and good connectivity, and experienced experts to review image cases at the point of care as important facilitators to the implementation of the mobile colposcope as a telehealth cervical cancer screening mechanism.
Keywords: cervical cancer screening, digital technology, hand-held colposcopy, knowledge-sharing
1236 Development of a Risk Disclosure Index and Examination of Its Determinants: An Empirical Study in Indian Context
Authors: M. V. Shivaani, P. K. Jain, Surendra S. Yadav
Abstract:
Regulators, practitioners, and researchers worldwide view risk disclosure as one of the most important steps toward promoting corporate accountability and transparency. Recognizing this growing significance of risk disclosures, the paper first develops a risk disclosure index. Covering 69 risk items/themes, this index is developed by employing thematic content analysis and encompasses three attributes of disclosure: nature (qualitative or quantitative), time horizon (backward-looking or forward-looking), and tone (no impact, positive impact, or negative impact). As the focus of the study is on substantive rather than symbolic disclosure, content analysis has been carried out manually. The study is based on the non-financial companies of the Nifty500 index and covers a ten-year period from April 1, 2005 to March 31, 2015, thus yielding 3,872 annual reports for analysis. The analysis reveals that, on average, only about 14% of risk items (i.e., about 10 out of the 69 risk items studied) are disclosed by Indian companies. Risk items that are frequently disclosed are mostly macroeconomic in nature, and their disclosures tend to be qualitative, forward-looking, and conveying both positive and negative aspects of the concerned risk. The second objective of the paper is to gauge the factors that affect the level of disclosures in annual reports. Given the panel nature of the data and possible endogeneity amongst variables, Diff-GMM regression has been applied. The results indicate that the age and size of firms have a significant positive impact on disclosure quality, whereas the growth rate does not have a significant impact. Further, the post-recession period (2009-2015) witnessed a significant improvement in the quality of disclosures. In terms of corporate governance variables, board size, board independence, CEO duality, the presence of a CRO, and the constitution of a risk management committee appear to be significant factors in determining the quality of risk disclosures.
It is noteworthy that the study contributes to the literature by putting forth a variant of existing disclosure indices that captures not only the quantity but also the quality of disclosures (in terms of semantic attributes). The study is also a first-of-its-kind attempt in a prominent emerging market, India. It is therefore expected to facilitate regulators in mandating and regulating risk disclosures, and companies in their endeavor to reduce information asymmetry.
Keywords: risk disclosure, voluntary disclosures, corporate governance, Diff-GMM
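The index construction described above can be sketched as a simple scoring routine. The item names, codings, and example report below are hypothetical; only the 69-item total and the three attributes (nature, time horizon, tone) come from the paper:

```python
# Minimal sketch of scoring one annual report against the risk disclosure
# index. Only the 69-item total and the three attributes come from the study;
# the item names and codings below are hypothetical examples.
disclosed = {
    # item: (nature, time_horizon, tone)
    "exchange_rate_risk": ("qualitative", "forward-looking", "negative"),
    "interest_rate_risk": ("quantitative", "backward-looking", "negative"),
    "regulatory_risk":    ("qualitative", "forward-looking", "both"),
}

TOTAL_ITEMS = 69  # size of the full index in the study

def coverage(report):
    """Share of the 69 index items disclosed in this report."""
    return len(report) / TOTAL_ITEMS

def attribute_counts(report, attribute_index):
    """Breakdown along one attribute (0=nature, 1=horizon, 2=tone)."""
    counts = {}
    for coding in report.values():
        key = coding[attribute_index]
        counts[key] = counts.get(key, 0) + 1
    return counts

pct_disclosed = round(100 * coverage(disclosed), 1)  # 4.3 for this toy report
nature_breakdown = attribute_counts(disclosed, 0)
```

A panel of such per-report scores, one per company-year, would form the dependent variable for the Diff-GMM regressions on firm and governance characteristics.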
1235 Abatement of NO by CO on Pd Catalysts: Influence of the Support in Oxyfuel Combustion Conditions
Authors: Joudia Akil, Stephane Siffert, Laurence Pirault-Roy, Renaud Cousin, Christophe Poupin
Abstract:
The CO2 emitted by anthropic activities is perceived as a constraint on industrial activity due to taxes, stringent environmental regulations and its impact on global warming. To limit these CO2 emissions, reuse of CO2 represents a promising alternative, with important applications in the chemical industry and in power generation. However, CO2 valorization processes require a gas that is as pure as possible. Oxyfuel combustion, which yields a CO2-rich stream containing about 10% water vapor, is therefore of interest. Nevertheless, to decrease the amount of by-products found with the CO2 (especially CO and NOx, which are harmful to the environment), a catalytic treatment must be applied. Nowadays, three-way catalysts are well-developed materials for the simultaneous conversion of unburned hydrocarbons, carbon monoxide (CO) and nitrogen oxides (NOx). The use of Pd has attracted considerable attention on economic grounds (the high cost and scarcity of Pt and Rh), which explains the large number of studies of the CO-NO reaction on Pd in recent years. In the present study, we compare a series of Pd materials supported on different oxides for CO2 purification from the oxyfuel combustion system, by reducing NO with CO in an oxidizing, CO2-rich environment in the presence of 8.2% water. Al2O3, CeO2, MgO, SiO2 and TiO2 were used as support materials. 1 wt.% Pd/support catalysts were obtained by wet impregnation of the supports with a palladium precursor [Pd(acac)2]. The samples were subsequently characterized by H2 chemisorption, BET surface area measurement and TEM. Finally, their catalytic performance in CO2 purification was evaluated in a fixed-bed flow reactor containing 150 mg of catalyst at atmospheric pressure. The reactant gas flow was composed of 20% CO2, 10% O2, 0.5% CO, 0.02% NO and 8.2% H2O (He as carrier gas) at a total flow rate of 200 mL·min−1, at the same GHSV.
The catalytic performance of the Pd catalysts for CO2 purification revealed that:
- the support material has a strong influence on the catalytic activity of the 1 wt.% Pd catalysts: the activity changes with the nature of the support;
- the highest reduction of NO with CO follows the ranking TiO2 > CeO2 > Al2O3;
- the supports SiO2 and MgO should be avoided for this reaction;
- total oxidation of CO occurred over the different materials;
- CO2 purification can reach 97%;
- the presence of H2O has a positive effect on NO reduction, due to the production of the reductant H2 via the water-gas shift reaction H2O + CO → H2 + CO2.
Keywords: carbon dioxide, environmental chemistry, heterogeneous catalysis, oxyfuel combustion
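As a minimal illustration of how figures such as the NO reduction and the 97% purification are derived from reactor measurements, the sketch below computes fractional conversions from inlet and outlet concentrations. The outlet values are invented placeholders, not data from the study.

```python
# Conversion of a reactant in a flow reactor, from feed and outlet
# concentrations. Feed values match the abstract; outlet values are
# hypothetical, for illustration only.

def conversion(c_in, c_out):
    """Fractional conversion of a reactant, in percent."""
    return 100.0 * (c_in - c_out) / c_in

no_in, co_in = 0.02, 0.5     # vol% NO and CO in the feed (from above)
no_out, co_out = 0.001, 0.0  # hypothetical outlet concentrations

print(f"NO conversion: {conversion(no_in, no_out):.1f}%")  # 95.0%
print(f"CO conversion: {conversion(co_in, co_out):.1f}%")  # 100.0%
```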
Procedia PDF Downloads 255
1234 Prioritizing Ecosystem Services for South-Central Regions of Chile: An Expert-Based Spatial Multi-Criteria Approach
Authors: Yenisleidy Martinez Martinez, Yannay Casas-Ledon, Jo Dewulf
Abstract:
The ecosystem services (ES) concept has helped draw attention to the benefits ecosystems generate for people and to how necessary natural resources are for human well-being. The identification and prioritization of ES constitute the first steps toward conservation and valuation initiatives on behalf of people. Additionally, mapping the supply of ES is a powerful tool to support decision-making regarding the sustainable management of landscapes and natural resources. In this context, the present study aimed to identify, prioritize and map the primary ES in the Biobio and Nuble regions using a methodology that combines expert judgment, multi-attribute evaluation methods and Geographic Information Systems (GIS). Firstly, scores for the capacity of different land use/cover types to supply ES, and the importance attributed to each service, were obtained from experts and stakeholders via an online survey. Afterward, the ES assessment matrix was constructed, and the weighted linear combination (WLC) method was applied to map the overall supply capacity for provisioning, regulating and maintenance, and cultural services. Finally, prioritized ES for the study area were selected and mapped. The results suggest that native forests, wetlands and water bodies have the highest ES supply capacities, while urban and industrial areas and bare areas have a very low supply of services. On the other hand, fourteen out of twenty-nine services were selected by experts and stakeholders as the most relevant for the regions. The spatial distribution of ES showed that the Andean Range and part of the Coastal Range have the highest ES supply capacity, mostly for regulation and maintenance and cultural ES. This performance is related to the presence of native forests, water bodies and wetlands in those zones.
This study provides specific information about the most relevant ES in Biobio and Nuble according to the opinion of local stakeholders, together with the spatial identification of areas with a high capacity to provide services. These findings could serve as a reference for planners and policymakers in developing landscape management strategies oriented toward preserving the supply of services in both regions.
Keywords: ecosystem services, expert judgment, mapping, multi-criteria decision making, prioritization
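The WLC aggregation step described above can be sketched as follows. The capacity scores and importance weights are invented for illustration; they are not values from the study's survey, and the real analysis applies this combination per map cell in a GIS.

```python
# Weighted linear combination (WLC): expert capacity scores for one
# land-cover class are combined with importance weights (normalized to
# sum to 1) to give an overall ES supply score for that class.

def wlc(capacities, weights):
    """Weighted linear combination with weights normalized to sum to 1."""
    total = sum(weights)
    return sum(c * w / total for c, w in zip(capacities, weights))

# Hypothetical capacity (0-5 scale) of one land-cover class (say,
# native forest) to supply three service groups, with stakeholder
# importance weights:
capacities = [5, 4, 3]  # regulation/maintenance, provisioning, cultural
weights = [3, 1, 1]     # relative importance from the survey

print(round(wlc(capacities, weights), 2))  # 4.4
```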
Procedia PDF Downloads 126
1233 Virtual Metrology for Copper Clad Laminate Manufacturing
Authors: Misuk Kim, Seokho Kang, Jehyuk Lee, Hyunchang Cho, Sungzoon Cho
Abstract:
In semiconductor manufacturing, virtual metrology (VM) refers to methods for predicting the properties of a wafer from machine parameters and sensor data of the production equipment, without performing the (costly) physical measurement of those properties (Wikipedia). Additional benefits include the avoidance of human bias and the identification of important factors affecting process quality, which allow the process to be improved in the future. It is, however, rare to find VM applied to other areas of manufacturing. In this work, we propose to apply VM to copper clad laminate (CCL) manufacturing. CCL is a core element of the printed circuit boards (PCBs) used in smartphones, tablets, digital cameras and laptop computers. The manufacturing of CCL consists of three processes: treating, lay-up and pressing. Treating, the most important of the three, applies resin to glass cloth, heats it in a drying oven, and produces the prepreg for the lay-up process. In this process, three important quality factors are inspected: treated weight (T/W), minimum viscosity (M/V) and gel time (G/T). They are inspected manually, incurring heavy costs in time and money, which makes the process a good candidate for VM. We developed prediction models for the three quality factors T/W, M/V and G/T from process, raw material and environment variables. The actual process data was obtained from a CCL manufacturer. A variety of variable selection methods and learning algorithms were employed to find the best prediction model. We obtained prediction models of M/V and G/T with high enough accuracy. They also provided information on “important” predictor variables, some of which the process engineers had already been aware of and the rest of which they had not. The engineers were excited by the new insights the models revealed and set out to analyze them further for process control implications.
T/W turned out not to be predictable with reasonable accuracy from the given factors. This very fact indicates that the factors currently monitored may not affect T/W, so an effort has to be made to find other, currently unmonitored factors in order to understand the process better and improve its quality. In conclusion, the VM application to CCL's treating process was quite successful. The newly built quality prediction models allow one to reduce the cost associated with actual metrology, as well as revealing insights into the factors affecting the important quality factors and into the limits of our current understanding of the treating process.
Keywords: copper clad laminate, predictive modeling, quality control, virtual metrology
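The VM workflow (fit a model on process variables, then judge it by prediction error) can be sketched on toy data. A simple one-variable least-squares fit stands in here for the study's unnamed learning algorithms, and both the variable names and the numbers are synthetic.

```python
# Toy virtual-metrology sketch: fit a least-squares line predicting a
# quality factor from one process variable, then evaluate with RMSE.
import math

def fit_ols(xs, ys):
    """Slope and intercept of a simple least-squares line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def rmse(ys, preds):
    """Root mean square error between actual and predicted values."""
    return math.sqrt(sum((y - p) ** 2 for y, p in zip(ys, preds)) / len(ys))

# Synthetic "oven temperature" vs. "gel time" data (not real process data):
temp = [150, 160, 170, 180, 190]
gel_time = [310, 298, 292, 281, 270]

a, b = fit_ols(temp, gel_time)
preds = [a * t + b for t in temp]
print(f"RMSE: {rmse(gel_time, preds):.2f} s")
```

In the study itself, of course, many variables and candidate models are compared; the point here is only the fit-then-score loop.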
Procedia PDF Downloads 350
1232 Decision-Making in Higher Education: Case Studies Demonstrating the Value of Institutional Effectiveness Tools
Authors: Carolinda Douglass
Abstract:
Institutional effectiveness (IE) is the purposeful integration of functions that foster student success and support institutional performance. IE is growing rapidly within higher education, as it is increasingly viewed by administrators as a beneficial approach for promoting data-informed decision-making in campus-wide strategic planning and the execution of strategic initiatives. Specific IE tools, including, but not limited to, project management; impactful collaboration and communication; commitment to continuous quality improvement; and accountability through rigorous evaluation, are gaining momentum under the auspices of IE. This research uses a case study approach to examine the use of these IE tools, highlight successes of their use, and identify areas for improvement in their implementation within higher education. The research includes three case studies: (1) improving academic program review processes, including the assessment of student learning outcomes as a core component of program quality; (2) revising an institutional vision, mission and core values; and (3) successfully navigating an institution-wide re-accreditation process. Several methods of data collection are embedded within the case studies, including surveys, focus groups, interviews and document analyses, with higher education administrators, faculty and staff as subjects. Key findings include areas of success and areas for improvement in the use of IE tools, both within specific case studies and aggregated across them. For example, the use of project management proved useful in all of the case studies, while rigorous evaluation did not uniformly provide the added value that higher education decision-makers expected.
The use of multiple IE tools was shown to be consistently useful in decision-making when applied with appropriate awareness of, and sensitivity to, core institutional culture (for example, institutional mission, local environments and communities, disciplinary distinctions, and labor relations). As IE gains a stronger foothold in higher education, leaders can make judicious use of IE tools to promote better decision-making and secure improved outcomes of strategic planning and the execution of strategic initiatives.
Keywords: accreditation, data-informed decision-making, higher education management, institutional effectiveness tools, institutional mission, program review, strategic planning
Procedia PDF Downloads 116
1231 Mesocarbon Microbeads Modification of Stainless-Steel Current Collector to Stabilize Lithium Deposition and Improve the Electrochemical Performance of Anode Solid-State Lithium Hybrid Battery
Authors: Abebe Taye
Abstract:
The interest in enhancing the performance of all-solid-state batteries featuring lithium metal anodes, a potential alternative to traditional lithium-ion batteries, has prompted the exploration of new avenues. A promising strategy is to transform lithium-ion batteries into hybrid configurations by integrating lithium-ion and lithium-metal solid-state components. This study focuses on achieving stable lithium deposition and advancing the electrochemical capabilities of anode solid-state lithium hybrid batteries by incorporating mesocarbon microbeads (MCMBs) blended with silver nanoparticles. To this end, MCMBs blended with silver nanoparticles are coated onto stainless-steel current collectors. The samples are analyzed with diverse techniques: surface morphology is studied by scanning electron microscopy (SEM); the electrochemical behavior of the coated samples is evaluated in both half-cell and full-cell setups using an argyrodite-type sulfide electrolyte; the stability of the MCMBs in the electrolyte is assessed by electrochemical impedance spectroscopy (EIS); and additional compositional insights are obtained by X-ray photoelectron spectroscopy (XPS), Raman spectroscopy and energy-dispersive X-ray spectroscopy (EDS). At an ultra-low N/P ratio of 0.26, stability is upheld for over 100 charge/discharge cycles in half-cells. In a full-cell configuration, the hybrid anode preserves 60.1% of its capacity after 80 cycles at 0.3 C under a low N/P ratio of 0.45. In sharp contrast, the capacity retention of the cell using untreated MCMBs declines to 20.2% after a mere 60 cycles. The introduction of MCMBs combined with silver nanoparticles into the hybrid anode of solid-state lithium batteries thus substantially elevates their stability and electrochemical performance.
This approach ensures consistent lithium deposition and removal, mitigating dendrite growth and the accumulation of inactive lithium. The findings of this investigation are valuable for improving the reversibility and energy density of lithium-ion batteries, thereby contributing to the advancement of more efficient energy storage systems.
Keywords: MCMB, lithium metal, hybrid anode, silver nanoparticle, cycling stability
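For clarity, capacity retention as quoted above is simply the discharge capacity at cycle n relative to the initial capacity. The sketch below uses invented capacities (in mAh/g) chosen only so that the ratios reproduce the reported 60.1% and 20.2% figures; the study does not state the absolute capacities.

```python
# Capacity retention: capacity at cycle n as a percentage of the
# initial capacity. Absolute capacities here are hypothetical.

def retention(cap_initial, cap_at_n):
    return 100.0 * cap_at_n / cap_initial

# Ag-blended MCMB hybrid anode after 80 cycles vs. untreated MCMB
# after 60 cycles (illustrative capacities):
print(round(retention(160.0, 96.16), 1))  # 60.1
print(round(retention(160.0, 32.32), 1))  # 20.2
```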
Procedia PDF Downloads 75
1230 Expression of miRNA 335 in Gall Bladder Cancer: A Correlative Study
Authors: Naseem Fatima, A. N. Srivastava, Tasleem Raza, Vijay Kumar
Abstract:
Introduction: Carcinoma of the gallbladder is the third most common lethal gastrointestinal disease, with the highest incidence and mortality rates among women in Northern India. Scientists have identified several risk factors that make a person more likely to develop gallbladder cancer; among these, deregulation of miRNAs has been demonstrated to be one of the most crucial. Changes in the expression of specific miRNA genes affect the control of inflammation, cell cycle regulation, stress response, proliferation, differentiation, apoptosis and invasion, thus mediating tumorigenesis. The aim of this study was to investigate the role of miRNA-335 and its potential as a molecular marker for the early detection of gallbladder cancer in suspected cases. Material and Methods: A total of 20 consecutive patients with gallbladder cancer, aged 30-75 years, were registered for the study. Total RNA was extracted from tissue using the mirVana miRNA Isolation Kit according to the manufacturer's protocol. miRNA-335- and U6 snRNA-specific cDNA were reverse-transcribed from total RNA using the TaqMan MicroRNA Reverse Transcription Kit according to the manufacturer's protocol. Using TaqMan miRNA probes (hsa-miR-335) and TaqMan Master Mix without AmpErase UNG, individual real-time PCR assays were performed in a 20 μL reaction volume on a real-time PCR system (Applied Biosystems StepOnePlus™) to detect miRNA-335 expression in tissue. Relative quantification of target miRNA expression was evaluated using the comparative cycle threshold (Ct) method: Ct values of the target miRNA in gallbladder cancer tissue were compared against those of non-cancerous cholelithiasis gallbladder tissue. Each sample was examined in triplicate. The Newman-Keuls multiple comparison test was used to assess the expression of miR-335.
Results: miRNA-335 was found to be significantly downregulated in gallbladder cancer tissue (P<0.001) compared with non-cancerous cholelithiasis gallbladder cases. Of the 20 cases, 75% showed reduced miRNA-335 expression and were at the last stage of the disease with a low overall survival rate, while the remaining 25% showed up-regulated miRNA-335 expression and a high survival rate. Conclusion: The present study showed that reduced expression of miRNA-335 is associated with the advancement of the disease; its deregulation may provide important clues to understanding it as a prognostic marker and opportunities for future research.
Keywords: carcinoma gallbladder, downregulation, MiRNA-335, RT-PCR assay
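The comparative cycle-threshold (2^-ΔΔCt) method used above can be sketched numerically: the target Ct is first normalized to the U6 reference within each sample, then compared against the control tissue. The Ct values below are hypothetical, chosen to illustrate a downregulated case; they are not the study's measurements.

```python
# Comparative Ct (2^-ddCt) relative-expression calculation for a
# target miRNA against a reference gene (here, U6 snRNA).

def relative_expression(ct_target_sample, ct_ref_sample,
                        ct_target_control, ct_ref_control):
    d_ct_sample = ct_target_sample - ct_ref_sample    # normalize to U6
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_sample - d_ct_control                # relative to control
    return 2.0 ** (-dd_ct)

# Tumour vs. cholelithiasis control (hypothetical Ct values):
fold = relative_expression(30.0, 20.0, 26.0, 20.0)
print(fold)  # 0.0625, i.e. a 16-fold downregulation
```

A fold change below 1 indicates downregulation in the tumour relative to control, as reported for most of the cases in this study.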
Procedia PDF Downloads 360
1229 Management of Nutrition Education in Spa Resorts in Poland
Authors: Joanna Wozniak-Holecka, Sylwia Jaruga-Sekowska
Abstract:
There are 45 statutory spa and treatment areas in Poland, and the demand for spa treatment services increases year by year. Within each type of spa treatment facility, nutritional education services are provided. During spa treatment, patients learn the principles of rational nutrition and of the applied diet therapy, which should help them develop proper eating habits that they will also follow at home. However, the nutrition education system for spa resort patients must be considered very imperfect and in need of systemic correction. At the same time, it has a wide human and infrastructure base, which should reinforce the activities undertaken and their management; unfortunately, this advantage is not fully used. The aim of the project was to assess the quality of the nutritional education provided and the diet of patients in spa treatment entities from a nationwide perspective. The material for the study comprised data obtained through in-depth interviews with nutrition department managers (25 interviews) and a survey addressed to patients (600 questionnaires) of a selected group of spa resorts from across the country, concerning the implementation of nutritional education in the institutions. In addition, ten-day ("decade") menus for the basic diet, the easily digestible diet and the diet with limited easily digestible carbohydrates (a total of 1,120 menus) were obtained for the study. Almost three-quarters of respondents (73.2%) were overweight or obese, but only 32.8% opted for an easily digestible or low-energy diet during the treatment. Most of the surveyed patients rated the nutrition in spa resorts as satisfactory. Nutrition education classes were carried out mainly by a dietitian (65% of meetings); the other educators were doctors and nurses. The meetings were mostly (95%) group sessions and lasted only 30 minutes on average.
The classes covered the principles of proper nutrition and meal composition, the nutrition pyramid, and diets adapted to particular diseases. The assessed menus did not meet nutrition standards and therefore did not provide patients with the correct quality of nutrition: the norms for protein, fat, vitamin A, vitamin B12, phosphorus, iron and sodium were exceeded, while vitamin D, folic acid, magnesium and zinc fell short of recommendations. The study leads to the conclusion that there is a large discrepancy between the recommendations presented during the nutrition education classes and the quality of the diet provided in the examined institutions. The project may contribute to the development of effective tools for nutrition education, especially for specific groups of chronically ill patients.
Keywords: diet, management, nutritional education, spa resort
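The menu assessment above (nutrients exceeding or falling short of norms) amounts to a percent-of-norm comparison, sketched below. The intake and norm values are invented placeholders, not the study's analysed figures, and the 90-110% tolerance band is an illustrative assumption.

```python
# Percent-of-norm check for analysed menu content: flag nutrients
# whose supply falls outside an (assumed) 90-110% tolerance band.

def percent_of_norm(intake, norm):
    return 100.0 * intake / norm

# Hypothetical (intake, norm) pairs for one menu:
menu = {"protein_g": (95, 75),      # exceeded, like protein in the study
        "vitamin_D_ug": (6, 15)}    # deficient, like vitamin D

for nutrient, (intake, norm) in menu.items():
    pct = percent_of_norm(intake, norm)
    status = "exceeded" if pct > 110 else "deficient" if pct < 90 else "ok"
    print(f"{nutrient}: {pct:.0f}% of norm ({status})")
```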
Procedia PDF Downloads 144