Search results for: food system model
1360 CO₂ Conversion by Low-Temperature Fischer-Tropsch
Authors: Pauline Bredy, Yves Schuurman, David Farrusseng
Abstract:
To fulfill climate objectives, the production of synthetic e-fuels using CO₂ as a raw material appears to be part of the solution. In particular, the Power-to-Liquid (PtL) concept, which combines CO₂ with hydrogen supplied by water electrolysis powered by renewable sources, is currently gaining interest as it allows the production of sustainable, fossil-free liquid fuels. The process discussed here is an upgrade of the well-known Fischer-Tropsch synthesis. The concept involves two cascade reactions in one pot: first the conversion of CO₂ into CO via the reverse water gas shift (RWGS) reaction, followed by the Fischer-Tropsch synthesis (FTS). Instead of using a Fe-based catalyst, which can carry out both reactions, we chose to decouple the two functions (RWGS and FT) on two different catalysts within the same reactor. The FTS shifts the equilibrium of the RWGS reaction (which alone would be limited to 15-20% conversion at 250°C) by converting the CO into hydrocarbons. This strategy enables optimization of the catalyst pair, and the equilibrium shift allows the reaction temperature to be lowered to gain selectivity in the liquid fraction. The challenge lies in maximizing the activity of the RWGS catalyst while keeping the FT catalyst highly selective. Methane production is the main concern, as the energy barrier of CH₄ formation is generally lower than that of the RWGS reaction, so the goal is to minimize methane selectivity. Here we report the study of different combinations of copper-based RWGS catalysts with different cobalt-based FTS catalysts. We investigated their behavior under mild process conditions using high-throughput experimentation. Our results show that at 250°C and 20 bar, cobalt catalysts mainly act as methanation catalysts.
Indeed, CH₄ selectivity never drops below 80% despite the addition of various promoters (Nb, K, Pt, Cu) to the catalyst and its coupling with active RWGS catalysts. However, we show that the activity of the RWGS catalyst has an impact and can lead to selectivities toward longer hydrocarbon chains (C₂⁺) of about 10%. We studied the influence of the reduction temperature on the activity and selectivity of the tandem catalyst system. Similar selectivity and conversion were obtained at reduction temperatures between 250-400°C. This raises the question of the active phase of the cobalt catalysts, which is currently being investigated by magnetic measurements and DRIFTS. Coupling with a more selective FT catalyst is therefore expected to give better results, and this was achieved using a cobalt/iron FTS catalyst: the CH₄ selectivity dropped to 62% at 265°C, 20 bar, and a GHSV of 2500 ml/h/gcat. We propose that the conditions used for the cobalt catalysts could have favored methanation, because these catalysts are known to perform best around 210°C in classical FTS, whereas iron catalysts are more flexible but are also known to have RWGS activity.
Keywords: cobalt-copper catalytic systems, CO₂-hydrogenation, Fischer-Tropsch synthesis, hydrocarbons, low-temperature process
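The equilibrium limitation quoted in the abstract (15-20% CO₂ conversion at 250°C without a CO sink) can be reproduced with a rough back-of-the-envelope calculation. This is a sketch only: the linear Gibbs-energy correlation (ΔG° ≈ 41000 − 42·T J/mol) and the H₂:CO₂ feed ratio of 3 are assumptions for illustration, not values from the abstract.

```python
import math

def rwgs_equilibrium_conversion(T_K, h2_co2_ratio=3.0):
    """Equilibrium CO2 conversion for the reverse water-gas shift,
    CO2 + H2 <-> CO + H2O, with no downstream CO sink."""
    R = 8.314
    dG = 41000.0 - 42.0 * T_K          # J/mol, rough linear correlation (assumption)
    K = math.exp(-dG / (R * T_K))
    r = h2_co2_ratio
    # K = x^2 / ((1 - x)(r - x))  =>  (1 - K) x^2 + K (1 + r) x - K r = 0
    a, b, c = 1.0 - K, K * (1.0 + r), -K * r
    return (-b + math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
```

With these assumed values the conversion at 523 K lands inside the 15-20% window the abstract cites, and it rises with temperature, which is why a CO-consuming FT step is needed to work at low temperature.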
Procedia PDF Downloads 58
1359 The Regulation of Alternative Dispute Resolution Institutions in Consumer Redress and Enforcement: A South African Perspective
Authors: Jacolien Barnard, Corlia Van Heerden
Abstract:
Effective and accessible consensual dispute resolution, and in particular alternative dispute resolution, is central to consumer protection legislation. In this regard, the Consumer Protection Act 68 of 2008 (CPA) of South Africa is no exception. Due to the nature of consumer disputes, alternative dispute resolution is (in theory) an effective vehicle for the adjudication of disputes in a timely manner, avoiding overburdening of the courts. The CPA sets down as one of its core purposes the provision of ‘an accessible, consistent, harmonized, effective and efficient system of redress for consumers’ (section 3(1)(h) of the CPA). Section 69 of the Act provides for the enforcement of consumer rights and establishes the National Consumer Commission as the central authority which streamlines, adjudicates and channels disputes to the appropriate forums, which include alternative dispute resolution agents (ADR agents). The purpose of this paper is to analyze the regulation of these enforcement and redress mechanisms, with particular focus on the central authority as well as the ADR agents and their crucial role in the successful and efficient adjudication of disputes in South Africa. The South African position will be discussed comparatively with the position in the European Union (EU). In this regard, the EU Directive on Alternative Dispute Resolution for Consumer Disputes (2013/11/EU) (the ADR Directive) will be discussed. The aim of the ADR Directive is to resolve contractual disputes between consumers and traders (suppliers or businesses) regardless of whether the agreement was concluded offline or online, or whether the trader is situated in another member state (Recitals 4-6). The ADR Directive provides a set of quality requirements that an ADR body or entity tasked with resolving consumer disputes in a member state should adhere to, including regulatory mechanisms for control.
Transparency, effectiveness, fairness, liberty and legality are all requirements for a successful ADR body and are discussed within Chapter II of the Directive. Chapters III and IV govern the importance of information and co-operation. This includes information exchanged between ADR bodies and the European Commission (EC), but also between ADR bodies or entities and the national authorities enforcing legal acts on consumer protection, and traders. (In South Africa, the National Consumer Tribunal, provincial consumer protectors and industry ombuds come to mind.) All of these have a responsibility to keep consumers informed. Ultimately, the paper aims to provide recommendations on the success of the current South African position in light of the comparative position in Europe, and to highlight the importance of proper regulation of these redress and enforcement institutions.
Keywords: alternative dispute resolution, consumer protection law, enforcement, redress
Procedia PDF Downloads 233
1358 An Effort at Improving Reliability of Laboratory Data in Titrimetric Analysis for Zinc Sulphate Tablets Using Validated Spreadsheet Calculators
Authors: M. A. Okezue, K. L. Clase, S. R. Byrn
Abstract:
The requirement for maintaining data integrity in laboratory operations is critical for regulatory compliance. Automation of procedures reduces the incidence of human error. Quality control laboratories located in low-income economies may face some barriers in attempts to automate their processes. Since data from quality control tests on pharmaceutical products are used in making regulatory decisions, it is important that laboratory reports are accurate and reliable. Zinc Sulphate (ZnSO4) tablets are used in the treatment of diarrhea in the pediatric population, and as an adjunct therapy in COVID-19 regimens. Unfortunately, zinc content in these formulations is determined titrimetrically, a manual analytical procedure. The assay for ZnSO4 tablets involves time-consuming steps that contain mathematical formulae prone to calculation errors. To achieve consistency, save costs, and improve data integrity, validated spreadsheets were developed to simplify the two critical steps in the analysis of ZnSO4 tablets: standardization of the 0.1M sodium edetate (EDTA) solution, and the complexometric titration assay procedure. The assay method in the United States Pharmacopoeia was used to create a process flow for ZnSO4 tablets. For each step in the process, different formulae were input into two spreadsheets to automate calculations. Further checks were created within the automated system to ensure the validity of replicate analyses in titrimetric procedures. Validations were conducted using five data sets of manually computed assay results. The acceptance criteria set for the protocol were met. Significant p-values (p < 0.05, α = 0.05, at a 95% confidence interval) were obtained from Student’s t-test evaluation of the mean values for manually calculated and spreadsheet results at all levels of the analysis flow. Right-first-time analysis and principles of data integrity were enhanced by the use of the validated spreadsheet calculators in titrimetric evaluations of ZnSO4 tablets.
Human errors in calculations were minimized when procedures were automated in quality control laboratories. The assay procedure for the formulation was completed in a time-efficient manner with a greater level of accuracy. This project is expected to promote cost savings for laboratory business models.
Keywords: data integrity, spreadsheets, titrimetry, validation, zinc sulphate tablets
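The two automated steps, EDTA standardization and the zinc assay back-calculation, are simple enough to sketch in code. The functions below only illustrate the kind of formulae such a spreadsheet encodes; the sample-handling scheme, the 2% RSD limit and the numbers in the usage note are hypothetical, not the verbatim USP procedure.

```python
import statistics

ZN_MW = 65.38  # g/mol, atomic weight of zinc

def edta_molarity(zn_mass_g, v_edta_ml):
    """Standardization: EDTA forms a 1:1 complex with Zn2+,
    so molarity = moles of zinc standard / litres of EDTA consumed."""
    return (zn_mass_g / ZN_MW) / (v_edta_ml / 1000.0)

def zn_per_tablet_mg(v_edta_ml, m_edta, avg_tablet_g, sample_g):
    """Assay back-calculation (hypothetical scheme): zinc found in the
    titrated sample, scaled up to the average tablet weight."""
    moles_zn = m_edta * v_edta_ml / 1000.0
    return moles_zn * ZN_MW * 1000.0 * (avg_tablet_g / sample_g)

def replicates_valid(values, max_rsd_pct=2.0):
    """Replicate-validity check: relative standard deviation must not
    exceed an assumed threshold (here 2%)."""
    rsd = 100.0 * statistics.stdev(values) / statistics.mean(values)
    return rsd <= max_rsd_pct
```

For example, 0.2 g of zinc consuming 30.59 mL of EDTA standardizes the solution at very close to 0.1 M, which is the sort of cross-check an automated spreadsheet can flag instantly.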
Procedia PDF Downloads 169
1357 Network Analysis to Reveal Microbial Community Dynamics in the Coral Reef Ocean
Authors: Keigo Ide, Toru Maruyama, Michihiro Ito, Hiroyuki Fujimura, Yoshikatu Nakano, Shoichiro Suda, Sachiyo Aburatani, Haruko Takeyama
Abstract:
Understanding environmental systems is an important task. In recent years, the conservation of coral environments has been a focus of biodiversity research. Damage to coral reefs under environmental impacts has been observed worldwide. However, the causal relationship between coral damage and environmental impacts has not been clearly understood. On the other hand, the structure and diversity of marine bacterial communities may remain relatively robust up to a certain strength of environmental impact. To evaluate coral environment conditions, it is necessary to investigate the relationship between the marine bacterial composition in coral reefs and environmental factors. In this study, a time-scale network analysis was developed and applied to marine environmental data to investigate the relationships among coral, bacterial community compositions and environmental factors. Seawater samples were collected fifteen times from November 2014 to May 2016 at two locations, Ishikawabaru and South of Sesoko, on Sesoko Island, Okinawa. Physicochemical factors such as temperature, photosynthetically active radiation, dissolved oxygen, turbidity, pH, salinity, chlorophyll, dissolved organic matter and depth were measured in the coral reef area. The metagenome and metatranscriptome in coral reef seawater were analyzed as the biological factors. Metagenome data were used to clarify the marine bacterial community composition, and the functional gene composition was estimated from the metatranscriptome. To infer the relationships between physicochemical and biological factors, cross-correlation analysis was applied to the time-scale data. Although cross-correlation coefficients capture time-precedence information, they also include indirect interactions between variables. To elucidate the direct regulations between factors, partial correlation coefficients were combined with cross-correlation.
This analysis was performed over all parameters: the bacterial composition, the functional gene composition and the physicochemical factors. As a result, the time-scale network analysis revealed the direct regulation of seawater temperature by photosynthetically active radiation. In addition, the concentration of dissolved oxygen regulated the chlorophyll value. These plausible regulatory relationships between environmental factors reveal part of the mechanisms at work in the coral reef area.
Keywords: coral environment, marine microbiology, network analysis, omics data analysis
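The combination described above, lagged cross-correlation for time precedence plus partial correlation to strip out indirect paths, can be sketched briefly. This is an illustrative reimplementation, not the authors' pipeline; the linear-regression residual approach to partialling is an assumption.

```python
import numpy as np

def lagged_corr(x, y, lag):
    """Pearson correlation between x(t) and y(t + lag)."""
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    elif lag < 0:
        x, y = x[-lag:], y[:lag]
    return np.corrcoef(x, y)[0, 1]

def partial_corr(x, y, z):
    """Correlation of x and y after regressing out z, which removes
    the indirect path x <- z -> y."""
    rx = x - np.polyval(np.polyfit(z, x, 1), z)
    ry = y - np.polyval(np.polyfit(z, y, 1), z)
    return np.corrcoef(rx, ry)[0, 1]
```

On synthetic data where two series are both driven by a common factor, their raw correlation is high while their partial correlation given that factor is near zero, which is exactly the distinction the abstract uses to keep only direct regulations in the network.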
Procedia PDF Downloads 254
1356 Genetics of Pharmacokinetic Drug-Drug Interactions of Most Commonly Used Drug Combinations in the UK: Uncovering Unrecognised Associations
Authors: Mustafa Malki, Ewan R. Pearson
Abstract:
Tools utilized by health care practitioners to flag potential adverse drug reactions secondary to drug-drug interactions ignore individual genetic variation, which has the potential to markedly alter the severity of these interactions. To the best of our knowledge, there have been few published studies on the impact of genetic variation on drug-drug interactions. Therefore, our aim in this project is the discovery of previously unrecognized, clinically important drug-drug-gene interactions (DDGIs) within the list of most commonly used drug combinations in the UK. The UK Biobank (UKBB) database was utilized to identify the most frequently prescribed drug combinations in the UK with at least one route of interaction (more than 200 combinations were identified). We identified 37 unique interacting genes across all of our drug combinations. Out of around 600 potential genetic variants found in these 37 genes, 100 variants met the selection criteria (common variants with minor allele frequency ≥ 5%, independence, and passing the Hardy-Weinberg equilibrium test). The association between these variants and the use of each of our top drug combinations was tested with a case-control analysis under the log-additive model. As the data are cross-sectional, drug intolerance was inferred from the genotype distribution, indicated by a lower percentage of patients carrying the risk allele among those on the drug combination compared to those free of these risk factors, and vice versa for drug tolerance. In the GoDARTs database, the same list of common drug combinations identified in the UKBB was utilized, with the same list of candidate genetic variants plus 14 new SNPs, giving a total of 114 variants that met the selection criteria in GoDARTs. From the list of the top 200 drug combinations, we selected 28 combinations in which the two drugs are known to be used chronically.
For each of our 28 combinations, three drug response phenotypes were identified (drug stop/switch, dose decrease, or dose increase of either of the two drugs during their interaction). The association between each of the three phenotypes belonging to each of our 28 drug combinations was tested against our 114 candidate genetic variants. The results show replication of four findings between both databases (p-values and ORs for the UKBB and GoDARTs, respectively): (1) omeprazole + amitriptyline + rs2246709 (A > G) variant in the CYP3A4 gene (0.048, 0.037, 0.92, and 0.52; dose increase phenotype); (2) simvastatin + ranitidine + rs9332197 (T > C) variant in the CYP2C9 gene (0.024, 0.032, 0.81, and 5.75; drug stop/switch phenotype); (3) atorvastatin + doxazosin + rs9282564 (T > C) variant in the ABCB1 gene (0.0015, 0.0095, 1.58, and 3.14; drug stop/switch phenotype); (4) simvastatin + nifedipine + rs2257401 (C > G) variant in the CYP3A7 gene (0.025, 0.019, 0.77, and 0.30; drug stop/switch phenotype). In addition, some other non-replicated but interesting significant findings were detected. Our work also provides a valuable source of information for researchers interested in drug-drug (DD), drug-gene (DG), or drug-drug-gene (DDG) interaction studies, as it highlights the top common drug combinations in the UK and recognizes 114 genetic variants related to drug pharmacokinetics.
Keywords: adverse drug reactions, common drug combinations, drug-drug-gene interactions, pharmacogenomics
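Case-control tests under a log-additive model are typically run as logistic regressions on genotype dosage (0/1/2 copies of the risk allele); a rough allele-counting approximation of the per-allele odds ratio can be sketched as follows. The genotype counts in the test and the Woolf confidence-interval formula are illustrative assumptions, not data from the study.

```python
import math

def per_allele_or(case_counts, control_counts):
    """Allele-based odds ratio for a biallelic SNP under additive coding.
    Each counts tuple is (hom_ref, het, hom_alt) genotype counts."""
    a = 2 * case_counts[2] + case_counts[1]        # alt alleles in cases
    b = 2 * case_counts[0] + case_counts[1]        # ref alleles in cases
    c = 2 * control_counts[2] + control_counts[1]  # alt alleles in controls
    d = 2 * control_counts[0] + control_counts[1]  # ref alleles in controls
    odds_ratio = (a * d) / (b * c)
    # Woolf's approximate 95% CI on the log-odds scale
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(odds_ratio) - 1.96 * se)
    hi = math.exp(math.log(odds_ratio) + 1.96 * se)
    return odds_ratio, (lo, hi)
```

An OR above 1 marks the allele as enriched in cases, mirroring the abstract's reading of "risk allele" frequency among patients on a drug combination versus those free of it.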
Procedia PDF Downloads 163
1355 Biocultural Biographies and Molecular Memories: A Study of Neuroepigenetics and How Trauma Gets under the Skull
Authors: Elsher Lawson-Boyd
Abstract:
In the wake of the Human Genome Project, the life sciences have undergone some fascinating changes. In particular, conventional beliefs about gene expression are being challenged by advances in the postgenomic sciences, especially by the field of epigenetics. Epigenetics is the modification of gene expression without changes in the DNA sequence. In other words, epigenetics dictates that gene expression, the process by which the instructions in DNA are converted into products like proteins, is not solely controlled by DNA itself. Unlike the gene-centric theories of heredity that characterized much of the 20th century (where genes were considered to have almost god-like power to create life), epigenetics insists on the role of environmental ‘signals’ or ‘exposures’ in gene expression, a point that radically deviates from gene-centric thinking. Science and Technology Studies (STS) scholars have shown that epigenetic research is having vast implications for the ways in which chronic, non-communicable diseases are conceptualized, treated, and governed. However, to the author’s knowledge, there have not yet been any in-depth sociological engagements with neuroepigenetics that examine how the field is affecting mental health and trauma discourse. In this paper, the author discusses preliminary findings from a doctoral ethnographic study of neuroepigenetics, trauma, and embodiment. Specifically, this study investigates the kinds of causal relations neuroepigenetic researchers are making between experiences of trauma and the development of mental illnesses like complex post-traumatic stress disorder (PTSD), both within a human lifetime and across generations. Using qualitative interviews and nonparticipant observation, the author focuses on two public-facing research centers based in Melbourne: the Florey Institute of Neuroscience and Mental Health (FNMH) and the Murdoch Children’s Research Institute (MCRI).
Preliminary findings indicate that a great deal of ambiguity characterizes this nascent field, particularly when animal-model experiments are employed and the results are translated into human frameworks. Nevertheless, researchers at the FNMH and MCRI strongly suggest that adverse and traumatic life events have a significant effect on gene expression, especially when experienced during early development. Furthermore, they predict that neuroepigenetic research will have substantial implications for the ways in which mental illnesses like complex PTSD are diagnosed and treated. These preliminary findings shed light on why medical and health sociologists have good reason to be chiming in, engaging with and de-black-boxing ideations emerging from the postgenomic sciences, as they may have significant effects for vulnerable populations not only in Australia but also in developing countries of the Global South.
Keywords: genetics, mental illness, neuroepigenetics, trauma
Procedia PDF Downloads 125
1354 An Adjoint-Based Method to Compute Derivatives with Respect to Bed Boundary Positions in Resistivity Measurements
Authors: Mostafa Shahriari, Theophile Chaumont-Frelet, David Pardo
Abstract:
Resistivity measurements are used to characterize the Earth’s subsurface. They are categorized into two different groups: (a) those acquired on the Earth’s surface, for instance, controlled source electromagnetic (CSEM) and Magnetotellurics (MT), and (b) those recorded with borehole logging instruments such as Logging-While-Drilling (LWD) devices. LWD instruments are mostly used for geo-steering purposes, i.e., to adjust dip and azimuthal angles of a well trajectory to drill along a particular geological target. Modern LWD tools measure all nine components of the magnetic field corresponding to three orthogonal transmitter and receiver orientations. In order to map the Earth’s subsurface and perform geo-steering, we invert measurements using a gradient-based method that utilizes the derivatives of the recorded measurements with respect to the inversion variables. For resistivity measurements, these inversion variables are usually the constant resistivity value of each layer and the bed boundary positions. It is well-known how to compute derivatives with respect to the constant resistivity value of each layer using semi-analytic or numerical methods. However, similar formulas for computing the derivatives with respect to bed boundary positions are unavailable. The main contribution of this work is to provide an adjoint-based formulation for computing derivatives with respect to the bed boundary positions. The key idea to obtain the aforementioned adjoint state formulations for the derivatives is to separate the tangential and normal components of the field and treat them differently. This formulation allows us to compute the derivatives faster and more accurately than with traditional finite differences approximations. In the presentation, we shall first derive a formula for computing the derivatives with respect to the bed boundary positions for the potential equation. Then, we shall extend our formulation to 3D Maxwell’s equations. 
Finally, by considering a 1D domain and reducing the dimensionality of the problem, which is a common practice in the inversion of resistivity measurements, we shall derive a formulation to compute the derivatives of the measurements with respect to the bed boundary positions using a 1.5D variational formulation. Then, we shall illustrate the accuracy and convergence properties of our formulations by comparing numerical results with the analytical derivatives for the potential equation. For the 1.5D Maxwell’s system, we shall compare our numerical results based on the proposed adjoint-based formulation versus those obtained with a traditional finite difference approach. Numerical results shall show that our proposed adjoint-based technique produces enhanced accuracy solutions while its cost is negligible, as opposed to the finite difference approach that requires the solution of one additional problem per derivative.
Keywords: inverse problem, bed boundary positions, electromagnetism, potential equation
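The adjoint trick the abstract relies on, one extra linear solve instead of one extra forward solve per parameter, can be illustrated on a toy discrete system in place of the potential or Maxwell equations. Everything here (the matrix size, the linear dependence of the operator A on the parameter p, the output functional J = cᵀu) is an assumption made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
A0 = rng.normal(size=(n, n)) + n * np.eye(n)  # base operator, diagonally boosted
A1 = rng.normal(size=(n, n))                  # dA/dp: A depends linearly on p
f = rng.normal(size=n)                        # source term
c = rng.normal(size=n)                        # measurement functional J(p) = c . u(p)

def J(p):
    """Forward model: solve A(p) u = f, then measure c . u."""
    return c @ np.linalg.solve(A0 + p * A1, f)

p = 0.3
A = A0 + p * A1
u = np.linalg.solve(A, f)        # one forward solve
lam = np.linalg.solve(A.T, c)    # one adjoint solve, reusable for every parameter
dJ_adjoint = -lam @ (A1 @ u)     # dJ/dp = -lambda^T (dA/dp) u

h = 1e-6                          # central finite difference for comparison:
dJ_fd = (J(p + h) - J(p - h)) / (2 * h)  # two extra forward solves per parameter
```

The adjoint value matches the finite difference to high accuracy, while the single adjoint solve amortizes over all parameters, which is the cost argument the abstract makes.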
Procedia PDF Downloads 178
1353 The Good Form of a Sustainable Creative Learning City Based on “The Theory of a Good City Form” by Kevin Lynch
Authors: Fatemeh Moosavi, Tumelo Franck Nkoshwane
Abstract:
Peter Drucker, the renowned management guru, once said, “The best way to predict the future is to create it.” Drucker is also the man who identified human capital as the most vital resource of any institution. As such, any institution bent on creating a better future requires competent human capital, able to execute with efficiency and effectiveness the objectives a society aspires to. Technology today is accelerating the rate at which many societies transition to knowledge-based societies. In this accelerated paradigm, it is imperative that those in leadership establish a platform capable of sustaining the planned future: intellectual capital. The capitalist economy of the future will not be sustained by dollars and cents alone, but by individuals who possess the creativity to enterprise, innovate and create wealth from ideas. This calls for cities of the future to have this premise at the heart of their future plans, if the objective of designing sustainable and liveable future cities is to be realised. The knowledge economy, now transitioning to the creative economy, requires cities of the future to be ‘gardens’ of inspiration, places where knowledge, creativity, and innovation can thrive, as these instruments are becoming critical assets for creating wealth in the new economic system. Developing nations must accept that learning is a lifelong process that requires keeping abreast of change, and should invest in teaching people how to keep learning. The need to continuously update one’s knowledge turns these cities into vibrant societies, where new ideas create knowledge and in turn enrich the quality of life of residents. Cities of the future must have as one of their objectives the ability to motivate their citizens to learn, share knowledge, evaluate that knowledge and use it to create wealth for a just society.
The five functional factors suggested by Kevin Lynch (vitality, meaning/sense, adaptability, access, and control/monitoring) should form the basis on which policy makers and urban designers base their plans for future cities. The authors of this paper believe that developing nations should establish ‘creative economy clusters’: cities where creative industries drive the need for constant new knowledge, creating sustainable learning creative cities. Obviously, the form, shape and size of these districts should be cognisant of the environmental, cultural and economic characteristics of each locale. Gaborone, in the Republic of Botswana, is presented as the case study for this paper.
Keywords: learning city, sustainable creative city, creative industry, good city form
Procedia PDF Downloads 310
1352 Biofilm Text Classifiers Developed Using Natural Language Processing and Unsupervised Learning Approach
Authors: Kanika Gupta, Ashok Kumar
Abstract:
Biofilms are dense, highly hydrated cell clusters that are irreversibly attached to a substratum, to an interface or to each other, and are embedded in a self-produced gelatinous matrix composed of extracellular polymeric substances. Research in the biofilm field has become very significant, as biofilms show high mechanical resilience and resistance to antibiotic treatment and constitute a significant problem both in healthcare and in other industries affected by microorganisms. The massive amount of information, both stated and hidden, in the biofilm literature is growing exponentially; it is therefore not possible for researchers and practitioners to manually extract and relate information from the different written resources. The current work therefore proposes and discusses the use of text mining techniques for the extraction of information from a biofilm literature corpus containing 34306 documents. It is very difficult and expensive to obtain annotated material for the biomedical literature, as the literature is unstructured, i.e. free text. We therefore considered an unsupervised approach, where no annotated training data are necessary, and used it to develop a system that classifies the text according to growth and development, drug effects, radiation effects, and the classification and physiology of biofilms. For this, a two-step structure was used: the first step extracts keywords from the biofilm literature using a metathesaurus and standard natural language processing tools such as Rapid Miner_v5.3, and the second step discovers relations between the genes extracted from the whole set of biofilm literature using pubmed.mineR_v1.0.11. We applied the unsupervised approach, the machine learning task of inferring a function to describe hidden structure from 'unlabeled' data, to the extracted datasets to develop classifiers, using WinPython-64 bit_v3.5.4.0Qt5 and R studio_v0.99.467 packages, which automatically classify the text using the mentioned sets.
The developed classifiers were tested on a large data set of biofilm literature, which showed that the proposed unsupervised approach is promising and well suited for semi-automatic labeling of the extracted relations. All information was stored in a relational database hosted locally on the server. The generated biofilm vocabulary and gene relations will be significant for researchers dealing with biofilm research, making their searches easy and efficient, as the keywords and genes can be directly mapped to the documents used for database development.
Keywords: biofilms literature, classifiers development, text mining, unsupervised learning approach, unstructured data, relational database
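A minimal sketch of the keyword-overlap classification step might look as follows. The category vocabularies here are invented for illustration; a real system would draw them from a metathesaurus, and the scoring is deliberately simplistic compared with the tooling named in the abstract.

```python
import re
from collections import Counter

# Hypothetical category vocabularies (assumption: a real system would
# derive these from a metathesaurus rather than hard-code them).
CATEGORIES = {
    "growth and development": {"growth", "development", "formation", "maturation"},
    "drug effects": {"antibiotic", "drug", "treatment", "resistance", "dose"},
    "radiation effects": {"radiation", "uv", "irradiation", "gamma"},
}

def classify(text):
    """Assign a document to the category whose keyword set overlaps
    its token counts the most; no labeled training data required."""
    tokens = Counter(re.findall(r"[a-z]+", text.lower()))
    scores = {cat: sum(tokens[w] for w in words)
              for cat, words in CATEGORIES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unclassified"
```

Because the labels come from the vocabularies themselves rather than from annotated examples, this mirrors the unsupervised, semi-automatic labeling the abstract describes.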
Procedia PDF Downloads 170
1351 Job Resource, Personal Resource, Engagement and Performance with Balanced Score Card in the Integrated Textile Companies in Indonesia
Authors: Nurlaila Effendy
Abstract:
Companies in Asia face a number of constraints from tight competition under the ASEAN Economic Community 2015 and globalization. An economic capitalism system, as an integral part of globalization, brings broad impacts, and companies need to improve business performance accordingly. Organizational development has quite clearly demonstrated that aligning individuals’ personal goals with the goals of the organization translates into measurable and sustained performance improvement. Human capital is key to achieving company performance. Through employee engagement (EE), employees create and express themselves physically, cognitively and emotionally to achieve company goals and individual goals. Employees experience total involvement when they undertake their jobs and feel integrated with their job and organization. A leader plays a key role in attaining the goals and objectives of a company or organization, and any manager in a company needs leadership competence and a global mindset. As one of the developments in positive organizational behavior, psychological capital (PsyCap) is assumed to be one of the most important capitals in the global mindset, in addition to intellectual capital and social capital. Textile companies likewise face a number of constraints from tight regional and global competition. This research involved 42 managers in two textile companies and a spinning company belonging to one group in Central Java, Indonesia. It is a quantitative study using Partial Least Squares (PLS), examining job resources (social support and organizational climate) and personal resources (the four dimensions of psychological capital, and leadership competence) as predictors of employee engagement, and employee engagement and leadership competence as predictors of the leader’s performance. The performance of a leader is measured by achievement of objective strategies in terms of the four perspectives (financial and non-financial) of a Balanced Scorecard (BSC).
The study covered one business-plan year, from January to December 2014. The results show a correlation between job resources (social support coefficient 0.036; organizational climate coefficient 0.220) and personal resources (PsyCap coefficient 0.513; leadership competence coefficient 0.249) and employee engagement, and a correlation between employee engagement (coefficient 0.279) and leadership competence (coefficient 0.581) and performance.
Keywords: organizational climate, social support, psychological capital, leadership competence, employee engagement, performance, integrated textile companies
Procedia PDF Downloads 433
1350 Effects of Ubiquitous 360° Learning Environment on Clinical Histotechnology Competence
Authors: Mari A. Virtanen, Elina Haavisto, Eeva Liikanen, Maria Kääriäinen
Abstract:
Rapid technological development and digitalization have also affected higher education. During the last twenty years, multiple electronic and mobile learning (e-learning, m-learning) platforms have been developed and have become prevalent in many universities and in all fields of education. Ubiquitous learning (u-learning) is not as widely known or used. Ubiquitous learning environments (ULEs) are the new era of computer-assisted learning. They are based on ubiquitous technology and computing that fuse the learner seamlessly into the learning process by using sensing technologies such as tags, badges or barcodes and smart devices like smartphones and tablets. ULEs combine real-life learning situations with virtual aspects and can be used flexibly anytime and anyplace. The aim of this study was to assess the effects of a ubiquitous 360° learning environment on higher education students’ clinical histotechnology competence. A quasi-experimental study design was used. Fifty-seven students in a biomedical laboratory science degree program were assigned voluntarily to an experimental group (n=29) and a control group (n=28). The experimental group studied via the ubiquitous 360° learning environment and the control group via a traditional web-based learning environment (WLE) in an 8-week educational intervention. The ubiquitous 360° learning environment (ULE) combined an authentic learning environment (a histotechnology laboratory), a digital environment (a virtual laboratory), a virtual microscope, multimedia learning content, interactive communication tools, an electronic library and quick response barcodes placed in the authentic laboratory. The web-based learning environment contained equal content and components, with the exception of the mobile device, the interactive communication tools and the quick response barcodes. Competence in clinical histotechnology was assessed using a knowledge test and a self-report instrument developed for this study.
Data were collected electronically before and after the clinical histotechnology course and analysed using descriptive statistics. Differences within groups were identified using the Wilcoxon test and differences between groups using the Mann-Whitney U-test. Statistically significant differences were identified within both groups (p<0.001): competence scores in the post-test were higher than in the pre-test in both groups. Differences between groups were very small and not statistically significant. In this study, a learning environment was developed based on 360° technology and successfully implemented in a higher education context, and students’ competence increased when the ubiquitous learning environment was used. In the future, ULEs can be used as learning management systems for any learning situation in the health sciences. More studies are needed to show differences between ULE and WLE.
Keywords: competence, higher education, histotechnology, ubiquitous learning, u-learning, 360°
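The within-group and between-group comparisons described above can be sketched in a few lines. This is a minimal illustration only: the score arrays below are hypothetical, since the study's raw data are not published in the abstract.

```python
# Sketch of the two statistical tests named in the abstract, run on
# hypothetical pre/post competence scores (0-100, illustration only).
from scipy.stats import wilcoxon, mannwhitneyu

experimental_pre = [55, 60, 48, 62, 58, 51, 66, 59, 53, 61]
experimental_post = [72, 78, 65, 80, 75, 70, 84, 77, 69, 79]
control_post = [68, 71, 64, 76, 70, 66, 78, 72, 67, 74]

# Within-group change (paired samples): Wilcoxon signed-rank test.
stat_w, p_within = wilcoxon(experimental_pre, experimental_post)

# Between-group post-test difference (independent samples): Mann-Whitney U-test.
stat_u, p_between = mannwhitneyu(experimental_post, control_post)

print(f"within-group p = {p_within:.4f}, between-group p = {p_between:.4f}")
```

With every hypothetical student improving, the paired test yields a very small p-value, mirroring the significant within-group differences reported.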
Procedia PDF Downloads 286
1349 Analysis of Splicing Methods for High Speed Automated Fibre Placement Applications
Authors: Phillip Kearney, Constantina Lekakou, Stephen Belcher, Alessandro Sordon
Abstract:
The focus in the automotive industry is to reduce human operator and machine interaction, so that manufacturing becomes more automated and safer. The aim is to lower part cost and construction time, as well as defects in the parts, which sometimes occur due to the physical limitations of human operators. A move to automate the layup of reinforcement material in composites manufacturing has resulted in the use of tapes that are placed in position by a robotic deposition head, a process described as Automated Fibre Placement (AFP). AFP is limited by the finite amount of material that can be loaded into the machine at any one time. Joining two batches of tape material together involves a splice to secure the end of the finishing tape to the starting edge of the new tape. The splicing method of choice for the majority of prepreg applications is a hand-stitch method, which, as the name suggests, requires human input. This investigation explores three methods for automated splicing, namely adhesive, binding and stitching. The adhesive technique uses an additional adhesive placed on the tape ends to be joined. Binding uses the binding agent already impregnated into the tape, activated through the application of heat. The stitching method is used as a baseline against which to compare the new splicing methods. As the methods will be used within a High Speed Automated Fibre Placement (HSAFP) process, the splices have to meet certain specifications: (a) the splice must be able to endure a load of 50 N in tension, applied at a rate of 1 mm/s; (b) the splice must be created in less than 6 seconds, dictated by the capacity of the tape accumulator within the system. The samples for experimentation were manufactured with controlled overlaps, alignment and splicing parameters, and were then tested in tension using a tensile testing machine.
Initial analysis explored the use of the impregnated binding agent present on the tape, as in the binding splicing technique, and examined the effect of temperature and overlap on the strength of the splice. The optimum splicing temperature was found to be at the higher end of the activation range of the binding agent, 100 °C. The optimum overlap was found to be 25 mm; no improvement in bond strength was observed from 25 mm to 30 mm overlap. The final analysis compared the different splicing methods to the baseline of a stitched bond. The addition of an adhesive was found to be the best splicing method, achieving a maximum load of over 500 N, compared to the 26 N load achieved by a stitching splice and 94 N by the binding method.
Keywords: analysis, automated fibre placement, high speed, splicing
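The reported maximum loads can be screened directly against the 50 N tensile requirement stated in the abstract. A minimal sketch, using only the figures quoted above:

```python
# Screen each splicing method's reported maximum load against the
# 50 N tensile requirement from the HSAFP specification.
SPEC_LOAD_N = 50.0  # required tensile load, from the abstract

max_load_n = {"adhesive": 500.0, "binding": 94.0, "stitching": 26.0}

passing = sorted(m for m, load in max_load_n.items() if load >= SPEC_LOAD_N)
print(passing)  # → ['adhesive', 'binding']
```

Only the adhesive and binding splices clear the specification; the stitched baseline (26 N) does not.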
Procedia PDF Downloads 155
1348 ‘Do We Really Belong Here?’ Transnationalism and the Temporality of Naturalized Citizenship
Authors: Edward Shizha
Abstract:
Citizenship is not only political; it is also a socio-cultural status that naturalized immigrants aspire to. However, the outcomes of this aspiration are determined by forces outside the individual’s control, based on legislation and laws designed at the macro- and exosystemic levels by politicians and policy makers. These laws are then applied to determine the status (permanency or temporariness) of citizenship for immigrants and refugees, but the same laws do not apply to non-immigrant citizens who attain citizenship by birth. While citizenship has generally been considered, in theory, an irrevocable legal status and the highest and most secure legal status one can hold in a state, it is not inviolate for immigrants. Article 8 of the United Nations Convention on the Reduction of Statelessness provides grounds for revocation of citizenship obtained by immigrants and refugees in host countries, and nation-states have their own laws, tied to the convention, that provide grounds for revocation. Ever since the 9/11 attacks in the USA, there has been a rise in conditional citizenship and in the state’s withdrawal of citizenship through revocation laws that denaturalize citizens, who end up not merely losing their citizenship but also the right to reside in the country of immigration. Because immigrants can be perceived as a security threat, the securitization of citizenship and legislative changes have been adopted specifically to allow greater discretionary power in stripping people of their citizenship. The paper ‘Do We Really Belong Here?’ Transnationalism and the Temporality of Naturalized Citizenship examines literature on the temporality of naturalized citizenship and questions whether citizenship, for newcomers (immigrants and refugees), is a protected human right or a privilege. The paper argues that citizenship in a host country is a well sought-after status among newcomers.
The question is whether their citizenship, if granted, has a permanent or temporary status and whether it is treated in the same way as that of non-immigrant citizens. The paper further argues that, despite citizenship having generally been considered an irrevocable status in most Western countries, in practice, if not in law, citizenship for immigrants and refugees comes with strings attached because of the policies and laws that control naturalized citizenship. These laws can be used to denationalize naturalized citizens through revocations aimed at those stigmatized as ‘undesirables’, who are threatened with deportation. Whereas non-immigrant citizens (those who attain citizenship by birth) have an absolute right to their citizenship, this is seldom the case for immigrants. This paper takes a multidisciplinary approach, using Urie Bronfenbrenner’s ecological systems theory (the macrosystem and exosystem) to examine and review literature on the temporality of naturalized citizenship and to question whether citizenship is a protected right or a privilege for immigrants. The paper challenges the human rights violation of citizenship revocation and argues for equality of treatment for all citizens regardless of how they acquired their citizenship. The fragility of naturalized citizenship undermines the basic rights and securities that citizenship status should provide to the person as an inclusive practice in a diverse society.
Keywords: citizenship, citizenship revocation, dual citizenship, human rights, naturalization, naturalized citizenship
Procedia PDF Downloads 75
1347 Ensuring Sustainable Urban Mobility in Indian Cities: Need for Creating People Friendly Roadside Public Spaces
Authors: Pushplata Garg
Abstract:
Mobility is an integral part of urban living, and sustainable urban mobility is essential not only for the functioning of cities but also for addressing global warming and climate change. However, very little is understood about the obstacles and likely challenges to the success of plans for sustainable urban mobility in Indian cities from the public perspective. Whereas some of the problems and issues are common to all cities, others vary considerably with the financial status, function and size of cities and the culture of a place. Problems and issues similar across cities relate to the availability, efficiency and safety of public transport, last mile connectivity, universal accessibility, and the essential planning and design requirements of pedestrians and cyclists. However, certain aspects, such as the type of public transportation, the priority given to cycling and walking, and the type of roadside activities, are influenced by the size of the town, the average educational and income level of the public, the financial status of the local authorities, and the culture of the place. The extent of public awareness, civic sense, maintenance of public spaces and law enforcement vary significantly from large metropolitan cities to small and medium towns in countries like India. Besides, design requirements for shading, the location of public open spaces and sitting areas, street furniture, and landscaping also vary depending on the climate of the place. Last mile connectivity plays a major role in the success and effectiveness of a public transport system in a city. In addition to the provision of pedestrian footpaths connecting important destinations, sitting spaces and necessary amenities and facilities along footpaths, pedestrian movement to public transit stations is encouraged by the presence of quality roadside public spaces. It is not only the visual attractiveness of the streetscape, landscape or public open spaces along pedestrian movement channels, but also the activities along them, that make a street vibrant and attractive.
These, along with adequate spaces to rest and relax, encourage people to walk, as is observed in cities with successful public transportation systems. The paper discusses the problems and issues of pedestrians regarding last mile connectivity in the context of Delhi, Chandigarh, Gurgaon, and Roorkee, four Indian cities representing varying urban contexts: metropolitan, large and small cities.
Keywords: pedestrianisation, roadside public spaces, last mile connectivity, sustainable urban mobility
Procedia PDF Downloads 251
1346 Transportation and Urban Land-Use System for the Sustainability of Cities, a Case Study of Muscat
Authors: Bader Eddin Al Asali, N. Srinivasa Reddy
Abstract:
Cities are dynamic in nature and are characterized by concentrations of people, infrastructure, services and markets, which offer opportunities for production and consumption. Growth and development in urban areas is often not systematic and is directed by a number of factors, such as natural growth, land prices, housing availability, job locations in the central business district (CBD), transportation routes, distribution of resources, geographical boundaries, and administrative policies. One-sided spatial and geographical development in cities leads to an unequal spatial distribution of population and jobs, resulting in high transportation activity. City development can be measured by parameters such as urban size, urban form, urban shape, and urban structure. Urban size is defined by the population of the city; urban form is the location and size of economic activity (the CBD) over geographical space; urban shape is the geometrical shape of the city over which the population and economic activity are distributed; and urban structure is the transport network within which the population and activity centers are connected by a hierarchy of roads. Among urban land-use systems, transportation plays a significant role and is one of the largest energy-consuming sectors. Transportation interaction among land uses is measured in passenger-km and mean trip length, and is often used as a proxy for energy consumption in the transportation sector. Among the trips generated in cities, work trips constitute more than 70 percent. Work trips originate at the place of residence and have the place of employment as their destination. To understand the role of urban parameters in transportation interaction, theoretical cities of different sizes and urban specifications were generated through a building block exercise using a specially developed interactive C++ programme, and land-use transportation modeling was carried out.
The land-use transportation modeling exercise helps in understanding the role of urban parameters and also in classifying cities by their urban form, structure, and shape. Muscat, the capital city of Oman, which underwent rapid urbanization over the last four decades, is taken as a case study for this classification. A pilot survey was also carried out to capture urban travel characteristics. Analysis of the land-use transportation modeling together with the field data classified Muscat as a linear city with a polycentric CBD. Conclusions are drawn and suggestions are given for policy making for the sustainability of Muscat City.
Keywords: land-use transportation, transportation modeling, urban form, urban structure, urban rule parameters
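The two transportation-interaction measures named above, passenger-km and mean trip length, can be illustrated with a short computation. The trip table below is hypothetical (the abstract does not publish the model's data); zone names, passenger counts and distances are assumptions for illustration.

```python
# Compute total passenger-km and mean trip length from a hypothetical
# table of work trips (origin zone, destination, passengers, distance).
trips = [
    {"origin": "A", "destination": "CBD", "passengers": 1200, "km": 4.5},
    {"origin": "B", "destination": "CBD", "passengers": 800, "km": 7.2},
    {"origin": "C", "destination": "CBD", "passengers": 500, "km": 10.0},
]

passenger_km = sum(t["passengers"] * t["km"] for t in trips)
total_passengers = sum(t["passengers"] for t in trips)
mean_trip_length = passenger_km / total_passengers  # km per passenger

print(f"passenger-km = {passenger_km:.0f}, mean trip length = {mean_trip_length:.2f} km")
```

The mean trip length is the passenger-weighted average distance, which is why compact urban forms with jobs near residences score lower on both measures and, by proxy, on transport energy use.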
Procedia PDF Downloads 270
1345 Social Business Evaluation in Brazil: Analysis of Entrepreneurship and Investor Practices
Authors: Erica Siqueira, Adriana Bin, Rachel Stefanuto
Abstract:
The paper aims to identify and discuss the impact and results of ex-ante, mid-term and ex-post evaluation initiatives in Brazilian social enterprises from the point of view of entrepreneurs and investors, highlighting the processes involved in these activities and their after-effects. The study was conducted using a descriptive, primarily qualitative methodology. A multiple-case study was used and, for that, semi-structured interviews were conducted with ten entrepreneurs in the (i) social finance, (ii) education, (iii) health, (iv) citizenship and (v) green tech fields, as well as three representatives of impact investors from the (i) venture capital, (ii) loan and (iii) equity interest areas. Convenience (non-probabilistic) sampling was adopted to select both businesses and investors, who voluntarily contributed to the research. Evaluation is still incipient in most of the studied business cases. Some stand out by adopting well-known methodologies, such as the Global Impact Investing Rating System (GIIRS), but still have much to improve in several respects. Most of these enterprises use non-experimental research conducted by their own employees, which some authors in the area do not regard as the 'gold standard'. Nevertheless, from the entrepreneurs' point of view, most of them include these routines to some extent in their day-to-day activities, despite the difficulties the businesses face in general. In turn, the investors do not give overall directions for establishing evaluation initiatives in the enterprises they are funding; there is a mechanism of trust, and this is usually considered enough to prove the impact to all stakeholders. The work concludes that there is a large gap between what the literature states should be best practice in these businesses and what the enterprises really do.
Evaluation initiatives must be included, to some extent, in all enterprises in order to confirm the social impact they claim to realize. The development and adoption of more flexible evaluation mechanisms that consider the complexity of these businesses' routines is recommended here. The reflections of the research also suggest important implications for the field of social enterprises, whose practices are far from what the theory preaches. This highlights the risk to the legitimacy of enterprises that identify themselves as having 'social impact', sometimes without proper proof based on causality data. Consequently, this makes the field of social entrepreneurship fragile and susceptible to questioning, weakening the ecosystem as a whole. The top priorities of these enterprises must therefore be handled together with results and impact measurement activities. Likewise, further investigations are recommended that consider the trade-offs between impact and profit. In addition, research on gender, on entrepreneurs' motivations for calling themselves social enterprises, and on the possible unintended consequences of these businesses should also be conducted.
Keywords: evaluation practices, impact, results, social enterprise, social entrepreneurship ecosystem
Procedia PDF Downloads 119
1344 Effect of Laser Ablation OTR Films and High Concentration Carbon Dioxide for Maintaining the Freshness of Strawberry ‘Maehyang’ for Export in Modified Atmosphere Condition
Authors: Hyuk Sung Yoon, In-Lee Choi, Min Jae Jeong, Jun Pill Baek, Ho-Min Kang
Abstract:
This study was conducted to improve the storability of strawberry 'Maehyang' for export by identifying suitable laser-ablated oxygen transmission rate (OTR) films and assessing the effectiveness of high carbon dioxide. Strawberries were grown in a hydroponic system in Gyeongsangnam-do province. They were packed in laser-ablated OTR films (Daeryung Co., Ltd.) of 1,300 cc, 20,000 cc, 40,000 cc, 80,000 cc, and 100,000 cc·m⁻²·day·atm. A CO₂ injection (30%) treatment used the 20,000 cc·m⁻²·day·atm OTR film, and a perforated film served as control. Temperature conditions simulated shipping and distribution from Korea to Singapore: storage at 3 °C (13 days), 10 °C (one hour), and 8 °C (7 days), for 20 days in total. The fresh weight loss rate remained under the maximum permissible 1% in all treated OTR films, except the perforated control, during storage. The carbon dioxide concentration within the packages stayed below the maximum tolerated CO₂ concentration (15%) in the treated OTR films, and in the high-OTR treatments, from 20,000 cc to 100,000 cc, it was less than 3%. The 1,300 cc film maintained a suitable carbon dioxide range, above 5% and under 15%, from 5 days after storage until the end of the experiment; in the CO₂ injection treatment, the concentration dropped quickly to 15% after 1 day of storage but remained around 15% thereafter. The oxygen concentration was maintained between 10 and 15% in the 1,300 cc and CO₂ injection treatments, but stayed at 19 to 21% in the other treatments. The ethylene concentration was much higher in the CO₂ injection treatment than in the OTR treatments; among the OTR treatments, the 1,300 cc film showed the highest ethylene concentration and the 20,000 cc film the lowest. Firmness was best maintained in the 1,300 cc treatment, with no significant differences among the other OTR treatments. Visual quality was best in the 20,000 cc treatment, which maintained marketable quality until 20 days after storage.
The 20,000 cc and perforated films performed better than the other treatments with respect to off-odor, while the 1,300 cc and CO₂ injection treatments exhibited a strong off-odor even after 10 minutes. Based on the difference between the Hunter 'L' and 'a' values measured with a chroma meter, the 1,300 cc and CO₂ injection treatments delayed color development, while the other treatments showed no significant differences. The results indicate that freshness was best maintained with the 20,000 cc·m⁻²·day·atm film. Although the 1,300 cc and CO₂ injection treatments produced an appropriate MA condition, they showed darkening of the strawberry calyx and excessive reduction of coloring due to the high carbon dioxide concentration during storage. The 1,300 cc and CO₂ injection treatments had been considered appropriate for export to Singapore, but the results showed otherwise. These results reflect the cultivar characteristics of strawberry 'Maehyang'.
Keywords: carbon dioxide, firmness, shelf-life, visual quality
Procedia PDF Downloads 399
1343 Performance Improvement of a Single-Flash Geothermal Power Plant Design in Iran: Combining with Gas Turbines and CHP Systems
Authors: Morteza Sharifhasan, Davoud Hosseini, Mohammad. R. Salimpour
Abstract:
Geothermal energy has come to be considered an important renewable energy worldwide in recent years due to rising concerns over environmental pollution. Low- and medium-grade geothermal heat (< 200 °C) is commonly employed for space heating and domestic hot water supply. However, there is also much interest in converting this abundant low- and medium-grade geothermal heat into electrical power. The Iranian Ministry of Power, through the Iran Renewable Energy Organization (SUNA), is going to build the first geothermal power plant (GPP) in Iran in the Sabalan area in the northwest of the country. This project is a 5.5 MWe single-flash steam condensing power plant. The efficiency of GPPs is low due to the relatively low pressure and temperature of the saturated steam. In addition to GPPs, gas turbines (GTs) are also known for their relatively low efficiency. The Iranian Ministry of Power is trying to increase the efficiency of these GTs by adding bottoming steam cycles to form what is known as a combined gas/steam cycle. One of the most effective methods for increasing efficiency is combined heat and power (CHP). This paper investigates the feasibility of superheating the saturated steam that enters the steam turbine of the Sabalan GPP (SGPP-1) to improve the energy efficiency and power output of the plant. This is achieved by combining the GPP with two 3.5 MWe GTs: the hot gases leaving the GTs are utilized in a superheater, similar to that used in the heat recovery steam generator of a combined gas/steam cycle. Moreover, the brine separated in the separator and the hot gases leaving the GTs and the superheater are used to supply domestic hot water (in this paper, the cycle combining the GTs and CHP systems is named the modified SGPP-1). In this research, based on the heat balance presented in the basic design documents of the SGPP-1, a mathematical/numerical model of the power plant is developed together with the aforementioned GTs and CHP systems.
Based on the hot water demand, the amount of hot gas passed directly through the CHP section can be adjusted. For example, during summer, when less hot water is required, the hot gases leaving both GTs pass through the superheater and then the CHP systems. In winter, by contrast, in order to supply the required hot water, the hot gases of one of the GTs enter the CHP section directly, without passing through the superheater. The results show an increase in thermal efficiency of up to 40% with the modified SGPP-1. Since the gross efficiency of SGPP-1 is 9.6%, this increase is significant. The power output of SGPP-1 increases by up to 40% in summer (from 5.5 MW to 7.7 MW) while the GT power output remains almost unchanged. Meanwhile, the combined-cycle power output increases from the 12.5 MW [5.5 + (2×3.5)] of the two separate plants to 14.7 MW [7.7 + (2×3.5)], more than 17% above the output of the separate plants. The modified SGPP-1 is capable of producing 215 t/hr of hot water (90 °C) for domestic use in the winter months.
Keywords: combined cycle, CHP, efficiency, gas turbine, geothermal power plant, power output
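The power-output figures quoted above can be checked with a few lines of arithmetic, using only the numbers stated in the abstract:

```python
# Check the combined-cycle output figures for the modified SGPP-1
# (geothermal plant superheated by the exhaust of two gas turbines).
GPP_BASE_MW = 5.5      # single-flash plant, original summer output
GPP_MODIFIED_MW = 7.7  # with superheating (+40% in summer)
GT_MW = 3.5            # each of the two gas turbines

separate_mw = GPP_BASE_MW + 2 * GT_MW      # plants operated independently
combined_mw = GPP_MODIFIED_MW + 2 * GT_MW  # modified combined configuration
gain = combined_mw / separate_mw - 1       # relative improvement

print(f"separate: {separate_mw} MW, combined: {combined_mw} MW, gain: {gain:.1%}")
```

The gain works out to 17.6%, consistent with the "more than 17%" figure reported for the combined configuration.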
Procedia PDF Downloads 322
1342 Interlayer-Mechanical Working: Effective Strategy to Mitigate Solidification Cracking in Wire-Arc Additive Manufacturing (WAAM) of Fe-based Shape Memory Alloy
Authors: Soumyajit Koley, Kuladeep Rajamudili, Supriyo Ganguly
Abstract:
In recent years, iron-based shape-memory alloys have been emerging as an inexpensive alternative to the costly Ni-Ti alloy and are thus considered suitable for many different applications in civil structures. The Fe-17Mn-10Cr-5Si-4Ni-0.5V-0.5C alloy contains 37 wt.% of total solute elements. Such a complex multi-component metallurgical system often leads to severe solute segregation and solidification cracking. Wire-arc additive manufacturing (WAAM) of the Fe-17Mn-10Cr-5Si-4Ni-0.5V-0.5C alloy was attempted using a cold-wire-fed plasma arc torch attached to a 6-axis robot, and self-standing walls were manufactured. However, multiple vertical cracks were observed after deposition of around 15 layers. Microstructural characterization revealed open dendrite surfaces inside the cracks, confirming them as solidification cracks. A machine hammer peening (MHP) process was adopted on each layer to cold-work the newly deposited alloy. The MHP traverse speed was varied systematically to attain a window of operation in which cracking was completely suppressed. Microstructural and textural analyses were then carried out to correlate the peening process with the microstructure. MHP helped in several ways. Firstly, a compressive residual stress was induced in each layer, countering the tensile residual stress evolved from the solidification process and thus reducing the net tensile stress along the length of the wall. Secondly, significant local plastic deformation from MHP, followed by the thermal cycle induced by the deposition of the next layer, resulted in a recovered and recrystallized equiaxed microstructure instead of long columnar grains along the vertical direction. This microstructural change increased the total crack propagation length and thus the overall toughness. Thirdly, the inter-layer peening significantly reduced the strong cubic {001} crystallographic texture formed along the build direction. A cubic {001} texture promotes easy separation of planes and easy crack propagation.
Thus, reduction of the cubic texture alleviates the chance of cracking.
Keywords: iron-based shape-memory alloy, wire-arc additive manufacturing, solidification cracking, inter-layer cold working, machine hammer peening
Procedia PDF Downloads 72
1341 Wave Powered Airlift Pump for Primarily Artificial Upwelling
Authors: Bruno Cossu, Elio Carlo
Abstract:
The invention (patent pending) relates to the field of devices aimed at harnessing wave energy (wave energy converters, WEC), especially for artificial upwelling, forced downwelling, and the production of compressed air. In its basic form, the pump consists of a hydro-pneumatic machine, driven by wave energy, characterised by the fact that it has no moving mechanical parts and is made up of only two structural components: a hollow body, open at the bottom to the sea and partially immersed in sea water, and a tube, joined to it to form a single body. The hollow body is shaped like a mushroom whose cap and stem are hollow; the stem is open at both ends and the lower part of its surface is crossed by holes; the tube is external and coaxial to the stem and is joined to it so as to form a single body. This shape of the hollow body and this type of connection to the tube allow the pump to operate simultaneously as an air compressor (an oscillating water column, OWC) on the cap side, and as an airlift on the stem side. The pump can be implemented in four versions, each of which provides different variants and methods of implementation: firstly, for the artificial upwelling of cold, deep ocean water; secondly, for lifting and transferring these waters to the place of use (above all, fish farming plants), even if kilometres away; thirdly, for the forced downwelling of surface sea water; and fourthly, for the forced downwelling of surface water, its oxygenation, and the simultaneous production of compressed air.
The transfer of the deep water, or the downwelling of the raised surface water (pump versions 2 and 3 above), is obtained by making the water raised by the airlift flow into the upper inlet of another pipe, internal or adjoined to the airlift. The downwelling of raised surface water, its oxygenation, and the simultaneous production of compressed air (pump version 4) are obtained by installing a venturi tube on the upper end of the pipe, whose restricted section is connected to the external atmosphere, so that it also operates like a hydraulic air compressor (trompe). Furthermore, by combining one or more pumps for the upwelling of cold, deep water with one or more pumps for the downwelling of warm surface water, the system can be used in an Ocean Thermal Energy Conversion plant to supply the cold and warm water required for its operation, thus allowing the use, at no increased cost and in addition to the mechanical energy of the waves, of the thermal energy of the marine water treated in the process.
Keywords: air lifted upwelling, fish farming plant, hydraulic air compressor, wave energy converter
Procedia PDF Downloads 148
1340 21st Century Business Dynamics: Acting Local and Thinking Global through eXtensible Business Reporting Language (XBRL)
Authors: Samuel Faboyede, Obiamaka Nwobu, Samuel Fakile, Dickson Mukoro
Abstract:
In the present dynamic business environment of corporate governance and regulation, financial reporting is an inevitable and extremely significant process for every business enterprise. Financial documents such as annual reports, quarterly reports, ad-hoc filings, and other statutory/regulatory reports provide vital information to investors and regulators, and establish trust and rapport between the internal and external stakeholders of an organization. Investors today are very demanding and place great emphasis on the authenticity, accuracy, and reliability of financial data. For many companies, the Internet plays a key role in communicating business information, internally to management and externally to stakeholders. Despite the high prominence attached to external reporting, it is disconnected in most companies, which generate their external financial documents manually, resulting in a high degree of errors and prolonged cycle times. Chief Executive Officers and Chief Financial Officers are increasingly susceptible to endorsing error-laden reports, filing reports late, and failing to comply with regulatory acts. There is a lack of a common platform to manage the sensitive information, internal and external, in financial reports. The Internet financial reporting language known as eXtensible Business Reporting Language (XBRL) continues to develop in the face of challenges and has now reached the point where much of its promised benefit is available. This paper looks at the emergence of this revolutionary twenty-first-century language of digital reporting. It posits that the world is on the brink of an Internet revolution that will redefine the 'business reporting' paradigm. The new Internet technology, XBRL, is already being deployed and used across the world.
It finds that XBRL is an eXtensible Markup Language (XML)-based information format that places self-describing tags around discrete pieces of business information. Once tags are assigned, it is possible to extract only the desired information, rather than having to download or print an entire document. XBRL is platform-independent: it works on any current or recent operating system, on any computer, and interfaces with virtually any software. The paper concludes that corporate stakeholders and governments cannot afford to ignore XBRL. It therefore recommends that all must act locally and think globally now, via the adoption of XBRL, which is changing the face of worldwide business reporting.
Keywords: XBRL, financial reporting, internet, internal and external reports
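The core idea described above, self-describing tags that let a consumer pull out a single business fact without reading the whole report, can be sketched with standard XML tooling. The fragment below is a simplified, hypothetical illustration; real XBRL instance documents use namespaced taxonomy elements and contexts defined by the XBRL 2.1 specification.

```python
# Minimal sketch of XBRL-style tagging: each fact is wrapped in a
# self-describing tag, so one fact can be extracted in isolation.
import xml.etree.ElementTree as ET

# Hypothetical, simplified report fragment (not a real XBRL taxonomy).
report = """
<report>
  <Revenue contextRef="FY2023" unitRef="USD">1500000</Revenue>
  <NetIncome contextRef="FY2023" unitRef="USD">230000</NetIncome>
</report>
"""

root = ET.fromstring(report)
# Extract only the desired fact, instead of scanning the whole document.
net_income = int(root.find("NetIncome").text)
print(net_income)  # → 230000
```

Because each tag carries its own meaning (concept, context, unit), regulators and analysts can process filings automatically rather than re-keying figures from rendered documents.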
Procedia PDF Downloads 286
1339 Investigation of Permeate Flux through DCMD Module by Inserting S-Ribs Carbon-Fiber Promoters with Ascending and Descending Hydraulic Diameters
Authors: Chii-Dong Ho, Jian-Har Chen
Abstract:
The decline in permeate flux across membrane modules is attributed to the increase in temperature polarization resistance in flat-plate direct contact membrane distillation (DCMD) modules for pure water production. Researchers have found that this effect can be diminished by embedding turbulence promoters, which augment turbulence intensity at the cost of increased power consumption, thereby improving the vapor permeate flux. The performance of DCMD modules was further enhanced by shrinking the hydraulic diameters of the inserted S-ribs carbon-fiber promoters, while also accounting for the increase in energy consumption. A mass-balance formulation, based on a resistance-in-series model with energy conservation in one-dimensional governing equations, was developed theoretically and validated experimentally on a flat-plate polytetrafluoroethylene/polypropylene (PTFE/PP) membrane module to predict permeate flux and temperature distributions. The ratio of permeate flux enhancement to energy consumption increment, serving as an assessment of economic and technical feasibility, was calculated to determine suitable design parameters for DCMD operation with inserted S-ribs carbon-fiber turbulence promoters. An economic analysis was also performed, weighing permeate flux improvement against energy consumption increment for modules with promoter-filled channels in different array configurations and with various hydraulic diameters of turbulence promoters. The fabrication details of the S-ribs carbon-fiber filaments and the schematic configuration of the flat-plate DCMD experimental setup, with acrylic plates as external walls, are presented in this study.
The S-ribs carbon fibers act as turbulence promoters incorporated into the artificial hot saline feed stream, which was prepared by adding inorganic salts (NaCl) to distilled water. Theoretical predictions and experimental results showed good agreement and confirmed the considerable permeate flux enhancement achieved by the new DCMD module design with inserted S-ribs carbon-fiber promoters. Additionally, the Nusselt number for the water-vapor-transferring membrane module with inserted S-ribs carbon-fiber promoters was generalized into a simplified expression to predict the heat transfer coefficient and the permeate flux as well.
Keywords: permeate flux, Nusselt number, DCMD module, temperature polarization, hydraulic diameters
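The generalized Nusselt expression mentioned above can be sketched in code. The power-law form and the coefficient values below are illustrative assumptions, not the correlation fitted in the study:

```python
def nusselt(re, pr, a=0.023, b=0.8, c=0.33):
    """Generic power-law Nusselt correlation Nu = a * Re^b * Pr^c.
    Coefficients a, b, c are placeholders, not the study's fitted values."""
    return a * re**b * pr**c

def heat_transfer_coefficient(nu, k, d_h):
    """Convective coefficient h = Nu * k / d_h for hydraulic diameter d_h [m]
    and fluid thermal conductivity k [W/(m K)]."""
    return nu * k / d_h

# Illustrative hot saline feed: Re = 2000, Pr = 4.3 (water near 40 C),
# k = 0.63 W/(m K), hydraulic diameter 2 mm
nu = nusselt(2000.0, 4.3)
h = heat_transfer_coefficient(nu, 0.63, 0.002)
```

With the correlation in hand, h = Nu·k/d_h feeds directly into the resistance-in-series flux model, which is why shrinking the hydraulic diameter raises the predicted permeate flux.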
Procedia PDF Downloads 8
1338 Mitochondrial DNA Defect and Mitochondrial Dysfunction in Diabetic Nephropathy: The Role of Hyperglycemia-Induced Reactive Oxygen Species
Authors: Ghada Al-Kafaji, Mohamed Sabry
Abstract:
Mitochondria are the site of cellular respiration and produce energy in the form of adenosine triphosphate (ATP) via oxidative phosphorylation. They are the major source of intracellular reactive oxygen species (ROS) and are also a direct target of ROS attack. Oxidative stress and ROS-mediated disruptions of mitochondrial function are major components involved in the pathogenesis of diabetic complications. In this work, the changes in mitochondrial DNA (mtDNA) copy number, biogenesis, gene expression of mtDNA-encoded subunits of electron transport chain (ETC) complexes, and mitochondrial function in response to hyperglycemia-induced ROS, as well as the effect of direct inhibition of ROS on mitochondria, were investigated in an in vitro model of diabetic nephropathy using human renal mesangial cells. The cells were exposed to normoglycemic and hyperglycemic conditions in the presence and absence of Mn(III)tetrakis(4-benzoic acid) porphyrin chloride (MnTBAP) or catalase for 1, 4 and 7 days. ROS production was assessed by confocal microscopy and flow cytometry. mtDNA copy number and PGC-1a, NRF-1, and TFAM, as well as ND2, CYTB, COI, and ATPase 6 transcripts, were analyzed by real-time PCR. PGC-1a, NRF-1, and TFAM, as well as ND2, CYTB, COI, and ATPase 6 proteins, were analyzed by Western blotting. Mitochondrial function was determined by assessing mitochondrial membrane potential and ATP levels. Hyperglycemia induced a significant increase in the production of mitochondrial superoxide and hydrogen peroxide at day 1 (P < 0.05), and this increase remained significantly elevated at days 4 and 7 (P < 0.05). The copy number of mtDNA and the expression of PGC-1a, NRF-1, and TFAM, as well as ND2, CYTB, CO1 and ATPase 6, increased after one day of hyperglycemia (P < 0.05), with a significant reduction in all those parameters at 4 and 7 days (P < 0.05). 
The mitochondrial membrane potential decreased progressively from 1 to 7 days of hyperglycemia, with a parallel progressive reduction in ATP levels over time (P < 0.05). MnTBAP and catalase treatment of cells cultured under hyperglycemic conditions attenuated ROS production, reversed renal mitochondrial oxidative stress, and improved mtDNA, mitochondrial biogenesis, and function. These results show that hyperglycemia-induced ROS caused an early increase in mtDNA copy number, mitochondrial biogenesis and mtDNA-encoded gene expression of the ETC subunits in human mesangial cells as a compensatory response to the decline in mitochondrial function, which precedes the mtDNA defect and mitochondrial dysfunction seen under a progressive oxidative response. Protection from ROS-mediated damage to renal mitochondria induced by hyperglycemia may be a novel therapeutic approach for the prevention/treatment of DN.
Keywords: diabetic nephropathy, hyperglycemia, reactive oxygen species, oxidative stress, mtDNA, mitochondrial dysfunction, manganese superoxide dismutase, catalase
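mtDNA copy number measured by real-time PCR is typically reported relative to a nuclear reference gene using the standard 2^-ΔΔCt relative-quantification method; a minimal sketch, with made-up Ct values for illustration (the study's actual primers and values are not given in the abstract):

```python
def relative_mtdna_copy_number(ct_mt, ct_nuc, ct_mt_ctrl, ct_nuc_ctrl):
    """Relative mtDNA content by the 2^-ddCt method:
    dCt = Ct(mtDNA target) - Ct(nuclear reference), for sample and control;
    ddCt = dCt(sample) - dCt(control)."""
    d_ct_sample = ct_mt - ct_nuc
    d_ct_control = ct_mt_ctrl - ct_nuc_ctrl
    return 2.0 ** -(d_ct_sample - d_ct_control)

# Hypothetical Ct values: hyperglycemic cells vs. normoglycemic control.
# dCt(sample) = -7, dCt(control) = -8, so ddCt = 1 -> 0.5-fold mtDNA content
fold = relative_mtdna_copy_number(15.0, 22.0, 14.0, 22.0)
```

A fold value below 1 corresponds to the mtDNA depletion reported at days 4 and 7; a value above 1 to the early compensatory increase at day 1.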
Procedia PDF Downloads 247
1337 Recent Policy Changes in Israeli Early Childhood Frameworks: Hope for the Future
Authors: Yaara Shilo
Abstract:
Early childhood education and care (ECEC) in Israel has undergone extensive reform and now requires daycare centers to meet internationally recognized professional standards. Since 1948, one of the aims of childcare facilities has been to enable women’s participation in the workforce. A 1965 law grouped daycare centers for young children with facilities for the elderly and for disabled persons under the same authority. In the 1970s, ECEC leaders sought to change childcare from proprietary to educational facilities. From 1976, deliberations in the Knesset regarding the appropriate attribution of ECEC frameworks resulted in their being moved among various authorities that supported women’s employment: the Ministries of Finance, Industry, and Commerce, as well as the Welfare Department. Prior to 2018, 75% of infants and toddlers in institutional care were in unlicensed and unsupervised settings. Legislative processes accompanied the conceptual change toward an eventual appropriate attribution of ECEC frameworks. Position papers over the past two decades resulted in recommendations for standards conforming to OECD regulations. Simultaneously, incidents of child abuse, some resulting in death, riveted public attention on the need for adequate government supervision, accelerating the legislative process. Appropriate care for very young children must center on quality interactions with caregivers, thus requiring adequate staff training. Finally, in 2018 a law was passed stipulating standards for staff training, proper facilities, child-adult ratios, and safety measures. The Ariav commission expanded training to caregivers for ages 0-3. Transfer of the ECEC to the Ministry of Education ensured the establishment of basic training. The groundwork created by new legislation initiated the professional development of EC educators for ages 0-3. This process should raise salaries and bolster the system’s ability to attract quality employees. 
In 2022 responsibility for ECEC ages 0-3 was transferred from the Ministry of Finance to the Ministry of Education, shifting emphasis from proprietary care to professional considerations focusing on wellbeing and early childhood education. The recent revolutionary changes in ECEC point to a new age in the care and education of Israel’s youngest citizens. Implementation of international standards, adequate training, and professionalization of the workforce focus on the child’s needs.
Keywords: policy, early childhood, care and education, daycare, development
Procedia PDF Downloads 115
1336 Re-Orienting Fashion: Fashionable Modern Muslim Women beyond Western Modernity
Authors: Amany Abdelrazek
Abstract:
Fashion is considered the main feature of modern and postmodern capitalist and consumerist society. Consumer historians maintain that fashion, understood as a sector of people embracing a prevailing clothing style for a short period, started during the Middle Ages but gained popularity later. It symbolised the transition from a medieval society with its solid, fixed religious values into a modern society with its secular, dynamic consumer culture. Renaissance society was a modern secular society in its preoccupation with daily life and changing circumstances. Yet it was the late 18th-century industrial revolution that revolutionised thought and ideology in Europe: the Industrial Revolution reinforced the Western belief in rationality and strengthened the position of science. In such a rational Western society, modernity, with its new ideas, came to challenge the whole idea of old fixed norms, reflecting the modern secular, rational culture and renouncing the medieval pious consumer. In modern society, supported by the industrial revolution and mass production, fashion encouraged broader sectors of society to take part in a practice once reserved for the aristocracy and royal courts. Moreover, the fashion project emphasizes the human body and its beauty, contradicting Judeo-Christian culture, which tends to abhor and criticize interest in sensuality and hedonism. In mainstream Western discourse, fashionable dress differentiates between the emancipated, stylish, consumerist, secular modern female and the assumedly oppressed, traditional, modest religious female. Opposing this discourse, I look at the controversy over what has been called "Islamic fashion" that started during the 1980s and continued to gain popularity in contemporary Egyptian society. 
I discuss the challenges of being a fashionable and practicing Muslim female in light of two prominent models of female "Islamic fashion" in postcolonial Egypt: Jasmin Mohshen, the first hijabi model in Egypt, and Manal Rostom, the first Muslim woman to represent the Nike campaign in the Middle East. The research employs fashion and postcolonial theories to rethink current Muslim women's position on women's emancipation, Western modernity, and practising faith in postcolonial Egypt. The paper argues that Muslim women's current innovative and fashionable dress can work as a counter-discourse to the Orientalist and exclusive representation of non-Western Muslim culture as an inherently inert, timeless culture. Furthermore, "Islamic" fashionable dress as an aesthetic medium for expressing ideas and convictions in contemporary Egypt interrogates the claim of universal secular modernity and Western fashion theorists' reluctance to consider Islamic fashion as fashion.
Keywords: fashion, Muslim women, modernity, secularism
Procedia PDF Downloads 129
1335 Coupling of Microfluidic Droplet Systems with ESI-MS Detection for Reaction Optimization
Authors: Julia R. Beulig, Stefan Ohla, Detlev Belder
Abstract:
In contrast to off-line analytical methods, lab-on-a-chip technology delivers direct information about the observed reaction. Microfluidic devices therefore make an important scientific contribution, e.g. in the field of synthetic chemistry, where the rapid generation of analytical data can be applied to the optimization of chemical reactions. These devices enable a fast change of reaction conditions as well as a resource-saving mode of operation. In the presented work, we focus on the investigation of multiphase regimes, more specifically on biphasic microfluidic droplet systems. Here, every single droplet is a reaction container with customized conditions. The biggest challenge is the rapid qualitative and quantitative readout of information, as most detection techniques for droplet systems are non-specific, time-consuming, or too slow. An exception is electrospray ionization mass spectrometry (ESI-MS). The combination of a reaction screening platform with a rapid and specific detection method is an important step in droplet-based microfluidics. In this work, we present a novel approach for synthesis optimization on the nanoliter scale with direct ESI-MS detection. We show the development of a droplet-based microfluidic device that enables the modification of different parameters while simultaneously monitoring their effect on the reaction within a single run. Using common soft- and photolithographic techniques, a polydimethylsiloxane (PDMS) microfluidic chip with different functionalities was developed. As an interface for MS detection, we use a steel capillary for ESI and improve the spray stability with a Teflon siphon tubing inserted underneath the steel capillary. By optimizing the flow rates, it is possible to screen parameters of various reactions; this is exemplarily shown for a Domino Knoevenagel Hetero-Diels-Alder reaction. Different starting materials, catalyst concentrations, and solvent compositions were investigated. 
Due to the high repetition rate of the droplet production, each set of reaction conditions is examined hundreds of times. As a result of the investigation, we obtain suitable reagents, the ideal water-methanol ratio of the solvent, and the most effective catalyst concentration. The developed system can help to determine important information about the optimal parameters of a reaction within a short time. With this novel tool, we make an important step in the field of combining droplet-based microfluidics with organic reaction screening.
Keywords: droplet, mass spectrometry, microfluidics, organic reaction, screening
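The nanoliter scale of each droplet reactor follows directly from the dispersed-phase flow rate and the droplet generation frequency; a minimal sketch with illustrative numbers (the study's actual flow rates are not stated in the abstract):

```python
def droplet_volume_nl(q_dispersed_ul_min, frequency_hz):
    """Droplet volume [nL] = dispersed-phase flow rate / generation frequency."""
    q_nl_s = q_dispersed_ul_min * 1000.0 / 60.0  # uL/min -> nL/s
    return q_nl_s / frequency_hz

# e.g. a 6 uL/min aqueous reagent stream segmented at 50 droplets/s
v = droplet_volume_nl(6.0, 50.0)  # 2 nL per droplet reactor
```

The same relation explains the screening throughput: at tens of droplets per second, each reaction condition is replicated hundreds of times within minutes.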
Procedia PDF Downloads 301
1334 Accounting for Downtime Effects in Resilience-Based Highway Network Restoration Scheduling
Authors: Zhenyu Zhang, Hsi-Hsien Wei
Abstract:
Highway networks play a vital role in post-disaster recovery for disaster-damaged areas. Damaged bridges in such networks can disrupt recovery activities by impeding the transportation of people, cargo, and reconstruction resources. Therefore, rapid restoration of damaged bridges is of paramount importance to long-term disaster recovery. In the post-disaster recovery phase, the key to restoration scheduling for a highway network is the prioritization of bridge-repair tasks. Resilience is widely used as a measure of the ability of a network to recover and return to its pre-disaster level of functionality. In practice, highways will be temporarily blocked during the downtime of bridge restoration, decreasing highway-network functionality; failure to take these downtime effects into account can lead to overestimation of network resilience. Additionally, post-disaster recovery of highway networks is generally divided into emergency bridge repair (EBR) in the response phase and long-term bridge repair (LBR) in the recovery phase, and EBR and LBR differ in terms of restoration objectives, restoration duration, budget, etc. Distinguishing these two phases is important for precisely quantifying highway network resilience and generating suitable restoration schedules for highway networks in the recovery phase. To address the above issues, this study proposes a novel resilience quantification method for the optimization of long-term bridge repair schedules (LBRS), taking into account the impact of EBR activities and restoration downtime on a highway network’s functionality. A time-dependent integer program with recursive functions is formulated for optimally scheduling LBR activities. Moreover, since uncertainty always exists in the LBRS problem, this paper extends the optimization model from the deterministic case to the stochastic case. 
A hybrid genetic algorithm that integrates a heuristic approach into a traditional genetic algorithm to accelerate the evolution process is developed. The proposed methods are tested using data from the 2008 Wenchuan earthquake, based on a regional highway network in Sichuan, China, consisting of 168 highway bridges on 36 highways connecting 25 cities/towns. The results show that, in this case, neglecting the bridge restoration downtime can lead to approximately 15% overestimation of highway network resilience. Moreover, accounting for the impact of EBR on network functionality can help to generate a more specific and reasonable LBRS. The theoretical and practical values are as follows. First, the proposed network recovery curve contributes to a comprehensive quantification of highway network resilience by accounting for the impact of both restoration downtime and EBR activities on the recovery curves. Second, this study can improve highway network resilience in the organizational dimension by providing bridge managers with optimal LBR strategies.
Keywords: disaster management, highway network, long-term bridge repair schedule, resilience, restoration downtime
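Resilience of this kind is commonly quantified as the normalized area under the functionality-recovery curve; the sketch below shows, on synthetic curves (not the Wenchuan data), how ignoring restoration downtime inflates the metric:

```python
def resilience(functionality, horizon_days):
    """Normalized area under a piecewise-constant functionality curve.
    `functionality` is a list of (day, level) breakpoints, level in [0, 1];
    the last level is held until the evaluation horizon."""
    area = 0.0
    for (d0, f0), (d1, _) in zip(functionality, functionality[1:]):
        area += f0 * (d1 - d0)
    last_day, last_f = functionality[-1]
    area += last_f * (horizon_days - last_day)
    return area / horizon_days

# Synthetic recovery: the quake drops functionality to 0.5 at day 0;
# one repair closes a highway (days 10-15, dip to 0.4) before full
# recovery at day 30, evaluated over a 60-day horizon.
with_downtime = resilience([(0, 0.5), (10, 0.4), (15, 0.5), (30, 1.0)], 60)
no_downtime = resilience([(0, 0.5), (30, 1.0)], 60)
```

Here the downtime-aware curve yields a lower resilience value than the naive one, which is the direction of the roughly 15% overestimation the study reports.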
Procedia PDF Downloads 150
1333 Discovering the Effects of Meteorological Variables on the Air Quality of Bogota, Colombia, by Data Mining Techniques
Authors: Fabiana Franceschi, Martha Cobo, Manuel Figueredo
Abstract:
Bogotá, the capital of Colombia, is its largest city and one of the most polluted in Latin America due to the fast economic growth of the last ten years. Bogotá has been affected by high pollution events which led to high concentrations of PM10 and NO2, exceeding the local 24-hour legal limits (100 and 150 µg/m³, respectively). The most important pollutants in the city are PM10 and PM2.5 (which are associated with respiratory and cardiovascular problems), and it is known that their concentrations in the atmosphere depend on local meteorological factors. Therefore, it is necessary to establish a relationship between the meteorological variables and the concentrations of atmospheric pollutants such as PM10, PM2.5, CO, SO2, NO2 and O3. This study aims to determine the interrelations between meteorological variables and air pollutants in Bogotá using data mining techniques. Data from 13 monitoring stations were collected from the Bogotá Air Quality Monitoring Network within the period 2010-2015. The Principal Component Analysis (PCA) algorithm was applied to obtain primary relations between all the parameters, and afterwards, the K-means clustering technique was implemented to corroborate those relations and to find patterns in the data. PCA was also used on a per-shift basis (morning, afternoon, night and early morning) to check for possible variation of the previous trends, and on a per-year basis to verify that the identified trends remained throughout the study period. Results demonstrated that wind speed, wind direction, temperature, and NO2 are the factors with the most influence on PM10 concentrations. Furthermore, it was confirmed that high-humidity episodes increased PM2.5 levels. It was also found that there are directly proportional relationships between O3 levels and wind speed and radiation, while there is an inverse relationship between O3 levels and humidity. 
Concentrations of SO2 increase with the presence of PM10 and decrease with wind speed and wind direction. The results also showed a decreasing trend in pollutant concentrations over the last five years, and that in rainy periods (March-June and September-December) some trends regarding precipitation were stronger. Results obtained with K-means demonstrated that it was possible to find patterns in the data, and they also showed similar conditions and data distributions among the Carvajal, Tunal and Puente Aranda stations, as well as between Parque Simon Bolivar and Las Ferias. It was verified that the aforementioned trends prevailed during the study period by applying the same technique per year. It was concluded that the PCA algorithm is useful for establishing preliminary relationships among variables, and K-means clustering for finding patterns in the data and understanding its distribution. The discovery of patterns in the data allows using these clusters as an input to an Artificial Neural Network prediction model.
Keywords: air pollution, air quality modelling, data mining, particulate matter
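The PCA-then-K-means workflow described above can be sketched compactly; the data here are synthetic stand-ins for the station records, and the plain-NumPy implementations are illustrative rather than the study's actual tooling:

```python
import numpy as np

def pca(X, n_components=2):
    """Principal components via SVD of the mean-centered data matrix."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:n_components].T          # projected observations
    explained = (S**2) / (S**2).sum()          # variance ratio per component
    return scores, explained[:n_components]

def kmeans(X, k, n_iter=50, seed=0):
    """Plain Lloyd's algorithm: assign to nearest centroid, recompute means."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None] - centroids) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# Toy stand-in for hourly station records: three columns could be wind
# speed, temperature and PM10 (synthetic, not the Bogotá network data)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 3)), rng.normal(5, 1, (50, 3))])
scores, explained = pca(X)
labels, _ = kmeans(X, 2)
```

In practice the PCA loadings give the preliminary variable relationships and the K-means labels give the station groupings, mirroring the two-step analysis in the abstract.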
Procedia PDF Downloads 258
1332 Energy Efficiency Measures in Canada’s Iron and Steel Industry
Authors: A. Talaei, M. Ahiduzzaman, A. Kumar
Abstract:
In Canada, an increase in the production of iron and steel is anticipated to satisfy the increasing demand for iron and steel in the oil sands and automobile industries. It is predicted that GHG emissions from the iron and steel sector will increase continuously until 2030 and that, with emissions of 20 million tonnes of carbon dioxide equivalent, the sector will account for more than 2% of total national GHG emissions, or 12% of industrial emissions (i.e., a 25% increase from 2010 levels). Therefore, there is an urgent need to improve energy intensity and to implement energy efficiency measures in the industry to reduce its GHG footprint. This paper analyzes the current energy consumption in the Canadian iron and steel industry and identifies energy efficiency opportunities to improve energy intensity and mitigate greenhouse gas emissions from this industry. To this end, a demand tree is developed representing the different iron and steel production routes and the technologies within each route. The main energy consumer within the industry is found to be flared heaters, accounting for 81% of overall energy consumption, followed by motor systems and steam generation, each accounting for 7% of total energy consumption. Eighteen different energy efficiency measures are identified which will help efficiency improvement in various subsectors of the industry. In the sintering process, heat recovery from coolers provides a high potential for energy saving and can be integrated in both new and existing plants. Coke dry quenching (CDQ) has the same advantages. Within the blast furnace iron-making process, injection of large amounts of coal into the furnace appears to be more effective than any other option in this category. In addition, because coal-powered electricity is being phased out in Ontario (where the majority of iron and steel plants are located), there will be surplus coal that could be used in iron and steel plants. 
In the steel-making processes, the recovery of Basic Oxygen Furnace (BOF) gas and scrap preheating provide considerable potential for energy savings in BOF and Electric Arc Furnace (EAF) steel-making processes, respectively. However, despite the energy savings potential, BOF gas recovery is not applicable in existing plants that use steam recovery processes, and given that the share of EAF in steel production is expected to increase, the application potential of the technology will be limited. On the other hand, the long lifetime of the technology and the expected capacity increase of EAF make scrap preheating a justified energy saving option. This paper presents the results of the assessment of the above-mentioned options in terms of costs and GHG mitigation potential.
Keywords: iron and steel sectors, energy efficiency improvement, blast furnace iron-making process, GHG mitigation
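The demand tree described above can be represented as a simple share table; the 81%/7%/7% figures come from the abstract, while the 5% "other" remainder and the 200 PJ sector total are assumptions added so the sketch runs end to end:

```python
# Demand-tree sketch: energy consumption shares by end use, per the abstract.
# The 5% "other" bucket is inferred only so the shares sum to 100%.
demand_tree = {
    "flared heaters": 0.81,
    "motor systems": 0.07,
    "steam generation": 0.07,
    "other": 0.05,
}

def consumption_by_end_use(total_pj, tree):
    """Allocate a hypothetical total sector energy use [PJ] down the tree."""
    return {use: total_pj * share for use, share in tree.items()}

alloc = consumption_by_end_use(200.0, demand_tree)
```

A fuller demand tree would nest these end uses under each production route (integrated BF-BOF vs. EAF), which is how the eighteen efficiency measures are mapped to subsectors.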
Procedia PDF Downloads 397
1331 Modification of Unsaturated Fatty Acids Derived from Tall Oil Using Micro/Mesoporous Materials Based on H-ZSM-22 Zeolite
Authors: Xinyu Wei, Mingming Peng, Kenji Kamiya, Eika Qian
Abstract:
Iso-stearic acid, a branched-chain saturated fatty acid, shows a low pour point, high oxidative stability and great biodegradability. The industrial production of iso-stearic acid involves first isomerizing unsaturated fatty acids into branched-chain unsaturated fatty acids (BUFAs), followed by hydrogenating the BUFAs to obtain iso-stearic acid. However, the production yield of iso-stearic acid is reportedly less than 30%. In recent decades, extensive research has been conducted on branched fatty acids, and most of it has replaced acidic clays with zeolites due to their high selectivity, good thermal stability, and renewability. It has been reported that isomerization of unsaturated fatty acids occurs mainly inside the zeolite channels, whereas the production of by-products like dimer acid mainly occurs at acid sites on the outer surface of the zeolite; further, the deactivation of catalysts is attributed to pore blockage of the zeolite. In the present study, micro/mesoporous ZSM-22 zeolites were developed, as the synthesis of a micro/mesoporous ZSM-22 zeolite is regarded as the ideal strategy owing to its ability to minimize coke formation. Micro/mesoporous H-ZSM-22 zeolites with different mesoporosities were prepared through recrystallization of ZSM-22 in sodium hydroxide solution (0.2-1 M) with a cetyltrimethylammonium bromide (CTAB) template. The structure, morphology, porosity, acidity, and isomerization performance of the prepared catalysts were characterized and evaluated. The dissolution and recrystallization process of the H-ZSM-22 microporous zeolite led to the formation of approximately 4 nm-sized mesoporous channels on the outer surface of the microporous zeolite, resulting in a micro/mesoporous material. This process increased the weak Brønsted acid sites at the pore mouth while reducing the total number of acid sites in ZSM-22. 
Finally, an activity test was conducted using oleic acid as a model compound in a fixed-bed reactor. The activity test results revealed that the micro/mesoporous H-ZSM-22 zeolites exhibited high isomerization activity, reaching >70% selectivity and >50% yield of BUFAs, while the yield of oligomers was limited to less than 20%. This demonstrates that the presence of mesopores in ZSM-22 enhances contact between the feedstock and the active sites within the catalyst, thereby increasing catalyst activity. Additionally, a portion of the dissolved and recrystallized silica adhered to the catalyst's surface, covering the surface active sites, which reduced the formation of oligomers. This study offers distinct insights into the production of iso-stearic acid using a fixed-bed reactor, paving the way for future research in this area.
Keywords: iso-stearic acid, oleic acid, skeletal isomerization, micro/mesoporous, ZSM-22
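The reported conversion, selectivity, and yield figures follow the usual definitions, sketched below with an illustrative (not measured) oleic acid mole balance:

```python
def conversion(feed_in, feed_out):
    """Fraction of feed reacted."""
    return (feed_in - feed_out) / feed_in

def selectivity(product, feed_converted):
    """Fraction of converted feed that became the desired product."""
    return product / feed_converted

def yield_(product, feed_in):
    """Fraction of total feed that became the desired product."""
    return product / feed_in

# Illustrative oleic-acid balance (mol): 100 fed, 25 unreacted,
# 55 branched unsaturated fatty acids (BUFAs), 20 oligomers/others
x = conversion(100.0, 25.0)
s = selectivity(55.0, 100.0 - 25.0)
y = yield_(55.0, 100.0)
```

Note that yield is the product of conversion and selectivity, so numbers like ">70% selectivity and >50% yield" imply conversion above roughly 70%.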
Procedia PDF Downloads 23