Search results for: demand of waterway transport passengers
231 Biotechnological Interventions for Crop Improvement in Nutricereal Pearl Millet
Authors: Supriya Ambawat, Subaran Singh, C. Tara Satyavathi, B. S. Rajpurohit, Ummed Singh, Balraj Singh
Abstract:
Pearl millet [Pennisetum glaucum (L.) R. Br.] is an important staple food of the arid and semiarid tropical regions of Asia, Africa, and Latin America. It is rightly termed a nutricereal, as it has high nutritional value and is a good source of carbohydrates, protein, fat, ash, dietary fiber, potassium, magnesium, iron, and zinc. Pearl millet has a low prolamin fraction and is gluten-free, which makes it suitable for people with a gluten allergy. It has several health benefits, such as reduction in blood pressure and in thyroid, diabetes, cardiovascular, and celiac diseases, but its direct consumption as food has declined significantly for several reasons. Keeping this in view, it is important to reorient efforts to generate demand through value addition and quality improvement and to create awareness of the nutritional merits of pearl millet. In India, multilocational coordinated trials of newly developed hybrids were conducted at various centers through the Indian Council of Agricultural Research–All India Coordinated Research Project on Pearl Millet. Pearl millet gene banks contain accessions with high levels of iron and zinc, which were crossed with high-yielding varieties to produce new varieties with elevated iron content. Thus, using breeding approaches and biochemical analysis, a total of 167 hybrids and 61 varieties were identified and released for cultivation in different agro-ecological zones of the country, including some biofortified hybrids rich in Fe and Zn. Further, using several biotechnological interventions such as molecular markers, next-generation sequencing (NGS), association mapping, nested association mapping (NAM), MAGIC populations, genome editing, genotyping by sequencing (GBS), and genome-wide association studies (GWAS), advances in millet improvement have become possible through the identification and tagging of genes underlying traits of interest. Using DArT markers, very high-density linkage maps have been constructed for pearl millet.
Improved HHB67 has been released using marker-assisted selection (MAS) strategies, and genomic tools were used to identify Fe-Zn quantitative trait loci (QTLs). The draft genome sequence of pearl millet has also opened various avenues for its further exploration. Further, the genomic positions of simple sequence repeat (SSR) markers significantly associated with iron and zinc content are being identified on the consensus map, and research is in progress towards mapping QTLs for flour rancidity. The sequence information is being used to explore genes and enzymatic pathways responsible for the rancidity of flour. Thus, the development and application of several biotechnological approaches, along with biofortification, can accelerate genetic gain targets for pearl millet improvement and help improve its quality.
Keywords: biotechnological approaches, genomic tools, malnutrition, MAS, nutricereal, pearl millet, sequencing
Procedia PDF Downloads 185
230 Predicting Provider Service Time in Outpatient Clinics Using Artificial Intelligence-Based Models
Authors: Haya Salah, Srinivas Sharan
Abstract:
Healthcare facilities use appointment systems to schedule patient visits and to manage access to their medical services. With the growing demand for outpatient care, it is now imperative to manage physicians' time effectively. However, high variation in consultation duration affects the clinical scheduler's ability to estimate appointment duration and allocate provider time appropriately. Underestimating consultation times can lead to physician burnout, misdiagnosis, and patient dissatisfaction. On the other hand, appointment durations that are longer than required lead to doctor idle time and fewer patient visits. Therefore, a good estimate of consultation duration has the potential to improve timely access to care, resource utilization, quality of care, and patient satisfaction. Although the literature on factors influencing consultation length abounds, little work has been done to predict it using data-driven approaches. Therefore, this study aims to predict consultation duration using supervised machine learning (ML) algorithms, which predict an outcome variable (e.g., consultation duration) from features that potentially influence it. In particular, ML algorithms learn from a historical dataset without being explicitly programmed and uncover the relationship between the features and the outcome variable. A subset of the data used in this study was obtained from the electronic medical records (EMR) of four outpatient clinics located in central Pennsylvania, USA. In addition, publicly available information on doctors' characteristics, such as gender and experience, was extracted from online sources. This research develops three popular ML algorithms (deep learning, random forest, and gradient boosting machine) to predict the treatment time required for a patient and conducts a comparative analysis of these algorithms with respect to predictive performance.
The findings of this study indicate that ML algorithms have the potential to predict provider service time with superior accuracy. While the clinic's current approach of experience-based appointment duration estimation resulted in a mean absolute percentage error (MAPE) of 25.8%, the deep learning algorithm developed in this study yielded the best performance, with a MAPE of 12.24%, followed by the gradient boosting machine (13.26%) and random forest (14.71%). This research also identified the critical variables affecting consultation duration: patient type (new vs. established), doctor's experience, zip code, appointment day, and doctor's specialty. Moreover, several practical insights were obtained from the comparative analysis of the ML algorithms. The machine learning approach presented in this study can serve as a decision support tool and could be integrated into the appointment system for effectively managing patient scheduling.
Keywords: clinical decision support system, machine learning algorithms, patient scheduling, prediction models, provider service time
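As a minimal illustration of the modelling approach described in this abstract, the sketch below trains a gradient boosting regressor on synthetic appointment data and scores it with the MAPE metric used in the study. The feature set and all numbers are hypothetical stand-ins, not the clinics' EMR data.

```python
# Minimal sketch (not the study's code): train a gradient boosting model on
# synthetic appointment data and score it with MAPE. Features and target are
# hypothetical stand-ins for the EMR variables named in the abstract.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.integers(0, 2, n),   # patient type: 0 = established, 1 = new
    rng.integers(1, 30, n),  # doctor's experience in years
    rng.integers(0, 7, n),   # appointment day of week
])
# Synthetic target: new patients take longer, experienced doctors are faster
y = 15 + 10 * X[:, 0] - 0.2 * X[:, 1] + rng.normal(0, 2, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)

mape = 100 * np.mean(np.abs((y_te - pred) / y_te))
print(f"MAPE: {mape:.2f}%")
```

The same scoring loop would apply unchanged to the deep learning and random forest models compared in the study.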
Procedia PDF Downloads 121
229 High Pressure Thermophysical Properties of Complex Mixtures Relevant to Liquefied Natural Gas (LNG) Processing
Authors: Saif Al Ghafri, Thomas Hughes, Armand Karimi, Kumarini Seneviratne, Jordan Oakley, Michael Johns, Eric F. May
Abstract:
Knowledge of the thermophysical properties of complex mixtures at extreme conditions of pressure and temperature has always been essential to the evolution of the Liquefied Natural Gas (LNG) industry because of the tremendous technical challenges present at all stages of the supply chain, from production to liquefaction to transport. Each stage is designed using predictions of the mixture’s properties, such as density, viscosity, surface tension, heat capacity, and phase behaviour, as a function of temperature, pressure, and composition. Unfortunately, currently available models lead to equipment over-designs of 15% or more. To achieve better designs that work more effectively and/or over a wider range of conditions, new fundamental property data are essential, both to resolve discrepancies in our current predictive capabilities and to extend them to the higher-pressure conditions characteristic of many new gas fields. Furthermore, innovative experimental techniques are required to measure different thermophysical properties at high pressures and over a wide range of temperatures, including near the mixture’s critical points, where gas and liquid become indistinguishable and where most existing predictive fluid property models break down. In this work, we present a wide range of experimental measurements made for different binary and ternary mixtures relevant to LNG processing, with a particular focus on viscosity, surface tension, heat capacity, bubble points, and density. For this purpose, customized and specialized apparatus were designed and validated over the temperature range (200 to 423) K at pressures up to 35 MPa. The mixtures studied were (CH4 + C3H8), (CH4 + C3H8 + CO2), and (CH4 + C3H8 + C7H16); in the last of these, the heptane content was up to 10 mol %.
Viscosity was measured using a vibrating wire apparatus, while mixture densities were obtained by means of a high-pressure magnetic-suspension densimeter and an isochoric cell apparatus; the latter was also used to determine bubble points. Surface tensions were measured using the capillary rise method in a visual cell, which also enabled the location of the mixture critical point to be determined from observations of critical opalescence. Mixture heat capacities were measured using a customised high-pressure differential scanning calorimeter (DSC). The combined standard relative uncertainties were less than 0.3% for density, 2% for viscosity, 3% for heat capacity, and 3% for surface tension. The extensive experimental data gathered in this work were compared with a variety of advanced engineering models frequently used for predicting the thermophysical properties of mixtures relevant to LNG processing. In many cases the discrepancies between the predictions of different engineering models for these mixtures were large, and the high-quality data allowed erroneous but often widely used models to be identified. The data enable the development of new or improved models, to be implemented in process simulation software, so that the fluid properties needed for equipment and process design can be predicted reliably. This in turn will enable reduced capital and operational expenditure by the LNG industry. The current work also aids the community of scientists working to advance theoretical descriptions of fluid properties by helping to identify deficiencies in theoretical descriptions and calculations.
Keywords: LNG, thermophysical, viscosity, density, surface tension, heat capacity, bubble points, models
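The data-versus-model comparison described in this abstract can be sketched as computing relative deviations between measured and predicted property values and judging them against the stated experimental uncertainty (0.3% for density). All numbers below are illustrative placeholders, not the study's measurements.

```python
# Minimal sketch of the data-vs-model comparison described above: relative
# deviations of model-predicted densities from measurements, judged against
# the stated 0.3% experimental uncertainty. All values are illustrative.
import numpy as np

rho_measured = np.array([410.2, 398.7, 385.1])  # kg/m^3 (hypothetical data)
rho_model = np.array([412.0, 395.0, 390.0])     # kg/m^3 (model predictions)

rel_dev = 100 * (rho_model - rho_measured) / rho_measured
outside = np.abs(rel_dev) > 0.3  # beyond the experimental uncertainty
print("Relative deviation (%):", np.round(rel_dev, 2))
print("Exceeds 0.3% uncertainty:", outside)
```

Deviations well outside the measurement uncertainty, as in this toy example, are what allow an erroneous model to be identified with confidence.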
Procedia PDF Downloads 274
228 Revolutionizing Accounting: Unleashing the Power of Artificial Intelligence
Authors: Sogand Barghi
Abstract:
The integration of artificial intelligence (AI) in accounting practices is reshaping the landscape of financial management. This paper explores the innovative applications of AI in the realm of accounting, emphasizing its transformative impact on efficiency, accuracy, decision-making, and financial insights. By harnessing AI's capabilities in data analysis, pattern recognition, and automation, accounting professionals can redefine their roles, elevate strategic decision-making, and unlock unparalleled value for businesses. This paper delves into AI-driven solutions such as automated data entry, fraud detection, predictive analytics, and intelligent financial reporting, highlighting their potential to revolutionize the accounting profession. Artificial intelligence has swiftly emerged as a game-changer across industries, and accounting is no exception. This paper seeks to illuminate the profound ways in which AI is reshaping accounting practices, transcending conventional boundaries, and propelling the profession toward a new era of efficiency and insight-driven decision-making. One of the most impactful applications of AI in accounting is automation. Tasks that were once labor-intensive and time-consuming, such as data entry and reconciliation, can now be streamlined through AI-driven algorithms. This not only reduces the risk of errors but also allows accountants to allocate their valuable time to more strategic and analytical tasks. AI's ability to analyze vast amounts of data in real time enables it to detect irregularities and anomalies that might go unnoticed by traditional methods. Fraud detection algorithms can continuously monitor financial transactions, flagging any suspicious patterns and thereby bolstering financial security. AI-driven predictive analytics can forecast future financial trends based on historical data and market variables. 
This empowers organizations to make informed decisions, optimize resource allocation, and develop proactive strategies that enhance profitability and sustainability. Traditional financial reporting often involves extensive manual effort and data manipulation. With AI, reporting becomes more intelligent and intuitive. Automated report generation not only saves time but also ensures accuracy and consistency in financial statements. While the potential benefits of AI in accounting are undeniable, there are challenges to address. Data privacy and security concerns, the need for continuous learning to keep up with evolving AI technologies, and potential biases within algorithms demand careful attention. The convergence of AI and accounting marks a pivotal juncture in the evolution of financial management. By harnessing the capabilities of AI, accounting professionals can transcend routine tasks, becoming strategic advisors and data-driven decision-makers. The applications discussed in this paper underline the transformative power of AI, setting the stage for an accounting landscape that is smarter, more efficient, and more insightful than ever before. The future of accounting is here, and it's driven by artificial intelligence.
Keywords: artificial intelligence, accounting, automation, predictive analytics, financial reporting
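As a hedged sketch of the fraud detection idea this abstract describes, the snippet below flags anomalous transaction amounts with scikit-learn's IsolationForest. The data are synthetic, and a production system would use far richer features than a single amount column.

```python
# Hedged sketch of AI-based fraud flagging using an Isolation Forest.
# The transaction amounts are synthetic; a real system would use many more
# features (counterparty, timing, account history, ...).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Routine transactions cluster near 100; three extreme outliers are injected
amounts = np.concatenate([rng.normal(100, 15, 500), [5000.0, 7500.0, -3000.0]])
X = amounts.reshape(-1, 1)

detector = IsolationForest(contamination=0.01, random_state=42).fit(X)
flags = detector.predict(X)  # -1 = anomalous, 1 = normal

suspicious = amounts[flags == -1]
print("Flagged amounts:", suspicious)
```

In a continuous-monitoring setting, the fitted detector would score each incoming transaction and route the flagged ones to a human reviewer rather than rejecting them automatically.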
Procedia PDF Downloads 71
227 Geometric Optimisation of Piezoelectric Fan Arrays for Low Energy Cooling
Authors: Alastair Hales, Xi Jiang
Abstract:
Numerical methods are used to evaluate the operation of confined face-to-face piezoelectric fan arrays as the pitch, P, between the blades is varied. Both in-phase and counter-phase oscillation are considered. A piezoelectric fan consists of a fan blade, which is clamped at one end, and an extremely low-powered actuator. This drives the blade tip’s oscillation at its first natural frequency. Sufficient blade tip speed, created by the high oscillation frequency and amplitude, is required to induce vortices and downstream volume flow in the surrounding air. A single piezoelectric fan may provide the ideal solution for low-powered hot-spot cooling in an electronic device, but it is unable to induce sufficient downstream airflow to replace a conventional air mover, such as a convection fan, in power electronics. Piezoelectric fan arrays, which are assemblies of multiple fan blades usually in face-to-face orientation, must be developed to widen the field of feasible applications for the technology. The potential energy saving is significant, with a 50% power demand reduction compared to convection fans even in an unoptimised state. A numerical model of a typical piezoelectric fan blade is derived and validated against experimental data. Numerical error is found to be 5.4% and 9.8% using two data comparison methods. The model is used to explore the variation of pitch as a function of amplitude, A, for a confined two-blade piezoelectric fan array in face-to-face orientation, with the blades oscillating both in-phase and counter-phase. It has been reported that in-phase oscillation is optimal for generating maximum downstream velocity and flow rate in unconfined conditions, due at least in part to the beneficial coupling between the adjacent blades that leads to an increased oscillation amplitude. The present model demonstrates that confinement has a significant detrimental effect on in-phase oscillation.
Even at low pitch, counter-phase oscillation produces enhanced downstream air velocities and flow rates. Downstream air velocity from counter-phase oscillation can be maximally enhanced, relative to that generated from a single blade, by 17.7% at P = 8A. Flow rate enhancement at the same pitch is found to be 18.6%. By comparison, in-phase oscillation at the same pitch yields 23.9% and 24.8% reductions in peak downstream air velocity and flow rate, relative to a single blade. This optimal pitch, comparable to those reported in the literature, suggests that counter-phase oscillation is less affected by confinement. The optimal pitch for generating bulk airflow from counter-phase oscillation is large, P > 16A, due to the small but significant downstream velocity across the span between adjacent blades. However, when designing for a confined space, counter-phase pitch should be minimised to maximise the bulk airflow generated from a given cross-sectional area within a channel flow application. Quantitative values are found to deviate to a small degree as other geometric and operational parameters are varied, but the established relationships are maintained.
Keywords: piezoelectric fans, low energy cooling, power electronics, computational fluid dynamics
Procedia PDF Downloads 221
226 Implementation of an Online-Platform at the University of Freiburg to Help Medical Students Cope with Stress
Authors: Zoltán Höhling, Sarah-Lu Oberschelp, Niklas Gilsdorf, Michael Wirsching, Andrea Kuhnert
Abstract:
A majority of medical students at the University of Freiburg reported stress-related psychosomatic symptoms which are often associated with their studies. International research supports these findings, as medical students worldwide seem to be at special risk for mental health problems. In some countries and institutions, psychologically based interventions that assist medical students in coping with their stressors have been implemented. It turned out that anonymity is an important aspect here: many students fear potential damage to their reputation when associated with mental health problems, which may be due to a high level of competitiveness in classes. Therefore, we launched an online platform where medical students can anonymously seek help and exchange their experiences with fellow students and experts. Medical students of all semesters have access to it through the university’s learning management system (called “ILIAS”). The informative part of the platform consists of exemplary videos showing medical students (actors) who act out scenes that demonstrate the antecedents of stress-related psychosomatic disorders. These videos are linked to different expert comments, describing the exhibited symptoms in an understandable and normalizing way. The (inter-)active part of the platform consists of self-help tools (such as meditation exercises or general tips for stress-coping) and an anonymous interactive forum where students can describe their stress-related problems and seek guidance from experts and/or share their experiences with fellow students. Besides providing immediate help for affected students, we expect that competitiveness between students might be diminished and bonds strengthened through mutual support. In the initial phase after the platform’s launch, it was accessed by a considerable number of medical students.
On closer inspection, it appeared that platform sections like general information on psychosomatic symptoms and self-help tools were accessed far more often than the online forum during the first months after the platform launch. Although initial acceptance of the platform was relatively high, students showed a rather passive way of using it. While user statistics showed a clear demand for information on stress-related psychosomatic symptoms and possible remedies, active engagement in the interactive online forum was rare. We are currently advertising the platform intensively and trying to point out the assured anonymity of the platform and its interactive forum. Our plans to assure students of their anonymity through the use of an e-learning facility and to promote active engagement in the online forum did not (yet) turn out as expected. The reasons behind this may be manifold and based on either e-learning-related issues or issues related to students’ individual needs. Students might, for example, question the assured anonymity due to a lack of trust in the technical functioning of the university’s learning management system. However, one may also conclude that reluctance to discuss stress-related psychosomatic symptoms with peer medical students may not be based solely on anonymity concerns, but could be rooted in more complex issues such as general mistrust between students.
Keywords: e-tutoring, stress-coping, student support, online forum
Procedia PDF Downloads 385
225 Relative Expression and Detection of MUB Adhesion Domains and Plantaricin-Like Bacteriocin among Probiotic Lactobacillus plantarum-Group Strains Isolated from Fermented Foods
Authors: Sundru Manjulata Devi, Prakash M. Halami
Abstract:
Fermented foods from vegetable, dairy, and other biological sources have been used in India since time immemorial and are in great demand because of their health benefits. However, the diversity of the Lactobacillus plantarum group (LPG) of vegetable origin has not yet been revealed, particularly with reference to probiotic functionalities. In the present study, different species of the probiotic Lactobacillus plantarum group (LPG), i.e., L. plantarum subsp. plantarum MTCC 5422 (from fermented cereals), L. plantarum subsp. argentoratensis FG16 (from fermented bamboo shoot), and L. paraplantarum MTCC 9483 (from fermented gundruk), as characterized by multiplex recA PCR assay, were investigated for the relative expression of MUB domains of the mub gene (mucin-binding protein) by real-time PCR. Initially, allelic variation in the mub gene was assessed and found to encode three different variants (Types I, II, and III). The three types had 8, 9, and 10 MUB domains, respectively (as analysed with the Pfam database), and these domains are responsible for adhesion of the bacteria to the host intestinal epithelial cells. The domains either get inserted or deleted during speciation or evolutionary events, leading to divergence. Reverse transcriptase qPCR analysis with the mubLPF1+R1 primer pair supported this variation, with amplicon sizes of 300, 500, and 700 bp among the different LPG strains. The relative expression of these MUB domains was significantly upregulated in the presence of 1% mucin in overnight-grown cultures. The mub gene was upregulated by 7-fold in L. paraplantarum MTCC 9483, which carries 10 MUB domains. The increase in expression levels for L. plantarum subsp. plantarum MTCC 5422 and L. plantarum subsp. argentoratensis FG16 (MCC 2974), with 9 and 8 repetitive domains, was around 4- and 2-fold, respectively. The detection and expression of an integrase (int) gene in the upstream region of the mub gene suggests excision and integration of these repetitive domains.
Concurrently, an in vitro assay of adhesion to mucin and exclusion of pathogens (such as Listeria monocytogenes and Micrococcus luteus) was carried out, and it was observed that L. paraplantarum MTCC 9483, with more adhesion domains, had a greater ability to adhere to mucin and inhibited the growth of the pathogens. The production and expression of a plantaricin-like bacteriocin (plnNC8 type) in MTCC 9483 likely accounts for this pathogen inhibition. Hence, the expression of MUB domains can serve as a potential biomarker in screening for novel probiotic LPG strains with adherence properties. The present study provides a platform for an easy, rapid, less time-consuming, low-cost methodology for the detection of potential probiotic bacteria. The traditional practices followed in the preparation of Indian fermented foods such as bamboo shoots, gundruk, and cereals yield different kinds of nutraceuticals for functional foods and novel compounds with health-promoting factors. In the future, a detailed study of these food products can add nutritive value, increase consumption, and make them suitable for commercialization.
Keywords: adhesion gene, fermented foods, MUB domains, probiotics
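Relative expression measurements like those reported in this abstract are conventionally derived from qPCR Ct values with the 2^(-ΔΔCt) method. The sketch below shows that calculation with hypothetical Ct values chosen to reproduce a roughly 7-fold change; it is a generic illustration, not the study's data.

```python
# Worked sketch of the 2^(-ddCt) method commonly used to derive relative
# expression from RT-qPCR Ct values. All Ct numbers here are hypothetical,
# chosen to reproduce a roughly 7-fold upregulation.

def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    """Relative expression (treated vs control) of a target gene,
    normalised to a reference (housekeeping) gene."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2 ** (-dd_ct)

# Hypothetical example: target-gene Ct drops from 25.0 to 22.2 with 1% mucin
# while the reference gene stays at 18.0, i.e. about 7-fold upregulation
fc = fold_change(22.2, 18.0, 25.0, 18.0)
print(f"fold change: {fc:.1f}")
```

A lower Ct means earlier amplification and hence higher expression, which is why the Ct drop maps to an expression increase.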
Procedia PDF Downloads 270
224 India’s Neighborhood Policy and the Northeast: Exploratory Study of the Nagas in the Indo-Myanmar Border
Authors: Sachoiba Inkah
Abstract:
The Northeast region has not been a major factor in India’s foreign policy calculations since independence. Instead, the region was ignored and marginalized, even to the extent of using force and repressive Acts such as AFSPA (Armed Forces Special Powers Act) to suppress the voices of both states and non-state actors. The liberalization of the economy in the 1990s, in the wake of globalization, gave India a new outlook, and the Look East Policy (LEP) was a paradigm shift in India’s engagement with Southeast Asian nations as it sought to explore the benefits of ASEAN. The reorienting of India’s foreign policy to ‘Neighborhood First’ is attributed to the present political dispensation and has been further widened to include the ‘Extended Neighborhood.’ As a result, the Northeastern states have become key players in India’s participation in regional groupings such as SAARC, BIMSTEC, and BCIM. The need for external balancing, diplomacy, and development has reset India’s foreign policy priorities, as the Northeastern states lie at the confluence of South Asia, Southeast Asia, and East Asia and are stakeholders in the Act East Policy. The paper explores the role of the Northeastern states in the framework of Indian foreign policy, as the region shares international boundaries with China, Bhutan, Bangladesh, and Myanmar, and most importantly studies the case of the Nagas, who are spread across Manipur, Nagaland, and Arunachal Pradesh bordering Myanmar. The Indo-Myanmar border is an area of conflict, and various illegal activities such as arms trafficking, illegal migration, and drug and human trafficking are still being carried out; to address this issue, both India and Myanmar need to take into consideration the various communities living across the border. Conflict and insurgency should not be a yardstick to curtail the development of infrastructure such as roads, health facilities, transport, and communication in the contested region.
The realities, perceptions, and contentions of the Northeastern states and the different communities living in the border areas need a wider discourse, as the region has the potential to drive India’s diplomatic relations with its neighbors and extended neighborhood. The method employed is a descriptive analysis of India’s foreign policy framework, with a focus on the Nagas in Myanmar, drawing on both primary and secondary sources. Primary sources include official documents, data and statistics released by various governmental agencies, parliamentary debates, political speeches, press releases, treaties and agreements, historical biographies, organizational policy papers, protocols and procedures of government conferences, regional organization study reports, etc. The paper concludes that the recent proactive engagement between India and Myanmar on trade, defense, economic, and infrastructure development is a positive sign cementing bilateral ties, but there is not much room for people-to-people connect, especially for people living in the borderland. The Freedom of Movement Regime that is in place is limited, and there is scope for improvement, as people in the borderland look towards trade and commerce not only to uplift the border economy but also to act as a catalyst for robust engagement between the two countries, albeit with more infrastructure such as roads, healthcare, education, tourist hotspots, trade centers, mobile connectivity, etc.
Keywords: foreign policy, infrastructure development, insurgency, people-to-people connect
Procedia PDF Downloads 197
223 The Stem Cell Transcription Co-Factor ZNF521 Sustains the MLL-AF9 Fusion Protein in Acute Myeloid Leukemias by Altering the Gene Expression Landscape
Authors: Emanuela Chiarella, Annamaria Aloisio, Nisticò Clelia, Maria Mesuraca
Abstract:
ZNF521 is a stem cell-associated transcription co-factor that plays a crucial role in the homeostatic regulation of the stem cell compartment in the hematopoietic, osteo-adipogenic, and neural systems. In normal hematopoiesis, primary human CD34+ hematopoietic stem cells typically display high expression of ZNF521, while its mRNA levels rapidly decrease as these progenitors progress towards erythroid, granulocytic, or B-lymphoid differentiation. However, most acute myeloid leukemias (AMLs) and leukemia-initiating cells retain high ZNF521 expression. In particular, AMLs are often characterized by chromosomal translocations involving the Mixed Lineage Leukemia (MLL) gene, which give rise to a variety of fusion oncogenes derived from genes normally required during hematopoietic development; once fused, these promote epigenetic and transcription factor dysregulation. The chromosomal translocation t(9;11)(p21-22;q23), fusing the MLL gene with the AF9 gene, results in a monocytic immunophenotype with an aggressive course, frequent relapses, and short survival. To better understand the dysfunctional transcriptional networks related to these genetic aberrations, AML gene expression profile datasets were queried for ZNF521 expression and its correlation with specific gene rearrangements and mutations. The results showed that ZNF521 mRNA levels are associated with specific genetic aberrations: the highest expression levels were observed in AMLs with t(11q23) MLL rearrangements in two distinct datasets (MILE and den Boer); elevated ZNF521 mRNA levels were also revealed in AMLs with t(7;12) or with internal rearrangements of chromosome 16. On the contrary, relatively low ZNF521 expression was associated with the t(8;21) translocation, which in turn is correlated with the AML1-ETO fusion gene, with the t(15;17) translocation, and with AMLs carrying FLT3-ITD, NPM1, or CEBPα double mutations.
In vitro, we found that the enforced co-expression of ZNF521 in cord blood-derived CD34+ cells induced a significant proliferative advantage, enhancing the effects of MLL-AF9 on the induction of proliferation and the expansion of leukemic progenitor cells. Transcriptome profiling of CD34+ cells transduced with either MLL-AF9, ZNF521, or a combination of the two transgenes highlighted specific sets of up- or down-regulated genes that are involved in the leukemic phenotype, including those encoding transcription factors, epigenetic modulators, and cell cycle regulators, as well as those engaged in the transport or uptake of nutrients. These data underscore the functional cooperation between ZNF521 and MLL-AF9 (MA9), resulting in the development, maintenance, and clonal expansion of leukemic cells. Finally, silencing of ZNF521 in MLL-AF9-transformed primary CD34+ cells inhibited their proliferation and led to their extinction; likewise, ZNF521 silencing in the MLL-AF9+ THP-1 cell line impaired growth and clonogenicity. Taken together, our data highlight the role of ZNF521 in the control of self-renewal and of the immature compartment of malignant hematopoiesis, which, by altering the gene expression landscape, contributes to the development and/or maintenance of AML in concert with the MLL-AF9 fusion oncogene.
Keywords: AML, human zinc finger protein 521 (hZNF521), mixed lineage leukemia gene (MLL) AF9 (MLLT3 or LTG9), cord blood-derived hematopoietic stem cells (CB-CD34+)
Procedia PDF Downloads 110
222 Lignin Valorization: Techno-Economic Analysis of Three Lignin Conversion Routes
Authors: Iris Vural Gursel, Andrea Ramirez
Abstract:
Effective utilization of lignin is an important means of developing economically profitable biorefineries. The current literature suggests that large amounts of lignin will become available in second-generation biorefineries. New conversion technologies will therefore be needed to carry lignin transformation well beyond combustion for energy, towards high-value products such as chemicals and transportation fuels. In recent years, significant progress in catalysis has been made to improve the transformation of lignin, and new catalytic processes are emerging. In this work, a techno-economic assessment of two of these novel conversion routes, and a comparison with the more established lignin pyrolysis route, were made. The aim is to provide insights into the potential performance and potential hotspots in order to guide experimental research and ease commercialization by identifying cost drivers, strengths, and challenges early. The lignin conversion routes selected for detailed assessment were (non-catalytic) lignin pyrolysis as the benchmark, direct hydrodeoxygenation (HDO) of lignin, and hydrothermal lignin depolymerisation. The products generated were mixed oxygenated aromatic monomers (MOAMON), light organics, heavy organics, and char. For the technical assessment, a basis design followed by process modelling in Aspen was carried out using experimental yields. A design capacity of 200 kt/year of lignin feed was chosen, equivalent to a 1 Mt/year lignocellulosic biorefinery. The downstream equipment was modelled to achieve the separation of the defined product streams. For determining external utility requirements, heat integration was considered, and where possible, gases were combusted to cover the heating demand. The models were used to generate the necessary data on material and energy flows. Next, an economic assessment was carried out by estimating operating and capital costs. Return on investment (ROI) and payback period (PBP) were used as indicators.
The results of the process modelling indicate that a series of separation steps is required. The downstream processing was found especially demanding in the hydrothermal upgrading process due to the presence of a significant amount of unconverted lignin (34%) and water. External utility requirements were also found to be high. Due to the complex separations, the hydrothermal upgrading process showed the highest capital cost (50 M€ more than the benchmark), whereas operating costs were found to be highest for the direct HDO process (20 M€/year more than the benchmark) due to the use of hydrogen. Because of high yields of valuable heavy organics (32%) and MOAMON (24%), the direct HDO process showed the highest ROI (12%) and the shortest PBP (5 years). This process is found feasible, with a positive net present value; however, it is very sensitive to the prices used in the calculation. The assessments at this stage are associated with large uncertainties. Nevertheless, they are useful for comparing alternatives and identifying whether a certain process should be given further consideration. Among the three processes investigated here, the direct HDO process was seen to be the most promising.
Keywords: biorefinery, economic assessment, lignin conversion, process design
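As a rough illustration of the two economic indicators used in this assessment, the sketch below computes ROI and a simple payback period; the capex, revenue, and opex figures are purely illustrative assumptions, not values from the study:

```python
# Hedged sketch: ROI and simple payback period for a conversion route.
# All monetary figures (M EUR) are hypothetical placeholders.

def roi_and_payback(capex, annual_revenue, annual_opex):
    """Return (ROI as a fraction per year, simple payback period in years)."""
    annual_profit = annual_revenue - annual_opex
    roi = annual_profit / capex
    payback_years = capex / annual_profit
    return roi, payback_years

# Hypothetical numbers: capex 150 M EUR, revenue 48 M EUR/y, opex 30 M EUR/y
roi, pbp = roi_and_payback(150.0, 48.0, 30.0)
print(f"ROI = {roi:.0%}, payback = {pbp:.1f} years")  # ROI = 12%, payback = 8.3 years
```

Note that the simple payback period here ignores discounting; a full assessment such as the one described would also compute net present value from discounted cash flows.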
Procedia PDF Downloads 261
221 Evaluation of a Start-Up Business Strategy in the Movie Industry: Case Study of Visinema
Authors: Stacia E. H. Sitohang, S.Mn., Socrates Rudy Sirait
Abstract:
The first movie theater in Indonesia was established in December 1900. The movie industry started with the penetration of international movies. After a while, local movie producers started to rise and created local Indonesian movies. The industry has grown through ups and downs in Indonesia. In 2008, Visinema was founded in Jakarta, Indonesia, by Angga Dwimas Sasongko, one of the most respected movie directors in Indonesia. After getting achievements and recognition, Visinema chose to grow the company horizontally, as opposed to only growing vertically and gaining another similar achievement. Visinema chose to build an ecosystem that enables it to obtain many more opportunities and generate business sustainability. The company operates as an agile company. It created several business subsidiaries to support the company's Intellectual Property (IP) development. This research was done through interviews with the key persons in the company and a questionnaire to get market insights regarding Visinema. The company is able to transform its IP, which initially started from movies, into different kinds of business models. Interestingly, Angga chose to use the start-up approach to create Visinema. In 2019, the company successfully gained Series A funding from Intudo Ventures and obtained various other investment schemes to support the business. In early 2020, the Covid-19 pandemic negatively impacted many industries in Indonesia, especially the entertainment and leisure businesses. Fortunately, Visinema did not face any significant survival problems during the pandemic; there were no lay-offs nor work-hour reductions. Instead, the company was thinking of much bigger opportunities and problems. While other companies suffered during the pandemic, Visinema created the first focused Transactional Video On Demand (TVOD) service in Indonesia, named Bioskop Online. The platform was created to keep the company innovating and adapting to the new online market that resulted from the Covid-19 pandemic.
Other than a digital platform, Visinema invested heavily in animation to target the kids and family business. The company believed that penetrating the technology and animation market is going to be the biggest opportunity in Visinema's road map. Besides huge opportunities, Visinema is also facing problems. The first is company brand positioning: Angga, as the founder, felt the need to detach his name from the brand image of Visinema to create system sustainability and scalability. Second, the company has to create a strategy to refocus on a particular business area to maintain and improve its competitive advantages. The third problem, IP piracy, is a huge structural problem in Indonesia; the company considers IP thieves its biggest competitors, as opposed to other production companies. As a recommendation, we suggest a set of branding and management strategies to detach the founder's name from Visinema's brand and improve the competitive advantages. We also suggest Visinema invest in system building to prevent IP piracy in the entertainment industry, which can later become another business subsidiary of Visinema.
Keywords: business ecosystem, agile, sustainability, scalability, start-up, intellectual property, digital platform
Procedia PDF Downloads 137
220 Green Production of Chitosan Nanoparticles and Their Potential as Antimicrobial Agents
Authors: L. P. Gomes, G. F. Araújo, Y. M. L. Cordeiro, C. T. Andrade, E. M. Del Aguila, V. M. F. Paschoalin
Abstract:
The application of nanoscale materials and nanostructures is an emerging area, since these materials may provide solutions to technological and environmental challenges in order to preserve the environment and natural resources. To reach this goal, the increasing demand must be accompanied by 'green' synthesis methods. Chitosan is a natural, nontoxic biopolymer derived by the deacetylation of chitin and has great potential for a wide range of applications in the biological and biomedical areas, due to its biodegradability, biocompatibility, non-toxicity and versatile chemical and physical properties. Chitosan also presents high antimicrobial activity against a wide variety of pathogenic and spoilage microorganisms. Ultrasonication is a common tool for the preparation and processing of polymer nanoparticles. It is particularly effective in breaking up aggregates and in reducing the size and polydispersity of nanoparticles. High-intensity ultrasonication has the potential to modify chitosan molecular weight and, thus, alter or improve chitosan functional properties. The aim of this study was to evaluate the influence of sonication intensity and time on the changes in commercial chitosan characteristics, such as molecular weight, and on its potential antibacterial activity against Gram-negative bacteria. The nanoparticles (NPs) were produced from two commercial chitosans, of medium molecular weight (CS-MMW) and low molecular weight (CS-LMW), from Sigma-Aldrich®. These samples (2%) were solubilized in 100 mM sodium acetate, pH 4.0, placed on ice and irradiated with a SONIC ultrasonic probe (750 W model), equipped with a 1/2" microtip, for 30 min at 4°C. The probe was operated at a constant duty cycle and 40% amplitude with 1/1 s intervals. The ultrasonic degradation of CS-MMW and CS-LMW was followed by means of ζ-potential (Brookhaven Instruments, model 90Plus) and dynamic light scattering (DLS) measurements.
After sonication, the concentrated samples were diluted 100 times and placed in fluorescence quartz cuvettes (Hellma 111-QS, 10 mm light path). The distributions of the colloidal particles were calculated from the DLS and ζ-potential measurements taken for the CS-MMW and CS-LMW solutions before and after sonication for 30 min (CS-MMW30 and CS-LMW30). The major bands, centered at the hydrodynamic radius (Rh), showed different distributions for CS-MMW (Rh=690.0 nm, ζ=26.52±2.4), CS-LMW (Rh=607.4 and 2805.4 nm, ζ=24.51±1.29), CS-MMW30 (Rh=201.5 and 1064.1 nm, ζ=24.78±2.4) and CS-LMW30 (Rh=492.5 nm, ζ=26.12±0.85). The minimal inhibitory concentration (MIC) was determined using different chitosan sample concentrations. MIC values were determined against E. coli (10⁶ cells) harvested from LB medium (Luria-Bertani, BD™) after 18 h growth at 37°C. Subsequently, the cell suspension was serially diluted in saline solution (0.8% NaCl) and plated on solid LB at 37°C for 18 h, and colony-forming units were counted. The samples showed different MICs against E. coli for CS-LMW (1.5 mg/mL), CS-MMW30 (1.5 mg/mL) and CS-LMW30 (1.0 mg/mL). The results demonstrate that the production of nanoparticles by modification of their molecular weight by ultrasonication is simple to perform and dispenses with acid solvent addition. The molecular weight modifications are enough to provoke changes in the antimicrobial potential of the nanoparticles produced in this way.
Keywords: antimicrobial agent, chitosan, green production, nanoparticles
Procedia PDF Downloads 326
219 An Integrated Real-Time Hydrodynamic and Coastal Risk Assessment Model
Authors: M. Reza Hashemi, Chris Small, Scott Hayward
Abstract:
The Northeast Coast of the US faces the damaging effects of coastal flooding and winds due to Atlantic tropical and extratropical storms each year. Historically, several large storm events have produced substantial levels of damage to the region, the most notable of which were the Great Atlantic Hurricane of 1938, Hurricane Carol, Hurricane Bob, and, recently, Hurricane Sandy (2012). The objective of this study was to develop an integrated modeling system that could be used as a forecasting/hindcasting tool to evaluate and communicate the risk coastal communities face from these coastal storms. This modeling system utilizes the ADvanced CIRCulation (ADCIRC) model for storm surge predictions and the Simulating WAves Nearshore (SWAN) model for the wave environment. These models were coupled, passing information to each other and computing over the same unstructured domain, allowing for the most accurate representation of the physical storm processes. The coupled SWAN-ADCIRC model was validated and has been set up to perform real-time forecast simulations (as well as hindcasts). Modeled storm parameters were then passed to a coastal risk assessment tool. This tool, which is generic and universally applicable, generates spatial structural damage estimate maps on an individual-structure basis for an area of interest. The required inputs for the coastal risk model included detailed information about the individual structures, inundation levels, and wave heights for the selected region. Additionally, the calculation of wind damage to structures was incorporated. The integrated coastal risk assessment system was then tested and applied to Charlestown, a small vulnerable coastal town along the southern shore of Rhode Island. The modeling system was applied to Hurricane Sandy and a synthetic storm. In both storm cases, the effect of natural dunes on coastal risk was investigated.
The resulting damage maps for the area (Charlestown) clearly showed that the dune-eroded scenarios affected more structures and increased the estimated damage. The system was also tested in forecast mode for a large Nor'easter: Stella (March 2017). The results showed a good performance of the coupled model in forecast mode when compared to observations. Finally, the nearshore model XBeach was nested within this regional grid (ADCIRC-SWAN) to simulate nearshore sediment transport processes and coastal erosion. Hurricane Irene (2011) was used to validate XBeach, on the basis of a unique beach profile dataset for the region. XBeach showed a relatively good performance, being able to estimate eroded volumes along the beach transects with a mean error of 16%. The validated model was then used to analyze the effectiveness of several erosion mitigation methods that were recommended in a recent study of coastal erosion in New England: beach nourishment, coastal banks (engineered core), and submerged breakwaters, as well as an artificial surfing reef. It was shown that beach nourishment and coastal banks perform better in mitigating shoreline retreat and coastal erosion.
Keywords: ADCIRC, coastal flooding, storm surge, coastal risk assessment, living shorelines
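As a minimal illustration of how a mean error over beach transects (such as the 16% figure reported for XBeach) can be computed, the sketch below compares observed and modelled eroded volumes; interpreting the metric as a mean absolute percentage error is our assumption, and the transect values are made-up illustrative numbers:

```python
# Hedged sketch: mean absolute percentage error of modelled vs. observed
# eroded volumes along beach transects. All volumes are hypothetical.

def mean_abs_pct_error(observed, modelled):
    """Average of |modelled - observed| / observed over transects, in percent."""
    errs = [abs(m - o) / o for o, m in zip(observed, modelled)]
    return 100.0 * sum(errs) / len(errs)

obs = [120.0, 85.0, 60.0]   # m^3/m per transect (hypothetical observations)
mod = [100.0, 95.0, 66.0]   # m^3/m per transect (hypothetical model output)
print(f"{mean_abs_pct_error(obs, mod):.1f}%")  # → 12.8%
```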
Procedia PDF Downloads 116
218 Hydrogen Production from Auto-Thermal Reforming of Ethanol Catalyzed by Tri-Metallic Catalyst
Authors: Patrizia Frontera, Anastasia Macario, Sebastiano Candamano, Fortunato Crea, Pierluigi Antonucci
Abstract:
The increasing world energy demand today makes biomass an attractive energy source, based on minimizing CO2 emissions and reducing global warming. Recently, COP-21, the international meeting on global climate change, defined the roadmap for sustainable worldwide development, based on low-carbon fuels. Hydrogen is an energy vector able to substitute conventional fuels from petroleum. Ethanol for hydrogen production represents a valid alternative to fossil sources due to its low toxicity, low production costs, high biodegradability, high H2 content and renewability. Ethanol conversion to generate hydrogen by a combination of partial oxidation and steam reforming reactions is generally called auto-thermal reforming (ATR). The ATR process is advantageous due to its low energy requirements and reduced formation of carbonaceous deposits. The catalyst plays a pivotal role in the ATR process, especially regarding process selectivity and the formation of carbonaceous deposits. Bimetallic or trimetallic catalysts, as well as catalysts with doped-promoter supports, may exhibit higher activity, selectivity and deactivation resistance than the corresponding monometallic ones. In this work, NiMoCo/GDC, NiMoCu/GDC and NiMoRe/GDC (where GDC is the Gadolinia Doped Ceria support and the metal composition is 60:30:10 for all catalysts) have been prepared by the impregnation method. The support, Gadolinia 0.2 Doped Ceria 0.8, was impregnated with metal precursors solubilized in an aqueous ethanol solution (50%) at room temperature for 6 hours. After this, the catalysts were dried at 100°C for 8 hours and subsequently calcined at 600°C in order to obtain the metal oxides. Finally, active catalysts were obtained by a reduction procedure (H2 atmosphere at 500°C for 6 hours). All samples were characterized by different analytical techniques (XRD, SEM-EDX, XPS, CHNS, H2-TPR and Raman spectroscopy).
Catalytic experiments (auto-thermal reforming of ethanol) were carried out in the temperature range 500-800°C under atmospheric pressure, using a continuous fixed-bed microreactor. Effluent gases from the reactor were analyzed by two Varian CP4900 chromatographs with a TCD detector. The analytical investigation focused on preventing coke deposition, metal sintering effects and sulfur poisoning. Hydrogen productivity, ethanol conversion and product distribution were measured and analyzed. At 600°C, all tri-metallic catalysts show their best performance: H2 + CO reach almost 77 vol.% in the final gases. The NiMoCo/GDC catalyst shows the best selectivity to hydrogen with respect to the other tri-metallic catalysts (41 vol.% at 600°C). On the other hand, NiMoCu/GDC and NiMoRe/GDC demonstrated high sulfur poisoning resistance (up to 200 cc/min) with respect to the NiMoCo/GDC catalyst. The correlation between the catalytic results and the surface properties of the catalysts will be discussed.
Keywords: catalysts, ceria, ethanol, gadolinia, hydrogen, nickel
Procedia PDF Downloads 154
217 Establishment of Farmed Fish Welfare Biomarkers Using an Omics Approach
Authors: Pedro M. Rodrigues, Claudia Raposo, Denise Schrama, Marco Cerqueira
Abstract:
Farmed fish welfare is a very recent concept, widely discussed among the scientific community. Consumers' interest in farmed animal welfare standards has significantly increased in recent years, posing a huge challenge to producers to maintain an equilibrium between good welfare principles and productivity while simultaneously achieving public acceptance. The major bottleneck of standard aquaculture is that it considerably impairs fish welfare throughout the production cycle and, with this, the quality of fish protein. Welfare assessment in farmed fish is undertaken through the evaluation of fish stress responses. Primary and secondary stress responses include the release of cortisol, and of glucose and lactate, into the bloodstream, respectively, which are currently the most commonly used indicators of stress exposure. However, the reliability of these indicators is highly dubious, due to the high variability of fish responses to an acute stress and the adaptation of the animal to a repetitive chronic stress. Our objective is to use comparative proteomics to identify and validate a fingerprint of proteins that can present a more reliable alternative to the already established welfare indicators. In this way, culture conditions will improve and there will be a better understanding of the mechanisms and metabolic pathways involved in the welfare of the produced organism. Due to its high economic importance in Portuguese aquaculture, gilthead seabream was elected as the species for this study. Protein extracts from gilthead seabream muscle, liver and plasma, from fish reared for a 3-month period under optimized culture conditions (control) and induced stress conditions (handling, high densities, and hypoxia), are collected and used to identify a putative fish welfare protein marker fingerprint using a proteomics approach. Three tanks per condition and 3 biological replicates per tank are used for each analysis.
Briefly, proteins from the target tissue/fluid are extracted using standard established protocols. Protein extracts are then separated using 2D-DIGE (difference gel electrophoresis). Proteins differentially expressed between control and induced stress conditions will be identified by mass spectrometry (LC-MS/MS) using the NCBInr databank (taxonomic level: Actinopterygii) and the Mascot search engine. The statistical analysis is performed in the R software environment, using a one-tailed Mann-Whitney U-test (p < 0.05) to assess which proteins were differentially expressed in a statistically significant way. Validation of these proteins will be done by comparing the RT-qPCR (quantitative reverse transcription polymerase chain reaction) gene expression pattern with the proteomic profile. Cortisol, glucose, and lactate are also measured in order to confirm or refute the reliability of these indicators. The liver proteins identified under handling and high-density induced stress conditions are involved in several metabolic pathways, such as primary metabolism (i.e., glycolysis, gluconeogenesis), ammonia metabolism, cytoskeleton proteins, signaling proteins, and lipid transport. Validation of these proteins, as well as identical analyses in muscle and plasma, are underway. Proteomics is a promising high-throughput technique that can be successfully applied to identify putative welfare protein biomarkers in farmed fish.
Keywords: aquaculture, fish welfare, proteomics, welfare biomarkers
Procedia PDF Downloads 156
216 Flux-Gate vs. Anisotropic Magneto Resistance Magnetic Sensors Characteristics in Closed-Loop Operation
Authors: Neoclis Hadjigeorgiou, Spyridon Angelopoulos, Evangelos V. Hristoforou, Paul P. Sotiriadis
Abstract:
The increasing demand for accurate and reliable magnetic measurements over the past decades has paved the way for the development of different types of magnetic sensing systems as well as more advanced measurement techniques. Anisotropic Magneto Resistance (AMR) sensors have emerged as a promising solution for applications requiring high resolution, providing an ideal balance between performance and cost. However, certain issues of AMR sensors, such as non-linear response and measurement noise, are rarely discussed in the relevant literature. In this work, an analog closed-loop compensation system is proposed, developed and tested as a means to eliminate the non-linearity of the AMR response, reduce the 1/f noise and enhance the sensitivity of the magnetic sensor. Additional performance aspects, such as cross-axis and hysteresis effects, are also examined. This system was analyzed using an analytical model and a P-Spice model, considering both the sensor itself and the accompanying electronic circuitry. In addition, a commercial closed-loop architecture Flux-Gate sensor (calibrated and certified) has been used for comparison purposes. Three different experimental setups were constructed for the purposes of this work, utilized for DC magnetic field measurements, AC magnetic field measurements and noise density measurements, respectively. The DC magnetic field measurements were conducted in a laboratory environment employing a cubic Helmholtz coil setup in order to calibrate and characterize the system under consideration. A high-accuracy DC power supply provided the operating current to the Helmholtz coils, and the results were recorded by a multichannel voltmeter. The AC magnetic field measurements were conducted in the same laboratory environment with the cubic Helmholtz coil setup in order to examine the effective bandwidth of both the proposed system and the Flux-Gate sensor.
A voltage-controlled current source driven by a function generator was utilized for the Helmholtz coil excitation, and the response was observed on an oscilloscope. The third experimental apparatus incorporated an AC magnetic shielding construction composed of several layers of electrical steel that had been demagnetized prior to the experimental process. Each sensor was placed alone inside the shield and its response was captured by the oscilloscope. The preliminary experimental results indicate that the closed-loop AMR response presented a maximum deviation of 0.36% with respect to the ideal linear response, while the corresponding values for the open-loop AMR system and the Flux-Gate sensor reached 2% and 0.01%, respectively. Moreover, the noise density of the proposed closed-loop AMR sensor system remained almost as low as the noise density of the AMR sensor itself, yet considerably higher than that of the Flux-Gate sensor. All relevant numerical data are presented in the paper.
Keywords: AMR sensor, chopper, closed loop, electronic noise, magnetic noise, memory effects, flux-gate sensor, linearity improvement, sensitivity improvement
Procedia PDF Downloads 421
215 Decentralized Peak-Shaving Strategies for Integrated Domestic Batteries
Authors: Corentin Jankowiak, Aggelos Zacharopoulos, Caterina Brandoni
Abstract:
In a context of increasing stress on the electricity network due to the decarbonization of many sectors, energy storage is likely to be the key mitigating element, acting as a buffer between production and demand. In particular, the potential of storage is highest when it is connected close to the loads. Yet, low-voltage storage struggles to penetrate the market at a large scale due to the novelty and complexity of the solution, and the regulatory competitive advantage of fossil fuel-based technologies. Strong and reliable numerical simulations are required to show the benefits of storage located near loads and to promote its development. The present study deliberately excludes aggregated control of storage: it is assumed that the storage units operate independently of one another without exchanging information, as is currently mostly the case. A computationally light battery model is presented in detail and validated by direct comparison with a domestic battery operating in real conditions. This model is then used to develop peak-shaving (PS) control strategies, as this is the decentralized service from which beneficial impacts are most likely to emerge. The aggregation of flatter, peak-shaved consumption profiles is likely to lead to flatter and arbitraged profiles at higher voltage layers. Furthermore, voltage fluctuations can be expected to decrease if spikes of individual consumption are reduced. The crucial part of achieving PS lies in the charging pattern: peaks depend on the switching on and off of appliances in the dwelling by the occupants and are therefore impossible to predict accurately. A performant PS strategy must, therefore, include a smart charge recovery algorithm that can ensure enough energy is present in the battery in case it is needed, without generating new peaks while charging the unit. Three categories of PS algorithms are introduced in detail.
The first category uses a constant threshold or power rate for charge recovery; the second uses the State Of Charge (SOC) as a decision variable; and the third uses a load forecast, the impact of whose accuracy is discussed, to generate PS. A set of performance metrics was defined in order to quantitatively evaluate the algorithms' operation regarding peak reduction, total energy consumption, and self-consumption of domestic photovoltaic generation. The algorithms were tested on load profiles with a 1-minute granularity over a 1-year period, and their performance was assessed using these metrics. The results show that constant charging thresholds or powers are far from optimal: a fixed value is unlikely to fit the variability of a residential profile. As could be expected, forecast-based algorithms show the highest performance; however, they depend on the accuracy of the forecast. On the other hand, SOC-based algorithms also present satisfying performance, making them a strong alternative when a reliable forecast is not available.
Keywords: decentralised control, domestic integrated batteries, electricity network performance, peak-shaving algorithm
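A minimal sketch of the first algorithm category, a constant discharge threshold with constant-rate charge recovery, is given below; the battery parameters and the short load profile are illustrative assumptions, not values from the study:

```python
# Hedged sketch: constant-threshold peak shaving with simple charge recovery.
# Power in kW, energy in kWh, 1-minute timesteps. All parameters are
# illustrative assumptions.

def peak_shave(load_kw, threshold_kw, capacity_kwh, max_power_kw,
               recover_kw, dt_h=1 / 60):
    soc = capacity_kwh / 2              # assume the battery starts half full
    grid = []                           # resulting grid-side power profile
    for p in load_kw:
        if p > threshold_kw:            # discharge to clip the peak
            discharge = min(p - threshold_kw, max_power_kw, soc / dt_h)
            soc -= discharge * dt_h
            grid.append(p - discharge)
        else:                           # recover charge at a constant rate,
            headroom = threshold_kw - p  # never exceeding the threshold
            charge = min(recover_kw, headroom, max_power_kw,
                         (capacity_kwh - soc) / dt_h)
            soc += charge * dt_h
            grid.append(p + charge)
    return grid, soc

profile = [2, 2, 8, 9, 3, 2]            # kW, one sample per minute
grid, soc = peak_shave(profile, threshold_kw=5, capacity_kwh=4,
                       max_power_kw=5, recover_kw=1)
print(max(grid))                        # peaks clipped to the 5 kW threshold
```

SOC-based and forecast-based variants would replace the constant `recover_kw` with a rate that depends on the current SOC or on the predicted load, respectively.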
Procedia PDF Downloads 117
214 Determination of the Optimum Strike Price of FX Option Call Spread with USD/IDR Volatility and Garman–Kohlhagen Model Analysis
Authors: Bangkit Adhi Nugraha, Bambang Suripto
Abstract:
In September 2016, Bank Indonesia (BI) released regulation no. 18/18/PBI/2016, which permits bank clients to use the FX option call spread on USD/IDR. Basically, this product is a combination in which the client buys an FX call option (paying a premium) and sells an FX call option (receiving a premium) to protect against currency depreciation while also capping the potential upside, at a cheap premium cost. BI classifies this product as a structured product. A structured product is a combination of at least two financial instruments, either derivative or non-derivative. The call spread is the first structured product against IDR permitted by BI since 2009, in response to the increased demand from Indonesian firms for FX hedging through derivatives to protect their foreign currency assets or liabilities against market risk. The share of hedging products in the Indonesian FX market increased from 35% in 2015 to 40% in 2016, the majority being swap products (FX forward, FX swap, cross-currency swap). Swaps are priced off the interest rate differential of the two currencies. The cost of a swap product is 7% for USD/IDR, with a one-year USD/IDR volatility of 13%. That cost level makes swap products seem expensive for hedging buyers. Because the call spread cost (around 1.5-3%) is cheaper than a swap, most Indonesian firms use NDF FX call spreads on USD/IDR offshore, with an outstanding amount of around 10 billion USD. The cheaper cost of the call spread is its main advantage for hedging buyers. The problem arises because the BI regulation requires the call spread buyer to do dynamic hedging. That means, if the call spread buyer chooses strike price 1 and strike price 2 and the USD/IDR exchange rate surpasses strike price 2, then the call spread buyer must buy another call spread with strike price 1' (strike price 1' = strike price 2) and strike price 2' (strike price 2' > strike price 1').
This could double the premium cost of the call spread, or even more, and defeat the hedging buyer's purpose of finding the cheapest hedging cost. It is therefore crucial for the buyer to choose the optimum strike prices before entering into the transaction. To help the hedging buyer find the optimum strike price and avoid expensive multiple premium costs, we observe ten years (2005-2015) of historical USD/IDR volatility data and compare them with the price movement of the call spread on USD/IDR using the Garman–Kohlhagen model (a common formula for FX option pricing). We use statistical tools to analyze data correlation, understand the nature of the call spread price movement over the ten years, and determine the factors affecting the price movement. We select ranges of strike prices and tenors and calculate the probability of dynamic hedging occurring and how much it costs. We found the USD/IDR currency pair to be too uncertain, making dynamic hedging riskier and more expensive. We validated this result using one year of data and showed a small RMS error. The study results can be used to understand the nature of the FX call spread and to determine the optimum strike price for a hedging plan.
Keywords: FX call spread USD/IDR, USD/IDR volatility statistical analysis, Garman–Kohlhagen model on FX option USD/IDR, Bank Indonesia Regulation no. 18/18/PBI/2016
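The Garman–Kohlhagen pricing used above can be sketched as follows; the spot, strikes, and interest rates are illustrative assumptions (the 13% volatility echoes the one-year USD/IDR figure quoted in the abstract), not calibrated market data:

```python
# Hedged sketch: Garman-Kohlhagen FX call price and the call-spread premium
# built from it. Inputs are illustrative, not calibrated USD/IDR market data.
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def gk_call(spot, strike, t, r_dom, r_for, vol):
    """Garman-Kohlhagen price of an FX call (domestic units per foreign unit)."""
    d1 = (log(spot / strike) + (r_dom - r_for + 0.5 * vol**2) * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    return spot * exp(-r_for * t) * norm_cdf(d1) - strike * exp(-r_dom * t) * norm_cdf(d2)

def call_spread(spot, k1, k2, t, r_dom, r_for, vol):
    """Buy a call at k1, sell a call at k2 (k2 > k1): capped upside, cheaper premium."""
    return gk_call(spot, k1, t, r_dom, r_for, vol) - gk_call(spot, k2, t, r_dom, r_for, vol)

# Illustrative USD/IDR-like inputs: spot 13000, strikes 13000/14000,
# 1-year tenor, 7% IDR rate, 1% USD rate, 13% volatility (all assumptions)
premium = call_spread(13000, 13000, 14000, 1.0, 0.07, 0.01, 0.13)
print(round(premium, 2))
```

The spread premium is always cheaper than the bought call alone, which is the cost advantage the abstract describes; the dynamic-hedging requirement adds a second such premium whenever the spot crosses the upper strike.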
Procedia PDF Downloads 379
213 Combating the Practice of Open Defecation through Appropriate Communication Strategies in Rural India
Authors: Santiagomani Alex Parimalam
Abstract:
Lack of awareness of the consequences of open defecation, together with myths and misconceptions related to the use of toilets, has led to the continued practice of open defecation in India. The Government of India initiated a multi-pronged intensive communication campaign against the practice of open defecation in the last few years. The primary vision of this communication campaign was to generate increased demand for toilets and to ensure that all have access to safe sanitation. The campaign strategy included the use of mass media, group and folk media, and interpersonal communication to expedite achieving its objectives. The campaign included the use of various media such as posters, wall writings, slides in cinema theatres, kiosks, pamphlets, newsletters, flip charts and folk media to bring about behavioural changes in the communities. The author carried out concurrent monitoring and process documentation of the campaigns initiated by the state of Tamil Nadu, India, between 2013 and 2016, commissioned by UNICEF India. The study was carried out to assess the effectiveness of the communication campaigns in combating the practice of open defecation and promoting the construction of toilets in the state of Tamil Nadu, India. Initial findings revealed gaps in understanding the audience and in the use of appropriate media. The first phase of the communication campaign, named Chi Chi Chollapa (a 'bringing shame' concept), also revealed that the use of interpersonal communication and group and community media was the most effective strategy for reaching the rural masses. The failure of various other media, especially print media (posters, handbills, newsletters, kiosks), provides insights into where the government needs to invest its resources to bring about health-seeking behaviour in the community. The findings, shared with the government, enabled it to strengthen the campaign, resulting in an improved response.
Taking cues from the study, the government understood the potency of women, school children, youth and community leaders as effective carriers of the message. The government narrowed its focus and invested in the voluntary workers in the community (village poverty reduction committee workers, VPRCs). The effectiveness of interpersonal communication and peer education by credible community workers threw light on the need for localising both the content and the communicator. From this study, we could derive that only community and group media are preferred by people in the rural community. Children, youth, women, and credible local leaders proved to be ambassadors in behaviour change communication. This study discloses the lacunae in the communication campaign and points out that the state should have carried out a proper communication needs analysis and piloting. The study used a survey method with random sampling, employing both quantitative and qualitative tools such as interview schedules, in-depth interviews, and focus group discussions in rural areas of Tamil Nadu in phases. The findings of the study would provide directions for future campaigns concerning health and rural development.
Keywords: appropriate, communication, combating, open defecation
Procedia PDF Downloads 126
212 Evaluation of Bagh Printing Motifs and Processes of Madhya Pradesh: From Past to Contemporary
Authors: Kaveri Dutta, Ratna Sharma
Abstract:
Indian traditional textile is a synthesis of various cultures; the arts and crafts of a country showcase its rich cultural and artistic history. Prehistorically, Indian handicrafts were made mainly for day-to-day use; the yearning for aesthetic application soon saw the development of a flood of designs and motifs. Bagh print, a traditional hand block print using natural colours, is an Indian handicraft practised in Bagh, Madhya Pradesh (India). Bagh print has its roots in Sindh, which is now part of Pakistan. The present form of Bagh printing started in 1962, when craftsmen migrated from Manavar to the neighbouring town of Bagh in Madhya Pradesh; hence the style has always been associated with Bagh. Bagh printing basically involves blocks carved with motifs representing flora such as Jasmine, Mushroom, leheriya and so on. Some prints were inspired by the jaali work that embellishes the Taj Mahal and various other forts; inspiration is also drawn from landscapes and geometrical figures. The motifs evoke various moods within the serenity of the prints, and that is the catchy element of Bagh prints. Development in this traditional textile is as essential as in any other field. Nowadays, fashion trends are fragile, and innovative changes to the existing fashion field within a short span are the demand of the times. We must make efforts to preserve this cultural heritage of arts and crafts, either by documenting the various ancient traditions or by creating blends of them. The craft is well known around the world, but the need is to document the original motifs, fabric, technology and colours used in contemporary fashion. Keeping the above points in mind, this study on Bagh print textiles of Madhya Pradesh was formulated. The information incorporated in the paper was based on secondary data taken from relevant books, journals, museum visits and articles. 
Besides the demographic details and working profiles of the artisans engaged in printing, an interview schedule was carried out in three regions of Madhya Pradesh. This work of art was expressed on cotton fabric, and selected traditional Bagh motifs were used for the printing. Some of the popular traditional Bagh motifs are Jasmine, Mushroom, leheriya, geometrical figures and jaali work. The Bagh printed cotton fabrics were developed into a range of men's ethnic wear in combination with embroideries from Rajasthan. Products developed were bandhgala jackets, kurtas, sherwanis and dupattas. From the present study, it can be observed that the embellished traditional Bagh printed range of ethnic men's wear resulted in fresh and colourful patterns. The embroidered Bagh printed cotton fabric also created a substantial positive change among artisans of the three regions. Keywords: art and craft of Madhya Pradesh, evolution of printing in India, history of Bagh printing, sources of inspiration
Procedia PDF Downloads 353
211 Waste Scavenging as a Waste-to-Wealth Strategy for Waste Reduction in Port Harcourt City Nigeria: A Mixed Method Study
Authors: Osungwu Emeka
Abstract:
Until recently, Port Harcourt was known as the "Garden City of Nigeria" because of its neatness and the overwhelming presence of vegetation all over the metropolis. Today, however, the piles of refuse dotting the entire city may have turned Port Harcourt into a "Garbage City". Indiscriminate dumping of industrial, commercial and household wastes such as food waste, paper, polythene, textiles, scrap metal, glass, wood and plastic at street corners and in gutters is still very common. The waste management problem in the state affects citizens both directly and indirectly: waste dumped along the roadside obstructs traffic and, after mixing with rainwater, may seep underground, with the possibility of the leachate contaminating the groundwater. The basic solid waste management processes of collection, transportation, segregation and final disposal appear to be very inefficient. This study was undertaken to assess waste utilization by metal waste scavengers, highlighting their activities as part of the informal sector of the solid waste management system, with a view to identifying their challenges, prospects and possible contributions to solid waste management in the Port Harcourt metropolis. The aim, therefore, was to understand and assess scavenging as a system of solid waste management in Port Harcourt, to identify the main bottlenecks to its efficiency, and to chart the way forward. The study targeted people who scavenge metal scrap across five major waste dump sites in Port Harcourt. A mixed method study was conducted: a qualitative study provided experiential evidence on this waste utilization method, and a survey collected numeric evidence on the subject. Findings from the qualitative strand of the study provided insight into scavenging as a waste utilization activity and showed how scavengers' activities can reduce the gross waste generated and collected in the study areas. 
It further showed the nature and characteristics of scavengers in the waste recycling system as a means of advancing the millennium development goals of poverty alleviation, job creation and the development of a sustainable, cleaner environment. The study showed that waste management practice in Port Harcourt involves the collection, transportation and disposal of waste by refuse contractors using cart pushers and disposal vehicles at designated dumpsites, where scavengers salvage metal scrap for recycling and reuse. The study further indicates that there is great demand for metal waste materials and products that are clearly identified as genuinely sustainable, even though they may be perceived as waste. The market for these waste materials could promote entrepreneurship, making waste recovery and recycling a profitable venture in Port Harcourt. The benefits of resource recovery and recycling as part of the solid waste management system would thus turn waste into wealth, reduce pollution, and create job opportunities, thereby alleviating poverty. Keywords: scavengers, metal waste, waste-to-wealth, recycle, Port Harcourt, Nigeria, waste reduction, garden city, waste
Procedia PDF Downloads 98
210 China Pakistan Economic Corridor: An Unfolding Fiasco in World Economy
Authors: Debarpita Pande
Abstract:
On 22 May 2013, Chinese Premier Li Keqiang, on his visit to Pakistan, tabled a proposal for connecting Kashgar in China's Xinjiang Uygur Autonomous Region with the south-western Pakistani seaport of Gwadar via the China Pakistan Economic Corridor (hereinafter referred to as CPEC). The project, part of the initiative popularly termed 'One Belt One Road', will encompass a connectivity component including a 3,000-kilometre road, railways and an oil pipeline from Kashgar to Gwadar port, along with an international airport and a deep sea port. Superficially, this may look like a 'game changer' for Pakistan and other countries of South Asia, but this article, through the doctrinal method of research, unearths some serious flaws in it, which may change the entire economic system of the region, heavily affect the socio-economic conditions of South Asia, and further complicate the geopolitical situation, disturbing world economic stability. The paper begins with a logical analysis of the socio-economic issues arising from the project, with an emphasis on its impact on the Pakistani and Indian economies due to Chinese dominance, serious tension in international relations, security issues, an arms race, and political and provincial concerns. The research paper further studies the impact of the huge burden of Chinese loans for the project: Pakistan already suffers from persistent debts in the face of declining foreign currency reserves, and its sovereignty will also be at stake as the entire economy of the country is held hostage by China. The author compares this situation with the fallout from projects in Sri Lanka, Tajikistan, and several countries of Africa, all of which now face huge debt risks brought by Chinese investments. 
The entire economic balance will be muddled by the increase in Pakistan's demand for raw materials, leading to imports of the same from China, which will cause exorbitant price hikes and limited availability. CPEC will also create Chinese dominance over the international movement of goods between the Atlantic and Pacific oceans, jeopardising the economic balance of South Asia along with Middle Eastern trade hubs such as Dubai. Moreover, the paper analyses the impact of CPEC in the context of international unrest and the arms race between Pakistan and India, as well as between India and China, due to border disputes and Chinese surveillance. The paper also examines the global change in the economic dynamics of international trade that CPEC will create in the light of the U.S.-China relationship. The article thus reflects the grave consequences of CPEC for the international economy, security and bilateral relations, which surpass its positive impacts. The author lastly suggests more transparency and proper diplomatic planning in the execution of this mega project, which could otherwise become a cause of economic complexity in international trade in the near future. Keywords: China, CPEC, international trade, Pakistan
Procedia PDF Downloads 174
209 Disaster Management Approach for Planning an Early Response to Earthquakes in Urban Areas
Authors: Luis Reynaldo Mota-Santiago, Angélica Lozano
Abstract:
Determining appropriate measures to face earthquakes is a challenge for practitioners. In the literature, some analyses consider disaster scenarios while disregarding important field characteristics. Sometimes, software that estimates the number of victims and infrastructure damage is used; other times, historical information from previous events is used, or the scenarios' information is assumed to be available even though this is unusual in practice. Humanitarian operations start immediately after an earthquake strikes, and the first hours of relief efforts are important; local efforts are critical to assess the situation and deliver relief supplies to the victims. One preparation action is prepositioning stockpiles, most of them at central warehouses placed away from damage-prone areas, which requires large facilities and budgets. Usually, the decisions in the first 12 hours after the disaster, the standard relief time (SRT), are the location of temporary depots and the design of distribution paths. The motivation for this research was the delayed reaction of the early relief efforts after the magnitude 7.1 Mexico City earthquake in 2017, which caused the late arrival of aid to some areas. Hence, a preparation approach for planning the immediate response to earthquake disasters is proposed. It is intended for local governments, considering their capabilities for planning and for responding during the SRT, in order to reduce the start-up time of immediate response operations in urban areas. The first steps are the generation and analysis of disaster scenarios, which allow estimating the relief demand before and in the early hours after an earthquake. 
The scenarios can be based on historical data and/or the seismic hazard analysis of an Atlas of Natural Hazards and Risk, as a way to address limited or null available information. The following steps cover the decision processes for: a) locating local depots (places for prepositioning stockpiles) and aid-giving facilities as close as possible to risk areas; and b) designing the vehicle paths for aid distribution (from local depots to the aid-giving facilities), which can be used at the beginning of the response actions. This approach speeds up the delivery of aid in the early moments of the emergency, which could reduce the suffering of the victims while allowing additional time to integrate a broader and more streamlined response (according to new information) from national and international organizations. The proposed approach is applied to two case studies in Mexico City, in areas affected by the 2017 earthquake that received limited aid response. The approach generates disaster scenarios in a simple way and plans a faster early response with a small quantity of stockpiles, which can be managed in the early hours of the emergency by local governments. Considering long-term storage, the estimated quantities of stockpiles require only a limited budget to maintain and a small storage space; these stockpiles are also useful for addressing other kinds of emergencies in the area. Keywords: disaster logistics, early response, generation of disaster scenarios, preparation phase
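The depot-location step described above can be illustrated with a toy greedy p-median sketch. This is not the authors' method, only a minimal illustration of the underlying idea; the coordinates and demand weights in the example are hypothetical.

```python
import math

def greedy_depot_selection(candidates, demand_points, p):
    """Greedy p-median sketch: repeatedly add the candidate depot site
    that most reduces total demand-weighted distance to the risk areas.
    candidates: list of (x, y); demand_points: list of ((x, y), weight)."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    chosen, cost = [], float("inf")
    remaining = list(candidates)
    for _ in range(min(p, len(remaining))):
        best, best_cost = None, float("inf")
        for c in remaining:
            trial = chosen + [c]
            # each demand point is served by its nearest chosen depot
            trial_cost = sum(w * min(dist(d, s) for s in trial)
                             for d, w in demand_points)
            if trial_cost < best_cost:
                best, best_cost = c, trial_cost
        chosen.append(best)
        remaining.remove(best)
        cost = best_cost
    return chosen, cost
```

A real formulation would also cap depot capacity and restrict sites to low-hazard zones; the greedy step here only captures the "as close as possible to risk areas" objective.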
Procedia PDF Downloads 110
208 Development of Advanced Virtual Radiation Detection and Measurement Laboratory (AVR-DML) for Nuclear Science and Engineering Students
Authors: Lily Ranjbar, Haori Yang
Abstract:
Online education has been around for several decades, but its importance became especially evident after the COVID-19 pandemic. Even though the online delivery approach works well for knowledge building through content delivery and oversight processes, it has limitations in developing hands-on laboratory skills, especially in STEM fields. During the pandemic, many educational institutions faced numerous challenges in delivering lab-based courses, and many students worldwide were unable to practice working with lab equipment due to social distancing or the significant cost of highly specialized equipment. The laboratory plays a crucial role in nuclear science and engineering education: it can engage students and improve their learning outcomes. Since online education and virtual labs have gained substantial popularity in engineering and science education, developing virtual labs is vital for institutions that aim to deliver high-quality education to all their students, including online students. The School of Nuclear Science and Engineering (NSE) at Oregon State University, in partnership with the company SpectralLabs, has developed an Advanced Virtual Radiation Detection and Measurement Lab (AVR-DML) to offer a fully online Master of Health Physics program. It was essential to use a system that could simulate nuclear modules accurately replicating the underlying physics, the nature of radiation and radiation transport, and the mechanics of the instrumentation used in a real radiation detection lab. This was all accomplished using a Realistic, Adaptive, Interactive Learning System (RAILS). RAILS is a comprehensive simulation-based learning system for use in training. It comprises a web-based learning management system located on a central server, together with a 3D simulation package that is downloaded locally to user machines. 
Users will find that the graphics, animations, and sounds in RAILS create a realistic, immersive environment in which to practice detecting different radiation sources. These features allow students to interact and engage with a STEM lab in all its dimensions, feeling as if they are in a real lab environment and seeing the same systems they would see in a physical lab. Unique interactive interfaces were designed and developed by integrating all the tools and equipment needed to run each lab. These interfaces provide students with full functionality for changing the experimental setup and for live data collection with real-time updates for each experiment. Students can manually perform all experimental setups and parameter changes in this lab, and experimental results can then be tracked and analyzed in an oscilloscope, a multi-channel analyzer, or a single-channel analyzer (SCA). The advanced virtual radiation detection and measurement laboratory developed in this study enabled the NSE school to offer a fully online MHP program. This flexibility of course modality helped attract more non-traditional students, including international students. It is a valuable educational tool, as students can walk around the virtual lab, make mistakes, and learn from them, with unlimited time to repeat and engage in experiments. The lab will also help speed up training in nuclear science and engineering. Keywords: advanced radiation detection and measurement, virtual laboratory, Realistic Adaptive Interactive Learning System (RAILS), online education in STEM fields, student engagement, STEM online education, STEM laboratory, online engineering education
Procedia PDF Downloads 90
207 Fabrication of High Energy Hybrid Capacitors from Biomass Waste-Derived Activated Carbon
Authors: Makhan Maharjan, Mani Ulaganathan, Vanchiappan Aravindan, Srinivasan Madhavi, Jing-Yuan Wang, Tuti Mariana Lim
Abstract:
There is great interest in exploiting sustainable, low-cost, renewable resources as carbon precursors for energy storage applications. Research on the development of energy storage devices has been growing rapidly due to the mismatch between power supply and demand from renewable energy sources. This paper reports the synthesis of porous activated carbon from biomass waste and evaluates its performance in supercapacitors. In this work, we employed orange peel (a waste material) as the starting material and synthesized activated carbon by pyrolysis of KOH-impregnated orange peel char at 800 °C in an argon atmosphere. The resultant orange peel-derived activated carbon (OP-AC) exhibited a high BET surface area of 1,901 m2 g-1, which is the highest surface area reported so far for orange peel. The pore size distribution (PSD) curve exhibits pores centered at a pore width of 11.26 Å, indicating dominant microporosity. The OP-AC was studied as a positive electrode in combination with different negative electrode materials, such as pre-lithiated graphite (LiC6) and Li4Ti5O12, to make different hybrid capacitors. The lithium ion capacitor (LIC) fabricated using OP-AC with pre-lithiated graphite delivered a high energy density of ~106 Wh kg–1, while the energy density of the OP-AC||Li4Ti5O12 capacitor was ~35 Wh kg–1. For comparison, OP-AC||OP-AC capacitors were studied in both aqueous (1M H2SO4) and organic (1M LiPF6 in EC-DMC) electrolytes, which delivered energy densities of 6.6 Wh kg-1 and 16.3 Wh kg-1, respectively. The cycling retentions obtained at a current density of 1 A g–1 were ~85.8, ~87.0, ~82.2 and ~58.8% after 2500 cycles for the OP-AC||OP-AC (aqueous), OP-AC||OP-AC (organic), OP-AC||Li4Ti5O12 and OP-AC||LiC6 configurations, respectively. 
In addition, characterization studies were performed by elemental and proximate composition, thermogravimetry, field emission scanning electron microscopy (FE-SEM), Raman spectroscopy, X-ray diffraction (XRD), Fourier transform infrared spectroscopy, X-ray photoelectron spectroscopy (XPS) and N2 sorption isotherms. The morphological features from FE-SEM exhibited well-developed porous structures. Two typical broad peaks observed in the XRD pattern of the synthesized carbon imply an amorphous graphitic structure. The ID/IG ratio of 0.86 in the Raman spectra indicates a high degree of graphitization in the sample. The C 1s band spectra in XPS display well-resolved peaks related to carbon atoms in various chemical environments; for instance, the characteristic binding energies appeared at ~283.83, ~284.83, ~286.13, ~288.56, and ~290.70 eV, corresponding to sp2 graphitic C, sp3 graphitic C, C-O, C=O and π-π*, respectively. The characterization studies revealed the synthesized carbon to be a promising electrode material for energy storage devices. The findings open up the possibility of developing high-energy LICs from abundant, low-cost, renewable biomass waste. Keywords: lithium-ion capacitors, orange peel, pre-lithiated graphite, supercapacitors
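As a side note, gravimetric energy densities of the kind quoted above follow from E = ½CV²; a minimal sketch of the conversion to Wh kg⁻¹ is shown below. The capacitance, voltage window and mass in the example are illustrative values, not data from this paper.

```python
def gravimetric_energy_wh_per_kg(capacitance_f, voltage_v, mass_kg):
    """E = 1/2 * C * V^2 in joules, converted to watt-hours
    (divide by 3600) and normalised by the device mass."""
    return 0.5 * capacitance_f * voltage_v ** 2 / 3600.0 / mass_kg

# Illustrative (assumed) cell: 100 F, 2.7 V window, 10 g total mass
print(gravimetric_energy_wh_per_kg(100.0, 2.7, 0.010))  # about 10.1 Wh/kg
```

Note that reported figures depend strongly on whether the mass is that of the active material only or of the full device.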
Procedia PDF Downloads 243
206 Numerical Investigation of Thermal Energy Storage Panel Using Nanoparticle Enhanced Phase Change Material for Micro-Satellites
Authors: Jelvin Tom Sebastian, Vinod Yeldho Baby
Abstract:
In space, electronic devices are constantly bombarded with radiation, which causes certain parts to fail or behave in unpredictable ways. To advance thermal controllability for microsatellites, a new approach is needed: a thermal control system that is smaller than those on conventional satellites and that demands no electric power. Heat exchange inside microsatellites is not as easy as in conventional satellites because of the smaller size. With only a slight mass gain and no electric power, accommodating heat using phase change materials (PCMs) is a strong candidate for solving the thermal difficulties of microsatellites. PCMs absorb or release heat in the form of latent heat as they change phase, minimizing the temperature fluctuation around the phase change point. The main restriction for these systems is the low thermal conductivity of common PCMs, which increases the melting and solidification time; this is unsuitable for applications such as electronics cooling. To increase the thermal conductivity, nanoparticles are introduced: adding nanoparticles to the base PCM increases its thermal conductivity, and a higher weight concentration yields a higher conductivity. This paper numerically investigates a thermal energy storage panel with a nanoparticle-enhanced phase change material (NePCM). Silver nanostructures were used to improve the thermal properties of the base PCM, eicosane. Different weight concentrations (1, 2, 3.5, 5, 6.5, 8 and 10%) of the silver-enhanced phase change material were considered. Both steady-state and transient analyses were performed to compare the characteristics of the NePCM at different heat loads. Results showed that in the steady state, the temperature near the front panel decreased and the temperature on the NePCM panel increased as the weight concentration increased: with the increase in thermal conductivity, more heat was absorbed into the NePCM panel. 
In the transient analysis, it was found that the effect of nanoparticle concentration on the maximum temperature of the system was reduced, as the melting point of the material decreases with increasing weight concentration. For a heat load of at most 20 W, however, the model with NePCM did not reach the melting point temperature, showing that the model with NePCM is capable of holding a larger heat load. To study the heat load capacity, double the load was applied: a maximum of 40 W in the first half of the cycle and a constant 0 W in the other half. A higher temperature was obtained compared with the lower heat load, and the panel maintained a constant temperature for a long duration, in accordance with the NePCM melting point. Both analyses showed the temperature uniformity of the TESP. Using Ag-NePCM allows a constant peak temperature to be maintained near the melting point; therefore, by altering the weight concentration of the Ag-NePCM, it is possible to create the optimum operating temperature required for the effective working of the electronic components. Keywords: carbon-fiber-reinforced polymer, micro/nano-satellite, nanoparticle phase change material, thermal energy storage
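The conductivity enhancement from dispersed nanoparticles discussed above is often estimated with the classical Maxwell model for spherical inclusions. The sketch below is a generic illustration, not this paper's model: it takes a volume fraction rather than the weight concentration reported in the abstract, and the conductivity values for eicosane and silver are typical literature figures, not values from this study.

```python
def maxwell_k_eff(k_base, k_particle, phi):
    """Maxwell model: effective thermal conductivity of a dilute
    suspension of spherical particles at volume fraction phi."""
    num = k_particle + 2 * k_base + 2 * phi * (k_particle - k_base)
    den = k_particle + 2 * k_base - phi * (k_particle - k_base)
    return k_base * num / den

# Assumed typical values: eicosane ~0.25 W/m-K, silver ~429 W/m-K
for phi in (0.0, 0.01, 0.05):
    print(phi, round(maxwell_k_eff(0.25, 429.0, phi), 4))
```

The model predicts a modest, monotonic rise of effective conductivity with loading, consistent with the trend the abstract describes for increasing concentration.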
Procedia PDF Downloads 203
205 Improvements and Implementation Solutions to Reduce the Computational Load for Traffic Situational Awareness with Alerts (TSAA)
Authors: Salvatore Luongo, Carlo Luongo
Abstract:
This paper discusses implementation solutions to reduce the computational load of the Traffic Situational Awareness with Alerts (TSAA) application, based on Automatic Dependent Surveillance-Broadcast (ADS-B) technology. In 2008, there were 23 mid-air collisions involving general aviation fixed-wing aircraft, 6 of which were fatal, leading to 21 fatalities. These collisions occurred during visual meteorological conditions, indicating the limitations of the see-and-avoid concept for mid-air collision avoidance as defined by the Federal Aviation Administration (FAA). Commercial aviation aircraft are already equipped with a collision avoidance system called TCAS, based on classic transponder technology, which has dramatically reduced the number of mid-air collisions involving air transport aircraft. In general aviation, the same reduction in mid-air collisions has not occurred, and achieving it is the main objective of the TSAA application. The major difference between the original conflict detection application and TSAA is that conflict detection focuses on preventing loss of separation in en-route environments, whereas TSAA is devoted to reducing the probability of mid-air collision in all phases of flight. The TSAA application increases the flight crew's traffic situation awareness by providing alerts for traffic detected in conflict with ownship, in support of the see-and-avoid responsibility. Considerable effort was spent in the design process and code generation to maximize efficiency and performance in terms of computational load and memory consumption. The TSAA architecture is divided into two high-level systems: the "Threats database" and the "Conflict detector". The first receives traffic data from the ADS-B device and stores each target's data history. 
The conflict detector module estimates ownship and target trajectories in order to detect possible future losses of separation between ownship and each target. Finally, the alerts are verified by additional conflict verification logic, to prevent undesirable behavior of the alert flag. To reduce the computational load, a pre-check evaluation module is used. This pre-check is purely a computational optimization, so the performance of the conflict detector is not modified in terms of the number of alerts detected. The pre-check module uses analytical trajectory propagation for both target and ownship, which gives greater accuracy and avoids step-by-step propagation, which requires a greater computational load. Furthermore, the pre-check permits excluding targets that are certainly not threats, using an analytical and efficient geometrical approach, in order to decrease the computational load on the following modules. This software improvement is not suggested by FAA documents, and it is therefore the main innovation of this work. The efficiency and efficacy of the enhancement are verified using fast-time and real-time simulations and by execution on a real device in several FAA scenarios. The final implementation also permits FAA software certification in compliance with the DO-178B standard. The computational load reduction allows the installation of the TSAA application on devices with multiple applications and/or low capacity in terms of available memory and computational capabilities. Keywords: traffic situation awareness, general aviation, aircraft conflict detection, computational load reduction, implementation solutions, software certification
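The geometric exclusion idea behind such a pre-check can be sketched with a simple 2D closest-point-of-approach filter. This is an illustrative reconstruction, not the certified TSAA code; the look-ahead horizon and separation threshold in the example are hypothetical.

```python
def cpa_precheck(own_pos, own_vel, tgt_pos, tgt_vel, horizon_s, sep_m):
    """Analytical closest-point-of-approach filter in a flat 2D frame.
    Returns True when the target must be kept for the full step-by-step
    conflict check, False when it can be safely excluded."""
    # relative position and velocity of target with respect to ownship
    rx, ry = tgt_pos[0] - own_pos[0], tgt_pos[1] - own_pos[1]
    vx, vy = tgt_vel[0] - own_vel[0], tgt_vel[1] - own_vel[1]
    v2 = vx * vx + vy * vy
    # time of closest approach, clamped to the look-ahead window
    t_cpa = 0.0 if v2 == 0 else max(0.0, min(horizon_s, -(rx * vx + ry * vy) / v2))
    dx, dy = rx + vx * t_cpa, ry + vy * t_cpa
    return (dx * dx + dy * dy) ** 0.5 < sep_m
```

Because the closed-form minimum distance is a conservative bound over the window, a target filtered out here could never have produced an alert in the full check, which is why such a pre-check changes cost but not the alert count.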
Procedia PDF Downloads 285
204 Planning Railway Assets Renewal with a Multiobjective Approach
Authors: João Coutinho-Rodrigues, Nuno Sousa, Luís Alçada-Almeida
Abstract:
Transportation infrastructure systems are fundamental to modern society and the economy, yet they need modernizing, maintaining, and reinforcing interventions that require large investments. In many countries, accumulated intervention delays arise from aging and intense use, magnified by the financial constraints of the past. The decision problem of managing the renewal of large backlogs is common to several types of important transportation infrastructure (e.g., railways, roads) and requires considering financial aspects as well as operational constraints within a multidimensional framework. The present research introduces a linear programming multiobjective model for managing railway infrastructure asset renewal. The model minimizes three objectives: (i) the yearly investment peak, by spreading investment evenly throughout multiple years; (ii) the total cost, which includes extra maintenance costs incurred from renewal backlogs; and (iii) priority delays related to work start postponements on the higher-priority railway sections. Operational constraints ensure that passenger and freight services are not excessively delayed by having railway line sections under intervention. Achieving a balanced annual investment plan, without compromising the total financial effort or excessively postponing the execution of priority works, was the motivation for this research. The methodology, inspired by a real case study and tested with real data, reflects the practice of an infrastructure management company and is generalizable to different types of infrastructure (e.g., railways, highways). It was conceived for treating renewal interventions in infrastructure assets, which in a railway network may be rails, ballast, sleepers, etc.; while a section is under intervention, trains must run at reduced speed, causing delays in services. 
The model cannot, therefore, allow an accumulation of works on the same line, which would cause excessively large delays. Similarly, the lines do not all have the same socio-economic importance or service intensity, making it necessary to prioritize the sections to be renewed. The model takes these issues into account, and its output is an optimized works schedule for the renewal project, translatable into Gantt charts. The infrastructure management company provided all the data for the first test case study and validated the parameterization. This case consists of several sections to be renewed over 5 years, belonging to 17 lines. A large instance was also generated, reflecting a problem of a size similar to the USA railway network (considered the largest in the world), so considerably larger problems are not expected to appear in real life; an average backlog of 25 years and a project horizon of ten years were considered. Despite the very large increase in the number of decision variables (200 times as large), the computational cost did not increase very significantly; it is thus expected that just about any real-life problem can be treated on a modern computer, regardless of size. The trade-off analysis shows that if the decision maker allows some increase in the maximum yearly investment (i.e., a degradation of objective i), solutions improve considerably in the remaining two objectives. Keywords: transport infrastructure, asset renewal, railway maintenance, multiobjective modeling
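The peak-spreading objective (i) can be illustrated with a toy heuristic: assign each renewal work, largest first, to the currently least-loaded year. This longest-processing-time sketch is far simpler than the paper's multiobjective linear program and ignores priorities, backlog maintenance costs and operational constraints; the cost figures in the test are invented.

```python
def spread_investment(work_costs, n_years):
    """Toy peak-minimisation heuristic: sort renewal works by cost
    (largest first) and place each in the least-loaded year so far."""
    yearly = [0.0] * n_years          # total investment per year
    plan = [[] for _ in range(n_years)]  # which works land in which year
    for cost in sorted(work_costs, reverse=True):
        y = min(range(n_years), key=lambda i: yearly[i])
        yearly[y] += cost
        plan[y].append(cost)
    return yearly, plan
```

In the actual model, the same balancing effect is achieved exactly by minimising the peak as a decision variable subject to scheduling constraints, rather than greedily.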
Procedia PDF Downloads 145
203 Application of the Standard Deviation in Regulating Design Variation of Urban Solutions Generated through Evolutionary Computation
Authors: Mohammed Makki, Milad Showkatbakhsh, Aiman Tabony
Abstract:
Computational applications of natural evolutionary processes as problem-solving tools have been well established since the mid-20th century. However, their application within architecture and design has only gained ground in recent years, with an increasing number of academics and professionals in the field electing to utilize evolutionary computation to address problems comprising multiple conflicting objectives with no clear optimal solution. Recent advances in computer science, and their constructive influence on architectural discourse, have led to the emergence of multiple algorithmic processes capable of simulating the evolutionary process of nature within an efficient timescale. Many of the processes that generate a population of candidate solutions to a design problem through an evolutionary, stochastic search are driven by both environmental and architectural parameters. These methods allow conflicting objectives to be simultaneously, independently, and objectively optimized, an essential approach for design problems whose final product must address the demands of a multitude of individuals with various requirements. However, one of the main challenges in applying an evolutionary process as a design tool is the simulation's ability to maintain variation among the design solutions in the population while simultaneously increasing fitness. This is most commonly known as the 'golden rule' of balancing exploration and exploitation over time; the difficulty of achieving this balance in simulation is due to the tendency for either variation or optimization to be favored as the simulation progresses. 
In such cases, the generated population of candidate solutions has either optimized very early in the simulation or has continued to maintain high levels of variation from which no optimal set could be discerned, leaving the user with a solution set that has not evolved efficiently toward the objectives of the problem at hand. As such, the experiments presented in this paper seek to achieve the ‘golden rule’ by incorporating a mathematical fitness criterion into the development of an urban tissue comprising the superblock as its primary architectural element. The mathematical value investigated in the experiments is the standard deviation. Traditionally, the standard deviation has been used as an analytical value rather than a generative one, conventionally measuring the distribution of variation within a population by calculating the degree to which the population deviates from the mean. A lower standard deviation indicates that the majority of the population is clustered around the mean, and thus limited variation within the population, while a higher standard deviation indicates greater variation within the population and a lack of convergence towards an optimal solution. The results presented aim to clarify the extent to which utilizing the standard deviation as a fitness criterion can be advantageous in generating fitter individuals within a more efficient timeframe, compared with conventional simulations that incorporate only architectural and environmental parameters.
Keywords: architecture, computation, evolution, standard deviation, urban
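The idea of using the population's standard deviation generatively rather than analytically can be sketched in a minimal form. This is an assumed setup, not the authors' urban model: a single hypothetical design parameter is optimized, and the measured standard deviation regulates the mutation step so that variation is restored whenever the population collapses toward the mean, balancing exploitation (truncation selection on fitness) with exploration.

```python
# Minimal sketch: standard deviation as a regulating criterion in an
# evolutionary loop (illustrative; not the paper's superblock model).
import random
import statistics

random.seed(0)

def fitness(x):
    # Hypothetical single-parameter design objective (maximize; peak at 7).
    return -(x - 7.0) ** 2

def evolve(pop, generations=80, sigma_target=0.5):
    for _ in range(generations):
        sigma = statistics.pstdev(pop)  # measured population spread
        # Exploitation: truncation selection keeps the fitter half.
        pop = sorted(pop, key=fitness, reverse=True)[: len(pop) // 2]
        # Exploration: mutation step never falls below the target spread,
        # so variation is re-injected when sigma collapses.
        step = max(sigma, sigma_target)
        pop = pop + [x + random.gauss(0, step) for x in pop]
    return pop

pop = [random.uniform(0, 20) for _ in range(40)]
pop = evolve(pop)
print(round(statistics.mean(pop), 2), round(statistics.pstdev(pop), 2))
```

Without the `sigma_target` floor, the step shrinks with the population's spread and the search can converge prematurely; with it, the population stays near the optimum while retaining a controlled level of variation, which is the behaviour the abstract attributes to the standard-deviation criterion.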
Procedia PDF Downloads 133
202 An Unusual Manifestation of Spirituality: Kamppi Chapel of Helsinki
Authors: Emine Umran Topcu
Abstract:
In both urban design and architecture, the primary goal is considered to be exploring the ways in which people feel and think about space and place. Humans generally see place as security and space as freedom: they feel attached to place and long for space. Contemporary urban design manifests itself by addressing basic physical and psychological human needs, but not much attention is paid to transcendence; there seems to be a gap in the hierarchy of human needs. Usually, the social aspects of public space are addressed through urban design, while the more personal and intimately scaled needs of the individual are neglected. How does built form contribute to an individual’s growth, contemplation, and exploration, that is, to a greater meaning in the immediate environment? Architects love to talk about meaning, poetics, attachment, and other ethereal aspects of space that are not visible attributes of places. This paper aims to describe spirituality through built form by way of a personal experience of the Kamppi Chapel of Helsinki. Experience covers the various modes through which a person unfolds or constructs reality; perception, sensation, emotion, and thought can be counted among these modes. To experience is to get to know: what can be known is a construct of experience. Feelings and thoughts about space and place are very complex in human beings; they grow out of life experiences. The author had the chance to visit the Kamppi Chapel in April 2017, out of which this experience grew. The Kamppi Chapel is located on the south side of the busy Narinkka Square in central Helsinki. It offers a place to quiet down and compose oneself in a most lively urban space. With its curved wooden facade, the small building looks more like a museum than a chapel; it could be called a museum for contemplation. With its gently shaped interior, it embraces visitors and shields them from the hustle and bustle of the city outside. Places of worship in all faiths signify sacred power.
The author, having origins in a part of the world where domes and minarets dominate the cityscape, was impressed by the size and architectural visibility of the Chapel. Anyone born and trained in such a tradition shares the inherent values and psychological mechanisms of spirituality, sacredness, and the modest realities of their environment. Spirituality in all cultural traditions has not been analyzed and reinterpreted within new conceptual frameworks. Fundamentalists may reject this positivist attitude, but the Kamppi Chapel, as it stands, does not present itself as a model to be followed. It simply takes on the task of representing a religious facility in an urban setting largely shaped by modern urban planning, which strikes the author as a search for a new definition of individual status. The tension between the established and the new is the demand for modern efficiency versus dogmatic rigidity. The architecture here has played a very promising and rewarding role for spirituality: the designers have acted as translators of the human desire for a better life and an aesthetic environment, to the optimal satisfaction of local citizens and visitors alike.
Keywords: architecture, Kamppi Chapel, spirituality, urban
Procedia PDF Downloads 182