Search results for: Urban network
77 Decision Making on Smart Energy Grid Development for Availability and Security of Supply Achievement Using Reliability Merits
Authors: F. Iberraken, R. Medjoudj, D. Aissani
Abstract:
The smart grid concept has been built around two distinct definitions: a European one oriented towards sustainable development and an American one oriented towards reliability and security of supply. In this paper, we investigate reliability merits that enable decision-makers to provide a high quality of service. The investigation is based on modeling and forecasting system behavior from interruption and failure data on the one hand, and on the contribution of information and communication technologies (ICT) to mitigating catastrophic events such as blackouts on the other. We found that developing and emerging countries adopt the reliability concept in short- and medium-term planning, followed by the sustainability concept in long-term planning. This work highlights the reliability merits, namely benefits, opportunities, costs and risks (BOCR), as consistent units for measuring power customer satisfaction. From the decision-making point of view, we use the analytic hierarchy process (AHP) to achieve customer satisfaction based on the reliability merits and the contribution of the available energy resources. Fossil and nuclear resources certainly dominate energy production today, but great advances have already been made towards cleaner ones, and it is demonstrated that these resources are not only environmentally but also economically and socially sustainable. The paper is organized as follows: section one is devoted to the introduction, where an implicit review of smart grid development is given for the two main concepts (for the USA and European countries). The AHP method and the BOCR development of reliability merits against power customer satisfaction are presented in section two. The benefits are expressed by a high level of availability, the applicability of maintenance actions, and power quality.
Opportunities are highlighted by the implementation of ICT in data transfer and processing, the mastering of peak demand control, the decentralization of production, and the management of the power system under fault conditions. Costs are evaluated using cost-benefit analysis, including the investment expenditures in network security (the grid becoming a target for hackers and terrorists) and the profits of operating as decentralized systems with reduced energy not supplied, thanks to the availability of storage units fed from renewable resources and to power line communication (CPL) enabling the power dispatcher to manage load shedding optimally. For risks, we raise the question of citizens' willingness to contribute financially to the system and to utility restructuring. What is the degree of their agreement with the guarantees proposed by the managers about information integrity? From a technical point of view, do they have sufficient information and knowledge to operate a smart home within a smart system? In section three, the AHP method is applied to achieve power customer satisfaction with the main energy resources as alternatives, using knowledge from a country with a great lead in energy transition. Results and discussions are given in section four. We conclude that the choice of a given resource depends on the attitude of the decision maker (prudent, optimistic or pessimistic), and that the status quo is neither sustainable nor satisfactory.
Keywords: reliability, AHP, renewable energy resources, smart grids
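As a rough illustration of the AHP step this abstract relies on, the priority weights of the four BOCR merits can be approximated from a pairwise-comparison matrix by normalized column averages. The judgement values below are hypothetical placeholders on Saaty's 1-9 scale, not figures from the paper:

```python
def ahp_priorities(matrix):
    """Approximate the AHP priority vector by normalized column averages.

    matrix[i][j] is the pairwise judgement "criterion i vs criterion j";
    the matrix is reciprocal: matrix[j][i] == 1 / matrix[i][j].
    """
    n = len(matrix)
    # Sum each column, normalize every entry by its column sum,
    # then average each row of the normalized matrix.
    col_sums = [sum(matrix[i][j] for i in range(n)) for j in range(n)]
    return [sum(matrix[i][j] / col_sums[j] for j in range(n)) / n
            for i in range(n)]

# Hypothetical judgements for benefits, opportunities, costs, risks.
bocr = [
    [1,     3,     5,     7],
    [1 / 3, 1,     3,     5],
    [1 / 5, 1 / 3, 1,     3],
    [1 / 7, 1 / 5, 1 / 3, 1],
]
weights = ahp_priorities(bocr)
print([round(w, 3) for w in weights])  # weights sum to 1
```

With these illustrative judgements, benefits receive the largest weight and risks the smallest; a full AHP study would also check the consistency ratio of the judgement matrix.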
Procedia PDF Downloads 442
76 Flood Early Warning and Management System
Authors: Yogesh Kumar Singh, T. S. Murugesh Prabhu, Upasana Dutta, Girishchandra Yendargaye, Rahul Yadav, Rohini Gopinath Kale, Binay Kumar, Manoj Khare
Abstract:
The Indian subcontinent is severely affected by floods that cause intense, irreversible devastation to crops and livelihoods. With increased incidences of floods and their related catastrophes, an early warning system for flood prediction and an efficient flood management system for the river basins of India are essential. Accurately modeled hydrological conditions and a web-based early warning system may significantly reduce the economic losses incurred due to floods and enable end users to issue advisories with better lead time. This study describes the design and development of an EWS-FP (Early Warning System for Flood Prediction) using advanced computational tools and methods, viz. High-Performance Computing (HPC), remote sensing, GIS technologies, and open-source tools, for the Mahanadi River Basin of India. The flood prediction is based on a robust 2D hydrodynamic model, which solves the shallow water equations using the finite volume method. Given the complexity of hydrological modeling and the size of the basins in India, there is always a tug of war between better forecast lead time and the optimal resolution at which the simulations are run. High-performance computing overcomes this trade-off, enabling national- or basin-level flash flood warning systems that retain high resolution for local-level warning analysis with better lead time. High-performance computers with capacities on the order of teraflops and petaflops prove useful when running simulations over such large areas at optimal resolutions. In this study, a free and open-source, HPC-based 2D hydrodynamic model, capable of simulating rainfall run-off, river routing, and tidal forcing, is used. The model was tested for a part of the Mahanadi River Basin (the Mahanadi Delta) with actual and predicted discharge, rainfall, and tide data. The simulation time was reduced from 8 hrs to 3 hrs by increasing CPU nodes from 45 to 135, which shows good scalability and performance enhancement.
The simulated flood inundation extent and stage were compared with SAR data and CWC observed gauge data, respectively. The system shows good accuracy and a lead time suitable for near-real-time flood forecasting. To disseminate warnings to end users, a network-enabled solution was developed using open-source software. The system has query-based flood damage assessment modules with outputs in the form of spatial maps and statistical databases. It effectively facilitates the management of post-disaster activities caused by floods, such as displaying spatial maps of the affected area and inundated roads, and maintains a steady flow of information at all levels, with different access rights depending on the criticality of the information. It is designed to help users manage flood-related information during critical flood seasons and analyze the extent of the damage.
Keywords: flood, modeling, HPC, FOSS
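The scalability figures quoted above can be sanity-checked with a short strong-scaling calculation (observed speedup versus the ideal linear speedup from tripling the node count):

```python
# Strong-scaling check for the figures quoted in the abstract:
# runtime fell from 8 h to 3 h when CPU nodes grew from 45 to 135.
t_before, t_after = 8.0, 3.0   # wall-clock hours
n_before, n_after = 45, 135    # CPU nodes

speedup = t_before / t_after   # observed speedup (~2.67x)
ideal = n_after / n_before     # ideal linear speedup (3x)
efficiency = speedup / ideal   # parallel efficiency

print(f"speedup={speedup:.2f}x, ideal={ideal:.1f}x, efficiency={efficiency:.0%}")
```

An efficiency near 0.9 is consistent with the abstract's claim of "good scalability" for a 2D finite-volume solver, where some communication overhead between nodes is unavoidable.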
Procedia PDF Downloads 89
75 Governance of Climate Adaptation Through Artificial Glacier Technology: Lessons Learnt from Leh (Ladakh, India) In North-West Himalaya
Authors: Ishita Singh
Abstract:
The social dimension of climate change is no longer peripheral to Science, Technology and Innovation (STI). Indeed, STI is being mobilized to address small farmers' vulnerability and adaptation to climate change. Experience from the cold desert of Leh (Ladakh) in the North-West Himalaya illustrates the potential of STI to address the challenges of climate change and the needs of small farmers through the use of artificial glacier techniques. Small farmers have a unique water harvesting technique to augment irrigation, called "artificial glaciers" - an intricate network of water channels and dams along the upper slope of a valley, located closer to villages and at lower altitudes than natural glaciers. These start to melt much earlier and supply additional irrigation to small farmers, improving their livelihoods. The issues of vulnerability, adaptive capacity and adaptation strategy therefore need to be analyzed in a local context, in the communities and regions where people live. Leh (Ladakh) in the North-West Himalaya provides a case study for exploring the ways in which adaptation to climate change is taking place at a community scale using artificial glacier technology. Against this backdrop, an attempt has been made to analyze rural poor households' vulnerability and adaptation practices in response to climate change using this technology, thereby drawing lessons on vulnerability-livelihood interactions in the cold desert of Leh (Ladakh), India. The study is based on primary data and information collected from 675 households in 27 villages of Leh (Ladakh). It reveals that 61.18% of the population derives its livelihood from agriculture and allied activities. With the increased irrigation potential due to the use of artificial glaciers, food security has been assured for 77.56% of households, and health vulnerability has been reduced in 31% of households.
Seasonal migration as a livelihood diversification mechanism has declined in nearly two-thirds of households, thereby improving livelihood strategies. The use of tactical adaptations by small farmers in response to persistent droughts, such as selling livestock, expanding agricultural land, and relying on relief cash and food, has declined to 20.44%, 24.74% and 63% of households, respectively. However, these measures are unsustainable on a long-term basis. The role of policymakers and societal stakeholders becomes important in this context. To address livelihood challenges, technology is critical within a multidisciplinary approach involving multilateral collaboration among different stakeholders. The presence of social entrepreneurs and new actors on the adaptation scene is necessary to bring forth adaptation measures. Better linkage between science and technology policies and other policies should be encouraged. Better health care, access to safe drinking water, better sanitary conditions, and improved standards of education and infrastructure are effective measures to enhance a community's adaptive capacity. However, social transfers supporting climate adaptive capacity require significant additional investment. Developing institutional mechanisms for specific adaptation interventions can be one of the most effective ways of implementing a plan to enhance adaptation and build resilience.
Keywords: climate change, adaptation, livelihood, stakeholders
Procedia PDF Downloads 70
74 We Are the Earth That Defends Itself: An Exploration of Discursive Practices of Les Soulèvements De La Terre
Authors: Sophie Del Fa, Loup Ducol
Abstract:
This presentation focuses on the discursive practices of Les Soulèvements de la Terre (hereafter SdlT), a French environmentalist group mobilized against agribusiness. More specifically, we use as a case study the violently repressed demonstration that took place in Sainte-Soline on March 25, 2023 (see below for details). The SdlT embodies the renewal of anti-capitalist and environmentalist struggles that began with Occupy Wall Street in 2011 and, in France, with Nuit debout in 2016 and the yellow vests movement from 2018 to 2020. These struggles have three things in common: they are self-organized without official leaders, they rely mainly on occupations to reappropriate public places (squares, roundabouts, natural territories), and they are anti-capitalist. The SdlT was created in 2021 by activists coming from the Zone-to-Defend (ZAD) of Notre-Dame-des-Landes, a victorious ten-year-long occupation movement against an airport project near Nantes, France (2009 to 2018). The SdlT is labeled neither as a formal association nor as a constituted group, but as an anti-capitalist network of local struggles at the crossroads of ecology and social issues. Indeed, although they target agro-industry, land grabbing, soil artificialization and ecology without transition, the SdlT considers ecological and social questions as interdependent. Moreover, they have an encompassing vision of ecology, which they treat as a concern for the living as a whole, erasing the division between Nature and Culture. Their radicality is structured around three main elements: federative and decentralized dimensions, the rhetoric of living alliances, and creative militant strategies. The objective of this reflection is to understand how these three dimensions are articulated through the SdlT's discursive practices.
To explore these elements, we take as a case study one specific event: the demonstration against the 'basins' held in Sainte-Soline on March 25, 2023, on the construction site of new water storage infrastructure for agricultural irrigation in western France. This event represents a turning point for the SdlT. Indeed, the protest was violently repressed: 5,000 grenades were fired by the police, hundreds of people were injured, and one person was still in a coma at the time of writing. Moreover, following the Sainte-Soline events, the Minister of the Interior, Gérald Darmanin, threatened to dissolve the SdlT, adding fuel to the fire in an already tense social climate (with the ongoing strikes against the pension reform). We anchor our reflection in three types of data: 1) our own experiences (inspired by ethnography) of the Sainte-Soline demonstration; 2) a collection of more than 500,000 tweets with the #SainteSoline hashtag; and 3) a press review of texts and articles published after the Sainte-Soline demonstration. Exploring these data from a turning point in the history of the SdlT allows us to analyze how the three dimensions highlighted earlier (federative and decentralized dimensions, rhetoric of living alliances, and creative militant strategies) are materialized through the discursive practices surrounding the Sainte-Soline event. This sheds light on how a new contemporary movement implements contemporary environmental struggles.
Keywords: discursive practices, Sainte-Soline, ecology, radical ecology
Procedia PDF Downloads 71
73 Significant Aspects and Drivers of Germany and Australia's Energy Policy from a Political Economy Perspective
Authors: Sarah Niklas, Lynne Chester, Mark Diesendorf
Abstract:
Geopolitical tensions, climate change and recent movements favouring a transformative shift in institutional power structures have influenced the economics of conventional energy supply for decades. This study takes a multi-dimensional approach to illustrate the potential of renewable energy (RE) technology to provide a pathway to a low-carbon economy driven by ecologically sustainable, independent and socially just energy. This comparative analysis identifies the economic, political and social drivers that shaped the adoption of RE policy in two significantly different economies, Germany and Australia, with strong and weak commitments to RE respectively. Two complementary political-economy theories frame the document-based analysis. Régulation Theory, inspired by Marxist ideas and strongly influenced by contemporary economic problems, provides the background for exploring the social relationships contributing to the adoption of RE within the macro-economy. Varieties of Capitalism theory, a more recently developed micro-economic approach, examines the nature of state-firm relationships. Together these approaches provide a comprehensive lens of analysis. Germany's energy policy transformed substantially over the second half of the last century. Its development is characterised by the coordination of societal, environmental and industrial demands throughout the advancement of capitalist regimes. In the Fordist regime, mass production based on coal drove Germany's astounding economic recovery during the post-war period. Economic depression and the instability of institutional arrangements made the pursuit of national security and energy independence imperative. During the post-war flexi-Fordist period, quality-based production, innovation and technology-based competition schemes, particularly with regard to political power structures in and across Europe, favoured the adoption of RE.
Innovation, knowledge and education were institutionalized, leading to the legislation of environmental concerns. Lastly, the establishment of government-industry coordination programs supported the phase-out of nuclear power and the increased adoption of RE during the last decade. Australia's energy policy is shaped by the country's richness in mineral resources. Energy policy has largely served coal mining, historically and currently one of the most capital-intensive industries. Assisted by the macro-economic dimensions of institutional arrangements, social and financial capital is oriented towards the export-led and strongly demand-oriented economy. Here, energy policy serves the maintenance of capital accumulation in the mining sector and the emerging Asian economies. The adoption of supportive renewable energy policy would challenge the distinct role of the mining industry within the (neo)liberal market economy. The state's protective role towards the mining sector has resulted in weak commitment to RE policy and investment uncertainty in the energy sector. Recent developments, driven by strong public support for RE, emphasize the sense of community in urban and rural areas and the emergence of a bottom-up approach to adopting renewables. Thus, political-economy frameworks at both the macro-economic (Régulation Theory) and micro-economic (Varieties of Capitalism theory) scales can together explain the strong commitment to RE in Germany vis-à-vis the weak commitment in Australia.
Keywords: political economy, regulation theory, renewable energy, social relationships, energy transitions
Procedia PDF Downloads 381
72 Distribution System Modelling: A Holistic Approach for Harmonic Studies
Authors: Stanislav Babaev, Vladimir Cuk, Sjef Cobben, Jan Desmet
Abstract:
The procedures for performing harmonic studies for medium-voltage distribution feeders have been relatively mature since the early 1980s. The efforts of electric power engineers and researchers were mainly focused on handling large harmonic non-linear loads connected sparsely at several buses of medium-voltage feeders. To assess the impact of these loads on the voltage quality of the distribution system, specific modeling and simulation strategies were proposed. These methodologies could deliver reasonable estimation accuracy given the requirements of minimal computational effort and reduced complexity. To uphold these requirements, certain analysis assumptions were made, which became de facto standards for establishing harmonic analysis guidelines. Typical assumptions include, among others, balanced study conditions and a negligible impact of the frequency-dependent impedance characteristics of various power system components. In the latter, skin and proximity effects are usually omitted, and resistance and reactance values are modeled based on theoretical equations. Further simplifications of the modelling routine have led to the commonly accepted practice of neglecting phase-angle diversity effects. This is mainly associated with the developed load models, which only in a handful of cases represent the complete harmonic behavior of a device or account for the harmonic interaction between grid harmonic voltages and harmonic currents. While these modelling practices have proven reasonably effective at medium-voltage levels, similar approaches have been adopted for low-voltage distribution systems.
Given modern conditions, the massive increase in the use of residential electronic devices, the recent and ongoing boom in electric vehicles, and the large-scale installation of distributed solar power, the harmonics in current low-voltage grids are characterized by a high degree of variability and demonstrate sufficient diversity to produce a certain level of cancellation effects. It is obvious that new modelling algorithms overcoming the previously made assumptions have to be adopted. In this work, a simulation approach aimed at addressing some of the typical assumptions is proposed. A practical low-voltage feeder is modeled in PowerFactory. In order to demonstrate the importance of the diversity effect and harmonic interaction, previously developed measurement-based models of a photovoltaic inverter and a battery charger are used as loads. A Python-based script supplying a varying background voltage distortion profile and the associated current harmonic response of the loads is used as the core of the unbalanced simulation. Furthermore, the impact of the uncertainty of the feeder's frequency-impedance characteristics on total harmonic distortion levels is shown, along with scenarios involving linear resistive loads, which further alter the impedance of the system. The comparative analysis demonstrates substantial differences from cases where all the assumptions are in place, and the results indicate that new modelling and simulation procedures need to be adopted for low-voltage distribution systems with high penetration of non-linear loads and renewable generation.
Keywords: electric power system, harmonic distortion, power quality, public low-voltage network, harmonic modelling
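The total harmonic distortion (THD) metric at the centre of such studies is simple to compute from a measured spectrum: the RMS of all harmonics above the fundamental, divided by the fundamental. The spectrum below is illustrative, not taken from the paper's measurements:

```python
import math

def thd(harmonic_rms):
    """Total harmonic distortion of a current or voltage spectrum.

    harmonic_rms maps harmonic order h -> RMS magnitude; order 1 is
    the fundamental. THD = sqrt(sum of squares for h >= 2) / fundamental.
    """
    fundamental = harmonic_rms[1]
    distortion = math.sqrt(sum(v * v for h, v in harmonic_rms.items() if h >= 2))
    return distortion / fundamental

# Illustrative RMS current spectrum (amperes) for a non-linear load;
# odd harmonics dominate, as is typical for rectifier front-ends.
spectrum = {1: 10.0, 3: 2.0, 5: 1.0, 7: 0.5}
print(f"THD = {thd(spectrum):.1%}")
```

Phase-angle diversity matters precisely because individual load spectra like this one partially cancel when summed as phasors at the feeder head, so adding scalar THD values of individual devices overestimates the aggregate distortion.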
Procedia PDF Downloads 159
71 Geospatial and Statistical Evidences of Non-Engineered Landfill Leachate Effects on Groundwater Quality in a Highly Urbanised Area of Nigeria
Authors: David A. Olasehinde, Peter I. Olasehinde, Segun M. A. Adelana, Dapo O. Olasehinde
Abstract:
An investigation was carried out on the underground water system dynamics within the Ilorin metropolis to monitor subsurface flow and its corresponding pollution. Africa's population growth rate is the highest among the regions of the world, especially in urban areas. A corresponding increase in waste generation and a change in waste composition from predominantly organic to non-organic waste have also been observed. Percolation of leachate from non-engineered landfills, the chief means of waste disposal in many African cities, constitutes a threat to underground water bodies. Ilorin, a transboundary city in southwestern Nigeria, is a ready microcosm of Africa's unique challenge. Although groundwater is naturally protected from common contaminants such as bacteria, since the subsurface provides a natural attenuation process, groundwater samples have nevertheless been noted to possess relatively high levels of dissolved chemical contaminants such as bicarbonate, sodium, and chloride, which pose a great threat to environmental receptors and human consumption. A Geographic Information System (GIS) was used as a tool to illustrate subsurface dynamics and the corresponding pollution indicators. Forty-four sampling points were selected around known groundwater pollutant sources: major old dumpsites without landfill liners. The results of the groundwater flow directions and the corresponding contaminant transport were presented using expert geospatial software. The experimental results were subjected to four descriptive statistical analyses, namely: principal component analysis, Pearson correlation analysis, scree plot analysis, and Ward cluster analysis.
A regression model was also developed, aimed at finding functional relationships that adequately describe the behaviour of the water quality parameters and the landfill-related factors hypothesized to influence them, namely: the distance of the water source from dumpsites, the static water level of the groundwater, subsurface permeability (inferred from the hydraulic gradient), and soil infiltration. The regression equations developed were validated using a graphical approach. Underground water appears to flow from the northern portion of the Ilorin metropolis southwards, transporting contaminants. The pollution pattern in the study area generally assumed a bimodal form, with the major concentration of chemical pollutants in the underground watershed and the recharge zone. The correlation between contaminant concentrations and the spread of pollution indicates that areas of lower subsurface permeability display a higher concentration of dissolved chemical content. The principal component analysis showed that conductivity, suspended solids, calcium hardness, total dissolved solids, total coliforms, and coliforms were the chief contaminant indicators in the underground water system in the study area. Pearson correlation revealed a high correlation of electrical conductivity with many of the parameters analyzed. In the same vein, the regression models suggest that the heavier the molecular weight of a chemical contaminant from a point source, the greater the pollution of the underground water system at a short distance. The study concludes that the associative properties of landfills have a significant effect on groundwater quality in the study area.
Keywords: dumpsite, leachate, groundwater pollution, linear regression, principal component
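The kind of single-predictor linear regression the abstract describes (water quality against distance from a dumpsite) can be sketched with ordinary least squares. The readings below are hypothetical, invented purely to illustrate the expected attenuation-with-distance trend, and are not the study's data:

```python
def linregress(x, y):
    """Ordinary least-squares fit y = a + b*x for one predictor."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx          # slope
    a = my - b * mx        # intercept
    return a, b

# Hypothetical readings: distance from dumpsite (m) vs. chloride (mg/L).
distance = [50, 100, 200, 400, 800]
chloride = [310, 250, 180, 120, 60]
a, b = linregress(distance, chloride)
print(f"chloride ~ {a:.1f} + ({b:.3f}) * distance")  # b < 0: attenuation with distance
```

A negative slope here corresponds to the study's finding that pollution from a point source is greatest at short distances; a full analysis would fit all four factors jointly (multiple regression) and validate the fit graphically, as the authors describe.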
Procedia PDF Downloads 117
70 A Case Study on How Biomedical Engineering (BME) Outreach Programmes Serve as An Alternative Educational Approach to Form and Develop the BME Community in Hong Kong
Authors: Sum Lau, Wing Chung Cleo Lau, Wing Yan Chu, Long Ching Ip, Wan Yin Lo, Jo Long Sam Yau, Ka Ho Hui, Sze Yi Mak
Abstract:
Biomedical engineering (BME) is an interdisciplinary subject in which knowledge about biology and medicine is applied to novel applications that solve clinical problems. The subject is crucial for cities such as Hong Kong, where the burden on the medical system is rising due to factors such as an ageing population. Hong Kong, which has been actively boosting technological advancement in recent years, designates BME, or biotechnology, as a major category, as reflected in the 2018-19 Budget, where biotechnology was one of the four pillars for development. Over the years, while resources in terms of money and space have been provided, a lack of talent has been reported by both academia and industry. While exogenous factors, such as COVID-19, may have hindered talent from outside Hong Kong from coming, endogenous factors should also be considered. In particular, since a few local universities already offer BME programmes, their curricula and styles of education need to be reviewed to intensify the network of the BME community and support post-academic career development. It was observed that while undergraduate (UG) studies focus on knowledge teaching with some technical training and postgraduate (PG) programmes concentrate on upstream research, the programmes are generally confined to the academic sector and lack connections to the industry. In light of this, a "Biomedical Innovation and Outreach Programme 2022" ("B.I.O.2022") was held to connect students and professors from academia with clinicians and engineers from the industry, serving as a comparative approach to conventional education methods (UG and PG programmes at tertiary institutions). Over 100 participants, including undergraduates, postgraduates, secondary school students, researchers, engineers, and clinicians, took part in various outreach events such as a conference and site visits, all held from June to July 2022.
As a case study, this programme aimed to tackle the aforementioned problems under the theme of the "4Cs" (connection, communication, collaboration, and commercialisation). The effectiveness of the programme is assessed by its ability to serve as adult and continuing education and to drive social change addressing current societal challenges, with a focus on the shortage of talent engaging in biomedical engineering. In this study, B.I.O.2022 is found to complement traditional educational methods, particularly in terms of knowledge exchange between academia and industry. With enhanced communication between participants at different career stages, some students followed up to visit or even work with the professionals after the programme. Furthermore, connections between academia and industry could foster the generation of new knowledge, ultimately pointing to commercialisation, adding value to the BME industry while filling the gap in human resources. With the continuation of events like B.I.O.2022, it provides a promising starting point for the development and relationship-strengthening of a BME community in Hong Kong, and shows potential as an alternative way of adult education or learning with societal benefits.
Keywords: biomedical engineering, adult education for social change, comparative methods and principles, lifelong learning, faced problems, promises, challenges and pitfalls
Procedia PDF Downloads 116
69 LncRNA-miRNA-mRNA Networks Associated with BCR-ABL T315I Mutation in Chronic Myeloid Leukemia
Authors: Adenike Adesanya, Nonthaphat Wong, Xiang-Yun Lan, Shea Ping Yip, Chien-Ling Huang
Abstract:
Background: The most challenging mutation of the oncogenic BCR-ABL kinase is T315I, commonly known as the "gatekeeper" mutation and notorious for its strong resistance to almost all tyrosine kinase inhibitors (TKIs), especially imatinib. This study therefore aims to identify T315I-dependent downstream microRNA (miRNA) pathways associated with drug resistance in chronic myeloid leukemia (CML) for prognostic and therapeutic purposes. Methods: T315I-carrying K562 cell clones (K562-T315I) were generated with the CRISPR-Cas9 system. Imatinib-treated K562-T315I cells were subjected to small RNA library preparation and next-generation sequencing. Putative lncRNA-miRNA-mRNA networks were analyzed with (i) DESeq2 to extract differentially expressed miRNAs, using a Padj value of 0.05 as the cut-off, (ii) STarMir to obtain potential miRNA response element (MRE) binding sites of selected miRNAs on the lncRNA H19, (iii) miRDB, miRTarBase, and TargetScan to predict mRNA targets of selected miRNAs, (iv) IntaRNA to obtain putative interactions between H19 and the predicted mRNAs, (v) Cytoscape to visualize putative networks, and (vi) several pathway analysis platforms (Enrichr, PANTHER and ShinyGO) for pathway enrichment analysis. Moreover, mitochondria isolation and transcript quantification were adopted to determine the new mechanism involved in T315I-mediated resistance to CML treatment. Results: Verification of the CRISPR-mediated mutagenesis by droplet digital PCR detected a mutation abundance of ≥80%. Further validation showed a viability of ≥90% by cell viability assay, and an intense phosphorylated CRKL protein band was detected by Western blot, with no observable change in BCR-ABL and c-ABL protein expression. As reported by several investigations of hematological malignancies, we determined a 7-fold increase in H19 expression in K562-T315I cells. After imatinib treatment, a 9-fold increase was observed.
DESeq2 revealed that 171 miRNAs were differentially expressed in K562-T315I; 112 of these miRNAs were identified to have MRE binding regions on H19, and 26 of the 112 were significantly downregulated. Adopting seed-sequence analysis of these identified miRNAs, we obtained 167 mRNAs. Six hub miRNAs (hsa-let-7b-5p, hsa-let-7e-5p, hsa-miR-125a-5p, hsa-miR-129-5p, and hsa-miR-372-3p) and 25 predicted genes were identified after constructing the hub miRNA-target gene network. These targets demonstrated putative interactions with the H19 lncRNA and were mostly enriched in pathways related to cell proliferation, senescence, gene silencing, and pluripotency of stem cells. Further experimental findings have also shown up-regulation of mitochondrial transcripts and the lncRNA MALAT1 contributing to the lncRNA-miRNA-mRNA networks induced by the BCR-ABL T315I mutation. Conclusions: Our results indicate that lncRNA-miRNA regulators play a crucial role not only in leukemogenesis but also in drug resistance, considering the significant dysregulation and interactions in the K562-T315I cell model generated by CRISPR-Cas9. In silico analysis has further shown that the lncRNAs H19 and MALAT1 bear several complementary miRNA sites, implying that they could serve as sponges, sequestering the activity of the target miRNAs.
Keywords: chronic myeloid leukemia, imatinib resistance, lncRNA-miRNA-mRNA, T315I mutation
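The successive filtering the abstract describes (significant by Padj, downregulated, and carrying an H19 MRE site) amounts to a simple conjunction of criteria per miRNA. The records below are invented placeholders (including the fake names `hsa-miR-999-xx` and `hsa-miR-888-yy`) to show the shape of the pipeline, not the study's results:

```python
# Each record: (miRNA name, log2 fold change, adjusted p-value, has H19 MRE site)
results = [
    ("hsa-let-7b-5p",   -1.8, 0.003, True),
    ("hsa-miR-125a-5p", -2.1, 0.010, True),
    ("hsa-miR-372-3p",  -1.2, 0.049, True),
    ("hsa-miR-999-xx",  -0.9, 0.200, True),   # fails the Padj < 0.05 cut-off
    ("hsa-miR-888-yy",   1.5, 0.001, False),  # up-regulated and no MRE site
]

# Keep only significantly downregulated miRNAs with an MRE site on H19,
# mirroring the DESeq2 -> STarMir narrowing (171 -> 112 -> 26 in the study).
candidates = [
    name for name, log2fc, padj, has_mre in results
    if padj < 0.05 and log2fc < 0 and has_mre
]
print(candidates)
```

In practice each filter would be applied to the full DESeq2 results table and the STarMir site predictions rather than a hand-written list, but the boolean logic per miRNA is the same.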
Procedia PDF Downloads 159
68 Book Exchange System with a Hybrid Recommendation Engine
Authors: Nilki Upathissa, Torin Wirasinghe
Abstract:
This solution addresses the challenges faced by traditional bookstores and the limitations of digital media, striking a balance between the tactile experience of printed books and the convenience of modern technology. The book exchange system offers a sustainable alternative, empowering users to access a diverse range of books while promoting community engagement. The user-friendly interfaces incorporated into the book exchange system ensure a seamless and enjoyable experience for users. Intuitive features for book management, search, and messaging facilitate effortless exchanges and interactions between users. By streamlining the process, the system encourages readers to explore new books aligned with their interests, enhancing the overall reading experience. Central to the system's success is the hybrid recommendation engine, which leverages advanced technologies such as Long Short-Term Memory (LSTM) models. By analyzing user input, the engine accurately predicts genre preferences, enabling personalized book recommendations. The hybrid approach integrates multiple technologies, including user interfaces, machine learning models, and recommendation algorithms, to ensure the accuracy and diversity of the recommendations. The evaluation of the book exchange system with the hybrid recommendation engine demonstrated exceptional performance across key metrics. The high accuracy score of 0.97 highlights the system's ability to provide relevant recommendations, enhancing users' chances of discovering books that resonate with their interests. The strong precision, recall, and F1 scores further validate the system's efficacy in offering appropriate book suggestions. Additionally, the classification curves substantiate the system's effectiveness in distinguishing positive and negative recommendations.
This metric provides confidence in the system's ability to navigate the vast landscape of book choices and deliver recommendations that align with users' preferences. Furthermore, the implementation of this book exchange system with a hybrid recommendation engine has the potential to revolutionize the way readers interact with printed books. By facilitating book exchanges and providing personalized recommendations, the system encourages a sense of community and exploration within the reading community. Moreover, the emphasis on sustainability aligns with the growing global consciousness towards eco-friendly practices. With its robust technical approach and promising evaluation results, this solution paves the way for a more inclusive, accessible, and enjoyable reading experience for book lovers worldwide. In conclusion, the developed book exchange system with a hybrid recommendation engine represents a progressive solution to the challenges faced by traditional bookstores and the limitations of digital media. By promoting sustainability, widening access to printed books, and fostering engagement with reading, this system addresses the evolving needs of book enthusiasts. The integration of user-friendly interfaces, advanced machine learning models, and recommendation algorithms ensures accurate and diverse book recommendations, enriching the reading experience for users.
Keywords: recommendation systems, hybrid recommendation systems, machine learning, data science, long short-term memory, recurrent neural network
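The accuracy, precision, recall, and F1 figures quoted above follow the standard confusion-matrix definitions. A minimal sketch, using made-up binary labels rather than the system's actual evaluation data:

```python
# Standard binary-classification metrics from scratch; labels are illustrative.

def binary_metrics(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

# Hypothetical labels: 1 = relevant recommendation, 0 = irrelevant.
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]
acc, prec, rec, f1 = binary_metrics(y_true, y_pred)
print(acc, prec, rec, f1)  # prints: 0.75 0.75 0.75 0.75
```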
Procedia PDF Downloads 94
67 The Return of the Rejected Kings: A Comparative Study of Governance and Procedures of Standards Development Organizations under the Theory of Private Ordering
Authors: Olia Kanevskaia
Abstract:
Standardization has been in the limelight of numerous academic studies. Typically described as ‘any set of technical specifications that either provides or is intended to provide a common design for a product or process’, standards not only set quality benchmarks for products and services, but also spur competition and innovation, resulting in advantages for manufacturers and consumers. Their contribution to globalization and technology advancement is especially crucial in the Information and Communication Technology (ICT) and telecommunications sector, which is also characterized by weaker state regulation and expert-based rule-making. Most of the standards developed in that area are interoperability standards, which allow technological devices to establish ‘invisible communications’ and to ensure their compatibility and proper functioning. This type of standard supports a large share of our daily activities, ranging from traffic coordination by traffic lights to the connection to Wi-Fi networks, transmission of data via Bluetooth or USB, and building the network architecture for the Internet of Things (IoT). A large share of ICT standards is developed in specialized voluntary platforms, commonly referred to as Standards Development Organizations (SDOs), which gather experts from various industry sectors, private enterprises, governmental agencies, and academia. The institutional architecture of these bodies can vary from semi-public bodies, such as the European Telecommunications Standards Institute (ETSI), to industry-driven consortia, such as the Internet Engineering Task Force (IETF). The past decades witnessed a significant shift of standard setting to those institutions: while operating independently from state regulation, they offer a rather informal setting, which enables fast-paced standardization and places technical supremacy and flexibility of standards above other considerations.
Although technical norms and specifications developed by such nongovernmental platforms are not binding, they appear to create significant regulatory impact. In the United States (US), private voluntary standards can be used by regulators to achieve their policy objectives; in the European Union (EU), compliance with harmonized standards developed by voluntary European Standards Organizations (ESOs) can grant a product a free-movement pass. Moreover, standards can de facto manage the functioning of the market when other regulatory alternatives are not available. Hence, by establishing (potentially) mandatory norms, SDOs assume regulatory functions commonly exercised by States and shape their own legal order. The purpose of this paper is threefold: First, it attempts to shed some light on SDOs’ institutional architecture, focusing on private, industry-driven platforms and comparing their regulatory frameworks with those of formal organizations. Drawing upon the relevant scholarship, the paper then discusses the extent to which the formulation of technological standards within SDOs constitutes a private legal order, operating in the shadow of governmental regulation. Ultimately, this contribution seeks to advise whether state intervention in industry-driven standard setting is desirable, and whether the increasing regulatory importance of SDOs should be addressed in legislation on standardization.
Keywords: private order, standardization, standard-setting organizations, transnational law
Procedia PDF Downloads 163
66 Anti-Infective Potential of Selected Philippine Medicinal Plant Extracts against Multidrug-Resistant Bacteria
Authors: Demetrio L. Valle Jr., Juliana Janet M. Puzon, Windell L. Rivera
Abstract:
From the various medicinal plants available in the Philippines, crude ethanol extracts of twelve (12) Philippine medicinal plants, namely Senna alata L. Roxb. (akapulko), Psidium guajava L. (bayabas), Piper betle L. (ikmo), Vitex negundo L. (lagundi), Mitrephora lanotan (Blanco) Merr. (lanotan), Zingiber officinale Roscoe (luya), Curcuma longa L. (luyang dilaw), Tinospora rumphii Boerl (makabuhay), Moringa oleifera Lam. (malunggay), Phyllanthus niruri L. (sampa-sampalukan), Centella asiatica (L.) Urban (takip kuhol), and Carmona retusa (Vahl) Masam (tsaang gubat), were studied. In vitro methods of evaluation against selected Gram-positive and Gram-negative multidrug-resistant (MDR) bacteria were performed on the plant extracts. Although five of the plants showed varying antagonistic activities against the test organisms, only Piper betle L. exhibited significant activities against both Gram-negative and Gram-positive multidrug-resistant bacteria, exhibiting wide zones of growth inhibition in the disk diffusion assay and requiring the lowest concentrations of extract to inhibit bacterial growth, as supported by the minimum inhibitory concentration (MIC) and minimum bactericidal concentration (MBC) assays. Further antibacterial studies of the Piper betle L. leaf, obtained by three extraction methods (ethanol, methanol, supercritical CO2), revealed similar inhibitory activities against a multitude of Gram-positive and Gram-negative MDR bacteria. Thin layer chromatography (TLC) assay of the leaf extract revealed a maximum of eight compounds with Rf values of 0.92, 0.86, 0.76, 0.53, 0.40, 0.25, 0.13, and 0.013, best visualized when inspected under UV-366 nm. TLC-agar overlay bioautography of the isolated compounds showed the compounds with Rf values of 0.86 and 0.13 having inhibitory activities against Gram-positive MDR bacteria (MRSA and VRE).
The compound with an Rf value of 0.86 also possesses inhibitory activity against Gram-negative MDR bacteria (CRE Klebsiella pneumoniae and MBL Acinetobacter baumannii). Gas chromatography-mass spectrometry (GC-MS) identified six volatile compounds, four of which are new compounds that have not been mentioned in the medical literature. The chemical compounds isolated include 4-(2-propenyl)phenol and eugenol; the four new compounds were ethyl diazoacetate, tris(trifluoromethyl)phosphine, heptafluorobutyrate, and 3-fluoro-2-propynenitrite. Phytochemical screening and investigation of its antioxidant, cytotoxic, and possible hemolytic activities, as well as its mechanisms of antibacterial activity, were also done. The results showed that the local variant of Piper betle leaf extract possesses significant antioxidant, anti-cancer, and antimicrobial properties, attributed to the presence of bioactive compounds, particularly flavonoids (condensed tannin, leucoanthocyanin, gamma benzopyrone), anthraquinones, steroids/triterpenes, and 2-deoxysugars. Piper betle L. is also traditionally known to enhance wound healing, which could be primarily due to its antioxidant, anti-inflammatory, and antimicrobial activities. In vivo studies on mice using 2.5% and 5% ethanol leaf extract cream formulations in excised wound models significantly accelerated wound healing in the mouse subjects, with results and values on par with a current antibacterial cream (mupirocin). From the results of this series of studies, we have clearly demonstrated the value of Piper betle L. as a source of bioactive compounds that could be developed into therapeutic agents against MDR bacteria.
Keywords: Philippine herbal medicine, multidrug-resistant bacteria, Piper betle, TLC-bioautography
Procedia PDF Downloads 769
65 Developing a Machine Learning-based Cost Prediction Model for Construction Projects using Particle Swarm Optimization
Authors: Soheila Sadeghi
Abstract:
Accurate cost prediction is essential for effective project management and decision-making in the construction industry. This study aims to develop a cost prediction model for construction projects using Machine Learning techniques and Particle Swarm Optimization (PSO). The research utilizes a comprehensive dataset containing project cost estimates, actual costs, resource details, and project performance metrics from a road reconstruction project. The methodology involves data preprocessing, feature selection, and the development of an Artificial Neural Network (ANN) model optimized using PSO. The study investigates the impact of various input features, including cost estimates, resource allocation, and project progress, on the accuracy of cost predictions. The performance of the optimized ANN model is evaluated using metrics such as Mean Squared Error (MSE), Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and R-squared. The results demonstrate the effectiveness of the proposed approach in predicting project costs, outperforming traditional benchmark models. The feature selection process identifies the most influential variables contributing to cost variations, providing valuable insights for project managers. However, this study has several limitations. Firstly, the model's performance may be influenced by the quality and quantity of the dataset used. A larger and more diverse dataset covering different types of construction projects would enhance the model's generalizability. Secondly, the study focuses on a specific optimization technique (PSO) and a single Machine Learning algorithm (ANN). Exploring other optimization methods and comparing the performance of various ML algorithms could provide a more comprehensive understanding of the cost prediction problem. Future research should focus on several key areas. 
Firstly, expanding the dataset to include a wider range of construction projects, such as residential buildings, commercial complexes, and infrastructure projects, would improve the model's applicability. Secondly, investigating the integration of additional data sources, such as economic indicators, weather data, and supplier information, could enhance the predictive power of the model. Thirdly, exploring the potential of ensemble learning techniques, which combine multiple ML algorithms, may further improve cost prediction accuracy. Additionally, developing user-friendly interfaces and tools to facilitate the adoption of the proposed cost prediction model in real-world construction projects would be a valuable contribution to the industry. The findings of this study have significant implications for construction project management, enabling proactive cost estimation, resource allocation, budget planning, and risk assessment, ultimately leading to improved project performance and cost control. This research contributes to the advancement of cost prediction techniques in the construction industry and highlights the potential of Machine Learning and PSO in addressing this critical challenge. However, further research is needed to address the limitations and explore the identified future research directions to fully realize the potential of ML-based cost prediction models in the construction domain.
Keywords: cost prediction, construction projects, machine learning, artificial neural networks, particle swarm optimization, project management, feature selection, road reconstruction
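As an illustration of the optimisation technique named above, here is a minimal particle swarm optimisation loop fitting a one-parameter linear cost model by minimising MSE. It is a sketch of the generic PSO mechanics only, not the authors' PSO-tuned ANN; the data, swarm size, and coefficients are all hypothetical.

```python
import random

# Minimal 1-D PSO minimising the MSE of a single-weight linear model.
# Made-up data with true relation y = 2x; all parameters illustrative.

random.seed(0)

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]

def mse(w):
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

n, iters = 10, 200
pos = [random.uniform(-5, 5) for _ in range(n)]
vel = [0.0] * n
pbest = pos[:]                    # each particle's best-known position
gbest = min(pos, key=mse)         # swarm-wide best-known position

for _ in range(iters):
    for i in range(n):
        r1, r2 = random.random(), random.random()
        # standard PSO update: inertia + cognitive + social terms
        vel[i] = 0.7 * vel[i] + 1.5 * r1 * (pbest[i] - pos[i]) + 1.5 * r2 * (gbest - pos[i])
        pos[i] += vel[i]
        if mse(pos[i]) < mse(pbest[i]):
            pbest[i] = pos[i]
        if mse(pos[i]) < mse(gbest):
            gbest = pos[i]

print(round(gbest, 2))  # converges near the true weight 2.0
```

In the study, the same update rule searches the far larger space of ANN weights rather than a single coefficient.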
Procedia PDF Downloads 60
64 Enhancing Strategic Counter-Terrorism: Understanding How Familial Leadership Influences the Resilience of Terrorist and Insurgent Organizations in Asia
Authors: Andrew D. Henshaw
Abstract:
The research examines the influence of familial and kinship-based leadership on the resilience of politically violent organizations. Organizations of this type frequently fight in the same conflicts, though they are called 'terrorist' or 'insurgent' depending on the political foci of the time, and thus different approaches are used to combat them. The research considers them correlated phenomena with significant overlap and identifies strengths and vulnerabilities in resilience processes. The research employs paired case studies to examine resilience in organizations under significant external pressure, and achieves this by measuring three variables: (1) organizational robustness in terms of leadership and governance; (2) bounce-back response efficiency to external pressures and adaptation to endogenous and exogenous shock; (3) perpetuity of operational and attack capability, and political legitimacy. The research makes three hypotheses. First, familial/kinship leadership groups have a significant effect on organizational resilience in terms of informal operations. Second, non-familial/kinship organizations suffer from heightened security transaction costs and the social economics surrounding recruitment, retention, and replacement. Third, resilience in non-familial organizations likely stems from critical external supports such as state sponsorship or powerful patrons, rather than organic resilience dynamics. The case studies pair familial organizations with non-familial organizations. Set 1: the Haqqani Network (HQN), paired with Lashkar-e-Toiba (LeT). Set 2: Jemaah Islamiyah (JI), paired with the Abu Sayyaf Group (ASG). Case studies were selected based on three requirements: contrasting governance types, exposure to significant external pressures, and geographical similarity. The case study sets were examined over 24 months following periods of significantly heightened operational activity.
This enabled empirical measurement of the variables as substantial external pressures came into force. The rationale for the research is clear: nearly all organizations have some nexus of familial interconnectedness. Examining familial leadership networks does not provide further understanding of how terrorism and insurgency originate; rather, the central focus of the research addresses how they persist. The sparse attention to this in the existing literature presents an unexplored yet important area of security studies. Furthermore, social capital in familial systems is largely automatic and organic, given at birth or through kinship. It reduces security vetting costs for recruits, fighters, and supporters, which lowers liabilities and entry costs while raising organizational efficiency and exit costs. Better understanding of these processes is needed to turn strengths into weaknesses. The outcomes and implications of the research have critical relevance to future operational policy development. Increased clarity of internal trust dynamics, social capital, and power flows is essential to fracturing and manipulating a kinship nexus. This is highly valuable to external pressure mechanisms such as counter-terrorism, counterinsurgency, and strategic intelligence methods that seek to penetrate, manipulate, degrade, or destroy the resilience of politically violent organizations.
Keywords: counterinsurgency (COIN), counter-terrorism, familial influence, insurgency, intelligence, kinship, resilience, terrorism
Procedia PDF Downloads 313
63 Review of Urbanization Pattern in Kabul City
Authors: Muhammad Hanif Amiri, Edris Sadeqy, Ahmad Freed Osman
Abstract:
The International Conference on Architectural Engineering and Skyscraper (ICAES 2016) on January 18-19, 2016 aims to exchange new ideas and application experiences face to face, to establish business or research relations, and to find global partners for future collaboration; we are therefore very keen to participate and share our issues in order to receive valuable feedback from the conference participants. Urbanization is a controversial issue all around the world. Substandard and unplanned urbanization has many implications for the social, cultural, and economic life of the population. Unplanned and illegal construction has become a critical issue in Afghanistan, particularly in Kabul city. In addition, lack of municipal bylaws, poor municipal governance, lack of development policies and strategies, budget limitations, the low professional capacity of the private sector involved in development, and poor coordination among stakeholders are other factors that have made the problem more complicated. The main purpose of this research paper is to review the urbanization pattern of Kabul city, identify improvement solutions, and evaluate the increase in population density that has caused vast illegal and unplanned development, which is turning Kabul city as a whole into a slum area. The Kabul city Master Plan was reviewed in 1978 and revised for a planned population of 2 million. In 2001, the interim administration took office and the city experienced an influx of returnees from neighboring countries and other provinces of Afghanistan, mostly seeking employment opportunities, security, and a better quality of life; Kabul therefore faced extraordinary population growth. According to the Central Statistics Organization of Afghanistan, the population of Kabul was estimated at approximately 5 million (2015); although a new Master Plan was prepared in 2009, the existing challenges have not yet been resolved.
On the other hand, 70% of Kabul's population lives in unplanned (slum) areas and faces shortages of drinking water, the absence of a sewerage and drainage network, the absence of a proper management system for solid waste collection, a lack of public transportation and traffic management, environmental degradation, and a shortage of social infrastructure. Although there are many problems in Kabul city, the development of 22 townships is still in progress, which has attracted a large population. The research concludes with a detailed analysis of four main issues, namely the elimination of duplicated administrations, the development of regions, the rehabilitation and improvement of infrastructure, and the prevention of new township establishment in Kabul's central core, in order to mitigate the problems and constraints and to provide a point of departure for objective-based future development of Kabul city. The closure has been defined to reflect stage-wise development in light of the prepared policy and strategies, the development of a procedure for the improvement of infrastructure, the conduct of a preliminary EIA, the definition of the scope of stakeholders' contributions, and the preparation of a project list for initial development. In conclusion, this paper will help the transformation of Kabul city.
Keywords: development of regions, illegal construction, population density, urbanization pattern
Procedia PDF Downloads 319
62 A Review on Cyberchondria Based on Bibliometric Analysis
Authors: Xiaoqing Peng, Aijing Luo, Yang Chen
Abstract:
Background: Cyberchondria, an "emerging risk" of the information era, is a new abnormal pattern characterized by excessive or repeated online searches for health-related information and escalating health anxiety, which endangers people's physical and mental health and poses a huge threat to public health. Objective: To explore and discuss the research status, hotspots, and trends of cyberchondria. Methods: Based on a total of 77 articles regarding "cyberchondria" extracted from the Web of Science from its inception until October 2019, literature trends, countries, institutions, and hotspots were analyzed by bibliometric analysis, and the concept definition of cyberchondria, instruments, relevant factors, and treatment and intervention are discussed as well. Results: Since "cyberchondria" was first put forward in 2001, the last two decades have witnessed a noticeable increase in the amount of literature; during 2014-2019 in particular, output quadrupled dramatically to 62 articles, compared with only 15 before 2014, showing that cyberchondria has become a new theme and hot topic in recent years. The United States was the most active contributor with the largest number of publications (23), followed by England (11) and Australia (11), while the leading institutions were Baylor University (7) and the University of Sydney (7), followed by Florida State University (4) and the University of Manchester (4). The WoS categories "Psychiatry/Psychology" and "Computer/Information Science" were the areas of greatest influence. The concept definition of cyberchondria is not completely unified internationally, but it is generally considered an abnormal behavioral pattern and emotional state and has been invoked to refer to the anxiety-amplifying effects of online health-related searches.
The first and most frequently cited scale for measuring the severity of cyberchondria, the Cyberchondria Severity Scale (CSS), was developed in 2014; it conceptualized cyberchondria as a multidimensional construct consisting of compulsion, distress, excessiveness, reassurance, and mistrust of medical professionals, the last of which was later shown not to be necessary for the construct. Since then, Brazilian, German, Turkish, Polish, and Chinese versions have been developed, improved, and culturally adjusted, while the CSS was reduced to a simplified version (CSS-12) in 2019, all of which are worthy of further verification. The hotspots of cyberchondria research mainly focus on relevant factors such as intolerance of uncertainty, anxiety sensitivity, obsessive-compulsive disorder, internet addiction, abnormal illness behavior, the Whiteley Index, and problematic internet use, trying to clarify the roles played by "associated factors" and "anxiety-amplifying factors" in the development of cyberchondria, so as to better understand the aetiological links and pathways in the relationships between hypochondriasis, health anxiety, and online health-related searches. Although the treatment and intervention of cyberchondria are still at an early exploratory stage, there have been various meaningful attempts to seek effective strategies from different angles, such as online psychological treatment, network technology management, health information literacy improvement, and public health services. Conclusion: Research on cyberchondria is in its infancy but deserves more attention. A conceptual consensus on cyberchondria, a refined assessment tool, prospective studies conducted in various populations, and targeted treatments will be the main research directions in the near future.
Keywords: cyberchondria, hypochondriasis, health anxiety, online health-related searches
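The publication tallies behind a bibliometric analysis of this kind reduce to simple counting over exported records. A sketch on made-up records, not the actual Web of Science export:

```python
from collections import Counter

# Toy bibliometric tally; the records below are invented for illustration.
records = [
    {"year": 2016, "country": "USA"},
    {"year": 2018, "country": "England"},
    {"year": 2019, "country": "USA"},
    {"year": 2019, "country": "Australia"},
    {"year": 2013, "country": "USA"},
]

by_country = Counter(r["country"] for r in records)   # publications per country
since_2014 = sum(1 for r in records if r["year"] >= 2014)
print(by_country.most_common(1), since_2014)
```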
Procedia PDF Downloads 122
61 Digital Twin for a Floating Solar Energy System with Experimental Data Mining and AI Modelling
Authors: Danlei Yang, Luofeng Huang
Abstract:
The integration of digital twin technology with renewable energy systems offers an innovative approach to predicting and optimising performance throughout the entire lifecycle. A digital twin is a continuously updated virtual replica of a real-world entity, synchronised with data from its physical counterpart and environment. Many digital twin companies today claim to have mature digital twin products, but their focus is primarily on equipment visualisation. The core of a digital twin, however, should be its model, which can mirror, shadow, and thread with the real-world entity; this aspect is still underdeveloped. For a floating solar energy system, a digital twin can be defined in three aspects: (a) the physical floating solar energy system along with environmental factors such as solar irradiance and wave dynamics, (b) a digital model powered by artificial intelligence (AI) algorithms, and (c) the integration of real system data with the AI-driven model and a user interface. The experimental setup for the floating solar energy system is designed to replicate real-ocean conditions of floating solar installations within a controlled laboratory environment. The system consists of a water tank that simulates an aquatic surface, where a floating catamaran structure supports a solar panel. The solar simulator is set up in three positions: one directly above and two inclined at a 45° angle in front of and behind the solar panel. This arrangement allows the simulation of different sun angles, such as sunrise, midday, and sunset. The solar simulator is positioned 400 mm away from the solar panel to maintain consistent solar irradiance on its surface. Stability of the floating structure is achieved through ropes attached to anchors at the bottom of the tank, which simulate the mooring systems used in real-world floating solar applications. The floating solar energy system's sensor setup includes various devices to monitor environmental and operational parameters.
An irradiance sensor measures solar irradiance on the photovoltaic (PV) panel. Temperature sensors monitor ambient air and water temperatures, as well as the PV panel temperature. Wave gauges measure wave height, while load cells capture mooring force. Inclinometers and ultrasonic sensors record the heave and pitch amplitudes of the floating system's motions. An electric load measures the voltage and current output from the solar panel. All sensors collect data simultaneously. Artificial neural network (ANN) algorithms are central to developing the digital model, which processes historical and real-time data, identifies patterns, and predicts the system's performance in real time. The data collected from the various sensors are partly used to train the digital model, with the remaining data reserved for validation and testing. The digital twin combines the experimental setup with the ANN model, enabling monitoring, analysis, and prediction of the floating solar energy system's operation. The digital model mirrors the functionality of the physical setup, running in sync with the experiment to provide real-time insights and predictions. It offers useful industrial benefits, such as informing maintenance plans as well as design and control strategies for optimal energy efficiency. In the long term, this digital twin will help improve overall solar energy yield while minimising operational costs and risks.
Keywords: digital twin, floating solar energy system, experiment setup, artificial intelligence
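As a stand-in for the ANN-based digital model, the sketch below fits a single-weight linear mapping from irradiance to power output by gradient descent, with the train/hold-out split mentioned above. All numbers are invented for illustration; the real model is a multi-layer ANN over many sensor channels.

```python
# Toy "digital model": one weight fitted by gradient descent on made-up
# (irradiance W/m^2, panel power W) pairs, then checked on held-out data.

data = [(100.0, 15.0), (200.0, 30.0), (300.0, 45.0), (400.0, 60.0),
        (500.0, 75.0), (600.0, 90.0)]
train, test = data[:4], data[4:]      # part for training, part held out

w, lr = 0.0, 1e-6                     # weight and learning rate
for _ in range(2000):
    for x, y in train:
        w -= lr * 2 * (w * x - y) * x  # gradient of the squared error

pred = [w * x for x, _ in test]
err = max(abs(p - y) for p, (_, y) in zip(pred, test))
print(round(w, 3), err < 1.0)         # recovers the underlying ratio 0.15
```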
Procedia PDF Downloads 9
60 The Design of a Phase I/II Trial of Neoadjuvant RT with Interdigitated Multiple Fractions of Lattice RT for Large High-grade Soft-Tissue Sarcoma
Authors: Georges F. Hatoum, Thomas H. Temple, Silvio Garcia, Xiaodong Wu
Abstract:
Soft tissue sarcomas (STS) represent a diverse group of malignancies with heterogeneous clinical and pathological features. The treatment of extremity STS aims to achieve optimal local tumor control, improved survival, and preservation of limb function. The National Comprehensive Cancer Network guidelines, based on cumulative clinical data, recommend radiation therapy (RT) in conjunction with limb-sparing surgery for large, high-grade STS measuring greater than 5 cm. Such a treatment strategy can offer a cure for patients. However, when recurrence occurs (in nearly half of patients), the prognosis is poor, with a median survival of 12 to 15 months and only palliative treatment options available. Spatially fractionated radiotherapy (SFRT), with a long history of treating bulky tumors as a non-mainstream technique, has gained new attention in recent years due to its unconventional therapeutic effects, such as bystander/abscopal effects. Combining a single fraction of GRID, the original form of SFRT, with conventional RT was shown to marginally increase the rate of pathological necrosis, which has been recognized to correlate positively with overall survival. In an effort to consistently raise the pathological necrosis rate above 90%, multiple fractions of Lattice RT (LRT), a newer form of 3D SFRT, interdigitated with standard RT as neoadjuvant therapy, were applied in a preliminary clinical setting. With favorable results of over 95% necrosis rate in a small cohort of patients, a phase I/II clinical study was proposed to examine the safety and feasibility of this new strategy. Herein the design of the clinical study is presented.
In this single-arm, two-stage phase I/II clinical trial, the primary objectives are for >80% of patients to achieve >90% tumor necrosis and to evaluate toxicity; the secondary objectives are to evaluate local control, disease-free survival, and overall survival (OS), as well as the correlation between clinical response and relevant biomarkers. The study plans to accrue patients over a span of two years. All patients will be treated with the new neoadjuvant RT regimen, in which one of every five fractions of conventional RT is replaced by an LRT fraction with vertices receiving doses ≥10 Gy while keeping the tumor periphery at or close to 2 Gy per fraction. Surgical removal of the tumor is planned 6 to 8 weeks following the completion of radiation therapy. The study will employ a Pocock-style early stopping boundary to ensure patient safety. Patients will be followed and monitored for a period of five years. Despite much effort, the rarity of the disease has resulted in limited novel therapeutic breakthroughs. Although a higher rate of treatment-induced tumor necrosis has been associated with improved OS, with current techniques only 20% of patients with large, high-grade tumors achieve a tumor necrosis rate exceeding 50%. If this new neoadjuvant strategy is proven effective, an appreciable improvement in clinical outcome without added toxicity can be anticipated. Given the rarity of the disease, it is hoped that such a study could be orchestrated in a multi-institutional setting.
Keywords: lattice RT, necrosis, SFRT, soft tissue sarcoma
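The abstract does not give the trial's actual boundary values, but the generic mechanics of a Pocock-style toxicity stopping rule can be sketched as an exact binomial tail test. The acceptable-toxicity rate p0 and the alpha level below are hypothetical placeholders, not the trial's parameters.

```python
from math import comb

# Generic safety-monitoring check: flag a stop when the observed number of
# toxicity events is improbably high under an acceptable rate p0.

def binom_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def stop_for_toxicity(events, n, p0=0.15, alpha=0.05):
    return binom_tail(events, n, p0) < alpha

# 1 event in 10 patients is consistent with p0; 5 in 10 is not.
print(stop_for_toxicity(1, 10), stop_for_toxicity(5, 10))  # prints: False True
```

In practice, the Pocock approach applies a boundary of this kind at each interim look with the alpha spending adjusted for repeated testing.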
Procedia PDF Downloads 60
59 Use of Machine Learning Algorithms to Pediatric MR Images for Tumor Classification
Authors: I. Stathopoulos, V. Syrgiamiotis, E. Karavasilis, A. Ploussi, I. Nikas, C. Hatzigiorgi, K. Platoni, E. P. Efstathopoulos
Abstract:
Introduction: Brain and central nervous system (CNS) tumors form the second most common group of cancer in children, accounting for 30% of all childhood cancers. MRI is the key imaging technique used for the visualization and management of pediatric brain tumors. Initial characterization of tumors from MRI scans is usually performed via a radiologist’s visual assessment. However, different brain tumor types do not always demonstrate clear differences in visual appearance. Using only conventional MRI to provide a definite diagnosis could potentially lead to inaccurate results, so histopathological examination of biopsy samples is currently considered the gold standard for obtaining a definite diagnosis. Machine learning is defined as the study of computational algorithms that can use mathematical relationships and patterns, complex or not, from empirical and scientific data to make reliable decisions. Given the above, machine learning techniques could provide effective and accurate ways to automate and speed up the analysis and diagnosis of medical images. Machine learning applications in radiology are, or could potentially be, useful in practice for medical image segmentation and registration, computer-aided detection and diagnosis systems for CT, MR, or radiography images, and functional MR (fMRI) images for brain activity analysis and neurological disease diagnosis. Purpose: The objective of this study is to provide an automated tool, which may assist in the imaging evaluation and classification of brain neoplasms in pediatric patients by determining the glioma type and grade and by differentiating between brain tissue types. Moreover, a future aim is to present an alternative route to quick and accurate diagnosis in order to save time and resources in the daily medical workflow. 
Materials and Methods: A cohort of 80 pediatric patients diagnosed with posterior fossa tumors was used: 20 ependymomas, 20 astrocytomas, 20 medulloblastomas, and 20 healthy children. The MR sequences used for every patient were the following: axial T1-weighted (T1), axial T2-weighted (T2), Fluid-Attenuated Inversion Recovery (FLAIR), axial diffusion-weighted images (DWI), and axial contrast-enhanced T1-weighted (T1ce). From every sequence, only a principal slice was used, which was manually traced by two expert radiologists. Image acquisition was carried out on a GE HDxt 1.5-T scanner. The images were preprocessed following a number of steps, including noise reduction, bias-field correction, thresholding, coregistration of all sequences (T1, T2, T1ce, FLAIR, DWI), skull stripping, and histogram matching. A large number of features for investigation were chosen, which included age, tumor shape characteristics, image intensity characteristics, and texture features. After selecting the features that achieve the highest accuracy with the least number of variables, four machine learning classification algorithms were used: k-Nearest Neighbour, Support Vector Machines, C4.5 Decision Tree, and Convolutional Neural Network. The machine learning schemes and the image analysis are implemented in the WEKA and MATLAB platforms, respectively. Results-Conclusions: The results and the accuracy of image classification for each type of glioma by the four different algorithms are still in progress.
Keywords: image classification, machine learning algorithms, pediatric MRI, pediatric oncology
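The study runs its classifiers in WEKA and MATLAB; as a language-neutral sketch of the simplest of the four algorithms, k-Nearest Neighbour, here is a minimal Python version on made-up 2-D feature vectors (the labels and feature values below are hypothetical illustrations, not patient data):

```python
from collections import Counter
from math import dist

def knn_predict(train, query, k=3):
    """Classify a feature vector by majority vote among its k nearest
    training samples (Euclidean distance). `train` is a list of
    (feature_vector, label) pairs -- here the labels stand in for tumor types."""
    neighbours = sorted(train, key=lambda fv_lbl: dist(fv_lbl[0], query))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# Hypothetical 2-D intensity/texture features, fabricated for illustration:
training_set = [
    ((0.20, 0.10), "ependymoma"), ((0.25, 0.15), "ependymoma"),
    ((0.80, 0.90), "medulloblastoma"), ((0.85, 0.80), "medulloblastoma"),
    ((0.50, 0.50), "astrocytoma"), ((0.55, 0.45), "astrocytoma"),
]
```

In the actual study, the feature vectors would be the selected shape, intensity, and texture descriptors per traced slice, with many more dimensions than this toy example.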
Procedia PDF Downloads 149
58 The Legal and Regulatory Gaps of Blockchain-Enabled Energy Prosumerism
Authors: Karisma Karisma, Pardis Moslemzadeh Tehrani
Abstract:
This study aims to conduct a high-level strategic dialogue on the lack of consensus, consistency, and legal certainty regarding blockchain-based energy prosumerism so that appropriate institutional and governance structures can be put in place to address the inadequacies and gaps in the legal and regulatory framework. Climate goals and policies under the Paris Agreement are a driving force behind national and global decarbonization targets. In recent years, efforts to ‘demonopolize’ and ‘decentralize’ energy generation and distribution have driven the energy transition toward decentralized systems, invoking concepts such as ownership, sovereignty, and autonomy of renewable energy (RE) sources. The emergence of individual and collective forms of prosumerism and the rapid diffusion of blockchain is expected to play a critical role in the decarbonization and democratization of energy systems. However, there is a ‘regulatory void’ relating to individual and collective forms of prosumerism that could prevent the rapid deployment of blockchain systems and potentially stagnate the operationalization of blockchain-enabled energy sharing and trading activities. The application of broad and facile regulatory fixes may be insufficient to address the major regulatory gaps. First, to the authors’ best knowledge, the concepts and elements circumjacent to individual and collective forms of prosumerism have not been adequately described in the legal frameworks of many countries. Second, there is a lack of legal certainty regarding the creation and adaptation of business models in a highly regulated and centralized energy system, which inhibits the emergence of prosumer-driven niche markets. There are also current and prospective challenges relating to the legal status of blockchain-based platforms for facilitating energy transactions, anticipated with the diffusion of blockchain technology. 
With the rise of prosumerism in the energy sector, the areas of (a) network charges, (b) energy market access, (c) incentive schemes, (d) taxes and levies, and (e) licensing requirements are still uncharted territories in many countries. The uncertainties emanating from this area pose a significant hurdle to the widespread adoption of blockchain technology, a complementary technology that offers added value and competitive advantages for energy systems. The authors undertake a conceptual and theoretical investigation to elucidate the lack of consensus, consistency, and legal certainty in the study of blockchain-based prosumerism. In addition, the authors set an exploratory tone to the discussion by taking an analytically eclectic approach that builds on multiple sources and theories to delve deeper into this topic. As an interdisciplinary study, this research accounts for the convergence of regulation, technology, and the energy sector. The study primarily adopts desk research, which examines regulatory frameworks and conceptual models for crucial policies at the international level to foster an all-inclusive discussion. With their reflections and insights into the interaction of blockchain and prosumerism in the energy sector, the authors do not aim to develop definitive regulatory models or instrument designs, but to contribute to the theoretical dialogue to navigate seminal issues and explore different nuances and pathways. Given the emergence of blockchain-based energy prosumerism, identifying the challenges, gaps and fragmentation of governance regimes is key to facilitating global regulatory transitions.
Keywords: blockchain technology, energy sector, prosumer, legal and regulatory
Procedia PDF Downloads 181
57 Design and Construction of a Home-Based, Patient-Led, Therapeutic, Post-Stroke Recovery System Using Iterative Learning Control
Authors: Marco Frieslaar, Bing Chu, Eric Rogers
Abstract:
Stroke is a devastating illness that is the second biggest cause of death in the world (after heart disease). Where it does not kill, it leaves survivors with debilitating sensory and physical impairments that not only seriously harm their quality of life but also cause a high incidence of severe depression. It is widely accepted that early intervention is essential for recovery, but current rehabilitation techniques largely favor hospital-based therapies, which have restricted access and expensive, specialist equipment, and tend to side-step the emotional challenges. In addition, there is insufficient funding available to provide the long-term assistance that is required. As a consequence, recovery rates are poor. The relatively unexplored solution is to develop therapies that can be harnessed in the home and are formulated from technologies that already exist in everyday life. This would empower individuals to take control of their own improvement and provide choice in terms of when and where they feel best able to undertake their own healing. This research seeks to identify how effective post-stroke rehabilitation therapy can be applied to upper limb mobility within the physical context of a home rather than a hospital. This is being achieved through the design and construction of an automation scheme, based on iterative learning control and the Riener muscle model, that has the ability to adapt to the user, react to their level of fatigue, and provide tangible physical recovery. It utilizes a smartphone and laptop to construct an iterative learning control (ILC) system that monitors upper arm movement in three dimensions as a series of exercises are undertaken. The equipment generates functional electrical stimulation to assist in muscle activation and thus improve directional accuracy. 
In addition, it monitors speed, accuracy, areas of motion weakness, and similar parameters to create a performance index that can be compared over time and extrapolated to establish an independent and objective assessment scheme, plus an approximate prediction of the final outcome. To further extend its assessment capabilities, nerve conduction velocity readings are taken by the software between the shoulder and hand muscles. This is utilized to measure the speed of neuron signal transfer along the arm, and over time an online indication of regeneration levels can be obtained. This will show whether or not sufficient training intensity is being achieved, even before perceivable movement dexterity is observed. The device also provides the option to connect to other users via the internet, so that the patient can avoid feelings of isolation and can undertake movement exercises together with others in a similar position. This should not only encourage participation in rehabilitation but also offer the potential of an emotional support network. It is intended that this approach will extend the availability of stroke recovery options, enable ease of access at a low cost, reduce susceptibility to depression, and through these endeavors, enhance the overall recovery success rate.
Keywords: home-based therapy, iterative learning control, Riener muscle model, smartphone, stroke rehabilitation
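The abstract does not state the ILC update law used; a common first-order form is u_{k+1}(t) = u_k(t) + L·e_k(t), where e_k is the tracking error on repetition k. A minimal sketch on a toy first-order plant (the gains and the plant below are illustrative stand-ins, not the Riener muscle model or the study's controller):

```python
def simulate(u, a=0.8, b=0.5):
    """Toy first-order plant y[t+1] = a*y[t] + b*u[t], standing in for the
    stimulated limb response to the input sequence u."""
    y, ys = 0.0, []
    for ut in u:
        y = a * y + b * ut
        ys.append(y)
    return ys

def ilc_trial(reference, n_iterations=30, gain=0.6):
    """First-order ILC: after each repetition of the exercise, correct the
    input with the gained tracking error, so the error shrinks from one
    repetition to the next. Returns the final max tracking error."""
    u = [0.0] * len(reference)
    for _ in range(n_iterations):
        y = simulate(u)
        error = [r - yt for r, yt in zip(reference, y)]
        u = [ut + gain * et for ut, et in zip(u, error)]
    return max(abs(r - yt) for r, yt in zip(reference, simulate(u)))

ref = [1.0] * 20  # hypothetical constant reaching target
```

The point the code illustrates is the ILC premise used in the study: because the same exercise is repeated, the error signal from one repetition can be fed forward to improve the next, driving tracking error down across repetitions.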
Procedia PDF Downloads 264
56 Integrating the Modbus SCADA Communication Protocol with Elliptic Curve Cryptography
Authors: Despoina Chochtoula, Aristidis Ilias, Yannis Stamatiou
Abstract:
Modbus is a protocol that enables communication among devices connected to the same network. This protocol is often deployed in connecting sensor and monitoring units to central supervisory servers in Supervisory Control and Data Acquisition, or SCADA, systems. These systems monitor critical infrastructures, such as factories, power generation stations, and nuclear power reactors, in order to detect malfunctions and trigger alerts and corrective actions. However, due to their criticality, SCADA systems are vulnerable to attacks that range from simple eavesdropping on operation parameters, exchanged messages, and valuable infrastructure information to malicious modification of vital infrastructure data towards infliction of damage. Thus, the SCADA research community has been active in strengthening SCADA systems with suitable data protection mechanisms based, to a large extent, on cryptographic methods for data encryption, device authentication, and message integrity protection. However, due to the limited computation power of many SCADA sensor and embedded devices, the usual public key cryptographic methods are not appropriate because of their high computational requirements. As an alternative, Elliptic Curve Cryptography has been proposed, which requires smaller key sizes and, thus, less demanding cryptographic operations. Until now, however, no such implementation has been proposed in the SCADA literature, to the best of our knowledge. In order to fill this gap, our methodology focused on integrating Modbus, a frequently used SCADA communication protocol, with Elliptic Curve based cryptography, and on developing a server/client application to demonstrate the proof of concept. For the implementation, we deployed two C language libraries, which were suitably modified in order to be successfully integrated: libmodbus (https://github.com/stephane/libmodbus) and ecc-lib (https://www.ceid.upatras.gr/webpages/faculty/zaro/software/ecc-lib/). 
The first library provides a C implementation of the Modbus/TCP protocol, while the second one offers the functionality to develop cryptographic protocols based on Elliptic Curve Cryptography. These two libraries were combined, after suitable modifications and enhancements, to give a modified version of the Modbus/TCP protocol focusing on the security of the data exchanged among the devices and the supervisory servers. The mechanisms we implemented include key generation, key exchange/sharing, message authentication, data integrity checks, and encryption/decryption of data. The key generation and key exchange protocols were implemented with the use of Elliptic Curve Cryptography primitives. The keys established by each device are saved in its local memory, retained during the whole communication session, and used to encrypt and decrypt exchanged messages as well as to authenticate entities and verify the integrity of the messages. Finally, the modified library was compiled for the Android environment in order to run the server application as an Android app. The client program runs on a regular computer. The communication between these two entities is an example of the successful establishment of an Elliptic Curve Cryptography based, secure Modbus wireless communication session between a portable device acting as a supervisor station and a monitoring computer. Our first performance measurements are also very promising and demonstrate the feasibility of embedding Elliptic Curve Cryptography into SCADA systems, filling a gap in the relevant scientific literature.
Keywords: elliptic curve cryptography, ICT security, modbus protocol, SCADA, TCP/IP protocol
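The paper's key exchange is built from ecc-lib primitives, which are not reproduced here. The underlying idea can be sketched self-containedly with Elliptic Curve Diffie-Hellman on the small textbook curve y² = x³ + 2x + 2 over GF(17), generator (5, 1), group order 19 — far too small for real SCADA use, but enough to show how a sensor node and a supervisory server derive the same session secret (the private keys below are arbitrary illustrations):

```python
# Toy curve y^2 = x^3 + 2x + 2 (mod 17), generator (5, 1), order 19.
P, A = 17, 2

def point_add(p1, p2):
    """Add two curve points in affine coordinates; None is the point at infinity."""
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None  # inverse points sum to infinity
    if p1 == p2:
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P  # tangent slope
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, P) % P         # chord slope
    x3 = (s * s - x1 - x2) % P
    return (x3, (s * (x1 - x3) - y1) % P)

def scalar_mult(k, point):
    """Double-and-add computation of k * point."""
    result = None
    while k:
        if k & 1:
            result = point_add(result, point)
        point = point_add(point, point)
        k >>= 1
    return result

G = (5, 1)
# Hypothetical private keys for a sensor node and a supervisory server:
priv_sensor, priv_server = 3, 7
pub_sensor = scalar_mult(priv_sensor, G)
pub_server = scalar_mult(priv_server, G)
# Each side combines its own private key with the peer's public key;
# both arrive at the same shared point, usable to derive a session key.
shared_sensor = scalar_mult(priv_sensor, pub_server)
shared_server = scalar_mult(priv_server, pub_sensor)
```

A production system, like the one described, would use a standardized curve of cryptographic size and feed the shared point through a key-derivation function before encrypting Modbus payloads.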
Procedia PDF Downloads 27255 Miniaturizing the Volumetric Titration of Free Nitric Acid in U(vi) Solutions: On the Lookout for a More Sustainable Process Radioanalytical Chemistry through Titration-On-A-Chip
Authors: Jose Neri, Fabrice Canto, Alastair Magnaldo, Laurent Guillerme, Vincent Dugas
Abstract:
A miniaturized and automated approach for the volumetric titration of free nitric acid in U(VI) solutions is presented. Free acidity measurement refers to the quantification of acidity in solutions containing hydrolysable heavy metal ions such as U(VI), U(IV), or Pu(IV), without taking into account the acidity contribution from the hydrolysis of such metal ions. It is, in fact, an operation with an essential role in the control of the nuclear fuel recycling process. The main objective behind the technical optimization of the current ‘beaker’ method was to reduce the amount of radioactive substance to be handled by the laboratory personnel, to ease the instrumentation adjustability within a glove-box environment, and to allow high-throughput analysis for conducting more cost-effective operations. The measurement technique is based on the concept of Taylor-Aris dispersion in order to create, inside a 200 μm × 5 cm circular cylindrical micro-channel, a linear concentration gradient in less than a second. The proposed analytical methodology relies on actinide complexation using a pH 5.6 sodium oxalate solution and subsequent alkalimetric titration of nitric acid with sodium hydroxide. The titration process is followed with a CCD camera for fluorescence detection; the neutralization boundary can be visualized in a detection range of 500-600 nm thanks to the addition of a pH-sensitive fluorophore. The operating principle of the developed device allows the active generation of linear concentration gradients using a single cylindrical micro-channel. This feature simplifies the fabrication and ease of use of the micro-device, as it does not need a complex micro-channel network or passive mixers to generate the chemical gradient. 
Moreover, since the linear gradient is determined by the input pressure of the liquid reagents, it can be generated in well under a second, making it a more time-efficient gradient generation process than other source-sink passive diffusion devices. The resulting linear gradient generator device was therefore adapted to perform, for the first time, a volumetric titration on a chip, where the amount of reagents used is fixed by the total volume of the micro-channel, avoiding the substantial waste generation of other flow-based titration techniques. The associated analytical method is automated, and its linearity has been proven for the free acidity determination of U(VI) samples containing up to 0.5 M of actinide ion and nitric acid in a concentration range of 0.5 M to 3 M. In addition to automation, the developed analytical methodology and technique greatly improve the standard off-line oxalate complexation and alkalimetric titration method by reducing the required sample volume a thousand-fold, the nuclear waste per analysis forty-fold, and the analysis time eight-fold. The developed device represents, therefore, a great step towards an easy-to-handle nuclear-related application, which in the short term could be used to improve laboratory safety as much as to reduce the environmental impact of the radioanalytical chain.
Keywords: free acidity, lab-on-a-chip, linear concentration gradient, Taylor-Aris dispersion, volumetric titration
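The Taylor-Aris effect the device exploits is commonly summarized, for laminar flow in a circular capillary, by the classical effective axial dispersion coefficient D_eff = D_m + R²u²/(48·D_m). A short sketch with illustrative values (typical orders of magnitude, not the device's actual operating parameters):

```python
def taylor_aris_dispersion(d_m, radius, velocity):
    """Effective axial dispersion coefficient for pressure-driven laminar
    flow in a circular capillary: D_eff = D_m + (R^2 * u^2) / (48 * D_m).
    All quantities in SI units (m^2/s, m, m/s)."""
    return d_m + (radius**2 * velocity**2) / (48 * d_m)

# Illustrative values for a 200 um diameter channel (radius 100 um):
d_m = 1e-9        # molecular diffusivity of a small ion, m^2/s (typical order)
radius = 100e-6   # channel radius, m
velocity = 1e-3   # mean flow velocity, m/s

d_eff = taylor_aris_dispersion(d_m, radius, velocity)
```

The key point for the device is that the shear-enhanced term dominates molecular diffusion by orders of magnitude at these scales, which is what smears the injected plug into a usable concentration gradient so quickly.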
Procedia PDF Downloads 38754 The Underground Ecosystem of Credit Card Frauds
Authors: Abhinav Singh
Abstract:
Point of Sale (POS) malware has been stealing the limelight this year. It has been the elemental factor in some of the biggest breaches uncovered in the past couple of years. Some of them include: • Target: a retail giant reported close to 40 million credit card records being stolen • Home Depot: a home product retailer reported a breach of close to 50 million credit records • Kmart: a US retailer recently announced a breach of 800 thousand credit card details. In 2014 alone, there have been reports of over 15 major breaches of payment systems around the globe. Memory-scraping malware infecting point of sale devices has been the lethal weapon used in these attacks. This malware is capable of reading payment information from the payment device memory before it is encrypted, and it then sends the stolen details to its parent server. It records all the critical payment information, such as the card number, security code, and owner, and all of this information is delivered in raw format. This talk will cover what happens after these details have been sent to the malware authors. The entire ecosystem of credit card fraud can be broadly classified into three steps: • Purchase of raw details and dumps • Converting them to plastic cash/cards • Shop! Shop! Shop! The focus of this talk will be on the above-mentioned points and how they form an organized network of cyber-crime. The first step involves the buying and selling of the stolen details. The key points to emphasize are: • How this raw information is sold in the underground market • The buyer and seller anatomy • Building your shopping cart and preferences • The importance of reputation and vouches • Customer support and replace/refund policies. These are some of the key points that will be discussed. But the story doesn’t end here. As of now, the buyer only has the raw card information. How will this raw information be converted to plastic cash? 
Here the second part of this underground economy comes into the picture, wherein these raw details are converted into actual cards. There are well-organized services running underground that can help in converting these details into plastic cards. We will discuss this technique in detail. Finally, the last step involves shopping with the stolen cards. The cards generated from the stolen details can easily be used to swipe-and-pay for purchased goods at different retail shops. Usually these purchases are of expensive items that have good resale value. Apart from using the cards at stores, there are underground services that let buyers deliver online orders to dummy addresses. Once the package is received, it will be delivered to the original buyer. These services charge based on the value of the item being delivered. The overall underground ecosystem of credit card fraud works in a bulletproof way, and it involves people working in close groups and making heavy profits. This is a brief summary of what I plan to present at the talk. I have done extensive research and have collected a good deal of material to present as samples, including: • A list of underground forums • Credit card dumps • IRC chats among these groups • Personal chats with big card sellers • An inside view of these forum owners. The talk will be concluded by throwing light on how these breaches are tracked during investigation: how credit card breaches are tracked down, and what steps financial institutions can take to build an incident response around them.
Keywords: POS malware, credit card fraud, enterprise security, underground ecosystem
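One concrete detail behind the trade in "raw details and dumps": card numbers must satisfy the standard Luhn checksum to be usable, and fraud tooling routinely filters dumps with it. A minimal validator (the number in the test below is the well-known "4111..." test number, not a real card):

```python
def luhn_valid(card_number: str) -> bool:
    """Luhn checksum (ISO/IEC 7812-1): working from the rightmost digit,
    double every second digit, subtract 9 from any double above 9, and
    require the total to be divisible by 10."""
    digits = [int(c) for c in card_number if c.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0
```

The checksum only catches transcription errors, so passing it says nothing about whether a card is live; that is what the underground "checker" services mentioned in such forums are for.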
Procedia PDF Downloads 43953 Language Anxiety and Learner Achievement among University Undergraduates in Sri Lanka: A Case Study of University of Sri Jayewardenepura
Authors: Sujeeva Sebastian Pereira
Abstract:
Language Anxiety (LA) – a distinct psychological construct of self-perceptions and behaviors related to classroom language learning – is perceived as a significant variable highly correlated with Second Language Acquisition (SLA). However, the existing scholarship has inadequately explored the nuances of LA in relation to South Asia, especially in Sri Lankan higher education contexts. Thus, the current study, situated within the broad areas of the Psychology of SLA and Applied Linguistics, investigates the impact of competency-based LA and identity-based LA on learner achievement among undergraduates of Sri Lanka. Employing a case study approach to explore the impact of LA, 750 undergraduates of the University of Sri Jayewardenepura, Sri Lanka, thus covering 25% of the student population from all seven faculties of the university, were selected as participants using stratified proportionate sampling in terms of ethnicity, gender, and discipline. The qualitative and quantitative research instruments utilized for data collection include a questionnaire consisting of a set of structured and unstructured questions, and semi-structured interviews. Data analysis includes both descriptive and statistical measures. As per the quantitative measures of data analysis, the study employed the Pearson correlation coefficient test, the chi-square test, and Multiple Correspondence Analysis; it used LA as the dependent variable, and two types of independent variables: direct and indirect. Direct variables encompass the four main language skills – reading, writing, speaking, and listening – and test anxiety. These variables were further explored through classroom activities on grammar, vocabulary, and individual and group presentations. Indirect variables are identity, gender and cultural stereotypes, discipline, social background, income level, ethnicity, religion, and parents’ education level. 
Learner achievement was measured through the final scores the participants obtained for Compulsory English, a common first-year course unit mandatory for all undergraduates. LA was measured using the FLCAS. In order to increase the validity and reliability of the study, the data collected were triangulated through descriptive content analysis. Clearly evident through both the statistical and qualitative analyses of the results is the significant linear negative correlation between LA and learner achievement, and the significant negative correlation between LA and culturally-operated gender stereotypes, which create identity disparities in learners. The study also found that both competency-based LA and identity-based LA are experienced primarily and inescapably due to apprehensions regarding speaking in English. Most participants who reported high levels of LA were from an urban socio-economic background of lower-income families. The findings exemplify the linguistic inequality prevalent in the socio-cultural milieu of Sri Lankan society. This inequality makes learning English a dire need, yet very much an anxiety-provoking process, because of many sociolinguistic, cultural, and ideological factors related to English as a Second Language (ESL) in Sri Lanka. The findings bring out the intricate interrelatedness of the dependent variable (LA) and the independent variables stated above, emphasizing that the significant linear negative correlation between LA and learner achievement is connected to the affective, cognitive, and sociolinguistic domains of SLA. Thus, the study highlights the promise of linguistic practices such as code-switching, crossing, and accommodating hybrid identities as strategies for minimizing LA and maximizing the experience of ESL.
Keywords: language anxiety, identity-based anxiety, competence-based anxiety, TESL, Sri Lanka
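The Pearson correlation coefficient test the study relies on can be sketched directly; the anxiety and exam scores below are fabricated for illustration only and are not the study's data:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient between two samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Fabricated illustrative data, NOT the study's measurements:
anxiety_scores = [120, 95, 140, 80, 110, 130]  # FLCAS-style totals
exam_scores = [55, 70, 40, 85, 60, 48]         # course marks
r = pearson_r(anxiety_scores, exam_scores)     # strongly negative here
```

A value of r near -1, as in this toy sample, is the shape of the "significant linear negative correlation" the study reports between LA and achievement.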
Procedia PDF Downloads 19052 Innovative Grafting of Polyvinylpyrrolidone onto Polybenzimidazole Proton Exchange Membranes for Enhanced High-Temperature Fuel Cell Performance
Authors: Zeyu Zhou, Ziyu Zhao, Xiaochen Yang, Ling AI, Heng Zhai, Stuart Holmes
Abstract:
As a promising sustainable alternative to traditional fossil fuels, fuel cell technology is highly favoured due to its enhanced working efficiency and reduced emissions. In the context of high-temperature fuel cells (operating above 100 °C), the most commonly used proton exchange membrane (PEM) is the polybenzimidazole (PBI) membrane doped with phosphoric acid (PA). Grafting is a promising strategy to advance PA-doped PBI PEM technology. Existing grafting modifications of PBI PEMs mainly focus on grafting phosphate-containing or alkaline groups onto the PBI molecular chains. However, quaternary ammonium-based grafting approaches face a common challenge: the deacidifying agents needed to initiate the N-alkylation reaction, such as NaH, NaOH, KOH, or K2CO3, can lead to ionic crosslinking between the quaternary ammonium group and PBI. Polyvinylpyrrolidone (PVP) is another widely used polymer; the N-heterocycle groups within PVP endow it with a significant ability to absorb PA. Recently, PVP has attracted substantial attention in the field of fuel cells due to its reduced environmental impact and impressive fuel cell performance. However, due to the poor compatibility of PVP with PBI, little research has applied PVP in PA-doped PBI PEMs. This work introduces an innovative strategy to graft PVP onto PBI to form a network-like polymer. Due to the absence of quaternary ammonium groups, PVP does not pose issues related to crosslinking with PBI. Moreover, the nitrogen-containing functional groups on PVP provide PBI with a robust phosphoric acid retention ability. The nuclear magnetic resonance (NMR) hydrogen spectrum analysis indicates the successful completion of the grafting reaction, in which N-alkylation reactions happen on both sides of the grafting agent 1,4-bis(chloromethyl)benzene: on one side with the hydrogen atoms on the imidazole groups of PBI, and on the other side with the terminal amino group of PVP. 
The XPS results provide additional evidence from an elemental perspective: on the synthesized PBI-g-PVP surfaces, chlorine is absent (the chlorine in the grafting agent 1,4-bis(chloromethyl)benzene is substituted) while sulfur is present (the sulfur in the amino-terminated PVP appears in the PBI), which demonstrates that the grafting reaction occurred and that PVP was successfully grafted onto PBI. These modified membranes were then prepared into membrane electrode assemblies (MEAs). It was found that during fuel cell operation, all the grafted membranes showed substantial improvement in maximum current density and peak power density compared to the unmodified one. For PBI-g-PVP 30, with a grafting degree of 22.4%, the peak power density reaches 1312 mW cm⁻², marking a 59.6% enhancement compared to the pristine PBI membrane. The improvement is caused by the improved PA binding ability of the membrane after grafting. The AST results show that the grafted membranes have better long-term durability and performance than unmodified membranes, attributed to the presence of added PA binding sites, which can effectively prevent the PA leaching caused by proton migration. In conclusion, the test results indicate that grafting PVP onto PBI is a promising strategy that can effectively improve fuel cell performance.
Keywords: fuel cell, grafting modification, PA doping ability, PVP
Procedia PDF Downloads 7951 Will My Home Remain My Castle? Tenants’ Interview Topics regarding an Eco-Friendly Refurbishment Strategy in a Neighborhood in Germany
Authors: Karin Schakib-Ekbatan, Annette Roser
Abstract:
According to the Federal Government’s plans, the German building stock should be virtually climate neutral by 2050. Thus, the “EnEff.Gebäude.2050” funding initiative was launched, complementing the projects of the Energy Transition Construction research initiative. Beyond the construction and renovation of individual buildings, solutions must be found at the neighborhood level. The subject of the presented pilot project is a building ensemble from the Wilhelminian period in Munich, which is planned to be refurbished based on a socially compatible, energy-saving, innovative-technical modernization concept. The building ensemble, with about 200 apartments, is part of a building cooperative. To create an optimized network and possible synergies between researchers and projects of the funding initiative, a scientific accompanying research effort was established for cross-project analyses of findings and results in order to identify further research needs and trends. Thus, the project is characterized by an interdisciplinary approach that combines constructional, technical, and socio-scientific expertise based on a participatory understanding of research, involving the tenants at an early stage. The research focus is on gaining insights into the tenants’ comfort requirements, attitudes, and energy-related behaviour. Both qualitative and quantitative methods are applied, based on the Technology Acceptance Model (TAM). The core of the refurbishment strategy is a wall heating system intended to replace conventional radiators. A wall heating system provides comfortable and consistent radiant heat instead of convection heat, which often causes drafts and dust turbulence. Besides comfort and health, the advantage of wall heating systems is energy-saving operation. All apartments would be supplied by a uniform basic temperature control system (a perceived room temperature of around 18 °C, or 64.4 °F), which could be adapted to individual preferences via individual heating options (e.g. infrared heating). The new heating system would affect the furnishing of the walls, in that the wall surface could not be covered too much with cupboards or pictures. Measurements and simulations of the energy consumption of an installed wall heating system are currently being carried out in a show apartment in this neighborhood to investigate energy-related and economic aspects as well as thermal comfort. In March, interviews were conducted with a total of 12 people in 10 households. The interviews were analyzed with MAXQDA. The main issue raised in the interviews was the fear of reduced self-efficacy within the tenants’ own walls (not having sufficient individual control over the room temperature, or being very limited in furnishing). Other issues concerned the impact that the construction works might have on daily life, such as noise or dirt. Despite their basically positive attitude towards a climate-friendly refurbishment concept, tenants were very concerned about the further development of the project, and they expressed a great need for information events. The results of the interviews will be used for project-internal discussions on technical and psychological aspects of the refurbishment strategy, in order to design accompanying workshops with the tenants as well as to prepare a written survey involving all households of the neighborhood.
Keywords: energy efficiency, interviews, participation, refurbishment, residential buildings
Procedia PDF Downloads 126
50 A Comparison of Videography Tools and Techniques in African and International Contexts
Authors: Enoch Ocran
Abstract:
Film Pertinence maintains consistency in storytelling by sustaining the natural flow of action while evoking a particular feeling or emotion in viewers through selected motion pictures. This study presents a thorough investigation of Film Pertinence (FP) in videography, examining its influence in Africa and around the world within the dynamic realm of visual storytelling through film. The study’s primary objectives are to conduct a comparative analysis of videography tools and techniques employed in both African and international contexts, examining how they contribute to the achievement of organizational goals and the enhancement of cultural awareness. The research methodology includes a comprehensive literature review, interviews with videographers from diverse backgrounds in Africa and the international arena, and the examination of pertinent case studies. The investigation aims to elucidate the multifaceted nature of videographic practices, with particular attention to equipment choices, visual storytelling techniques, cultural sensitivity, and adaptability. This study explores the impact of cultural differences on videography choices, aiming to promote understanding between African and international filmmakers and to foster more culturally sensitive films. It also examines the role of technology in advancing videography practices, resource allocation, and the influence of globalization on local filmmaking practices. The research further contributes to film studies by analyzing videography’s impact on storytelling, guiding filmmakers toward more compelling narratives. The findings can inform film education, tailoring curricula to regional needs and opportunities. The study also encourages cross-cultural collaboration in the film industry by highlighting convergence and divergence in videography practices.
At its core, this study seeks to explore the implications of Film Pertinence as a framework for videographic practice. It scrutinizes how cultural expression, education, and storytelling transcend geographical boundaries on a global scale. By analyzing the interplay between tools, techniques, and context, the research illuminates the ways in which videographers in Africa and worldwide apply Film Pertinence principles to achieve cross-cultural communication and effectively capture the objectives of their clients. One notable focus of this paper is the set of techniques employed by videographers in West Africa to emphasize storytelling and participant engagement, showcasing the relevance of FP in highlighting cultural awareness in visual storytelling. Additionally, the study highlights the prevalence of Film Pertinence in African agricultural documentaries produced for esteemed organizations such as the Roundtable on Sustainable Palm Oil (RSPO), Proforest, the World Food Program, Fidelity Bank Ghana, Instituto BVRio, Aflatoun International, and the Solidaridad Network. These documentaries serve to promote prosperity, resilience, human rights, sustainable farming practices, community respect, and environmental preservation, underlining the vital role of film in conveying these critical messages. In summary, this research offers valuable insights into the evolving landscape of videography in different contexts, emphasizing the significance of Film Pertinence as a unifying principle in the pursuit of effective visual storytelling and cross-cultural communication.
Keywords: film pertinence, Africa, cultural awareness, videography tools
Procedia PDF Downloads 67
49 GIS-Based Flash Flood Runoff Simulation Model of the Upper Teesta River Basin Using ASTER DEM and Meteorological Data
Authors: Abhisek Chakrabarty, Subhraprakash Mandal
Abstract:
Flash floods are among the most catastrophic natural hazards in the mountainous regions of India. The recent flood on the Mandakini River in Kedarnath (14-17 June 2013) is a classic example of a flash flood that devastated Uttarakhand, killing thousands of people. The disaster was the combined effect of high-intensity rainfall, the sudden breach of Chorabari Lake, and very steep topography. Every year in the Himalayan region, flash floods occur due to intense rainfall over a short period of time, cloudbursts, glacial lake outbursts, and collapses of artificial check dams, all of which cause high river flows. In the Sikkim-Darjeeling Himalaya, one of the probable flash flood occurrence zones is the Teesta watershed. The Teesta River is a right tributary of the Brahmaputra, draining a mountain area of approximately 8,600 sq. km. It originates in the Pauhunri massif (7,127 m). The total length of the mountain section of the river amounts to 182 km. The Teesta is characterized by a complex hydrological regime: the river is fed not only by precipitation but also by melting glaciers and snow as well as groundwater. The present study describes an attempt to model surface runoff in the upper Teesta basin, which is directly related to catastrophic flood events, by creating a system based on GIS technology. The main objective was to construct a direct unit hydrograph for an excess rainfall event by estimating the streamflow response at the outlet of the watershed. Specifically, the methodology was based on the creation of a spatial database in a GIS environment and on data editing. Moreover, rainfall time-series data were collected from the Indian Meteorological Department and processed in order to calculate flow time and runoff volume. Apart from the meteorological data, background data such as topography, drainage network, land cover, and geological data were also collected.
The watershed was clipped from the entire area, streamlines were generated for the Teesta watershed, and cross-sectional profiles were plotted across the river at various locations from ASTER DEM data using ERDAS IMAGINE 9.0 and ArcGIS 10.0. Different hydraulic models for detecting flash flood probability were analyzed using the HEC-RAS, FLO-2D, and HEC-HMS software, which were of great importance in achieving the final result. With an input rainfall intensity above 400 mm per day for three days, the flood runoff simulation models show outbursts of lakes and check dams, individually or in combination with runoff, causing severe damage to downstream settlements. The model output shows that 313 sq. km were found to be most vulnerable to flash floods, including Melli, Jorethang, Chungthang, and Lachung, and 655 sq. km moderately vulnerable, including Rangpo, Yathang, Dambung, Bardang, Singtam, Teesta Bazar, and the Thangu Valley. The model was validated by inserting the rainfall data of a flood event that took place in August 1968; 78% of the area actually flooded was reflected in the output of the model. Lastly, preventive and curative measures were suggested to reduce losses from probable flash flood events.
Keywords: flash flood, GIS, runoff, simulation model, Teesta river basin
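The core rainfall-to-runoff step that abstracts like this one describe can be sketched compactly. The following is a minimal illustration using the SCS Curve Number method, one standard way of estimating direct runoff depth from event rainfall; the curve number, the storm depth, and the catchment area used here are hypothetical round numbers for demonstration, not values from the Teesta study itself.

```python
# Illustrative sketch: direct runoff from an excess-rainfall event via the
# SCS Curve Number (SCS-CN) method. All parameter values are assumptions.

def scs_runoff_depth(rainfall_mm: float, curve_number: float) -> float:
    """Direct runoff depth (mm) produced by a storm of the given depth."""
    s = 25400.0 / curve_number - 254.0  # potential maximum retention (mm)
    ia = 0.2 * s                        # initial abstraction (mm)
    if rainfall_mm <= ia:
        return 0.0                      # all rainfall absorbed, no runoff
    return (rainfall_mm - ia) ** 2 / (rainfall_mm - ia + s)

# A 400 mm/day storm (the threshold intensity cited in the abstract) over a
# steep, thinly vegetated catchment with an assumed curve number of 85:
q_mm = scs_runoff_depth(400.0, 85.0)

# Runoff volume if that depth fell uniformly over a 313 sq. km area
# (the "most vulnerable" zone size reported above):
volume_m3 = q_mm / 1000.0 * 313e6
```

In a full GIS workflow, the curve number would be derived cell by cell from the land-cover and soil layers, and the resulting excess rainfall would then be convolved with the basin's unit hydrograph to obtain the outlet discharge time series.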
Procedia PDF Downloads 317
48 Voices of Dissent: Case Study of a Digital Archive of Testimonies of Political Oppression
Authors: Andrea Scapolo, Zaya Rustamova, Arturo Matute Castro
Abstract:
The “Voices in Dissent” initiative aims at collecting and making available, in digital format, testimonies, letters, and other narratives produced by victims of political oppression from different geographical spaces across the Atlantic. By recovering silenced voices behind the official narratives, this open-access online database will provide indispensable tools for rewriting the history of authoritarian regimes from the margins, as memory debates continue to provoke controversy in academic and popular transnational circles. In providing an extensive database of non-hegemonic discourses in a variety of political and social contexts, the project will complement existing European and Latin American studies and invite further interdisciplinary and transnational research. This digital resource will be available to academic communities and the general audience and will be organized geographically and chronologically. “Voices in Dissent” will offer a first comprehensive study of these personal accounts of persecution and repression against determined historical backgrounds and of their impact on collective memory formation in contemporary societies. The digitization of these texts will make it possible to run metadata analyses and to adopt comparatist approaches for a broad range of research endeavors. Most of the testimonies included in the archive are testimonies of trauma: the trauma of exile, imprisonment, torture, humiliation, and censorship. The research on trauma has now reached critical mass and offers a broad spectrum of critical perspectives. By bringing together testimonies from different geographical and historical contexts, the project will provide readers and scholars with an extraordinary opportunity to investigate how culture shapes individual and collective memories and provides, or denies, resources to make sense of and cope with trauma.
For scholars dealing with the epistemological and rhetorical analysis of testimonies, an online open-access archive will prove particularly beneficial for testing theories on truth status and the formation of belief, as well as for studying the articulation of discourse. An important aspect of this project is also its pedagogical application, since it will contribute to the creation of Open Educational Resources (OER) to support students and educators worldwide. Through collaborations with our Library System, the archive will form part of the Digital Commons database. The texts collected in this online archive will be made available in the original languages as well as in English translation. They will be accompanied by a critical apparatus that contextualizes them historically by providing relevant background information and bibliographical references. All these materials can serve as a springboard for a broad variety of educational projects and classroom activities. They can also be used to design specific content courses or modules. In conclusion, the desirable outcomes of the “Voices in Dissent” project are: 1. the collection and digitization of testimonies of political dissent; 2. the building of a network of scholars, educators, and learners involved in the design, development, and sustainability of the digital archive; 3. the integration of the content of the archive into both research and teaching endeavors, such as the publication of scholarly articles, the design of new upper-level courses, and the integration of the materials into existing courses.
Keywords: digital archive, dissent, open educational resources, testimonies, transatlantic studies
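The geographical and chronological organization described above implies a per-testimony metadata record. A minimal sketch of what such a record might look like follows; the field names and the sample entry are hypothetical illustrations, not the project’s actual schema or holdings.

```python
# Hypothetical metadata record for one archived testimony; field names
# are assumptions chosen to reflect the organization the abstract describes.
from dataclasses import dataclass, field

@dataclass
class Testimony:
    title: str
    author: str
    country: str                # supports geographical organization
    year: int                   # supports chronological organization
    original_language: str      # texts are kept in the original language
    has_english_translation: bool = True
    keywords: list = field(default_factory=list)

archive = [
    Testimony("Letter from prison", "Anonymous", "Argentina", 1978, "es",
              keywords=["censorship", "imprisonment"]),
    Testimony("Exile diary", "Anonymous", "Spain", 1940, "es",
              keywords=["exile"]),
]

# Chronological, then geographical ordering, as the abstract describes:
ordered = sorted(archive, key=lambda t: (t.year, t.country))
```

Structured records of this kind are what make the metadata analyses and comparatist queries mentioned above possible once the corpus is digitized.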
Procedia PDF Downloads 106