Search results for: processes of transformation
991 Identifying the Challenges of Subcontractors Management in Building Area Projects and Providing Solutions (Supply Chain Management Approach)
Authors: Hamideh Sadat Zekri, Seyed Mojtaba Hosseinalipour, Mohammadreza Hafezi
Abstract:
Nowadays, an organization can rarely carry out all tasks on its own, owing to the increasing complexity and scale of projects, growing uncertainty of activities, rapid advances in technology, the many factors influencing decision-making and project implementation, and the competitive business environment. Firms therefore outsource tasks to subcontractors. Nevertheless, large Iranian contracting companies incur extra cost and time owing to conflicts between the activities of suppliers and subcontractors. The lack of coordination in planning and execution, poor coordination among suppliers, subcontractors, and the main contractor during construction activities, and the absence of proper management of this situation increase the contradictions, claims, and legal disputes in a project and consequently impose enormous expenses on those companies. Given the success of supply chain management in other industries, its importance is increasingly appreciated in construction. The ultimate aim of supply chain management is effective delivery of the best value to customers, which is achievable by encouraging the members to interact and collaborate. The present research sought to compile a set of relevant challenges in managing subcontractors by identifying the main contractors and subcontractors, their roles in project execution, and supply chain management in the construction industry. Some of those challenges were then selected according to the views of industry professionals and academic experts. In the next step, a questionnaire based on the analytic hierarchy process (AHP) was prepared and completed, and the challenges were prioritized.
For subcontractors, the findings demonstrate that difficulties in timely payment, alterations to approved drawings, failure of the subcontractor to rectify work after completion, the lack of a predetermined, formal process for qualifying subcontractors, neglect of supply chain processes in procuring material from producers, and delays in the delivery of work by subcontractors are the most significant problems. Finally, solutions for confronting, eliminating, or reducing these problems are presented, based on previous studies and a survey of specialists.
Keywords: main contractors, subcontractors, supply chain management, construction supply chain, analytic hierarchy process, solution
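The AHP prioritization step described in the abstract can be sketched in a few lines. This is only a minimal illustration: the 3x3 pairwise comparison matrix and the implied challenge labels are hypothetical placeholders, not the judgments collected in the study's expert questionnaire.

```python
def ahp_priorities(A, iters=100):
    """Approximate the principal eigenvector of a pairwise comparison
    matrix by power iteration; the normalized vector gives the AHP
    priority weights."""
    n = len(A)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]
    return w

def consistency_ratio(A, w):
    """Saaty's consistency ratio; CR < 0.1 is conventionally acceptable."""
    n = len(A)
    # lambda_max estimated by averaging (A w)_i / w_i over the rows
    lam = sum(sum(A[i][j] * w[j] for j in range(n)) / w[i] for i in range(n)) / n
    ci = (lam - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]  # Saaty's random index
    return ci / ri

# Hypothetical comparison of three subcontractor-management challenges,
# e.g. timely payment vs. drawing alterations vs. delivery delays
A = [[1.0, 3.0, 5.0],
     [1/3, 1.0, 2.0],
     [1/5, 1/2, 1.0]]
weights = ahp_priorities(A)  # largest weight = highest-priority challenge
```

Under these placeholder judgments the first challenge receives the largest weight; in the study, the matrix entries came from the completed questionnaires.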
Procedia PDF Downloads 63

990 Developing a Maturity Model of Digital Twin Application for Infrastructure Asset Management
Authors: Qingqing Feng, S. Thomas Ng, Frank J. Xu, Jiduo Xing
Abstract:
Faced with unprecedented challenges, including aging assets, limited maintenance budgets, overtaxed and inefficient usage, and public demand for better service quality, today's infrastructure systems have become a main focus for many metropolises pursuing sustainable urban development and improved resilience. Digital twin, one of the most innovative enabling technologies today, may open up new ways of tackling various infrastructure asset management (IAM) problems. A digital twin application for IAM, as its name indicates, is an evolving digital model of the intended infrastructure, with functions including real-time monitoring; what-if event simulation; and optimization of scheduling, maintenance, and management based on technologies such as IoT, big data, and AI. There are already numerous global digital twin initiatives, such as 'Virtual Singapore' and 'Digital Built Britain'. With digital twin technology progressively permeating the IAM field, it is necessary to consider the maturity of its application and how institutional and industrial digital twin processes will evolve in future. To address the lack of such a benchmark, a draft maturity model is developed for digital twin application in the IAM field. First, an overview of current smart city maturity models is given, on the basis of which the draft Maturity Model of Digital Twin Application for Infrastructure Asset Management (MM-DTIAM) is developed for multiple stakeholders to evaluate and derive informed decisions. The development followed a systematic approach with four major procedures: scoping, designing, populating, and testing. Through in-depth literature review, interviews, and focus group meetings, the key domain areas were populated, defined, and iteratively tuned. Finally, a case study of several digital twin projects was conducted for self-verification.
The findings reveal that: (i) the developed maturity model outlines five maturity levels leading to an optimised digital twin application, covering strategic intent, data, technology, governance, and stakeholder engagement; (ii) based on the case study, levels 1 to 3 are already partially implemented in some initiatives, while level 4 is on the way; and (iii) further practice is still needed to refine the draft so that the key domain areas are mutually exclusive and collectively exhaustive.
Keywords: digital twin, infrastructure asset management, maturity model, smart city
Procedia PDF Downloads 157

989 Comparisons between Student Learning Achievements and Their Problem Solving Skills on Stoichiometry Issue with the Think-Pair-Share Model and STEM Education Method
Authors: P. Thachitasing, N. Jansawang, W. Rakrai, T. Santiboon
Abstract:
The aim of this study is to compare two instructional design models, the Think-Pair-Share process and conventional learning (the 5E Inquiry Model), for enhancing students' learning achievements and problem solving skills on the stoichiometry issue. The sample consisted of 80 students in two 11th grade classes at Chaturaphak Phiman Ratchadaphisek School, selected with the cluster random sampling technique to capture different learning outcomes in chemistry classes. A 40-student experimental group was taught with the Think-Pair-Share process and a 40-student control group with the conventional (5E Inquiry Model) method. Five instruments were used: the lesson plans for the Think-Pair-Share and STEM Education methods, and pretest and posttest assessments of students' learning achievements and problem solving skills; students' outcomes under the Think-Pair-Share Model (TPSM) and the STEM Education method were then compared. Statistically significant differences between posttest and pretest scores were found for the whole chemistry classes using the paired t-test and F-test. Associations between students' learning outcomes in chemistry under the two methods and their learning achievements and problem solving skills were also found. The comparison shows that students in the different groups perceive their learning achievements and problem solving skills differently, which can guide practical improvements in chemistry classrooms and assist teachers in implementing effective instructional approaches.
Mean learning achievement scores of the control group taught with the Think-Pair-Share Model (TPSM) were significantly lower than those of the experimental group taught with the STEM education method. The E1/E2 process efficiencies were 82.56/80.44 and 83.02/81.65, both above the 80/80 standard criterion, with content validity confirmed by the IOC. The predictive efficiency (R²) values indicate that 61% and 67% of the variance in posttest learning achievement in the chemistry classes, and 63% and 67% of the variance in students' problem solving skills relative to their learning achievements on the stoichiometry issue, were attributable to the different learning outcomes of the TPSM and STEM education instructional methods.
Keywords: comparisons, students' learning achievements, think-pair-share model (TPSM), STEM education, problem solving skills, chemistry classes, stoichiometry issue
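The E1/E2 efficiency criterion used above can be computed in a few lines: E1 is the class mean of the formative (during-lesson) scores and E2 the class mean of the posttest scores, each expressed as a percentage of the maximum obtainable score. The scores below are made-up numbers for illustration, not data from the study.

```python
def e1_e2(formative_scores, formative_max, posttest_scores, posttest_max):
    """Process (E1) and product (E2) efficiency of an instructional
    package: class mean as a percentage of the maximum score."""
    e1 = 100.0 * (sum(formative_scores) / len(formative_scores)) / formative_max
    e2 = 100.0 * (sum(posttest_scores) / len(posttest_scores)) / posttest_max
    return e1, e2

# Hypothetical scores for five students, out of 50 (formative) and 40 (posttest)
e1, e2 = e1_e2([42, 40, 44, 41, 39], 50, [33, 31, 34, 32, 30], 40)
meets_80_80 = e1 >= 80 and e2 >= 80  # the 80/80 standard criterion
```

With these placeholder scores the package would just meet the 80/80 criterion, as the study's packages did with 82.56/80.44 and 83.02/81.65.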
Procedia PDF Downloads 249

988 Layouting Phase II of New Priok Using Adaptive Port Planning Frameworks
Authors: Mustarakh Gelfi, Tiedo Vellinga, Poonam Taneja, Delon Hamonangan
Abstract:
The development of New Priok/Kalibaru, an expansion of the old port, is being carried out by IPC (Indonesia Port Corporation) together with its subsidiary, the port developer PT Pengembangan Pelabuhan Indonesia. Of the two phases proposed in the master plan, Phase I has taken shape, and Container Terminal 1 began operating in 2016. The development was originally planned as Phase I (2013-2018), consisting of three container terminals and two product terminals, and Phase II (2018-2023), consisting of four container terminals. In practice, the master plan has had to change because of major uncertainties that escaped prediction. This study focuses on the design scenario of Phase II (2035 onwards) to deal with future uncertainty. The outcome is a robust design for Phase II of the Kalibaru Terminal that takes future changes into account. Flexibility has to be a major goal in a large infrastructure project like New Priok in order to manage future uncertainty, and the phasing of the project needs to be adapted and reviewed frequently before it becomes irrelevant to future challenges. One framework developed by experts in port planning is Adaptive Port Planning (APP) with scenario-based planning. The idea behind the APP framework is that adaptation might be needed at any moment in answer to a challenge. It is a continuous procedure that aims to increase the lifespan of waterborne transport infrastructure by increasing flexibility in the planning, contracting, and design phases. Other methods used in this study are brainstorming with the port authority, desk study, interviews, and site visits to the project. The result of the study is expected to give the port authority of Tanjung Priok insight into the future outlook and how it will affect the design of the port, together with guidelines for designing in an uncertain environment.
Solutions for flexibility can be divided into: (1) physical solutions, covering the hard infrastructure in the project, commonly through modularity, standardization, multi-functionality, shorter or longer design lifetimes, reusability, etc.; and (2) non-physical solutions, usually related to planning processes, decision making, and project management. In conclusion, the APP framework appears robust enough to deal with the problem of designing Phase II of the New Priok project over such a long period.
Keywords: Indonesia port, port design, port planning, scenario-based planning
Procedia PDF Downloads 240

987 Assessment of Sediment Control Characteristics of Notches in Different Sediment Transport Regimes
Authors: Chih Ming Tseng
Abstract:
Landslides during typhoons generate substantial amounts of sediment, and subsequent rainfall can trigger various sediment transport regimes, such as debris flows, high-concentration sediment-laden flows, and typical river sediment transport. This study investigates the sediment control characteristics of natural notches within different sediment transport regimes. High-resolution digital terrain models were used to establish the relationship between slope gradient and catchment area, which was then used to delineate distinct sediment transport regimes and analyze the sediment control characteristics of notches within them. The results indicate that the catchment areas of Aiyuzi Creek, Hossa Creek, and Chushui Creek in the study region can be clearly categorized into three sediment transport regimes based on their slope-area relationship curves: frequent-collapse headwater areas, debris flow zones, and high-concentration sediment-laden flow zones. The threshold for transitioning from the collapse zone to the debris flow zone in the Aiyuzi Creek catchment is lower than in Hossa Creek and Chushui Creek, suggesting that the active collapse processes in the upper reaches of Aiyuzi Creek continuously supply a significant sediment source, making it more susceptible to subsequent debris flow events. Moreover, the analysis of sediment trapping efficiency at notches within different sediment transport regimes reveals that as the notch constriction ratio increases, the sediment accumulation per unit area also increases. The accumulation thickness per unit area in high-concentration sediment-laden flow zones is greater than in debris flow zones, indicating differences in sediment deposition characteristics among the regimes. Sediment control rates at notches are generally positively correlated with the notch constriction ratio.
During Typhoon Morakot in 2009, the substantial sediment supply from slope failures in the upstream catchment led to an oversupplied sediment transport condition in the river channel. Consequently, sediment control rates were more pronounced during the medium and small sediment transport events of 2010-2015. However, there were no significant differences in sediment control rates at notches among the different sediment transport regimes. Overall, this research provides valuable insight into the sediment control characteristics of notches under various sediment transport conditions, which can aid the development of improved sediment management strategies in watersheds.
Keywords: landslide, debris flow, notch, sediment control, DTM, slope-area relation
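The regime delineation from the slope-area relation can be sketched as a simple threshold rule on a channel point's local slope and contributing area. The threshold values below are illustrative placeholders only; the actual boundaries in the study were derived from the high-resolution DTMs of each catchment.

```python
def classify_regime(slope, area_km2,
                    collapse_area=0.1, debris_area=1.0, min_collapse_slope=0.2):
    """Classify a channel point into one of the three sediment transport
    regimes read off the slope-area curves: small, steep headwater
    catchments collapse frequently; intermediate areas produce debris
    flows; larger areas carry high-concentration sediment-laden flows.
    Thresholds are placeholders, not the study's fitted values."""
    if area_km2 < collapse_area and slope > min_collapse_slope:
        return "frequent-collapse headwater"
    if area_km2 < debris_area:
        return "debris flow"
    return "high-concentration sediment-laden flow"
```

Lowering collapse_area would mimic the Aiyuzi Creek case, where the transition from the collapse zone to the debris flow zone occurs at a lower threshold than in Hossa and Chushui Creeks.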
Procedia PDF Downloads 28

986 Travel Behaviour and Perceptions in Trips with a Ferry Connection
Authors: Trude Tørset, María Díez Gutiérrez
Abstract:
The west coast of Norway features numerous islands and fjords, and ferry services connect the roads where these features make construction challenging. Currently, scientific effort is devoted to assessing potential ferry replacement projects along the European route E39. The inconvenience of ferry dependency is imprecisely represented in transport models, so transport analyses of ferry replacement projects read as guesstimates rather than reliable input to the decision-making processes of such costly projects. Trips including ferry connections involve more inconveniences than just travel time and cost. The goal of this paper is to understand and explain the extra inconveniences associated with dependency on the ferry. The first scientific approach is to identify the characteristics of ferry travelers and their trips, and whether the ferry represents an obstacle for specific trip types. To this end, surveys were conducted in 2011 on eight E39 ferries and in 2013 on 18 ferries connecting different road categories. More than 20,000 passengers answered with their trip and socioeconomic characteristics. The travel patterns on the different ferry connections were compared, and the analysis showed that trip features differed with the location of the ferry connection, but independently of road category. Additionally, the patterns were compared with the national travel survey to detect differences in travel patterns due to the use of ferry connections. The results showed that the share of commuting trips within the same travel time was lower if the ferry was part of the trip. The second scientific approach is to learn how different travelers perceive the potential benefits of a ferry replacement project. The 2011 survey included questions about the relevance of nine different benefits such a project might bring.
Travelers identified better access to public services and the job market as the most valuable benefits, followed by reduced trip planning. In 2016, a follow-up survey on some of the ferry connections was carried out to investigate variations in travelers' perceptions, since growing interest in ferry replacement projects might make travelers more aware of the potential benefits these would bring to their daily lives. This paper describes the travel behaviour of travelers using a ferry connection as part of their trips, as well as the inconveniences associated with these trips. The findings can provide valuable input to the further development of transport models, concept evaluations, and cost-benefit analysis methods.
Keywords: ferry connections, ferry trip, inconvenience costs, travel behaviour
Procedia PDF Downloads 227

985 Care at the Intersection of Biomedicine and Traditional Chinese Medicine: Narratives of Integration, Negotiation, and Provision
Authors: Jessica Ding
Abstract:
The field of global health is currently advocating a resurgence in the use of traditional medicines to improve people-centered care. Healthcare policies are rapidly changing in response; in China, the increasing presence of TCM in the same spaces as biomedicine has led to a new term: integrative medicine. However, the existence of TCM as a part of integrative medicine creates a pressing, paradoxical tension in which TCM is both a marginalized system within 'modern' hospitals and a modality worth integrating. Additionally, the impact of such shifts has not been fully explored: the World Health Organization, for one, focuses on only three angles (practices, products, and practitioners) with regard to traditional medicines. Through ten weeks of fieldwork conducted at an urban hospital in Shanghai, China, this research expands the perspective of existing strategies by looking at integrative care through a fourth lens: patients and families. Understanding self-care, health-seeking behavior, and non-professional caregiving structures is critical to grasping the significance of traditional medicine for people-centered care. Indeed, these individual and informal healthcare expectations align with the very spaces and needs that traditional medicine filled before such ideas of integration. The research looks at this issue via three processes that operationalize experiences of care: (1) how aspects of TCM are valued within integrative medicine, (2) how negotiations of care occur between patients and doctors, and (3) how 'good quality' caregiving presents in integrative clinical spaces. This research hopes to lend insight into how culturally embedded traditions, bureaucratic and institutional rationalities, and social patterns of health-seeking behavior shape illness experiences at the intersection of two medical modalities.
This analysis of patients' clinical and illness experiences serves to enrich narratives of integrative medicine's ability to provide patient-centered care and to determine how international policies are realized at the individual level. This anthropological study of the integration of Traditional Chinese Medicine in local contexts can reveal the extent to which global strategies, as promoted by the WHO and the Chinese government, actually align with the expectations and perspectives of patients receiving care. Ultimately, this ethnographic analysis of a local Chinese context hopes to inform global policies on the future use and integration of traditional medicines.
Keywords: emergent systems, global health, integrative medicine, traditional Chinese medicine, TCM
Procedia PDF Downloads 141

984 A Critical Discourse Analysis of ‘Youth Radicalisation’: A Case of the Daily Nation Kenya Online Newspaper
Authors: Miraji H. Mohamed
Abstract:
The purpose of this study is to critique 'radicalisation', and more particularly 'youth radicalisation', by exploring its usage in online newspapers. 'Radicalisation' and 'extremism' have become the most common terms in terrorism studies since the 9/11 attacks. Regardless of geographic location, when the word terrorism is used, the terms 'radicalisation' and 'extremism' always follow, in an attempt to trace the perpetrators' journey towards violence. These terms have come to represent a discourse of dominantly pejorative traits, often used to describe spaces, groups, and processes identified as problematic. Even though they are ambiguously defined, they feature widely in government documents, political statements, news articles, academic research, social media platforms, religious gatherings, and public discussions. Notably, 'radicalisation' and 'extremism' have been closely conflated with the term youth to form 'youth radicalisation', referring to a discourse of 'youth at risk'. The three terms largely continue to be used unquestioningly and interchangeably, which is why they are placed in single quotation marks here, to deliberately question their conventional usage. This comes timely in the Kenyan context, where there has been a proliferation of academic and expert research on 'youth radicalisation' (used as a neutral label) without considering the political, cultural, and socio-historical contexts that inform this label. This study seeks to draw out these nuances by employing a genealogical approach that historicises and deconstructs 'youth radicalisation', and by applying the Discourse-Historical Approach (DHA) of Critical Discourse Analysis to the Kenyan online newspaper The Daily Nation between 2015 and 2018. By applying the concept of representation to analyse written texts, the study reveals that the use of 'youth radicalisation' as a discursive strategy disproportionately affects young people, especially those from cultural, ethnic, or religious minority groups.
Also, the ambiguous use of 'radicalisation' and 'youth radicalisation' by the media reinforces the discourse of 'youth at risk', which has become the major framework underpinning Countering Violent Extremism (CVE) interventions. Similarly, the findings indicate that the uncritical use of 'youth radicalisation' has served political interests and has become an instrument of policing young people, thus contributing to their cultural shaping. From this, it is evident that the media could thwart rather than assist CVE efforts. By exposing the political nature of the three terms through evidence-based research, this study offers recommendations on how critical, reflective reporting by the media could help make CVE more nuanced.
Keywords: discourse, extremism, radicalisation, terrorism, youth
Procedia PDF Downloads 129

983 Research Trends in Fine Arts Education Dissertations in Turkey
Authors: Suzan Duygu Bedir Erişti
Abstract:
The present study makes a general evaluation of the dissertations conducted in the last decade in the field of art education in the Departments of Fine Arts Education of the Institutes of Education Sciences in Turkey. The study covered most of the universities in Turkey with an Institute of Education Sciences. A total of one hundred dissertations conducted in the Departments of Fine Arts Education at several universities (Anadolu, Gazi, Ankara, Marmara, Dokuz Eylul, Ondokuz Mayıs, Selcuk, and Necmettin Erbakan) were identified via the open access systems of the universities and via the Thesis Search System of the Higher Education Council. Most dissertations were reached via the latter system, and otherwise via the former. Consequently, most of the dissertations without access restrictions and with appropriate content were obtained. The dissertations were examined through document analysis in terms of their research topics, research paradigms, contents, purposes, methodologies, data collection tools, and analysis techniques. The dissertations conducted in Institutes of Education Sciences were found to have improved in quality, especially in recent years. A great majority were carried out at Gazi University and Marmara University, with similar numbers conducted at the other universities. Taken together, the dissertations differ widely in subject area. Most adopted the quantitative paradigm, while in recent years more importance has been given to methods based on the qualitative paradigm.
In addition, most of the dissertations conducted with the quantitative paradigm were structured on the general survey model and the experimental research model. The statistical techniques used varied by university: in some universities advanced statistical techniques were applied, while in others the use of statistical techniques was moderate. Most studies produced results generalizable to postgraduate and elementary school education. The studies were generally set in face-to-face teaching processes, while some were designed in environments whose results do not generalize to the face-to-face education system. The present study found that the dissertations conducted in the Departments of Fine Arts Education at the Institutes of Education Sciences in Turkey did not involve application-based approaches, including art-based or visual research, in either research topic or methodology.
Keywords: fine arts education, dissertations, evaluation of dissertations, research trends in fine arts education
Procedia PDF Downloads 197

982 Selective Separation of Amino Acids by Reactive Extraction with Di-(2-Ethylhexyl) Phosphoric Acid
Authors: Alexandra C. Blaga, Dan Caşcaval, Alexandra Tucaliuc, Madalina Poştaru, Anca I. Galaction
Abstract:
Amino acids are valuable chemical products used in human foods, in animal feed additives, and in the pharmaceutical field. Recently, there has been a noticeable rise in amino acid utilization throughout the world, including their use as raw materials in the production of various industrial chemicals: oil gelating agents (amino acid-based surfactants) to recover effluent oil in seas and rivers, and poly(amino acids), which are attracting attention for the manufacture of biodegradable plastics. Amino acids can be obtained by biosynthesis or from protein hydrolysis, but their separation from the resulting mixtures can be challenging. In recent decades there has been continuous interest in developing processes that improve the selectivity and yield of downstream processing steps. Liquid-liquid extraction of amino acids (which are dissociated at any pH of the aqueous solution) is possible only by reactive extraction, mainly with extractants such as organophosphoric acid derivatives, high molecular weight amines, and crown ethers. The purpose of this study was to analyse the separation of nine amino acids of acidic character (l-aspartic acid, l-glutamic acid), basic character (l-histidine, l-lysine, l-arginine), and neutral character (l-glycine, l-tryptophan, l-cysteine, l-alanine) by reactive extraction with di-(2-ethylhexyl)phosphoric acid (D2EHPA) dissolved in butyl acetate. The results showed that the separation yield is controlled by the pH of the aqueous phase: reactive extraction of amino acids with D2EHPA is possible only if the amino acids exist in aqueous solution in their cationic forms (pH of the aqueous phase below the isoelectric point). The studies on individual amino acids indicated the possibility of selectively separating different groups of amino acids with similar acidic properties as a function of the aqueous solution pH: the maximum yields are reached in the pH range 2-3, then decrease strongly as the pH increases.
Thus, for acidic and neutral amino acids the extraction becomes impossible at the isoelectric point (pHi), and for basic amino acids at a pH lower than pHi, as a result of the dissociation of the carboxylic group. The results for separation from the mixture of the nine amino acids at different pH values show that all amino acids are extracted, with different yields, over the pH range 1.5-3. Above this interval, the extract contains only the amino acids of neutral and basic character. At pH 5-6 only the neutral amino acids are extracted, and at pH > 6 extraction becomes impossible. Using this technique, total separation of the following amino acid groups was achieved: neutral amino acids at pH 5-5.5, basic amino acids and l-cysteine at pH 4-4.5, l-histidine at pH 3-3.5, and acidic amino acids at pH 2-2.5.
Keywords: amino acids, di-(2-ethylhexyl) phosphoric acid, reactive extraction, selective extraction
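The group-wise separation scheme reported above can be written down as a simple pH lookup. The pH windows and group memberships are the ones stated in the abstract (note that l-cysteine, though neutral, is reported to separate with the basic group); the function itself is only an illustrative sketch of how the operating pH selects which group leaves the aqueous phase.

```python
# pH windows for total group separation, as reported in the abstract
SEPARATION_WINDOWS = [
    ((5.0, 5.5), frozenset({"l-glycine", "l-tryptophan", "l-alanine"})),  # neutral
    ((4.0, 4.5), frozenset({"l-lysine", "l-arginine", "l-cysteine"})),    # basic + l-cysteine
    ((3.0, 3.5), frozenset({"l-histidine"})),
    ((2.0, 2.5), frozenset({"l-aspartic acid", "l-glutamic acid"})),      # acidic
]

def extracted_group(ph):
    """Return the amino acid group extracted by D2EHPA at the given
    aqueous-phase pH, or None if the pH falls outside every window
    (e.g. pH > 6, where extraction becomes impossible)."""
    for (lo, hi), group in SEPARATION_WINDOWS:
        if lo <= ph <= hi:
            return group
    return None
```

For example, operating at pH 3-3.5 recovers only l-histidine, in line with the sequence reported in the study.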
Procedia PDF Downloads 431

981 Preferences of Electric Buses in Public Transport; Conclusions from Real Life Testing in Eight Swedish Municipalities
Authors: Sven Borén, Lisiana Nurhadi, Henrik Ny
Abstract:
In theory, electric buses can be more sustainable and cheaper than fossil-fuelled buses in city traffic. The authors have not found other studies based on actual urban public transport in a Swedish winter climate, and the available noise measurements for buses on the European market were found to be old. The aim of this follow-up study was therefore to test and, if possible, verify in a real-life environment how energy efficient and silent electric buses are, and to conclude whether electric buses are preferable for public transport. The Ebusco 2.0 electric bus, fitted with a 311 kWh battery pack, was used, and the tests were carried out from November 2014 to April 2015 in eight municipalities in the south of Sweden. Six tests took place in urban traffic and two in a more rural setting. The energy use for propulsion was measured by logging the internal system of the bus and by an external charging meter. The average energy use turned out to be 8% less (0.96 kWh/km) than assumed in the earlier theoretical study. This rate allows a range of 320 km in urban public traffic. The interior of the bus was kept warm by a diesel heater (biodiesel would probably be used in future operational traffic), which used 0.67 kWh/km in January. This verified that electric buses can be up to 25% cheaper when used in public transport in cities over a period of about eight years. The noise was found to be lower than for buses with combustion engines in urban traffic, primarily during acceleration. According to our surveys, most passengers and drivers appreciated the silent and comfortable ride and preferred electric buses to combustion engine buses. Bus operators and passenger transport executives were also positive about starting to use electric buses in public transport. The operators did, however, point out that procurement processes need to account for the risks of this new technology, along with personnel education.
The study revealed that it is possible to establish a charging infrastructure for almost all of the studied bus lines. However, designing a charging infrastructure for each municipality requires further investigation, including electric grid capacity analysis, smart location of charging points, and tailored schedules that allow fast charging. In conclusion, electric buses proved to be a preferable alternative for all stakeholders involved in public bus transport in the studied municipalities. However, for electric buses to be a prominent support for sustainable development, they need to be charged either by stand-alone units or via an expansion of the electric grid, and the electricity should come from new renewable sources.
Keywords: sustainability, electric, bus, noise, greencharge
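The reported range follows directly from the pack size and the measured consumption; the sketch below reproduces that arithmetic. The usable-fraction parameter is an assumption added here to show why a practical range quote (320 km) sits slightly below the raw pack/consumption quotient.

```python
def driving_range_km(battery_kwh, propulsion_kwh_per_km, usable_fraction=1.0):
    """Estimated range from battery capacity and measured propulsion
    energy use; usable_fraction allows for a reserved state-of-charge
    margin (an assumption, not a figure from the trials)."""
    return battery_kwh * usable_fraction / propulsion_kwh_per_km

# Figures from the trials: 311 kWh pack, 0.96 kWh/km average propulsion use
raw_range = driving_range_km(311, 0.96)  # about 324 km; ~320 km was reported

# In January the diesel heater added 0.67 kWh/km on top of propulsion,
# so the total onboard energy use in winter was roughly:
winter_kwh_per_km = 0.96 + 0.67
```

The winter figure shows why the heater matters for operational planning: it adds about 70% to the per-kilometre energy budget, even though it draws on a separate (bio)diesel tank rather than the traction battery.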
Procedia PDF Downloads 342
980 The Use of Artificial Intelligence in Digital Forensics and Incident Response in a Constrained Environment
Authors: Dipo Dunsin, Mohamed C. Ghanem, Karim Ouazzane
Abstract:
Digital investigators often have a hard time spotting evidence in digital information, and it has become difficult to determine which source of proof relates to a specific investigation. A growing concern is that the various processes, technologies, and specific procedures used in digital investigations are not keeping up with criminal developments, and criminals are taking advantage of these weaknesses to commit further crimes. In digital forensics investigations, artificial intelligence (AI) is invaluable in identifying crime: AI-based algorithms have proved highly effective in detecting risks, preventing criminal activity, and forecasting illegal activity. The goal of digital forensics and digital investigation is to provide objective data and conduct an assessment that will assist in developing a plausible theory that can be presented as evidence in court; researchers and other authorities have used such data as evidence to convict offenders. This research paper aims at developing a multi-agent framework for digital investigations using specific intelligent software agents (ISAs). The agents communicate to address particular tasks jointly and keep the same objectives in mind during each task. The rules and knowledge contained within each agent depend on the investigation type. A criminal investigation is classified quickly and efficiently using the case-based reasoning (CBR) technique. The proposed MADIK framework is implemented using the Java Agent Development Framework within Eclipse, with a Postgres repository and a rule engine for agent reasoning. The framework was tested using the Lone Wolf image files and datasets. Experiments were conducted using various sets of ISAs and VMs, and there was a significant reduction in the time taken for the Hash Set Agent to execute. 
Loading the agents cost about 5 percent of the total time; the File Path Agent flagged 1,510 files for deletion, while the Timeline Agent found multiple executable files. In comparison, the integrity check carried out on the Lone Wolf image file using a digital forensic toolkit took approximately 48 minutes (2,880 s), whereas the MADIK framework accomplished this in 16 minutes (960 s). The framework is integrated with Python, allowing for further integration of other digital forensic tools, such as AccessData Forensic Toolkit (FTK), Wireshark, Volatility, and Scapy. Keywords: artificial intelligence, computer science, criminal investigation, digital forensics
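The multi-agent idea described above can be sketched minimally as follows. The agent names echo the abstract (Hash Set Agent, File Path Agent), but every interface, rule, and data item here is an illustrative assumption, not the MADIK implementation:

```python
import hashlib

# Illustrative sketch only: each agent examines every artifact and reports a
# finding; a coordinator pools the flagged results. Real agents would carry
# investigation-type-specific rules and knowledge, as described above.

class Agent:
    name = "base"
    def examine(self, artifact: dict) -> dict:
        raise NotImplementedError

class HashSetAgent(Agent):
    """Flags artifacts whose hash appears in a known-bad set."""
    name = "hash-set"
    def __init__(self, known_bad: set):
        self.known_bad = known_bad
    def examine(self, artifact):
        digest = hashlib.sha256(artifact["data"]).hexdigest()
        return {"agent": self.name, "path": artifact["path"],
                "flagged": digest in self.known_bad}

class FilePathAgent(Agent):
    """Flags paths with suspicious extensions (hypothetical rule)."""
    name = "file-path"
    SUSPICIOUS = (".exe", ".dll")
    def examine(self, artifact):
        return {"agent": self.name, "path": artifact["path"],
                "flagged": artifact["path"].endswith(self.SUSPICIOUS)}

def investigate(artifacts, agents):
    """Coordinator: every agent examines every artifact; findings are pooled."""
    findings = [agent.examine(a) for a in artifacts for agent in agents]
    return [f for f in findings if f["flagged"]]

evidence = [{"path": "C:/tmp/payload.exe", "data": b"MZ..."},
            {"path": "C:/docs/notes.txt", "data": b"hello"}]
flagged = investigate(evidence, [HashSetAgent(set()), FilePathAgent()])
print([f["path"] for f in flagged])  # only the .exe is flagged here
```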
Procedia PDF Downloads 212
979 A Framework of Virtualized Software Controller for Smart Manufacturing
Authors: Pin Xiu Chen, Shang Liang Chen
Abstract:
A virtualized software controller is developed in this research to replace traditional hardware control units. This virtualized software controller transfers motion interpolation calculations from the motion control units of end devices to edge computing platforms, thereby reducing the end devices' computational load and hardware requirements and making maintenance and updates easier. The study also applies the concept of microservices, dividing the control system into several small functional modules, which are then deployed to a cloud data server. This reduces the interdependency among modules and enhances the overall system's flexibility and scalability. Finally, with containerization technology, the system can be deployed and started in a matter of seconds, which is more efficient than traditional virtual machine deployment methods. Furthermore, this virtualized software controller communicates with end control devices via wireless networks, making the placement of production equipment or the redesign of processes more flexible and no longer limited by physical wiring. To handle the large data flow and maintain low-latency transmission, this study integrates 5G technology, fully utilizing its high speed, wide bandwidth, and low latency to achieve rapid and stable remote machine control. An experimental setup is designed to verify the feasibility and test the performance of this framework. The study designs a smart manufacturing site with a 5G communication architecture, serving as a field for experimental data collection and performance testing. The smart manufacturing site includes one robotic arm, three Computer Numerical Control machine tools, several Input/Output ports, and an edge computing architecture. All machinery information is uploaded to edge computing servers and cloud servers via 5G communication and the Internet of Things framework. 
After analysis and computation, this information is converted into motion control commands, which are transmitted back to the relevant machinery through 5G communication. The communication time intervals at each stage are measured using the C++ chrono library to determine the time difference for each command transmission. The relevant test results are organized and presented in the full text. Keywords: 5G, MEC, microservices, virtualized software controller, smart manufacturing
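The stage-by-stage latency bookkeeping described above (done in the study with the C++ chrono library) can be sketched as follows; the stage names and workloads are assumed for illustration only:

```python
import time

# Illustrative sketch: run each pipeline stage, recording elapsed wall-clock
# time per stage with a monotonic high-resolution timer, then sum the stages
# to get a round-trip figure. Stand-in stages mimic upload -> edge
# interpolation -> command downlink.

def timestamp_stages(stages):
    """Run each (name, callable) stage and record its elapsed time."""
    intervals = {}
    for name, work in stages:
        t0 = time.perf_counter()
        work()
        intervals[name] = time.perf_counter() - t0
    return intervals

stages = [
    ("upload_5g", lambda: time.sleep(0.001)),                       # stand-in network hop
    ("interpolation_on_edge", lambda: sum(i * i for i in range(10_000))),  # stand-in compute
    ("command_downlink_5g", lambda: time.sleep(0.001)),             # stand-in network hop
]
intervals = timestamp_stages(stages)
round_trip = sum(intervals.values())
print({k: round(v * 1000, 2) for k, v in intervals.items()}, "ms")
```

A monotonic timer (`perf_counter` here, `steady_clock` in C++ chrono) is the right choice for interval measurement because it is unaffected by wall-clock adjustments.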
Procedia PDF Downloads 82
978 Modelling Agricultural Commodity Price Volatility with Markov-Switching Regression, Single Regime GARCH and Markov-Switching GARCH Models: Empirical Evidence from South Africa
Authors: Yegnanew A. Shiferaw
Abstract:
Background: Commodity price volatility originating from excessive commodity price fluctuation has been a global problem, especially after the recent financial crises. Volatility is a measure of risk or uncertainty in financial analysis; it plays a vital role in risk management, portfolio management, and equity pricing. Objectives: The core objective of this paper is to examine the relationship between the prices of agricultural commodities and the oil price, gas price, coal price, and exchange rate (USD/Rand). In addition, the paper tries to fit an appropriate model that best describes the log-return price volatility and to estimate Value-at-Risk and expected shortfall. Data and methods: The data used in this study are the daily returns of agricultural commodity prices from 2 January 2007 to 31 October 2016, namely white maize, yellow maize, wheat, sunflower, soya, corn, and sorghum. The paper applies the three-state Markov-switching (MS) regression, the standard single-regime GARCH, and the two-regime Markov-switching GARCH (MS-GARCH) models. Results: To choose the best-fit model, the log-likelihood function, Akaike information criterion (AIC), Bayesian information criterion (BIC), and deviance information criterion (DIC) are employed under three distributions for innovations. The results indicate that: (i) the price of agricultural commodities was found to be significantly associated with the price of coal, the price of natural gas, the price of oil, and the exchange rate; (ii) for all agricultural commodities except sunflower, k=3 had higher log-likelihood values and lower AIC and BIC values. 
Thus, the three-state MS regression model outperformed the two-state MS regression model; (iii) MS-GARCH(1,1) with generalized error distribution (ged) innovations performs best for white maize and yellow maize; MS-GARCH(1,1) with Student-t distribution (std) innovations performs better for sorghum; MS-gjrGARCH(1,1) with ged innovations performs better for wheat, sunflower, and soya; and MS-GARCH(1,1) with std innovations performs better for corn. In conclusion, this paper provides a practical guide for modelling agricultural commodity prices by MS regression and MS-GARCH processes and can serve as a reference for modelling agricultural commodity price problems. Keywords: commodity prices, MS-GARCH model, MS regression model, South Africa, volatility
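The information-criterion comparison behind result (ii) can be illustrated with a minimal sketch. The formulas AIC = 2k − 2 ln L and BIC = k ln n − 2 ln L are standard; the log-likelihoods and parameter counts below are made-up stand-ins, not values from the paper:

```python
import math

# Model selection sketch: given each candidate's maximised log-likelihood,
# parameter count k and sample size n, prefer the model with the lowest
# AIC/BIC. All numbers here are illustrative placeholders.

def aic(log_lik, k):
    return 2 * k - 2 * log_lik

def bic(log_lik, k, n):
    return k * math.log(n) - 2 * log_lik

candidates = {
    "MS(2) regression": {"log_lik": -1510.0, "k": 6},   # hypothetical fit
    "MS(3) regression": {"log_lik": -1480.0, "k": 12},  # hypothetical fit
}
n = 2400  # roughly ten years of daily returns
scores = {name: (aic(m["log_lik"], m["k"]), bic(m["log_lik"], m["k"], n))
          for name, m in candidates.items()}
best_by_aic = min(scores, key=lambda name: scores[name][0])
print(best_by_aic)  # the three-state model wins here, mirroring result (ii)
```

Note how BIC's k·ln n term penalises the extra parameters of the three-state model more heavily than AIC does, so the likelihood gain must be large enough to justify the added regimes.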
Procedia PDF Downloads 202
977 Global Modeling of Drill String Dragging and Buckling in 3D Curvilinear Bore-Holes
Authors: Valery Gulyayev, Sergey Glazunov, Elena Andrusenko, Nataliya Shlyun
Abstract:
Enhancement of the technology and techniques for drilling deep directional oil and gas bore-wells is of essential industrial significance because such wells make it possible to increase productivity and output. Generally, they are used for drilling in hard and shale formations, which is why their drilling processes are prone to emergency and failure effects. As practice corroborates, the principal drawback occurring in the drilling of long curvilinear bore-wells is the need to overcome essential force hindrances caused by the simultaneous action of gravity, contact, and friction forces. Primarily, these forces depend on the type of technological regime, the drill string stiffness, and the bore-hole tortuosity and length. They can lead to Eulerian buckling of the drill string and to its sticking. To predict and exclude these states, special mathematical models and methods of computer simulation should play a dominant role. At the same time, these mechanical phenomena are very complex, and only simplified approaches ('soft-string drag and torque models') are commonly used for their analysis. Taking into consideration that the cost of directional wells now increases substantially with the complication of their geometry and the enlargement of their lengths, the price of mistakes in simulating drill string behavior with simplified approaches can be very high, so the problem of correct software development is urgent. This paper deals with the problem of simulating the regimes of drilling deep curvilinear bore-wells with prescribed imperfect geometrical trajectories of their axial lines. 
On the basis of the theory of curvilinear flexible elastic rods, methods of differential geometry, and numerical analysis methods, a 3D 'stiff-string drag and torque model' of drill string bending and the corresponding software are developed for the simulation of tripping-in and tripping-out regimes and drilling operations. The computer calculations show that the contact and friction forces can be calculated and regulated, providing predesigned trouble-free modes of operation. The elaborated mathematical models and software can be used for predicting and preventing emergency situations at the design and realization stages of the drilling process. Keywords: curvilinear drilling, drill string tripping in and out, contact forces, resistance forces
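For contrast with the stiff-string model developed here, a minimal 'soft-string' drag calculation of the kind the paper characterises as simplified can be sketched as follows. The well geometry and all parameter values are illustrative assumptions; a stiff-string model would additionally account for the drill string's bending stiffness:

```python
import math

# Soft-string sketch: axial tension during hoisting is marched element by
# element from the bit toward the surface, adding the weight component and
# Coulomb friction on each element. Side force comes only from weight here;
# real models also include tension-induced contact in curved sections.

def hoisting_tension(segments, mu, w):
    """segments: list of (length_m, inclination_rad) from bit to surface.
    mu: friction coefficient; w: buoyed weight per metre in N/m."""
    tension = 0.0  # start at the bit
    for length, inc in segments:
        normal = w * length * math.sin(inc)    # side (contact) force on element
        tension += w * length * math.cos(inc)  # axial weight component
        tension += mu * normal                 # friction opposes hoisting
    return tension

# Hypothetical well: 1000 m vertical section, then a 1500 m tangent at 60 deg.
well = [(1000.0, 0.0), (1500.0, math.radians(60.0))]
print(round(hoisting_tension(well, mu=0.25, w=300.0)), "N at surface")
```

In the vertical section friction contributes nothing (no side force), while in the inclined section roughly a quarter of the hook load here comes from friction, which is exactly the kind of resistance the paper's contact/friction analysis quantifies.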
Procedia PDF Downloads 146
976 The Influence of Morphology and Interface Treatment on Organic 6,13-bis (triisopropylsilylethynyl)-Pentacene Field-Effect Transistors
Authors: Daniel Bülz, Franziska Lüttich, Sreetama Banerjee, Georgeta Salvan, Dietrich R. T. Zahn
Abstract:
For the development of electronics, organic semiconductors are of great interest due to their adjustable optical and electrical properties. They are especially interesting for spintronic applications because of their weak spin scattering, which leads to longer spin lifetimes compared to inorganic semiconductors. It has been shown that some organic materials change their resistance if an external magnetic field is applied. Pentacene is one of the materials that exhibit the so-called photoinduced magnetoresistance, which results in a modulation of the photocurrent when the external magnetic field is varied. The soluble derivative of pentacene, 6,13-bis(triisopropylsilylethynyl)-pentacene (TIPS-pentacene), exhibits the same negative magnetoresistance. Aiming for simpler fabrication processes, in this work we compare TIPS-pentacene organic field-effect transistors (OFETs) made from solution with those fabricated by thermal evaporation. Because of the different processing, the TIPS-pentacene thin films exhibit different morphologies in terms of crystal size and homogeneity of the substrate coverage. On the other hand, the interface treatment is known to have a strong influence on the threshold voltage, eliminating trap states of the silicon oxide at the gate electrode and thereby changing the electrical switching response of the transistors. Therefore, we investigate the influence of interface treatment using octadecyltrichlorosilane (OTS) or a simple cleaning procedure with acetone, ethanol, and deionized water. The transistors consist of prestructured OFET substrates, including gate, source, and drain electrodes, on top of which TIPS-pentacene dissolved in a mixture of tetralin and toluene is deposited by drop-, spray-, or spin-coating. Thereafter, the sample is kept for one hour at a temperature of 60 °C. 
For the transistor fabrication by thermal evaporation, the prestructured OFET substrates are also kept at a temperature of 60 °C during deposition, at a rate of 0.3 nm/min and a pressure below 10⁻⁶ mbar. The OFETs are characterized by means of optical microscopy in order to determine the overall quality of the samples, i.e., crystal size and coverage of the channel region. The output and transfer characteristics are measured in the dark and under illumination provided by a white light LED in the spectral range from 450 nm to 650 nm with a power density of (8 ± 2) mW/cm². Keywords: organic field effect transistors, solution processed, surface treatment, TIPS-pentacene
Procedia PDF Downloads 447
975 Kinetic Modelling of Fermented Probiotic Beverage from Enzymatically Extracted Annona Muricata Fruit
Authors: Calister Wingang Makebe, Wilson Ambindei Agwanande, Emmanuel Jong Nso, P. Nisha
Abstract:
Traditional liquid-state fermentation of Annona muricata L. juice can result in fluctuating product quality and quantity due to difficulties in control and scale-up. This work describes a laboratory-scale batch fermentation process to produce a probiotic from enzymatically extracted Annona muricata L. juice, modeled using the Doehlert design with incubation time, temperature, and enzyme concentration as independent extraction factors. It aimed at a better understanding of the traditional process as an initial step toward future optimization. Annona muricata L. juice was fermented with L. acidophilus (NCDC 291) (LA), L. casei (NCDC 17) (LC), and a blend of LA and LC (LCA) for 72 h at 37 °C. Experimental data were fitted to mathematical models (the Monod, logistic, and Luedeking-Piret models) using MATLAB software to describe biomass growth, sugar utilization, and organic acid production. The optimal fermentation time was determined based on cell viability: 24 h for LC and 36 h for LA and LCA. The model was particularly effective in estimating biomass growth, reducing-sugar consumption, and lactic acid production. The values of the determination coefficient R² were 0.9946, 0.9913, and 0.9946, while the residual sums of squared errors, SSE, were 0.2876, 0.1738, and 0.1589 for LC, LA, and LCA, respectively. The growth kinetic parameters included the maximum specific growth rate µm (0.2876 h⁻¹, 0.1738 h⁻¹, and 0.1589 h⁻¹) and the substrate saturation constant Ks (9.0680 g/L, 9.9337 g/L, and 9.0709 g/L) for LC, LA, and LCA, respectively. For the stoichiometric parameters, the yield of biomass on utilized substrate (YXS) was 50.7932, 3.3940, and 61.0202, and the yield of product on utilized substrate (YPS) was 2.4524, 0.2307, and 0.7415 for LC, LA, and LCA, respectively. In addition, the maintenance energy parameter (ms) was 0.0128, 0.0001, and 0.0004 for LC, LA, and LCA, respectively. 
With the kinetic model proposed by Luedeking and Piret for the lactic acid production rate, the growth-associated and non-growth-associated coefficients were determined as 1.0028 and 0.0109, respectively. The model was demonstrated for batch growth of LA, LC, and LCA in Annona muricata L. juice. The present investigation validates the potential of an Annona muricata L.-based medium for the economical production of a probiotic beverage. Keywords: L. acidophilus, L. casei, fermentation, modelling, kinetics
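The kinetic structure fitted above (Monod growth with Luedeking-Piret product formation) can be sketched with a simple Euler integration. The rate parameters below use the LC values reported in the text; the initial conditions and step size are assumptions for illustration:

```python
# Euler-scheme sketch of the batch kinetics described above:
#   dX/dt = mu(S) * X            (Monod growth)
#   dS/dt = -(dX/dt)/YXS - ms*X  (substrate for growth + maintenance)
#   dP/dt = alpha*dX/dt + beta*X (Luedeking-Piret product formation)
mu_max, Ks = 0.2876, 9.0680   # h^-1, g/L (LC values from the text)
YXS, ms = 50.7932, 0.0128     # biomass yield, maintenance coefficient
alpha, beta = 1.0028, 0.0109  # growth- and non-growth-associated coefficients

def simulate(X0=0.1, S0=20.0, P0=0.0, t_end=24.0, dt=0.01):
    """Integrate biomass X, sugar S and lactic acid P over t_end hours."""
    X, S, P = X0, S0, P0
    for _ in range(int(t_end / dt)):
        mu = mu_max * S / (Ks + S) if S > 0 else 0.0
        dX = mu * X
        dS = -(dX / YXS + ms * X)
        dP = alpha * dX + beta * X
        X, S, P = X + dX * dt, max(S + dS * dt, 0.0), P + dP * dt
    return X, S, P

X, S, P = simulate()
print(f"biomass {X:.2f} g/L, sugar {S:.2f} g/L, lactic acid {P:.2f} g/L at 24 h")
```

A fixed-step Euler scheme is adequate for a quick look at these mild kinetics; the MATLAB fitting in the study would typically rely on a stiffer-capable ODE solver.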
Procedia PDF Downloads 82
974 A Reduced Ablation Model for Laser Cutting and Laser Drilling
Authors: Torsten Hermanns, Thoufik Al Khawli, Wolfgang Schulz
Abstract:
In laser cutting as well as in long pulsed laser drilling of metals, it can be demonstrated that the ablation shape (the shape of cut faces respectively the hole shape) that is formed approaches a so-called asymptotic shape such that it changes only slightly or not at all with further irradiation. These findings are already known from the ultrashort pulse (USP) ablation of dielectric and semiconducting materials. The explanation for the occurrence of an asymptotic shape in laser cutting and long pulse drilling of metals is identified, its underlying mechanism numerically implemented, tested and clearly confirmed by comparison with experimental data. In detail, there now is a model that allows the simulation of the temporal (pulse-resolved) evolution of the hole shape in laser drilling as well as the final (asymptotic) shape of the cut faces in laser cutting. This simulation especially requires much less in the way of resources, such that it can even run on common desktop PCs or laptops. Individual parameters can be adjusted using sliders – the simulation result appears in an adjacent window and changes in real time. This is made possible by an application-specific reduction of the underlying ablation model. Because this reduction dramatically decreases the complexity of calculation, it produces a result much more quickly. This means that the simulation can be carried out directly at the laser machine. Time-intensive experiments can be reduced and set-up processes can be completed much faster. The high speed of simulation also opens up a range of entirely different options, such as metamodeling. Suitable for complex applications with many parameters, metamodeling involves generating high-dimensional data sets with the parameters and several evaluation criteria for process and product quality. These sets can then be used to create individual process maps that show the dependency of individual parameter pairs. 
This advanced simulation makes it possible to find global and local extreme values through mathematical manipulation. Such simultaneous optimization of multiple parameters is scarcely possible by experimental means, which means that new manufacturing methods such as self-optimization can be deployed much faster. The software's potential does not stop there, however; time-intensive calculations exist in many areas of industry. In laser welding or laser additive manufacturing, for example, the simulation of thermally induced residual stresses still uses up considerable computing capacity or is not even possible. Transferring the principle of reduced models promises substantial savings there, too. Keywords: asymptotic ablation shape, interactive process simulation, laser drilling, laser cutting, metamodeling, reduced modeling
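The metamodeling idea can be illustrated with a minimal sketch: sweep a parameter pair with a fast surrogate and tabulate a quality criterion into a process map whose extrema can then be searched. The 'reduced model' below is a made-up stand-in, not the paper's ablation model:

```python
# Illustrative process-map sketch: evaluate a cheap surrogate over a grid of
# (laser power, feed speed) pairs, store the scores, then pick the maximum.
# The quality function is a hypothetical placeholder that simply prefers a
# fixed energy-per-length value.

def reduced_model(power_w, speed_mm_s):
    """Hypothetical quality score, best when energy per length is 50 J/mm."""
    energy_per_length = power_w / speed_mm_s
    return -(energy_per_length - 50.0) ** 2

powers = [1000 + 500 * i for i in range(9)]   # 1000..5000 W grid
speeds = [20 + 10 * j for j in range(9)]      # 20..100 mm/s grid
process_map = {(p, v): reduced_model(p, v) for p in powers for v in speeds}
best = max(process_map, key=process_map.get)
print(best)  # a parameter pair with the highest score on this grid
```

Because a reduced model evaluates in milliseconds, dense grids like this become cheap, and the resulting maps support exactly the multi-parameter optimization described above.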
Procedia PDF Downloads 214
973 Comics as an Intermediary for Media Literacy Education
Authors: Ryan C. Zlomek
Abstract:
The value of using comics in the literacy classroom has been explored since the 1930s. At that point in time, researchers had begun to implement comics into daily lesson plans and, in some instances, had started developing comics-supported curricula. In the mid-1950s, this type of research was cut short by the work of psychiatrist Fredric Wertham, whose research purported to show a correlation between comic readership and juvenile delinquency. Since Wertham's allegations, the comics medium has had a hard time finding its way back into education. Now, over fifty years later, the definition of literacy is in mid-transition as the world has become more visually oriented and students require the ability to interpret images as often as words. Through this transition, comics have found a place in literacy education research as the focus shifts from traditional print to multimodal and media literacies. Comics are now believed to be an effective resource in bridging the gap between these different types of literacies. This paper seeks to better understand what students learn from the process of reading comics and how those skills line up with the core principles of media literacy education in the United States. In the first section, comics are defined to establish the exact medium being examined, and the conventions the medium utilizes are discussed. In the second section, the comics reading process is explored through a dissection of the ways a reader interacts with the page, panel, gutter, and different comic conventions found within a traditional graphic narrative. The concepts of intersubjective acts and visualization are attributed to the comics reading process as readers draw on real-world knowledge to decode meaning. In the next section, the learning processes that comics encourage are explored in parallel with the core principles of media literacy education. 
Each principle is explained, and the extent to which comics can act as an intermediary for this type of education is theorized. In the final section, the author examines comics use in his computer science and technology classroom. He lays out different theories he draws from Scott McCloud's text Understanding Comics and how he uses them to break down media literacy strategies with his students. The article concludes with examples of how comics have positively impacted classrooms around the United States. It is stated that integrating comics into the classroom will not solve all issues related to literacy education but, rather, that comics can be a powerful multimodal resource for educators looking for new mediums to explore with their students. Keywords: comics, graphic novels, mass communication, media literacy, metacognition
Procedia PDF Downloads 298
972 Assessment of the Change in Strength Properties of Biocomposites Based on PLA and PHA after 4 Years of Storage in a Highly Cooled Condition
Authors: Karolina Mazur, Stanislaw Kuciel
Abstract:
Polylactides (PLA) and polyhydroxyalkanoates (PHA) are the two groups of biodegradable and biocompatible thermoplastic polymers most commonly utilised in medicine and rehabilitation. The aim of this work is to determine the changes in strength properties and microstructure taking place in biodegradable polymer composites during long-term storage in a highly cooled environment (i.e., a freezer at -24 °C) and to make an initial assessment of the durability of such biocomposites when used as single-use elements of rehabilitation or medical equipment. It is difficult to find any information on the feasibility of long-term storage of technical products made of PLA or PHA; nonetheless, for products such as casings of hair dryers, laptops, or mobile phones, it is safe to assume that without storage in optimal conditions several years may pass before use, during which degradation can progress. SEM imaging and strength testing (tensile, bending, and impact) were carried out, and the density and water sorption of two polymers, PLA and PHA (NaturePlast PLE 001 and PHE 001), filled with cellulose fibres (corncob grain, Rehofix MK100, Rettenmaier & Söhne) up to 10 and 20% by mass were determined. The biocomposites had been stored at a temperature of -24 °C for 4 years. In order to identify the changes in strength properties and microstructure after such a long time of storage, the results have been compared with the results of the same tests carried out 4 years earlier. The results show a significant change in the fracture behaviour of the PHA composite with corncob grain: from a ductile fracture with a developed surface when tensile testing was performed directly after injection moulding to a more brittle state after 4 years of storage, which is confirmed by the strength tests, where a decrease of deformation at the point of fracture is observed. 
The research showed that there is a way of storing medical devices made of PLA or PHA for a reasonably long time, as long as the required storage temperature is maintained. The decrease of mechanical properties found during tensile and bending tests for PLA was less than 10% of the tensile strength, while the modulus of elasticity and the deformation at fracture rose slightly, which may indicate the beginning of degradation processes. The strength properties of PHA are even higher after 4 years of storage, although in that case the decrease of deformation at fracture is significant, reaching as much as 40%, which suggests its degradation rate is higher than that of PLA. The addition of natural particles in both cases only slightly increases the biodegradation. Keywords: biocomposites, PLA, PHA, storage
Procedia PDF Downloads 265
971 Unleashing Potential in Pedagogical Innovation for STEM Education: Applying Knowledge Transfer Technology to Guide a Co-Creation Learning Mechanism for the Lingering Effects Amid COVID-19
Authors: Lan Cheng, Harry Qin, Yang Wang
Abstract:
Background: COVID-19 has induced the largest digital learning experiment in history, and there is emerging research evidence that students have paid a high price in learning loss from virtual learning. University-wide survey results demonstrate that digital learning remains difficult for students who struggle with learning challenges, isolation, or a lack of resources. Large-scale efforts are therefore increasingly utilized for digital education. To better prepare students in higher education for this grand scientific and technological transformation, STEM education has been prioritized and promoted as a strategic imperative in the ongoing curriculum reform essential for unfinished learning needs and whole-person development. Building upon five key elements identified in the STEM education literature (problem-based learning, community and belonging, technology skills, personalization of learning, and connection to the external community), this case study explores the potential of pedagogical innovation that integrates computational and experimental methodologies to support, enrich, and navigate STEM education. Objectives: The goal of this case study is to create a high-fidelity prototype design for STEM education with knowledge transfer technology that contains a Cooperative Multi-Agent System (CMAS), with the objectives of (1) conducting an assessment to reveal the virtual learning mechanism and establish strategies that facilitate scientific learning engagement, accessibility, and connection within and beyond the university setting, (2) exploring and validating an interactional co-creation approach embedded in project-based learning activities in a STEM learning context that is being transformed by both digital technology and student behavior change, and (3) formulating and implementing a STEM-oriented campaign to guide learning network mapping, mitigate learning loss, enhance the learning experience, and scale up inclusive participation. 
Methods: This study applied a case study strategy and a methodology informed by social network analysis theory within a cross-disciplinary communication paradigm (students, peers, educators). Knowledge transfer technology is introduced to address learning challenges and to increase the efficiency of reinforcement learning (RL) algorithms. A co-creation learning framework was identified and investigated in a context-specific way with a learning analytics tool designed in this study. Findings: The results show that (1) CMAS-empowered learning support reduced students' confusion, difficulties, and gaps during problem-solving scenarios while increasing learner capacity and empowerment; (2) the co-creation learning phenomenon, examined through the lens of the campaign, reveals that an interactive virtual learning environment helps students navigate scientific challenges independently and collaboratively; (3) the deliverables of the STEM educational campaign provide a methodological framework both for curriculum design and for external community engagement. Conclusion: This study brings a holistic and coherent pedagogy that cultivates students' interest in STEM and develops a knowledge base for integrating and applying knowledge across different STEM disciplines. Through the co-design of cross-disciplinary educational content and campaign promotion, the findings suggest factors that empower evidence-based learning practice while also piloting and tracking the scholastic value of co-creation in a dynamic learning environment. The data nested in the knowledge transfer technology situate learners' scientific journeys and could pave the way for theoretical advancement and broader scientific endeavours within larger datasets, projects, and communities. Keywords: co-creation, cross-disciplinary, knowledge transfer, STEM education, social network analysis
Procedia PDF Downloads 114
970 Developing a Sustainable System to Deliver Early Intervention for Emotional Health through Australian Schools
Authors: Rebecca-Lee Kuhnert, Ron Rapee
Abstract:
Up to 15% of Australian youth will experience an emotional disorder, yet relatively few get the help they need. Schools provide an ideal environment through which we can identify young people who are struggling and provide them with appropriate help. Universal mental health screening is a method by which all young people in school can be quickly assessed for emotional disorders, after which identified youth can be linked to appropriate health services. Despite the obvious logic of this process, universal mental health screening has received little scientific evaluation and even less application in Australian schools. This study will develop methods for Australian education systems to help identify young people (aged 9-17 years) who are struggling with existing and emerging emotional disorders. Prior to testing, a series of focus groups will be run to get feedback and input from young people, parents, teachers, and mental health professionals. They will be asked about their thoughts on school-based screening methods and how best to help students at risk of emotional distress. Schools (n=91) across New South Wales, Australia will be randomised to either immediate screening (in May 2021) or delayed screening (in February 2022). Students in immediate screening schools will complete a long online mental health screener consisting of standard emotional health questionnaires. Ultimately, this large set of items will be reduced to a small number of items to form the final brief screener. Students who score in the "at-risk" range on any measure of emotional health problems will be identified to schools and offered pathways to relevant help according to the most accepted and approved processes identified by the focus groups. Nine months later, the same process will occur among delayed screening schools. At the same time, students in the immediate screening schools will complete screening for a second time. 
This will allow a direct comparison of the emotional health and help-seeking between youth whose schools had engaged in the screening and pathways-to-care process (immediate) and those whose schools had not engaged in the process (delayed). It is hypothesised that there will be a significant increase in students who receive help from mental health support services after screening, compared with baseline. It is also predicted that all students will show significantly less emotional distress after screening and access to pathways of care. This study will be an important contribution to Australian youth mental health prevention and early intervention by determining whether school screening leads to a greater number of young people with emotional disorders getting the help that they need and improving their mental health outcomes. Keywords: children and young people, early intervention, mental health, mental health screening, prevention, school-based mental health
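The school-level allocation described above can be sketched as a simple cluster randomisation; the seed, school identifiers, and the even split shown here are illustrative assumptions, not the study's actual procedure:

```python
import random

# Illustrative sketch: randomise whole schools (clusters) to an immediate or
# a delayed screening arm. With 91 schools the split is 45/46.

def allocate(schools, seed=2021):
    """Shuffle the cluster list reproducibly and split it into two arms."""
    rng = random.Random(seed)
    shuffled = schools[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"immediate": shuffled[:half], "delayed": shuffled[half:]}

schools = [f"school_{i:02d}" for i in range(91)]  # n = 91, as in the study
arms = allocate(schools)
print(len(arms["immediate"]), len(arms["delayed"]))  # 45 and 46 schools
```

Randomising at the school level (rather than the student level) avoids contamination between arms within a school, which is the rationale for the immediate/delayed design above.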
Procedia PDF Downloads 96
969 A Review on Stormwater Harvesting and Reuse
Authors: Fatema Akram, Mohammad G. Rasul, M. Masud K. Khan, M. Sharif I. I. Amir
Abstract:
Australia is a country of some 7.7 million square kilometres with a population of about 22.6 million. At present, water security is a major challenge for Australia. In some areas the use of water resources is approaching, and in some parts exceeding, the limits of sustainability. A focal point of proposed national water conservation programs is the recycling of both urban storm-water and treated wastewater. However, this is not yet widely practised in Australia, and storm-water in particular is neglected. In Australia, only 4% of storm-water and rainwater is recycled, whereas less than 1% of reclaimed wastewater is reused within urban areas. Therefore, accurately monitoring, assessing and predicting the availability, quality and use of this precious resource are required for better management. As storm-water is usually of better quality than untreated sewage or industrial discharge, it has better public acceptance for recycling and reuse, particularly for non-potable uses such as irrigation, watering lawns and gardens, etc. Existing storm-water recycling practice lags far behind research, and no robust technologies have been developed for this purpose. Therefore, there is a clear need for modern technologies for assessing the feasibility of storm-water harvesting and reuse. Numerical modelling has, in recent times, become a popular tool for this job. It captures the complex hydrological and hydraulic processes of the study area. The hydrologic model computes storm-water quantity to design the system components, and the hydraulic model helps to route the flow through storm-water infrastructure. Nowadays, a water quality module is incorporated with these models. Integration of Geographic Information Systems (GIS) with these models provides the extra advantage of managing spatial information. 
However, for the overall management of a storm-water harvesting project, a Decision Support System (DSS) plays an important role, incorporating a database with the model and GIS for the proper management of temporal information. Additionally, a DSS includes evaluation tools and a graphical user interface. This research aims to critically review and discuss all aspects of storm-water harvesting and reuse, such as the available guidelines for storm-water harvesting and reuse, public acceptance of water reuse, and the scope of and recommendations for future studies. In addition, this paper identifies, understands and addresses the importance of modern technologies capable of proper management of storm-water harvesting and reuse.
Keywords: storm-water management, storm-water harvesting and reuse, numerical modelling, geographic information system, decision support system, database
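As a rough illustration of the quantity side of the hydrologic modelling discussed above, the first step in sizing a harvesting system is estimating the harvestable runoff volume. The sketch below uses the standard volumetric runoff-coefficient method; the coefficient, rainfall depth, and catchment area are assumed values for illustration, not figures from any cited guideline.

```python
# Minimal sketch: harvestable runoff volume, V = C * P * A.
# All inputs are illustrative assumptions.

def runoff_volume_m3(rain_mm: float, area_m2: float, coeff: float) -> float:
    """Runoff volume in cubic metres; rainfall depth converted mm -> m."""
    return coeff * (rain_mm / 1000.0) * area_m2

# e.g. a 25 mm storm over a 10,000 m2 urban catchment with C = 0.8
print(round(runoff_volume_m3(25.0, 10_000.0, 0.8), 6))  # 200.0 m3
```

A full hydrologic model refines this by accounting for losses, antecedent soil moisture, and routing, but the volume balance it performs is of this basic form.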
Procedia PDF Downloads 372
968 The Usage of Bridge Estimator for Hegy Seasonal Unit Root Tests
Authors: Huseyin Guler, Cigdem Kosar
Abstract:
The aim of this study is to propose the Bridge estimator for seasonal unit root tests. Seasonality is an important feature of many economic time series. Some variables may contain seasonal patterns, and forecasts that ignore important seasonal patterns have a high variance. Therefore, it is very important to account for seasonality in seasonal macroeconomic data. There are several methods to eliminate the impact of seasonality in time series. One of them is filtering the data. However, this method leads to undesired consequences in unit root tests, especially if the data are generated by a stochastic seasonal process. Another method to eliminate seasonality is the use of seasonal dummy variables. Some seasonal patterns result from stationary seasonal processes, which can be modelled using seasonal dummies; but if the seasonal pattern varies and changes over time, so that the seasonal process is non-stationary, deterministic seasonal dummies are inadequate to capture the seasonal process, and it is not suitable to model such seasonally non-stationary series with them. Instead, it is necessary to take seasonal differences if there are seasonal unit roots in the series. Several alternative methods have been proposed in the literature to test for seasonal unit roots, such as the Dickey, Hasza, Fuller (DHF) and Hylleberg, Engle, Granger, Yoo (HEGY) tests. The HEGY test can also be used to test for seasonal unit roots at different frequencies (monthly, quarterly, and semiannual). Another issue in unit root tests is lag selection. Lagged dependent variables are added to the model in seasonal unit root tests, as in ordinary unit root tests, to overcome the autocorrelation problem. In this case, it is necessary to choose the lag length and determine any deterministic components (i.e., a constant and trend) first, and then use the proper model to test for seasonal unit roots. However, this two-step procedure might lead to size distortions and a lack of power in seasonal unit root tests. 
Recent studies show that Bridge estimators are good at selecting the optimal lag length while differentiating non-stationary from stationary models for non-seasonal data. The advantage of this estimator is the elimination of the two-step nature of conventional unit root tests, which leads to a gain in size and power. In this paper, the Bridge estimator is proposed to test for seasonal unit roots in a HEGY model. A Monte Carlo experiment is conducted to determine the efficiency of this approach and to compare the size and power of this method with the HEGY test. Since the Bridge estimator performs well in model selection, our approach may lead to some gain in terms of size and power over the HEGY test.
Keywords: bridge estimators, HEGY test, model selection, seasonal unit root
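For concreteness, the HEGY auxiliary regressors for quarterly data — the filtered series on which the π-coefficients are estimated, before any lag selection — can be sketched as follows. This is a generic illustration of the test's construction (Hylleberg et al., 1990), not the paper's Bridge-penalised implementation; the full test also includes a second lag of the annual-frequency regressor and augmentation lags, which are where the Bridge estimator would do its selection.

```python
import numpy as np

def hegy_transform(y: np.ndarray):
    """HEGY filtered series for a quarterly series y.

    y1 carries the zero-frequency (long-run) root, y2 the semiannual
    root, y3 the annual-frequency pair of complex roots, and d4 is the
    seasonal difference used as the dependent variable.
    """
    t = np.arange(4, len(y))                   # usable observations
    y1 = y[t-1] + y[t-2] + y[t-3] + y[t-4]     # (1+L+L^2+L^3) y_{t-1}
    y2 = -(y[t-1] - y[t-2] + y[t-3] - y[t-4])  # -(1-L+L^2-L^3) y_{t-1}
    y3 = -(y[t-1] - y[t-3])                    # -(1-L^2) y_{t-1}
    d4 = y[t] - y[t-4]                         # (1-L^4) y_t
    return y1, y2, y3, d4

# Regressing d4 on (y1, y2, y3) plus lags of d4 gives the pi-statistics;
# a Bridge penalty would select those lags within the same estimation.
rng = np.random.default_rng(0)
y = np.cumsum(rng.standard_normal(200))  # random walk: zero-frequency root only
y1, y2, y3, d4 = hegy_transform(y)
pi_hat, *_ = np.linalg.lstsq(np.column_stack([y1, y2, y3]), d4, rcond=None)
print(pi_hat.shape)  # (3,)
```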
Procedia PDF Downloads 340
967 Commodifying Things Past: Comparative Study of Heritage Tourism Practices in Montenegro and Serbia
Authors: Jovana Vukcevic, Sanja Pekovic, Djurdjica Perovic, Tatjana Stanovcic
Abstract:
This paper presents a critical inquiry into the role of uncomfortable heritage in nation branding, with a particular focus on the specificities of the politics of memory, forgetting and revisionism in the post-communist, post-Yugoslav states. It addresses legacies of unwanted, ambivalent or unacknowledged pasts and the different strategies employed by the former Yugoslav states and private actors in “rebranding” their heritage, ensuring its preservation while re-contextualizing the narrative of the past through contemporary tourism practices. It questions the interplay between nostalgia, heritage and the market, and the role of heritage in polishing the history of totalitarian and authoritarian regimes in the Balkans. It argues that in post-socialist Yugoslavia, the necessity to limit associations with the former ideology, and the use of the commercial brush in shaping a marketable version of the past, instigated the emergence of profit-oriented heritage practices. Building on that argument, the paper addresses these issues as the “commodification” and “disneyfication” of the Balkans’ ambivalent heritage, contributing to the analysis of changing forms of memorialisation and heritagization practices in Europe. It questions the process of ‘coming to terms with the past’ through marketable forms of heritage tourism, blurring the boundary between market-driven nostalgia and state-imposed heritage policies. In order to analyse the plurality of ways of dealing with the controversial, ambivalent and unwanted heritage of dictatorships in the Balkans, the paper considers two prominent examples of heritage commodification in Serbia and Montenegro, and the re-appropriation of those narratives for nation branding purposes. 
The first is the story of Tito’s Blue Train, a landmark of the socialist past and symbol of Yugoslavia which is nowadays used for birthday parties and wedding celebrations, while the second examines the unusual business arrangement turning the fortress of Mamula, a concentration camp during the Second World War, into a luxurious Mediterranean resort. Questioning how this ‘uneasy’ past was acknowledged and embedded into official heritage institutions and tourism practices, the study examines the changing relation towards the legacies of dictatorships, inviting us to rethink the economic models of things past. Analysis of these processes should contribute to a better understanding of new mnemonic strategies and (converging?) ways of ‘doing’ the past in Europe.
Keywords: commodification, heritage tourism, totalitarianism, Serbia, Montenegro
Procedia PDF Downloads 252
966 Sustainable Concepts Applied in the Pre-Columbian Andean Architecture in Southern Ecuador
Authors: Diego Espinoza-Piedra, David Duran
Abstract:
All architectural and land use processes are framed in a cultural, social and geographical context. The present study analyzes the Andean culture before the Spanish conquest in southern Ecuador, in the province of Azuay. This area has been inhabited for more than 10,000 years. The Canari and the Inca cultures occupied Azuay shortly before the arrival of the Spanish conquerors. The Inca culture was settled in the Andes Mountains. The Canari culture was established in the south of Ecuador, in the present-day provinces of Azuay and Canar. In contrast with its history and archaeology, to the best of our knowledge, the architecture of this area has not yet been studied because of the scarcity of surviving architectural structures. Consequently, the present research reviewed land use and culture for architectonic interpretations. The two main architectural objects in these cultures were dwellings and public buildings. In the first case, housing was conceived as temporary: it had to stand only as long as its inhabitants lived. Therefore, houses were built when a couple got married. The whole community took part in the construction through the so-called ‘minga’, or collective work. The construction materials were tree branches, reeds, agave, earth, and straw, so that when the owners aged and then died, the house was easily dismantled and demolished. Its materials became part of the land for agriculture. This cycle was repeated indefinitely. In the second case, the buildings which we can call public have been subject to erroneous interpretations. They have been defined as temples, but according to our conclusions, they were places for temporary accommodation, storage of objects and products, and in some special cases, even astronomical observatories. These public buildings were settled along the important road system called ‘Capac-Nam’, currently declared by UNESCO as World Cultural Heritage. The buildings had different scales and stood at regular distances. 
They were also established in special or strategic places, which constituted a system of observatories. These observatories allowed the determination of the cycles or calendars (solar or lunar) necessary for agricultural production, as well as the observation of other natural phenomena. The few physical structures that survive, in both quantity and state of conservation, are mostly at the level of foundations or fragments of walls. Therefore, this study was carried out after identifying the history and culture of the inhabitants of this Andean region.
Keywords: Andean, pre-Colombian architecture, Southern Ecuador, sustainable
Procedia PDF Downloads 127
965 The Extent of Virgin Olive-Oil Prices' Distribution Revealing the Behavior of Market Speculators
Authors: Fathi Abid, Bilel Kaffel
Abstract:
The olive tree, the olive harvest during the winter season, and the production of olive oil, better known to professionals as the crushing operation, have long interested institutional traders such as olive-oil offices, private food-industry companies refining and extracting pomace olive oil, and public and private export-import companies specializing in olive oil. The major problem facing producers of olive oil each winter campaign, contrary to what might be expected, is not whether the harvest will be good, but whether the sale price will allow them to cover production costs and achieve a reasonable profit margin. These questions are entirely legitimate given the importance of the issue and the heavy complexity of the uncertainty, with competition made tougher by high levels of indebtedness and by the experience and expertise of speculators and producers whose objectives are sometimes conflicting. The aim of this paper is to study the formation mechanism of olive oil prices in order to learn about speculators’ behavior and expectations in the market, how they contribute through their industry knowledge and financial alliances, and the size of the financial challenge that may be involved for them in building private information channels globally to take advantage of the market. The methodology used in this paper is based on two stages. In the first stage, we study econometrically the formation mechanisms of the olive oil price in order to understand market participants' behavior by implementing ARMA, SARMA, GARCH and stochastic diffusion process models; the second stage is devoted to prediction, for which we use a combined wavelet-ANN approach. Our main findings indicate that olive oil market participants interact with each other in a way that promotes the formation of stylized facts. The unstable behavior of participants creates volatility clustering, non-linear dependence, and cyclicity phenomena. 
By imitating each other in some periods of the campaign, different participants contribute to the fat tails observed in the olive oil price distribution. The best prediction model for the olive oil price is based on a back-propagation artificial neural network approach, with input information based on wavelet decomposition and the recent past history of prices.
Keywords: olive oil price, stylized facts, ARMA model, SARMA model, GARCH model, combined wavelet-artificial neural network, continuous-time stochastic volatility model
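The stylized facts invoked above (volatility clustering and fat tails) can be reproduced with a self-contained GARCH(1,1) simulation: today's variance feeds on yesterday's squared shock and yesterday's variance, which mechanically generates leptokurtic returns. The parameter values below are illustrative assumptions, not estimates from the olive-oil data.

```python
import numpy as np

# Sketch of the GARCH(1,1) mechanism behind volatility clustering and
# fat tails. omega/alpha/beta are illustrative, not fitted values.

def simulate_garch11(n: int, omega=0.05, alpha=0.10, beta=0.85, seed=1):
    rng = np.random.default_rng(seed)
    r = np.empty(n)
    h = omega / (1.0 - alpha - beta)  # start at the unconditional variance
    for t in range(n):
        r[t] = np.sqrt(h) * rng.standard_normal()
        h = omega + alpha * r[t] ** 2 + beta * h  # variance recursion
    return r

r = simulate_garch11(20_000)
excess_kurtosis = ((r - r.mean()) ** 4).mean() / r.var() ** 2 - 3.0
print(excess_kurtosis)  # positive: simulated returns are fat-tailed
```

Even with Gaussian shocks, the time-varying variance produces the excess kurtosis the abstract attributes to imitative trading.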
Procedia PDF Downloads 339
964 Effects of Heat Treatment on the Mechanical Properties of Kenaf Fiber
Authors: Paulo Teodoro De Luna Carada, Toru Fujii, Kazuya Okubo
Abstract:
Natural fibers have a wide variety of uses (e.g., rope, paper, and building materials). One specific application is in the field of composite materials (i.e., green composites). A huge amount of research is being done in this field due to rising concerns about the harmful effects of synthetic materials on the environment. Several natural fibers are used in this field, one of which can be extracted from a plant called kenaf (Hibiscus cannabinus L.). Kenaf fiber is regarded as a good alternative because the plant is easy to grow and the fiber is easy to extract. Additionally, it has good properties. Treatments, which are classified as mechanical or chemical in nature, can be applied in order to improve the properties of the fiber. The aim of this study is to assess the effects of heat treatment on kenaf fiber, specifically its effect on the tensile strength and modulus of the fiber. Kenaf fiber bundles with an average diameter of at most 100 μm were used for this purpose. Heat treatment was done using a constant-temperature oven at the following heating temperatures: (1) 160 °C, (2) 180 °C, and (3) 200 °C, each for a duration of one hour. As a basis for comparison, tensile tests were first done on kenaf fibers without any heat treatment. For every heating temperature, three groups of samples were prepared. Two of these groups were used for tensile testing (one group was tested right after heat treatment, while the remaining group was kept inside a closed container with a relative humidity of at least 95% for two days). The third group was used to observe how much moisture the treated fiber absorbs when enclosed in a high-moisture environment for two days. The results showed that kenaf fiber retains its tensile strength when heated up to a temperature of 160 °C. However, when heated at a temperature of about 180 °C or higher, the tensile strength decreases significantly. The same behavior was observed for the tensile modulus of the fiber. 
Additionally, the fibers which were stored for two days absorbed nearly the same amount of moisture (about 20% of the dried weight) regardless of the heating temperature. Heat treatment might have damaged the fiber in some way. An additional test was done in order to see if the damage due to heat treatment is attributable to changes in the viscoelastic properties of the fiber. The findings showed that kenaf fibers can be heated to at most 160 °C while retaining good tensile strength and modulus, and that heating the fiber at high temperature (>180 °C) causes changes in its viscoelastic properties. The results of this study are significant for processes that require heat treatment, not only of kenaf fiber but potentially of natural fibers in general.
Keywords: heat treatment, kenaf fiber, natural fiber, mechanical properties
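As a note on how tensile strength figures of this kind are obtained: the strength of a single bundle is the breaking load divided by the cross-sectional area estimated from the measured diameter, assuming a circular cross-section. The load value below is an illustrative assumption, not a measurement from the study.

```python
import math

# Sketch: tensile strength (MPa) from breaking load (N) and bundle
# diameter (micrometres), assuming a circular cross-section.
# The 3.5 N example load is an illustrative assumption.

def tensile_strength_mpa(breaking_load_n: float, diameter_um: float) -> float:
    area_mm2 = math.pi * (diameter_um / 1000.0) ** 2 / 4.0  # um -> mm first
    return breaking_load_n / area_mm2                       # N/mm^2 == MPa

# e.g. a 100 um bundle (the study's upper diameter) failing at 3.5 N
print(round(tensile_strength_mpa(3.5, 100.0), 1))  # 445.6
```

Because the area scales with the square of the diameter, diameter measurement error dominates the uncertainty in such strength values.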
Procedia PDF Downloads 353
963 Human Insecurity and Migration in the Horn of Africa: Causes and Decision Processes
Authors: Belachew Gebrewold
Abstract:
The Horn of Africa is marred by complex and systemic internal and external political, economic and socio-cultural causes of conflict that result in internal displacement and migration. This paper engages with them and shows how such a study can help us to understand migration, both in this region and more generally. Conflict has occurred within states, between states, among proxies, and between armies. Human insecurities resulting from the state collapse of Somalia, the rise of Islamic fundamentalism in the whole region, recurrent drought affecting the livelihoods of subsistence farmers as well as nomads, exposure to hunger, environmental degradation, youth unemployment, the rapid growth of slums around big cities, and political repression (especially in Eritrea) have been driving various segments of the regional population into regional and international migration. Eritrea has been going through a brutal dictatorship which pushes many Eritreans to flee their country and be exposed to human trafficking, torture, detention, and agony on their way to Europe, mainly through Egypt, Libya and Israel. Similarly, Somalia has been devastated since 1991 by unending civil war, state collapse, and radical Islamists. There are some important aspects to highlight in the conflict-migration nexus in the Horn of Africa: first, the main push factor for Somalis and Eritreans to leave their countries and risk their lives is the physical insecurity they face in their countries. Secondly, as a result of the conflict, the economic infrastructure is massively destroyed; investment is rare and job opportunities are out of sight. Thirdly, in such a grim situation, the politically and economically induced decision to migrate is a household decision, not only an individual decision. Based on this third point, this research took place in the Horn of Africa between 2014 and 2016 on different occasions. 
The main objective of the research was to understand how increasing migration is affecting the socio-economic and socio-political environment, and conversely how the socio-economic and socio-political environments are increasing migration decisions, and whether and how these decisions are individual or family decisions. The main finding is that the higher the human insecurity, the more likely migration is a family decision; the lower the human insecurity, the more likely it is an individual decision. These findings apply not only to Eritrean and Somali migrants but also to Ethiopian migrants. However, the general impacts of migration on sending countries’ human security are quite mixed and complex.
Keywords: Eritrea, Ethiopia, Horn of Africa, insecurity, migration, Somalia
Procedia PDF Downloads 277
962 CO2 Capture in Porous Silica Assisted by Lithium
Authors: Lucero Gonzalez, Salvador Alfaro
Abstract:
Carbon dioxide (CO2) and methane (CH4) are considered the most abundant of the greenhouse gases (CO2, NOx, SOx, CxHx, etc.); due to their higher concentrations, these two gases have a greater impact on environmental pollution and global warming. Their recovery, disposal and subsequent reuse are therefore of great interest, especially from the ecological and health perspectives. On the one hand, porous inorganic materials are good candidates for capturing gases, because these types of materials offer high thermal, chemical and mechanical stability under gas adsorption processes. On the other hand, during the design and synthetic preparation of porous materials, it is possible to add other intrinsic (physicochemical and structural) properties by adding chemical compounds as dopants or by using structure-directing agents or surfactants to improve the porous structure; these features provide alternative materials for the separation, capture and storage of greenhouse gases. In this work, ordered mesoporous silica-based materials were prepared using Surfynol as the surfactant. Surfactant micelles are commonly used as self-assembly templates for the development of new porous silica structures, providing a variety of textures and structures. Surfynol is a commercial non-ionic surfactant, so it was necessary to determine its critical micelle concentration (CMC) by the pyrene I1/I3 ratio method before preparing the silica particles. Once the CMC was known, a precursor gel was prepared via a sol-gel process at room temperature using TEOS as the silica precursor, NH4OH as the catalyst, Surfynol as the template, and H2O as the solvent. The precursor gel was then treated hydrothermally in a Teflon-lined stainless steel autoclave with a volume of 100 mL and kept at 100 °C for 24 h under static conditions in a convection oven. 
After that, the porous silica particles obtained were impregnated with lithium to improve the CO2 adsorption capacity. The silica particles were then characterized physicochemically, morphologically and structurally by XRD, FTIR, BET and SEM techniques. The thermal stability and the CO2 adsorption capacity were evaluated by thermogravimetric analysis (TGA). According to the results, Surfynol is a good candidate for preparing silica particles with an ordered structure. The TGA analysis also showed that the particles have good thermal stability in the range of 250 °C to 800 °C. The best materials had the capacity to adsorb 70 to 90 mg of CO2 per gram of silica particles, and the CO2 adsorption capacity depends on the thermal pretreatment of the porous silica before the adsorption experiments and on the concentration of surfactant used during the synthesis of the silica particles. Acknowledgments: This work was supported by SIP-IPN through project SIP-20161862.
Keywords: CO2 adsorption, lithium as dopant, porous silica, surfynol as surfactant, thermogravimetric analysis
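The TGA-derived capacity figure quoted above is simply the mass gained during the CO2 adsorption step per gram of dried sorbent. A minimal sketch of that conversion (the sample masses are illustrative assumptions, not the paper's measurements):

```python
# Sketch: CO2 uptake from a TGA adsorption step, expressed as mg of CO2
# per gram of dried sorbent. Sample masses are illustrative only.

def co2_uptake_mg_per_g(m_dry_mg: float, m_after_mg: float) -> float:
    """Uptake = 1000 * (mass gain during CO2 exposure) / (dry sorbent mass)."""
    return 1000.0 * (m_after_mg - m_dry_mg) / m_dry_mg

# e.g. 25.0 mg of Li-doped silica gaining 2.0 mg during CO2 exposure
print(co2_uptake_mg_per_g(25.0, 27.0))  # 80.0 mg/g, inside the reported 70-90 range
```

The drying (pretreatment) step matters for exactly the reason the abstract notes: residual moisture inflates the apparent dry mass and so deflates the computed capacity.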
Procedia PDF Downloads 268