Search results for: root uptake models
514 Blockchain Platform Configuration for MyData Operator in Digital and Connected Health
Authors: Minna Pikkarainen, Yueqiang Xu
Abstract:
The integration of digital technology with existing healthcare processes has been painfully slow, and a huge gap exists between the strictly regulated field of official medical care and the quickly moving field of health and wellness technology. We claim that the promises of preventive healthcare can only be fulfilled when this gap is closed and health care and self-care become a seamless continuum: “correct information, in the correct hands, at the correct time, allowing individuals and professionals to make better decisions”. We call this the connected health approach. Currently, issues related to security, privacy, consumer consent, and data sharing are hindering the implementation of this new paradigm of healthcare. This could be solved by following the MyData principles, which state that individuals should have the right and practical means to manage their data and privacy. MyData infrastructure enables decentralized management of personal data, improves interoperability, makes it easier for companies to comply with tightening data protection regulations, and allows individuals to change service providers without proprietary data lock-ins. This paper tackles today’s unprecedented challenges of enabling and stimulating multiple healthcare data providers and stakeholders to participate more actively in the digital health ecosystem. First, the paper systematically proposes the MyData approach for the healthcare and preventive health data ecosystem. In this research, the work is targeted at health and wellness ecosystems. Each ecosystem consists of key actors, such as 1) the individual (citizen or professional controlling/using the services), i.e., the data subject; 2) services providing personal data (e.g., startups providing data collection apps or devices); 3) health and wellness services utilizing the aforementioned data; and 4) services authorizing access to this data under the individual’s explicit consent.
Second, the research extends the existing four archetypes of orchestrator-driven healthcare data business models for the healthcare industry and proposes a fifth type of healthcare data model, the MyData Blockchain Platform. This new architecture is developed using the Action Design Research approach, a prominent research methodology in the information systems domain. The key novelty of the paper is to expand the health data value chain architecture and design from centralization and pseudo-decentralization to full decentralization, enabled by blockchain; thus, the MyData blockchain platform. The study not only broadens the healthcare informatics literature but also contributes to the theoretical development of the digital healthcare and blockchain research domains with a systemic approach.
Keywords: blockchain, health data, platform, action design
Procedia PDF Downloads 100
513 Payload Bay Berthing of an Underwater Vehicle With Vertically Actuated Thrusters
Authors: Zachary Cooper-Baldock, Paulo E. Santos, Russell S. A. Brinkworth, Karl Sammut
Abstract:
In recent years, large unmanned underwater vehicles such as the Boeing Voyager and the Anduril Ghost Shark have been developed. These vessels can be structured to contain onboard internal payload bays, which can serve a variety of purposes, including the launch and recovery (LAR) of smaller underwater vehicles. The LAR of smaller vessels is extremely important, as it enables transportation over greater distances, increased time on station, data transmission, and operational safety. The larger vessel and its payload bay structure complicate the LAR of UUVs, in contrast to static docks affixed to the seafloor, as they actively affect the local flow field. These flow field impacts require analysis to determine whether UUVs can be safely launched and recovered inside the motherships. This research seeks to determine the hydrodynamic forces exerted on a vertically over-actuated, small, unmanned underwater vehicle (OUUV) during an internal LAR manoeuvre and compare them to those on an under-actuated vessel (UUUV). In this manoeuvre, the OUUV is navigated through the stern wake region of the larger vessel to a set point within the internal payload bay. The manoeuvre is simulated using ANSYS Fluent computational fluid dynamics models, covering the entire recovery of the OUUV and UUUV. The results for the OUUV are compared against those for the UUUV to determine the differences in the exerted forces. Of particular interest are the drag, pressure, turbulence, and flow field effects exerted as the OUUV is driven inside the payload bay of the larger vessel. The hydrodynamic forces and flow field disturbances are used to determine the feasibility of making such an approach. From the simulations, it was determined that there were no significant detrimental physical forces, particularly with regard to turbulence. The flow field effects exerted by the OUUV are significant.
The vertical thrusters generate significant wake structures, but their orientation ensures the wake effects are exerted below the UUV, minimising the impact. It was also seen that the OUUV experiences higher drag forces compared to the UUUV, which will correlate to an increased energy expenditure. This investigation found no key indicators that recovery via a mothership payload bay is not feasible. The turbulence, drag, and pressure phenomena were of a similar magnitude to those of existing static and towed dock structures.
Keywords: underwater vehicles, submarine, autonomous underwater vehicles, AUV, computational fluid dynamics, flow fields, pressure, turbulence, drag
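The reported drag difference between the two hulls can be illustrated with the standard drag equation, F_d = 0.5 ρ v² C_d A. The density, speed, frontal area, and drag coefficients below are purely hypothetical assumptions for illustration; the abstract reports no numerical parameters.

```python
# Illustrative sketch (not the paper's CFD model): the standard drag equation
# shows why a higher effective drag coefficient for the over-actuated hull
# translates into higher force and thus higher energy expenditure.
def drag_force(rho, v, c_d, area):
    """Drag force (N) for fluid density rho (kg/m^3), speed v (m/s),
    dimensionless drag coefficient c_d, and frontal area (m^2)."""
    return 0.5 * rho * v ** 2 * c_d * area

# Hypothetical values: seawater density, 1 m/s approach speed, assumed
# frontal area and drag coefficients (none of these come from the study).
rho_seawater = 1025.0
f_ouuv = drag_force(rho_seawater, v=1.0, c_d=0.9, area=0.12)  # over-actuated
f_uuuv = drag_force(rho_seawater, v=1.0, c_d=0.7, area=0.12)  # under-actuated
```

Under these assumed coefficients, the over-actuated vehicle sees roughly 30% more drag at the same approach speed, consistent with the qualitative finding above.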
Procedia PDF Downloads 91
512 Artificial Intelligence: Reimagining Education
Authors: Silvia Zanazzi
Abstract:
Artificial intelligence (AI) has become an integral part of our world, transitioning from scientific exploration to practical applications that impact daily life. The emergence of generative AI is reshaping education, prompting new questions about the role of teachers, the nature of learning, and the overall purpose of schooling. While AI offers the potential to optimize teaching and learning processes, concerns persist about discrimination and bias arising from training data and algorithmic decisions. There is a risk of a disconnect between the rapid development of AI and the goal of building inclusive educational environments. The prevailing discourse on AI in education often prioritizes efficiency and individual skill acquisition. This narrow focus can undermine the importance of collaborative learning and shared experiences. A growing body of research challenges this perspective, advocating for AI that enhances, rather than replaces, human interaction in education. This study aims to examine the relationship between AI and education critically. A review of existing research will identify both the potential benefits and the risks of AI implementation. The goal is to develop a framework that supports the ethical and effective integration of AI into education, ensuring it serves the needs of all learners. The theoretical reflection will be developed on the basis of a review of national and international scientific literature on artificial intelligence in education. The primary objective is to curate a selection of critical contributions from diverse disciplinary perspectives and/or an inter- and transdisciplinary viewpoint, providing a state-of-the-art overview and a critical analysis of potential future developments. Subsequently, the thematic analysis of these contributions will enable the creation of a framework for understanding and critically analyzing the role of artificial intelligence in schools and education, highlighting promising directions and potential pitfalls.
The expected results are (1) a classification of the cognitive biases present in representations of AI in education and the associated risks, and (2) a categorization of potentially beneficial interactions between AI applications and teaching and learning processes, including those already in use or under development. While not exhaustive, the proposed framework will serve as a guide for critically exploring the complexity of AI in education. It will help to reframe the dystopian visions often associated with technology and facilitate discussions on fostering synergies that balance the ‘dream’ of quality education for all with the realities of AI implementation. By highlighting reductionist models rooted in fragmented and utilitarian views of knowledge, the discourse on artificial intelligence in education has the merit of stimulating the construction of alternative perspectives that can ‘return’ teaching and learning to education, human growth, and the well-being of individuals and communities.
Keywords: education, artificial intelligence, teaching, learning
Procedia PDF Downloads 20
511 A Comparison of Tsunami Impact to Sydney Harbour, Australia at Different Tidal Stages
Authors: Olivia A. Wilson, Hannah E. Power, Murray Kendall
Abstract:
Sydney Harbour is an iconic location with a dense population and low-lying development. On the east coast of Australia, facing the Pacific Ocean, it is exposed to several tsunamigenic trenches. This paper presents a component of the most detailed assessment to date of the potential for earthquake-generated tsunami impact on Sydney Harbour. Models in this study use dynamic tides to account for tide-tsunami interaction. Sydney Harbour’s tidal range is 1.5 m, and the spring tides from January 2015 used in the modelling are close to the full tidal range. The tsunami wave trains modelled include hypothetical tsunami generated by earthquakes of magnitude Mw 7.5, 8.0, 8.5, and 9.0 from the Puysegur and New Hebrides trenches, as well as representations of the historical 1960 Chilean and 2011 Tohoku events. All wave trains are modelled such that the peak wave coincides with both a low tide and a high tide. A single wave train, representing an Mw 9.0 earthquake at the Puysegur trench, is modelled with peak waves coinciding with every hour across a 12-hour tidal phase. Using the hydrodynamic model ANUGA, results are compared according to the impact parameters of inundation area, depth variation, and current speeds. Results show that both maximum inundation area and depth variation are tide dependent. Maximum inundation area increases when coincident with a higher tide; however, hazardous inundation is only observed for the larger waves modelled: NH90high and P90high. The maximum and minimum depths are deeper on higher tides and shallower on lower tides. The difference between maximum and minimum depths varies across different tidal phases, although the differences are slight. Maximum current speeds are shown to be a significant hazard for Sydney Harbour; however, they do not show consistent patterns according to tide-tsunami phasing.
The maximum current speed hazard is shown to be greater in specific locations such as the Spit Bridge, a narrow channel with extensive marine infrastructure. The results presented for Sydney Harbour are novel, and the conclusions are consistent with previous modelling efforts in the greater area. It is shown that the tide must be a consideration for both tsunami modelling and emergency management planning. Modelling with peak tsunami waves coinciding with a high tide would be a conservative approach; however, it must be considered that maximum current speeds may be higher on other tides.
Keywords: emergency management, Sydney, tide-tsunami interaction, tsunami impact
Procedia PDF Downloads 242
510 An Investigation on Opportunities and Obstacles on Implementation of Building Information Modelling for Pre-fabrication in Small and Medium Sized Construction Companies in Germany: A Practical Approach
Authors: Nijanthan Mohan, Rolf Gross, Fabian Theis
Abstract:
The conventional methods used in the construction industry often result in significant rework, since most decisions are taken onsite under the pressure of project deadlines and because of improper information flow, which leads to ineffective coordination. However, today’s architecture, engineering, and construction (AEC) stakeholders demand faster and more accurate deliverables, efficient buildings, and smart processes, which turns out to be a tall order. Hence, the building information modelling (BIM) concept was developed as a solution to fulfil the above-mentioned necessities. Even though BIM has been successfully implemented in much of the world, it is still in its early stages in Germany, since stakeholders are sceptical of its reliability and efficiency. Due to the huge capital requirement, small and medium-sized construction companies are still reluctant to implement the BIM workflow in their projects. The purpose of this paper is to analyse the opportunities and obstacles of implementing BIM for prefabrication. Among all the advantages of BIM, prefabrication is chosen for this paper because it plays a vital role in the time and cost factors of a construction project. The positive impact of prefabrication can be explicitly observed by project stakeholders and participants, which helps break down the skepticism among small-scale construction companies. The analysis consists of the development of a process workflow for implementing prefabrication in building construction, followed by a practical approach executed in two case studies. The first case study represents on-site prefabrication, and the second off-site prefabrication.
The first case study was planned to give the workers at the site first-hand experience with the BIM model so that they could make full use of it, as it is a better representation than the traditional 2D plan. The main aim of the first case study was to create confidence in the implementation of BIM models, which was followed by the execution of off-site prefabrication in the second case study. Based on the case studies, a cost and time analysis was made, and it is inferred that the implementation of BIM for prefabrication can reduce construction time and ensure minimal or no waste, better accuracy, and less problem-solving at the construction site. It is also observed that this process requires more planning time and better communication and coordination between different disciplines, such as mechanical, electrical, plumbing, and architecture, which was the major obstacle to successful implementation. This paper was written from the perspective of small and medium-sized mechanical contracting companies in the private building sector in Germany.
Keywords: building information modelling, construction wastes, pre-fabrication, small and medium sized company
Procedia PDF Downloads 113
509 Role of Indigenous Peoples in Climate Change
Authors: Neelam Kadyan, Pratima Ranga, Yogender
Abstract:
Indigenous peoples are among those most affected by climate change, although they have contributed little to its causes. This is largely a result of their historic dependence on local biological diversity, ecosystem services, and cultural landscapes as a source of their sustenance and well-being. Comprising only four percent of the world’s population, they utilize 22 percent of the world’s land surface. Despite their high exposure-sensitivity, indigenous peoples and local communities are actively responding to changing climatic conditions and have demonstrated their resourcefulness and resilience in the face of climate change. Traditional indigenous territories encompass up to 22 percent of the world’s land surface and coincide with areas that hold 80 percent of the planet’s biodiversity. Also, the greatest diversity of indigenous groups coincides with the world’s largest tropical forest wilderness areas in the Americas (including the Amazon), Africa, and Asia, and 11 percent of world forest lands are legally owned by indigenous peoples and communities. This convergence of biodiversity-significant areas and indigenous territories presents an enormous opportunity to expand efforts to conserve biodiversity beyond parks, which tend to receive most of the funding for biodiversity conservation. Indigenous peoples are carriers of ancestral knowledge and wisdom about this biodiversity. Their effective participation in biodiversity conservation programs, as experts in protecting and managing biodiversity and natural resources, would result in more comprehensive and cost-effective conservation and management of biodiversity worldwide. Indigenous peoples have also played a key role in climate change mitigation and adaptation. The territories of indigenous groups who have been given rights to their lands have been better conserved than the adjacent lands (e.g., in Brazil, Colombia, and Nicaragua).
Preserving large extensions of forest would not only support climate change objectives but would also respect the rights of indigenous peoples and conserve biodiversity. A climate change agenda that fully involves indigenous peoples has many more benefits than one involving only government and/or the private sector. Indigenous peoples are among the groups most vulnerable to the negative effects of climate change. They are also a source of knowledge for many of the solutions that will be needed to avoid or ameliorate those effects. For example, ancestral territories often provide excellent examples of landscape design that can resist the negative effects of climate change. Over the millennia, indigenous peoples have developed adaptation models to climate change. They have also developed genetic varieties of medicinal and useful plants, and animal breeds, with a wider natural range of resistance to climatic and ecological variability.
Keywords: ancestral knowledge, cost effective conservation, management, indigenous peoples, climate change
Procedia PDF Downloads 677
508 Modeling Search-And-Rescue Operations by Autonomous Mobile Robots at Sea
Authors: B. Kriheli, E. Levner, T. C. E. Cheng, C. T. Ng
Abstract:
During the last decades, research interest in the planning, scheduling, and control of emergency response operations, especially people rescue and evacuation from the danger zone of marine accidents, has increased dramatically. Until the survivors (called ‘targets’) are found and saved, losses or damage may occur whose extent depends on the location of the targets and the search duration. The problem is to efficiently search for and detect/rescue the targets as soon as possible with the help of intelligent mobile robots, so as to maximize the number of people saved and/or minimize the search cost under restrictions on the number of people saved within the allowable response time. We consider a special situation in which the autonomous mobile robots (AMRs), e.g., unmanned aerial vehicles and remote-controlled robo-ships, have no operator on board, as they are guided and completely controlled by on-board sensors and computer programs. We construct a mathematical model for the search process in an uncertain environment and provide a new fast algorithm for scheduling the activities of the autonomous robots during search-and-rescue missions after an accident at sea. We presume that in unknown environments, the AMR’s search-and-rescue activity is subject to two types of error: (i) a ‘false-negative’ detection error, where a target object is not discovered (‘overlooked’) by the AMR’s sensors even though the AMR is in the close neighborhood of the target, and (ii) a ‘false-positive’ detection error, also known as a ‘false alarm’, in which a clean place or area is wrongly classified by the AMR’s sensors as a correct target. As the general resource-constrained discrete search problem is NP-hard, we restrict our study to finding local-optimal strategies.
A specificity of the considered operations research problem, in comparison with the traditional Kadane-De Groot-Stone search models, is that in our model the probability of a successful search outcome depends not only on the cost/time/probability parameters assigned to each individual location but also on parameters characterizing the entire history of (unsuccessful) search before selecting the next location. We provide a fast approximation algorithm for finding the AMR route, adopting a greedy search strategy in which, at each step, the on-board computer computes a current search effectiveness value for each location in the zone and selects the location with the highest search effectiveness value. Extensive experiments with random and real-life data provide strong evidence in favor of the suggested operations research model and the corresponding algorithm.
Keywords: disaster management, intelligent robots, scheduling algorithm, search-and-rescue at sea
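The history-dependent greedy strategy described above can be sketched as follows. The effectiveness score (expected detection probability per unit cost) and the Bayesian belief update after each unsuccessful look are illustrative assumptions in the spirit of classical search theory, not the authors' exact formulation; the overlook probability models the false-negative error described in the abstract.

```python
# Minimal sketch of a greedy search schedule with Bayesian updates, assuming
# the target sits in exactly one of n locations and each look at location i
# overlooks a present target with probability 1 - p_detect[i].
def greedy_route(prior, p_detect, cost, steps):
    """Repeatedly pick the location with the highest expected detection
    probability per unit cost, then update beliefs after each miss."""
    belief = list(prior)
    route = []
    for _ in range(steps):
        n = len(belief)
        # search effectiveness: belief * detection probability / search cost
        i = max(range(n), key=lambda k: belief[k] * p_detect[k] / cost[k])
        route.append(i)
        # Bayes update given an unsuccessful look at i: the target may still
        # be there (overlooked) or elsewhere; renormalize over both cases.
        miss = belief[i] * (1.0 - p_detect[i])
        denom = miss + (1.0 - belief[i])
        belief = [(miss if k == i else belief[k]) / denom for k in range(n)]
    return route, belief

route, belief = greedy_route(
    prior=[0.5, 0.3, 0.2], p_detect=[0.9, 0.9, 0.9],
    cost=[1.0, 1.0, 1.0], steps=2)
```

Note how the update makes the schedule history-dependent: after an unsuccessful look at the most promising location, its posterior drops and the robot moves on to the next-best candidate rather than revisiting immediately.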
Procedia PDF Downloads 172
507 Institutional Cooperation to Foster Economic Development: Universities and Social Enterprises
Authors: Khrystyna Pavlyk
Abstract:
In the OECD countries, the percentage of adults with higher education degrees increased by 10% during 2000-2010. Continuously increasing demand for higher education gives universities a chance to become key players in the socio-economic development of a territory (region or city) via knowledge creation, knowledge transfer, and knowledge spillovers. During the previous decade, universities have tried to support spin-offs and start-ups and have introduced courses on sustainability and corporate social responsibility. While much has been done, new trends are starting to emerge in the search for better approaches. Recently, a number of universities created centers that conduct research in the field of social entrepreneurship, which in turn underpins the educational programs run at these universities. The list includes, but is not limited to, the Centre for Social Economy at the University of Liège, the Institute for Social Innovation at ESADE, the Skoll Centre for Social Entrepreneurship at Oxford, the Centre for Social Entrepreneurship at Roskilde, and the Social Entrepreneurship Initiative at INSEAD. The existing literature has already examined social entrepreneurship centers in terms of their position in the institutional structure, initial and additional funding, teaching initiatives, research achievements, and outreach activities. At the same time, universities can become social enterprises themselves. Previous research revealed that universities use both business and social entrepreneurship models. Universities that are mainly driven by a social mission are more likely to transform into social entrepreneurial institutions. Currently, however, there is no clear understanding of what social entrepreneurship in higher education is about, and thus it needs to be studied and promoted at the same time.
The main roles that a socially oriented university can play in city development include: buyer (implementation of socially focused local procurement programs creates partnerships focused on local sustainable growth); seller (centers created by universities can sell socially oriented goods and services, e.g., in consultancy); employer (universities can employ socially vulnerable groups); and business incubator (which will help current students to start their social enterprises). In the paper, we analyze these roles in more detail. We also examine a number of indicators that can be used to assess the impact, both direct and indirect, that universities can have on a city’s economy. The originality of this paper lies mainly not in the methodological approaches used but in the countries evaluated. Social entrepreneurship is still treated as a relatively new phenomenon in post-transitional countries, where social services were provided only by the state for many decades. The paper will provide data and examples both from developed countries (the US and the EU) and from those located in the CIS and CEE regions.
Keywords: social enterprise, university, regional economic development, comparative study
Procedia PDF Downloads 254
506 A Road Map of Success for Differently Abled Adolescent Girls Residing in Pune, Maharashtra, India
Authors: Varsha Tol, Laila Garda, Neelam Bhardwaj, Malata Usar
Abstract:
In India, differently-abled girls suffer from a “dual stigma” of being female and physically challenged. The general consensus is that they are incapable of standing on their own two feet. It was observed that these girls do not have access to educational programs, as most hostels do not keep them after the tenth grade. They are forced to return to a life of poverty and are often considered a liability by their families. Higher education is completely ignored. Parents focus on finding a husband and passing on their ‘burden’ to someone else. An innovative intervention for differently-abled adolescent girls, with the express purpose of mainstreaming them into society, was started by Helplife. The objective was to enrich the lives of these differently-abled adolescent girls through precise research, focused intervention, and professionalism. This programme addresses the physical, mental, and social rehabilitation of the girls, who come from impoverished backgrounds. These adolescents are reached by word of mouth, the snowball technique, and the network of the NGO. Applications are invited from potential candidates and scrutinized by a panel of experts. Selection criteria include the girl’s disability, socio-economic status, and her desire and drive to make a difference in her own life. The six main areas of intervention are accommodation, education, health, professional courses, counseling, and recreational activities. Each girl resides in Helplife for an average period of 2-3 years. Analysis of qualitative data collected at various time points indicates holistic development of character. A quality of life questionnaire showed a significant improvement in scores at three different time points in 75% of the current population under intervention, i.e., 19 girls. To date, 25 girls have successfully passed out of the intervention program, completing their graduation or post-graduation.
Currently, we have 19 differently-abled girls housed in three flats in the Pune district of Maharashtra, of whom 14 are pursuing their graduation or post-graduation. Six of the girls are working in jobs in various sectors. In conclusion, it may be noted that with adequate support and guidance, the sky is the limit. This journey of 12 years has been a learning experience for us, with ups and downs, and the intervention has been modified at every step. Helplife believes in positively impacting the individual lives of differently-abled girls in order to empower them in a holistic manner. The intervention has a positive impact on differently-abled girls. They serve as role models to other differently-abled girls, showing that this is a road map to success: getting empowered to live to one’s full potential and to integrate into society in a dignified way.
Keywords: differently-abled, dual-stigma, empowerment, youth
Procedia PDF Downloads 173
505 Quantification of Lawsone and Adulterants in Commercial Henna Products
Authors: Ruchi B. Semwal, Deepak K. Semwal, Thobile A. N. Nkosi, Alvaro M. Viljoen
Abstract:
The use of Lawsonia inermis L. (Lythraceae), commonly known as henna, has many medicinal benefits, and the plant is used as a remedy for the treatment of diarrhoea, cancer, inflammation, headache, jaundice, and skin diseases in folk medicine. Although henna has long been used for hair dyeing and temporary tattooing, henna body art has become popular over the last 15 years, changing from a traditional bridal and festival adornment to an exotic fashion accessory. The naphthoquinone lawsone is one of the main constituents of the plant and is responsible for its dyeing property. Henna leaves typically contain 1.8–1.9% lawsone, which is used as a marker compound for the quality control of henna products. Adulteration of henna with various toxic chemicals, such as p-phenylenediamine, p-methylaminophenol, p-aminobenzene, and p-toluenodiamine, to produce a variety of colours is very common and has resulted in serious health problems, including allergic reactions. This study aims to assess the quality of henna products collected from different parts of the world by determining the lawsone content, as well as the concentrations of any adulterants present. Ultra high performance liquid chromatography-mass spectrometry (UPLC-MS) was used to determine the lawsone concentrations in 172 henna products. Separation of the chemical constituents was achieved on an Acquity UPLC BEH C18 column using gradient elution (0.1% formic acid and acetonitrile). The UPLC-MS results revealed that of the 172 henna products, 11 contained 1.0-1.8% lawsone and 110 contained 0.1-0.9% lawsone, whereas 51 samples did not contain detectable levels of lawsone. High performance thin layer chromatography was investigated as a cheaper, more rapid technique for the quality control of henna with respect to the lawsone content. The samples were applied using an automatic TLC Sampler 4 (CAMAG) to pre-coated silica plates, which were subsequently developed with acetic acid, acetone, and toluene (0.5:1.0:8.5 v/v).
A Reprostar 3 digital system allowed the images to be captured. The results obtained corresponded to those from the UPLC-MS analysis. Vibrational spectroscopy analysis (MIR or NIR) of the powdered henna, followed by chemometric modelling of the data, indicates that this technique shows promise as an alternative quality control method. Principal component analysis (PCA) was used to investigate the data by observing clustering and identifying outliers. Partial least squares (PLS) multivariate calibration models were constructed for the quantification of lawsone. In conclusion, only a few of the samples analysed contain lawsone in high concentrations, indicating that they are of poor quality. Currently, the presence of adulterants that may have been added to enhance the dyeing properties of the products is being investigated.
Keywords: Lawsonia inermis, paraphenylenediamine, temporary tattooing, lawsone
Procedia PDF Downloads 459
504 Advancing Trustworthy Human-robot Collaboration: Challenges and Opportunities in Diverse European Industrial Settings
Authors: Margarida Porfírio Tomás, Paula Pereira, José Manuel Palma Oliveira
Abstract:
The decline in employment rates across sectors like industry and construction is exacerbated by an aging workforce. This has far-reaching implications for the economy, including skills gaps, labour shortages, productivity challenges due to physical limitations, and workplace safety concerns. To sustain the workforce and pension systems, technology plays a pivotal role. Robots provide valuable support to human workers, and effective human-robot interaction is essential. FORTIS, a Horizon project, aims to address these challenges by creating a comprehensive Human-Robot Interaction (HRI) solution. This solution focuses on multi-modal communication and multi-aspect interaction, with a primary goal of maintaining a human-centric approach. By meeting the needs of both human workers and robots, FORTIS aims to facilitate efficient and safe collaboration. The project encompasses three key activities: 1) A Human-Centric Approach involving data collection, annotation, understanding human behavioural cognition, and contextual human-robot information exchange. 2) A Robotic-Centric Focus addressing the unique requirements of robots during the perception and evaluation of human behaviour. 3) Ensuring Human-Robot Trustworthiness through measures such as human-robot digital twins, safety protocols, and resource allocation. Factor Social, a project partner, will analyse psycho-physiological signals that influence human factors, particularly in hazardous working conditions. The analysis will be conducted using a combination of case studies, structured interviews, questionnaires, and a comprehensive literature review. However, the adoption of novel technologies, particularly those involving human-robot interaction, often faces hurdles related to acceptance. To address this challenge, FORTIS will draw upon insights from Social Sciences and Humanities (SSH), including risk perception and technology acceptance models. 
Throughout its lifecycle, FORTIS will uphold a human-centric approach, leveraging SSH methodologies to inform the design and development of solutions. This project received funding from the European Union's Horizon 2020/Horizon Europe research and innovation program under grant agreement No 101135707 (FORTIS).
Keywords: skills gaps, productivity challenges, workplace safety, human-robot interaction, human-centric approach, social sciences and humanities, risk perception
Procedia PDF Downloads 525
503 Comparing the Effectiveness of the Crushing and Grinding Route of Comminution to That of the Mine to Mill Route in Terms of the Percentage of Middlings Present in Processed Lead-Zinc Ore Samples
Authors: Chinedu F. Anochie
Abstract:
The presence of gangue particles in recovered metal concentrates has been a serious challenge to ore dressing engineers. Middlings lower the quality of concentrates and, in most cases, drastically affect smelter terms, owing to the exorbitant amounts paid by mineral processing industries as treatment charges. Models which encourage the optimization of liberation operations have been utilized in most ore beneficiation industries to reduce the presence of locked particles in valuable concentrates. Moreover, methods such as the incorporation of regrind mills and of scavenger, rougher, and cleaner cells into the milling and flotation plants have been widely employed to tackle these concerns and to optimize the grade-recovery relationship of metal concentrates. This work compared the crushing and grinding method of liberation to the mine-to-mill route by evaluating the proportion of middlings present in selectively processed complex Pb-Zn ore samples. To establish the effect of size reduction operations on the percentage of locked particles present in recovered concentrates, two similar samples of complex Pb-Zn ores were processed. Following the blasting operation, the first ore sample was ground directly in a ball mill (mine-to-mill route of comminution), while the other sample was manually crushed and subsequently ground in the ball mill (crushing and grinding route of comminution). The two samples were separately sieved in a mesh to obtain the desired representative particle sizes. An equal amount of each sample to be processed in the flotation circuit was then obtained with the aid of a weighing balance. These weighed fine particles were simultaneously processed in the flotation circuit using the selective flotation technique. Sodium cyanide, methyl isobutyl carbinol, sodium ethyl xanthate, copper sulphate, sodium hydroxide, lime, and isopropyl xanthate were the reagents used to effect the differential flotation of the two ore samples.
Analysis and calculations showed that the degree of liberation obtained for the ore sample that went through the conventional crushing and grinding route of comminution was higher than that of the directly milled run-of-mine (ROM) ore. Similarly, the proportion of middlings obtained from the separated galena (PbS) and sphalerite (ZnS) concentrates was lower for the crushed and ground ore sample. Concise data have thus been established proving that the mine-to-mill method of size reduction is not the ideal technique for the recovery of quality metal concentrates.
Keywords: comminution, degree of liberation, middlings, mine to mill
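The route comparison rests on the degree of liberation, i.e., the fraction of the valuable mineral occurring as free (fully liberated) rather than locked particles. A minimal sketch of that calculation; the particle counts are hypothetical, since the abstract does not report raw figures:

```python
def degree_of_liberation(free_particles, locked_particles):
    """Fraction of the valuable mineral occurring as fully liberated (free) particles."""
    total = free_particles + locked_particles
    if total == 0:
        raise ValueError("no particles counted")
    return free_particles / total

# Hypothetical particle counts for the two comminution routes
crushed_ground = degree_of_liberation(870, 130)  # crushing + grinding route
mine_to_mill = degree_of_liberation(780, 220)    # direct milling of ROM ore
middlings_pct = 100 * (1 - crushed_ground)       # middlings as % of counted particles
```

A higher degree of liberation for the crushed-and-ground sample corresponds directly to the lower middlings proportion reported above.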
Procedia PDF Downloads 133
502 The Communication of Audit Report: Key Audit Matters in United Kingdom
Authors: L. Sierra, N. Gambetta, M. A. Garcia-Benau, M. Orta
Abstract:
Financial scandals and financial crises have led to an international debate on the value of auditing. In recent years there have been significant legislative reforms aiming to increase markets' confidence in audit services. In particular, there has been a significant debate on the need to improve the communication of auditors with audit report users as a way to improve its informative value and thus, audit quality. The International Auditing and Assurance Standards Board (IAASB) has proposed changes to the audit report standards. The International Standard on Auditing (ISA) 701, Communicating Key Audit Matters (KAM) in the Independent Auditor's Report, has introduced new concepts that go beyond the auditor's opinion and requires auditors to disclose the risks that, from the auditor's point of view, are most significant in the audited company's information. Focusing on the companies included in the Financial Times Stock Exchange 100 index, this study analyses the determinants of the number of KAM disclosed by the auditor in the audit report, as well as the determinants of the different types of KAM reported during the period 2013-2015. To test the hypotheses in the empirical research, two different models were used. The first is a linear regression model identifying the client characteristics, industry sector, and auditor characteristics related to the number of KAM disclosed in the audit report. The second is a logistic regression model identifying the determinants of the number of each KAM type disclosed in the audit report; in line with the risk-based approach to auditing financial statements, we categorized the KAM into two groups: entity-level KAM and accounting-level KAM. Regarding the impact of auditor characteristics on KAM disclosure, the results show that PwC tends to report a larger number of KAM, while KPMG tends to report fewer KAM in the audit report.
Further, PwC reports a larger number of entity-level risk KAM, while KPMG reports fewer account-level risk KAM. The results also show that companies paying higher fees tend to have more entity-level risk KAM and fewer account-level risk KAM. The materiality level is positively related to the number of account-level risk KAM. Additionally, the results show that the relationship between client characteristics and the number of KAM is more evident for account-level risk KAM than for entity-level risk KAM. A highly leveraged company carries a great deal of risk, but for this reason it is usually subject to strong monitoring by capital providers, resulting in fewer account-level risk KAM. The results reveal that the number of account-level risk KAM is strongly related to the industry sector in which the company operates. This study helps to understand the UK audit market, provides information to auditors and, finally, opens new research avenues in academia.
Keywords: FTSE 100, ISA 701, key audit matters, auditor's characteristics, client's characteristics
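The two-model design described above can be sketched on synthetic data as follows; every variable name, coefficient, and sample size here is illustrative and not taken from the study. The linear model is fitted by least squares and the logistic model by a plain gradient-ascent loop:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100                                   # synthetic firm-year observations
size = rng.normal(9.0, 1.0, n)            # log total assets (client size)
fees = rng.normal(1.5, 0.4, n)            # log audit fees
pwc = rng.integers(0, 2, n)               # auditor-is-PwC dummy
X = np.column_stack([np.ones(n), size, fees, pwc])

# Synthetic outcome: number of KAM disclosed per audit report
n_kam = np.maximum(0, np.round(0.5 * pwc + 0.3 * fees + rng.normal(2.0, 0.7, n)))

# Model 1: linear regression of the KAM count on client/auditor traits
beta, *_ = np.linalg.lstsq(X, n_kam, rcond=None)

# Model 2: logistic regression (plain gradient ascent) on the presence of a
# given KAM type, here proxied by a high KAM count
y = (n_kam >= 3).astype(float)
w = np.zeros(X.shape[1])
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))      # predicted probabilities
    w += 0.01 * X.T @ (y - p) / n         # log-likelihood gradient step
```

The signs of the fitted coefficients (e.g., on the PwC dummy) are what carries the interpretation reported in the abstract.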
Procedia PDF Downloads 231
501 Is Obesity Associated with CKD-(unknown) in Sri Lanka? A Protocol for a Cross Sectional Survey
Authors: Thaminda Liyanage, Anuga Liyanage, Chamila Kurukulasuriya, Sidath Bandara
Abstract:
Background: The burden of chronic kidney disease (CKD) is growing rapidly around the world, particularly in Asia. Over the last two decades, Sri Lanka has experienced an epidemic of CKD, with an ever-growing number of patients pursuing medical care due to CKD and its complications, especially in the “Mahaweli” river basin in the north central region of the island nation. This was apparently a new form of CKD that was not attributable to conventional risk factors such as diabetes mellitus, hypertension or infection, and it is widely termed “CKD-unknown” or “CKDu”. In the past decade, a number of small-scale studies were conducted to determine the aetiology, prevalence and complications of CKDu in the North Central region. These hospital-based studies did not provide an accurate estimate of the problem, as merely 10% or less of people with CKD are aware of their diagnosis, even in developed countries with better access to medical care. Interestingly, similar observations were made on the changing epidemiology of obesity in the region, but no formal study has been conducted to date to determine the magnitude of the obesity burden. Moreover, whether increasing obesity in the region is associated with the CKD epidemic is yet to be explored. Methods: We will conduct an area-wide cross-sectional survey among all adult residents of the “Mahaweli” development project area 5, in the North Central Province of Sri Lanka. We will collect relevant medical history, anthropometric measurements, and blood and urine for haematological and biochemical analysis. We expect a participation rate of 75%-85% of all eligible participants. Participation in the study is voluntary; there will be no incentives provided for participation. Every analysis will be conducted in a central laboratory and data will be stored securely. We will calculate the prevalence of obesity and chronic kidney disease, overall and by stage, using the total number of participants as the denominator, and report it per 1000 population.
The association of obesity and CKD will be assessed with regression models, adjusted for potential confounding factors and stratified by potential effect modifiers where appropriate. Results: This study will provide accurate information on the prevalence of obesity and CKD in the region. Furthermore, it will explore the association between obesity and CKD, although causation may not be confirmed. Conclusion: Obesity and CKD are increasingly recognized as major public health problems in Sri Lanka. Clearly, documenting the magnitude of the problem is the essential first step. Our study will provide this vital information, enabling the government to plan a coordinated response to tackle both obesity and CKD in the region.
Keywords: BMI, chronic kidney disease, obesity, Sri Lanka
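The stated reporting plan (total participants as the denominator, rates per 1,000 population) reduces to a one-line calculation; the case and participant counts below are hypothetical:

```python
def prevalence_per_1000(cases, participants):
    """Crude prevalence per 1,000 population, using all participants as the denominator."""
    if participants <= 0:
        raise ValueError("participants must be positive")
    return 1000 * cases / participants

# Hypothetical counts: 135 CKD cases among 5,000 surveyed adults
ckd_rate = prevalence_per_1000(135, 5000)  # 27.0 per 1,000
```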
Procedia PDF Downloads 270
500 Seeking Compatibility between Green Infrastructure and Recentralization: The Case of Greater Toronto Area
Authors: Sara Saboonian, Pierre Filion
Abstract:
There are two distinct planning approaches attempting to transform the North American suburb so as to reduce its adverse environmental impacts. The first, the recentralization approach, proposes intensification, multi-functionality and more reliance on public transit and walking. It thus offers an alternative to the prevailing low density, spatial specialization and automobile dependence of the North American suburb. The second approach concentrates instead on the provision of green infrastructure, which relies on natural systems rather than on highly engineered solutions to deal with the infrastructure needs of suburban areas. There are tensions between these two approaches, as recentralization generally overlooks green infrastructure, which can be space-consuming (as in the case of water retention systems) and thus conflicts with the intensification goals of recentralization. The research investigates three Canadian planned suburban centres in the Greater Toronto Area, where recentralization is the current planning practice, despite rising awareness of the benefits of green infrastructure. Methods include reviewing the literature on green infrastructure planning, a critical analysis of the Ontario provincial plans for recentralization, surveying residents' preferences regarding alternative suburban development models, and interviewing officials who deal with the local planning of the three centres. The case studies expose the difficulties in creating planned suburban centres that accommodate green infrastructure while adhering to recentralization principles. Until now, planners have mostly focussed on recentralization at the expense of green infrastructure. In this context, the frequent lack of compatibility between recentralization and the space requirements of green infrastructure explains the limited presence of such infrastructure in planned suburban centres.
Finally, while much attention has been given in the planning discourse to the economic and lifestyle benefits of recentralization, much less has been given to the wide range of advantages of green infrastructure, which explains the limited public mobilization over the development of green infrastructure networks. The paper will concentrate on ways of combining recentralization with green infrastructure strategies and identify the aspects of the two approaches that are most compatible with each other. The outcome of such blending will marry high-density, public-transit-oriented developments, which generate walkability and street-level animation, with the presence of green space, naturalized settings and reliance on renewable energy. The paper will advance a planning framework that fuses green infrastructure with recentralization, thus ensuring the achievement of higher density and reduced reliance on the car, along with the provision of critical ecosystem services throughout cities. This will support and enhance the objectives of both green infrastructure and recentralization.
Keywords: environmental-based planning, green infrastructure, multi-functionality, recentralization
Procedia PDF Downloads 131
499 Structural Invertibility and Optimal Sensor Node Placement for Error and Input Reconstruction in Dynamic Systems
Authors: Maik Kschischo, Dominik Kahl, Philipp Wendland, Andreas Weber
Abstract:
Understanding and modelling real-world complex dynamic systems in biology, engineering and other fields is often made difficult by incomplete knowledge about the interactions between system states and by unknown disturbances to the system. In fact, most real-world dynamic networks are open systems receiving unknown inputs from their environment. To understand a system and to estimate the state dynamics, these inputs need to be reconstructed from output measurements. Reconstructing the input of a dynamic system from its measured outputs is an ill-posed problem if only a limited number of states is directly measurable. A first requirement for solving this problem is the invertibility of the input-output map. In our work, we exploit the fact that invertibility of a dynamic system is a structural property, which depends only on the network topology. Therefore, it is possible to check for invertibility using a structural invertibility algorithm which counts the number of node-disjoint paths linking inputs and outputs. The algorithm is efficient, even for large networks of up to a million nodes. To understand the structural features influencing the invertibility of a complex dynamic network, we analyze synthetic and real networks using the structural invertibility algorithm. We find that invertibility largely depends on the degree distribution and that dense random networks are easier to invert than sparse inhomogeneous networks. We show that real networks are often very difficult to invert unless the sensor nodes are carefully chosen. To overcome this problem, we present a sensor node placement algorithm that achieves invertibility with a minimum set of measured states. This greedy algorithm is very fast and guaranteed to find an optimal sensor node set if it exists. Our results provide a practical approach to experimental design for open dynamic systems.
Since invertibility is a necessary condition for unknown input observers and data assimilation filters to work, it can be used as a preprocessing step to check whether these input reconstruction algorithms can be successful. If not, we can suggest additional measurements providing sufficient information for input reconstruction. Invertibility is also important for system design and model building. Dynamic models are always incomplete, and synthetic systems act in an environment where they receive inputs or even attack signals from their exterior. Being able to monitor these inputs is an important design requirement, which can be achieved by our algorithms for invertibility analysis and sensor node placement.
Keywords: data-driven dynamic systems, inversion of dynamic systems, observability, experimental design, sensor node placement
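The invertibility test described above reduces to counting vertex-disjoint paths from input to output nodes; the system is structurally invertible when that count equals the number of unknown inputs. A self-contained sketch using the standard node-splitting reduction to unit-capacity max-flow (this is the textbook construction, not the authors' own implementation):

```python
from collections import defaultdict, deque

def num_node_disjoint_paths(edges, inputs, outputs):
    """Count vertex-disjoint directed paths from input nodes to output nodes.

    Node-splitting reduction: each node v becomes v_in -> v_out with unit
    capacity, so no two paths may share a node; a super-source S feeds all
    inputs and a super-sink T drains all outputs. Unit-capacity max-flow
    (Edmonds-Karp) then equals the number of node-disjoint paths.
    """
    cap = defaultdict(int)          # residual capacities
    adj = defaultdict(set)          # residual adjacency (both directions)

    def add_edge(u, v):
        cap[(u, v)] += 1
        adj[u].add(v)
        adj[v].add(u)

    nodes = {n for e in edges for n in e}
    for v in nodes:
        add_edge((v, "in"), (v, "out"))
    for u, v in edges:
        add_edge((u, "out"), (v, "in"))
    for s in inputs:
        add_edge("S", (s, "in"))
    for t in outputs:
        add_edge((t, "out"), "T")

    flow = 0
    while True:
        parent = {"S": None}        # BFS for an augmenting path
        queue = deque(["S"])
        while queue and "T" not in parent:
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if "T" not in parent:
            return flow
        v = "T"
        while parent[v] is not None:  # augment along the found path
            u = parent[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u
        flow += 1
```

A network with input nodes U and sensor nodes Y would then be structurally invertible when `num_node_disjoint_paths(edges, U, Y) == len(U)`.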
Procedia PDF Downloads 150
498 The Prediction of Reflection Noise and Its Reduction by Shaped Noise Barriers
Authors: I. L. Kim, J. Y. Lee, A. K. Tekile
Abstract:
As a consequence of Korea's very high urbanization rate, the number of traffic noise damage cases in areas congested with population and facilities is steadily increasing. Current environmental noise data for the country's major cities show that noise levels exceed the standards set for both day and night. This research comparatively analysed soundproof panel shapes and design factors in search of those that minimize reflection noise. In addition to the normal flat-type panel shape, the reflection noise reduction of swelling-type, combined swelling-and-curved-type, and screen-type panels was evaluated. The noise source model Nord 2000, which often provides more abundant information than comparable models, was used to determine the overall noise level. Based on vehicle categorization in Korea, the noise levels for varying frequencies from different heights of the sound source (directivity heights of the Harmonoise model) were calculated for simulation. Each simulation was made using the ray-tracing method. The noise level was also calculated using the noise prediction program SoundPlan 7.2 for comparison. Noise levels were predicted at receiving points 15 m (R1) and 30 m (R2) away, and at the middle of the road at 2 m (R3). Designing the noise barriers by shape and running the prediction program with the noise source placed on the 2nd of the 6 lanes considered, on the noise barrier side, the reflection noise slightly decreased or increased for all noise barriers. At R1, especially for the screen-type noise barriers, no reduction effect was predicted under any condition. However, the swelling-type showed a decrease of 0.7~1.2 dB at R1, the best reduction effect among the tested noise barriers.
Compared to other forms of noise barriers, the swelling-type was thought to be the most suitable for reducing reflection noise; however, since a slight increase was predicted at R2, further research based on a more sophisticated categorization of the related design factors is necessary. Moreover, as swellings are difficult to produce and the modules are smaller than other panels, swelling-type noise barriers are challenging to install. If these problems are solved, their applicable region will not be more limited than that of other types of noise barriers. Hence, when a swelling-type noise barrier is installed in a downtown region where traffic increases every day, it will both preserve visibility through its transparent walls and diminish the noise pollution due to reflection. Moreover, when decorated with shapes and designs, such a barrier will be more visually attractive than a flat-type one and will thus alleviate the psychological hardships related to noise, beyond the physical soundproofing function of the panels.
Keywords: reflection noise, shaped noise barriers, soundproof panel, traffic noise
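The reductions quoted above are differences in overall sound pressure level; when a reflected contribution is added to the direct path, levels combine energetically rather than arithmetically. A small helper implementing the standard decibel summation (general acoustics, not part of the study's toolchain):

```python
import math

def combine_levels(levels_db):
    """Energetic (power) sum of sound pressure levels in dB:
    L_total = 10 * log10(sum(10 ** (L_i / 10)))."""
    return 10 * math.log10(sum(10 ** (level / 10) for level in levels_db))

# e.g. a 65 dB direct path plus a 58 dB reflected contribution
total = combine_levels([65.0, 58.0])
```

Two equal 60 dB contributions combine to about 63 dB, which is why even a sub-1-dB reflection reduction at the receiver is a meaningful change in reflected energy.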
Procedia PDF Downloads 509
497 Impact of Alkaline Activator Composition and Precursor Types on Properties and Durability of Alkali-Activated Cements Mortars
Authors: Sebastiano Candamano, Antonio Iorfida, Patrizia Frontera, Anastasia Macario, Fortunato Crea
Abstract:
Alkali-activated materials are promising binders obtained by an alkaline attack on fly ash, metakaolin or blast furnace slag, among others. In order to guarantee the highest ecological and cost efficiency, a proper selection of precursors and alkaline activators has to be carried out. These choices deeply affect the microstructure, chemistry and performance of this class of materials. Even though, in recent years, much research has focused on mix designs and curing conditions, the lack of exhaustive activation models and standardized mix designs and curing conditions, together with insufficient investigation of shrinkage behaviour, efflorescence, additives and durability, prevents these materials from being perceived as an effective and reliable alternative to Portland cement. The aim of this study is to develop alkali-activated cement mortars containing high amounts of industrial by-products and waste, such as ground granulated blast furnace slag (GGBFS) and ashes obtained from the combustion of forest biomass in thermal power plants. The experimental campaign was performed in two steps. In the first step, research focused on elucidating how the workability, mechanical properties and shrinkage behaviour of the produced mortars are affected by the type and fraction of each precursor as well as by the composition of the activator solutions. In order to investigate the microstructures and reaction products, SEM and diffractometric analyses were carried out. In the second step, the mortars' durability in harsh environments was evaluated. Mortars obtained using only GGBFS as the binder showed mechanical property development and shrinkage behaviour strictly dependent on the SiO2/Na2O molar ratio of the activator solutions. Compressive strengths were in the range of 40-60 MPa after 28 days of curing at ambient temperature.
Mortars obtained by partial replacement of GGBFS with metakaolin and forest biomass ash showed lower compressive strengths (≈35 MPa) and lower shrinkage values when higher amounts of ash were used. By varying the activator solutions and binder composition, compressive strengths up to 70 MPa associated with shrinkage values of about 4200 microstrains were measured. Durability tests were conducted to assess the acid and thermal resistance of the different mortars. All showed good resistance in a 5 wt% H2SO4 solution, even after 60 days of immersion, while their mechanical properties decreased by 60-90% when exposed to thermal cycles up to 700°C.
Keywords: alkali activated cement, biomass ash, durability, shrinkage, slag
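The SiO2/Na2O molar ratio that governs strength and shrinkage here is typically computed from the oxide masses in the activator solution; a minimal sketch with standard molar masses and hypothetical example masses:

```python
# Standard molar masses (g/mol)
M_SIO2 = 60.08
M_NA2O = 61.98

def activator_modulus(mass_sio2_g, mass_na2o_g):
    """SiO2/Na2O molar ratio (the 'modulus') of an alkaline activator solution."""
    return (mass_sio2_g / M_SIO2) / (mass_na2o_g / M_NA2O)

# Hypothetical sodium silicate + NaOH blend: 1.5 mol SiO2 per 1 mol Na2O
ratio = activator_modulus(90.12, 61.98)
```

Adding NaOH to a sodium silicate solution raises the Na2O mass and therefore lowers the modulus, which is the usual way the ratio is tuned in practice.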
Procedia PDF Downloads 325
496 Using Soil Texture Field Observations as Ordinal Qualitative Variables for Digital Soil Mapping
Authors: Anne C. Richer-De-Forges, Dominique Arrouays, Songchao Chen, Mercedes Roman Dobarco
Abstract:
Most digital soil mapping (DSM) products rely on machine learning (ML) prediction models and/or on pedotransfer functions (PTF) in which the calibration data come from soil analyses performed in labs. However, many other observations (often qualitative, nominal, or ordinal) could be used as proxies of lab measurements or as input data for ML or PTF predictions. DSM and ML are briefly described with some examples taken from the literature. Then, we explore the potential of an ordinal qualitative variable, i.e., the hand-feel soil texture (HFST), estimating the mineral particle-size distribution (PSD): % of clay (0-2 µm), silt (2-50 µm) and sand (50-2000 µm) in 15 classes. The PSD can also be determined by lab measurements (LAST), giving the exact proportions of these particle sizes. However, due to cost constraints, HFST observations are much more numerous and spatially dense than LAST. Soil texture (ST) is a very important soil parameter to map, as it controls many soil properties and functions. Therefore, an essential question arises: is it possible to use HFST as a proxy of LAST for calibration and/or validation of DSM predictions of ST? To answer this question, the first step is to compare HFST with LAST on a representative set where both types of information are available. This comparison was made on ca. 17,400 samples representative of a French region (34,000 km2). The accuracy of HFST was assessed, and each HFST class was characterized by a probability distribution function (PDF) of its LAST values. This makes it possible to randomly replace HFST observations by LAST values while respecting the previously calculated PDF, resulting in a very large increase in the observations available for the calibration or validation of PTF and ML predictions. Some preliminary results are shown. First, the comparison between HFST classes and LAST analyses showed that accuracies could be considered very good when compared to other studies.
The causes of some inconsistencies were explored, and most of them were well explained by other soil characteristics. We then show some examples applying these relationships, and the resulting increase of data, to several issues related to DSM. The first issue is: do the established PDFs enable the use of HFST class observations to improve LAST soil texture prediction? For this objective, we replaced all topsoil HFST observations by values drawn from the PDFs (100 replicates). Results were promising for the PTF we tested (a PTF predicting soil water holding capacity). For the ML prediction of LAST soil texture over the region, we made the same kind of replacement but implemented a 10-fold cross-validation using points where LAST values were available. We obtained only preliminary results, but they were rather promising. We then show another example illustrating the potential of using HFST as validation data. As in numerous countries, HFST observations are very numerous; these promising results pave the way to an important improvement of DSM products in all countries of the world.
Keywords: digital soil mapping, improvement of digital soil mapping predictions, potential of using hand-feel soil texture, soil texture prediction
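The replacement scheme sketched above, drawing a LAST value from the empirical distribution of each HFST class and repeating the draw 100 times, could look as follows; the class names and values are invented for illustration:

```python
import random

# Hypothetical empirical LAST values (e.g., clay %) per hand-feel texture class
last_by_class = {
    "silt loam": [14.0, 16.5, 18.2, 15.1],
    "clay loam": [29.3, 31.0, 33.8, 30.5],
}

def replicate(hfst_observations, n_replicates=100, seed=0):
    """Build n_replicates datasets, replacing each HFST class label by a
    random draw from that class's empirical distribution of lab values."""
    rng = random.Random(seed)
    return [
        [rng.choice(last_by_class[c]) for c in hfst_observations]
        for _ in range(n_replicates)
    ]

reps = replicate(["silt loam", "clay loam", "silt loam"])
```

Each replicate can then be fed to the PTF or ML model, and the spread of the resulting predictions reflects the within-class uncertainty of the hand-feel observations.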
Procedia PDF Downloads 225
495 Approach to Freight Trip Attraction Areas Classification, in Developing Countries
Authors: Adrián Esteban Ortiz-Valera, Angélica Lozano
Abstract:
In developing countries, informal trade is relevant, but it has been little studied in the urban freight transport (UFT) context, although it is a challenge due to the unaccounted demand it produces and the operational limitations it imposes. Hence, UFT operational improvements (initiatives) and freight attraction models must consider informal trade in developing countries. A four-phase approach for characterizing commercial areas in developing countries (considering both formal and informal establishments) is proposed and applied to ten areas in Mexico City. This characterization is required to calculate the real freight trip attraction and then select and/or adapt suitable initiatives. Phase 1 delimits the study area. The following information is obtained for each establishment in a potential area: location (geographic coordinates), industrial sector, industrial subsector, and number of employees. Phase 2 characterizes the study area and proposes a set of indicators. This allows a broad view of the operations and constraints of UFT in the study area. Phase 3 classifies the study area according to seven indicators. Each indicator represents a level of conflict in the area due to the presence of formal (registered) and informal establishments on the sidewalks and streets, affecting urban freight transport and other activities. Phase 4 determines preliminary initiatives that could be implemented in the study area to improve the operation of UFT. The relation between indicators and initiatives allows a preliminary selection of initiatives.
This relation requires knowing the following: a) the problems in the area (congested streets, lack of parking space for freight vehicles, etc.); b) the factors which limit initiatives due to informal establishments (streets narrowed for freight vehicles; inability to move or park during certain periods, among others); c) the problems in the area due to its physical characteristics; and d) the factors which limit initiatives due to the regulations of the area. Several differences among the study areas were observed. As the indicators increase, the areas tend to be less ordered and the limitations on initiatives grow, leaving fewer applicable initiatives. In ordered areas (similar to the commercial areas of developed countries), the current techniques for estimating freight trip attraction (FTA) can be directly applied; however, in areas where the level of order is lower due to the presence of informal trade, this is not recommended because the real FTA would not be estimated. Therefore, a technique which considers the characteristics of the areas of developing countries to obtain data and estimate FTA is required. This estimation can be the basis for proposing feasible initiatives for such zones. The proposed approach provides a wide view of the needs of the commercial areas of developing countries. Knowledge of these needs would allow UFT operation to be improved and its negative impacts to be minimized.
Keywords: freight initiatives, freight trip attraction, informal trade, urban freight transport
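The paper does not publish its scoring rule, but the seven-indicator classification lends itself to a simple additive sketch; the thresholds and class labels below are purely illustrative assumptions:

```python
def classify_area(indicator_flags):
    """Sum of seven binary conflict indicators: a higher score means a less
    ordered area and hence fewer applicable UFT initiatives.
    Thresholds and labels are illustrative, not the paper's own rule."""
    if len(indicator_flags) != 7:
        raise ValueError("expected seven indicators")
    score = sum(indicator_flags)
    if score <= 2:
        return "ordered"
    if score <= 4:
        return "intermediate"
    return "disordered"
```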
Procedia PDF Downloads 141
494 Prediction of Coronary Artery Stenosis Severity Based on Machine Learning Algorithms
Authors: Yu-Jia Jian, Emily Chia-Yu Su, Hui-Ling Hsu, Jian-Jhih Chen
Abstract:
The coronary arteries are the major suppliers of myocardial blood flow. When fat and cholesterol are deposited in the coronary arterial wall, narrowing and stenosis of the artery occur, which may lead to myocardial ischemia and eventually infarction. According to the World Health Organization (WHO), an estimated 7.4 million people died of coronary heart disease in 2015. According to statistics from the Ministry of Health and Welfare in Taiwan, heart disease (excluding hypertensive diseases) ranked second among the top 10 causes of death from 2013 to 2016, and it still shows a growing trend. According to the American Heart Association (AHA), the risk factors for coronary heart disease include: age (> 65 years), sex (men to women in a 2:1 ratio), obesity, diabetes, hypertension, hyperlipidemia, smoking, family history, lack of exercise, and more. We collected a dataset of 421 patients from a hospital located in northern Taiwan who received coronary computed tomography (CT) angiography. There were 300 males (71.26%) and 121 females (28.74%), with ages ranging from 24 to 92 years and a mean age of 56.3 years. Prior to coronary CT angiography, basic data of the patients, including age, gender, body mass index (BMI), diastolic blood pressure, systolic blood pressure, diabetes, hypertension, hyperlipidemia, smoking, family history of coronary heart disease and exercise habits, were collected and used as input variables. The output variable of the prediction module is the degree of coronary artery stenosis. In this study, the dataset was randomly divided into 80% as the training set and 20% as the test set. Four machine learning algorithms, including logistic regression, stepwise regression, neural network and decision tree, were incorporated to generate prediction results. We used the area under the curve (AUC) / accuracy (Acc.)
to compare the four models; the best was the neural network, followed by stepwise logistic regression, decision tree, and logistic regression, with 0.68 / 79%, 0.68 / 74%, 0.65 / 78%, and 0.65 / 74%, respectively. The sensitivity of the neural network was 27.3% and its specificity 90.8%; stepwise logistic regression had a sensitivity of 18.2% and a specificity of 92.3%; the decision tree had a sensitivity of 13.6% and a specificity of 100%; and logistic regression had a sensitivity of 27.3% and a specificity of 89.2%. From the results of this study, we hope to improve the accuracy in the future by tuning the model parameters or using other methods, and to solve the problem of low sensitivity by adjusting the imbalanced proportion of positive and negative data.
Keywords: decision support, computed tomography, coronary artery, machine learning
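All four reported metrics (AUC, accuracy, sensitivity, specificity) can be computed from predicted scores and binary labels without any ML library; a plain-Python sketch of the standard definitions:

```python
def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def sensitivity(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn)

def specificity(y_true, y_pred):
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tn / (tn + fp)

def auc(y_true, scores):
    """AUC as the Mann-Whitney statistic: the probability that a random
    positive case receives a higher score than a random negative case."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

The combination observed in the study (high specificity, low sensitivity) is the typical signature of a model trained on imbalanced data that defaults to the majority (negative) class.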
Procedia PDF Downloads 229
493 Vulnerability Assessment of Groundwater Quality Deterioration Using PMWIN Model
Authors: A. Shakoor, M. Arshad
Abstract:
The utilization of groundwater resources in irrigation has significantly increased during the last two decades due to constrained canal water supplies. More than 70% of the farmers in Punjab, Pakistan, depend directly or indirectly on groundwater to meet their crop water demands; hence, an unchecked paradigm shift has resulted in aquifer depletion and deterioration. Therefore, comprehensive research was carried out in central Punjab, Pakistan, regarding the spatiotemporal variation in groundwater level and quality. Processing MODFLOW for Windows (PMWIN) and MT3D (solute transport) models were used to predict existing and future groundwater level and quality until 2030. A comprehensive data set of aquifer lithology, canal network, groundwater level, groundwater salinity, evapotranspiration, groundwater abstraction, recharge, etc. was used in the PMWIN model development. The model was successfully calibrated and validated with respect to groundwater level for the periods 2003-2007 and 2008-2012, respectively. The coefficient of determination (R2) and model efficiency (MEF) for the calibration and validation periods were calculated as 0.89 and 0.98, respectively, indicating a high level of correlation between the calculated and measured data. For the solute transport model (MT3D), values of advection and dispersion parameters were used. The model was then run for a future scenario up to 2030, assuming no substantial change in climate and a gradually increasing groundwater abstraction rate. The predicted results revealed that the groundwater level would decline by 0.0131 to 1.68 m/year from 2013 to 2030, with the maximum decline on the lower side of the study area, where the canal system infrastructure is sparse. This lowering of the groundwater level might increase tubewell installation and pumping costs.
Similarly, the predicted total dissolved solids (TDS) of the groundwater would increase by 6.88 to 69.88 mg/L/year during 2013 to 2030, with the maximum increase on the lower side. It was found that by 2030, good-quality water would decrease by 21.4%, while marginal- and hazardous-quality water would increase by 19.28% and 2%, respectively. The simulated results showed that the salinity of the study area had increased due to the intrusion of salts. This deterioration of groundwater quality would cause soil salinity and ultimately a reduction in crop productivity. It was concluded from the predicted results of the groundwater model that groundwater quality deteriorated with the depth of the water table, i.e., TDS increased with declining groundwater level. It is recommended that agronomic and engineering practices, i.e., land leveling, rainwater harvesting, skimming wells, ASR (aquifer storage and recovery) wells, etc., be integrated to improve the management of groundwater for higher crop production in salt-affected soils.
Keywords: groundwater quality, groundwater management, PMWIN, MT3D model
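The calibration statistics quoted above (R² and model efficiency, MEF, commonly computed as the Nash-Sutcliffe efficiency) can be reproduced from paired observed and simulated heads. A minimal sketch using hypothetical water-table depths, not the study's data; the helper names are illustrative:

```python
import numpy as np

def r_squared(observed, simulated):
    # Coefficient of determination as the squared Pearson correlation
    obs, sim = np.asarray(observed, float), np.asarray(simulated, float)
    r = np.corrcoef(obs, sim)[0, 1]
    return r ** 2

def nash_sutcliffe(observed, simulated):
    # Nash-Sutcliffe efficiency: 1 = perfect fit, <= 0 = no better than the mean
    obs, sim = np.asarray(observed, float), np.asarray(simulated, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - np.mean(obs)) ** 2)

# Hypothetical water-table depths (m) at observation wells vs. model output
obs = [5.2, 5.8, 6.1, 6.9, 7.4, 8.0]
sim = [5.1, 5.9, 6.3, 6.8, 7.5, 8.2]
print(round(r_squared(obs, sim), 3), round(nash_sutcliffe(obs, sim), 3))
```

Values near 1 for both statistics, as reported in the abstract, indicate close agreement between calculated and measured heads.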
Procedia PDF Downloads 378
492 Hydrodynamic Analysis of Payload Bay Berthing of an Underwater Vehicle With Vertically Actuated Thrusters
Authors: Zachary Cooper-Baldock, Paulo E. Santos, Russell S. A. Brinkworth, Karl Sammut
Abstract:
In recent years, large unmanned underwater vehicles such as the Boeing Echo Voyager and Anduril Ghost Shark have been developed. These vessels can be structured to contain onboard internal payload bays. These payload bays can serve a variety of purposes, including the launch and recovery (LAR) of smaller underwater vehicles. The LAR of smaller vessels is extremely important, as it enables transportation over greater distances, increased time on station, data transmission and operational safety. The larger vessel and its payload bay structure complicate the LAR of UUVs, in contrast to static docks affixed to the seafloor, as they actively impact the local flow field. These flow field impacts require analysis to determine whether UUVs can be safely launched and recovered inside the motherships. This research seeks to determine the hydrodynamic forces exerted on a vertically over-actuated, small, unmanned underwater vehicle (OUUV) during an internal LAR manoeuvre and compare this to an under-actuated vessel (UUUV). In this manoeuvre, the OUUV is navigated through the stern wake region of the larger vessel to a set point within the internal payload bay. The manoeuvre is simulated using ANSYS Fluent computational fluid dynamics models, covering the entire recovery of the OUUV and UUUV. The analysis of the OUUV is compared against the UUUV to determine the differences in the exerted forces. Of particular interest are the drag, pressure, turbulence and flow field effects exerted as the OUUV is driven inside the payload bay of the larger vessel. The hydrodynamic forces and flow field disturbances are used to determine the feasibility of making such an approach. From the simulations, it was determined that there were no significant detrimental physical forces, particularly with regard to turbulence. The flow field effects exerted by the OUUV are significant.
The vertical thrusters generate significant wake structures, but their orientation ensures the wake effects are directed below the UUV, minimising their impact. It was also seen that the OUUV experiences higher drag forces than the UUUV, which corresponds to increased energy expenditure. This investigation found no key indicators that recovery via a mothership payload bay is not feasible. The turbulence, drag and pressure phenomena were of a similar magnitude to those of existing static and towed dock structures.
Keywords: underwater vehicles, submarine, autonomous underwater vehicles, auv, computational fluid dynamics, flow fields, pressure, turbulence, drag
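The drag forces extracted from the CFD simulations are not restated here, but the quadratic drag law that governs the trend (drag rising with the square of approach speed) can be illustrated. A minimal sketch with assumed, purely illustrative values for the drag coefficient and frontal area of a small UUV:

```python
def drag_force(rho, cd, area, velocity):
    # Quadratic drag law: F_D = 1/2 * rho * C_d * A * v^2
    return 0.5 * rho * cd * area * velocity ** 2

# Illustrative values (not from the paper): seawater density, an assumed
# drag coefficient and frontal area for a small UUV approaching at 1.5 m/s
rho_seawater = 1025.0  # kg/m^3
f = drag_force(rho_seawater, cd=0.8, area=0.12, velocity=1.5)
print(f"drag ~ {f:.1f} N")
```

Doubling the approach velocity quadruples the drag, which is why the over-actuated vehicle's higher effective speed through the wake region translates directly into higher energy expenditure.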
Procedia PDF Downloads 78
491 Childhood Adversity and Delinquency in Youth: Self-Esteem and Depression as Mediators
Authors: Yuhui Liu, Lydia Speyer, Jasmin Wertz, Ingrid Obsuth
Abstract:
Childhood adversities refer to situations in which a child's basic needs for safety and support are compromised, leading to substantial disruptions in their emotional, cognitive, social, or neurobiological development. Given the prevalence of adversities (8%-39%), their impact on developmental outcomes is difficult to avoid entirely. Delinquency is an important consequence of childhood adversity, given its potential to cause violence and other forms of victimisation, affecting victims, delinquents, their families, and society as a whole. Studying mediators helps explain the link between childhood adversity and delinquency, which aids in designing effective intervention programmes that target explanatory variables to disrupt the pathway and mitigate the effects of childhood adversities on delinquency. The Dimensional Model of Adversity and Psychopathology suggests that threat-based adversities influence outcomes through emotion processing, while deprivation-based adversities do so through cognitive mechanisms. Thus, it is essential to consider a wide range of threat-based and deprivation-based adversities, their co-occurrence, and their associations with delinquency through cognitive and emotional mechanisms. This study employs the Millennium Cohort Study, a nationally representative sample tracking the development of approximately 19,000 individuals born across England, Scotland, Wales and Northern Ireland. Parallel mediation models compare the mediating roles of self-esteem (cognitive) and depression (affective) in the associations between childhood adversities and delinquency. Eleven types of childhood adversity were assessed both individually and through latent class analysis, considering adversity experiences from birth to early adolescence. This approach aimed to capture how threat-based, deprivation-based, or combined threat- and deprivation-based adversities are associated with delinquency.
Eight latent classes were identified: three classes (low adversity, especially direct and indirect violence; low childhood and moderate adolescent adversities; and persistent poverty with declining bullying victimisation) were negatively associated with delinquency. In contrast, three classes (high parental alcohol misuse; overall high adversities, especially household instability; and high adversity) were positively associated with delinquency. When mediators were included, all classes showed a significant association with delinquency through depression, but not through self-esteem. Among the eleven single adversities, seven were positively associated with delinquency, with five linked through depression and none through self-esteem. The results highlight the importance of affective variables, not just for threat-based but also for deprivation-based adversities. Academically, this suggests exploring other mechanisms linking adversities and delinquency, since some adversities are linked through neither depression nor self-esteem. Clinically, intervention programmes should focus on affective variables such as depression to mitigate the effects of childhood adversities on delinquency.
Keywords: childhood adversity, delinquency, depression, self-esteem
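The parallel mediation logic described above, where each indirect effect is the product of an adversity-to-mediator path and a mediator-to-outcome path, can be sketched with ordinary least squares. All data below are simulated for illustration only and are not the Millennium Cohort Study data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Synthetic data mimicking the design: adversity -> depression -> delinquency,
# with self-esteem as a second (here inert) mediator
adversity = rng.normal(size=n)
depression = 0.5 * adversity + rng.normal(size=n)
self_esteem = rng.normal(size=n)  # unrelated to adversity in this simulation
delinquency = 0.4 * depression + 0.1 * adversity + rng.normal(size=n)

def ols_slopes(y, *xs):
    # Least-squares slope coefficients for y ~ 1 + xs (intercept dropped)
    X = np.column_stack([np.ones(len(y))] + list(xs))
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

a_dep, = ols_slopes(depression, adversity)   # a path for depression
a_se, = ols_slopes(self_esteem, adversity)   # a path for self-esteem
# b paths, estimated jointly while controlling for adversity (direct effect)
b_dep, b_se, _ = ols_slopes(delinquency, depression, self_esteem, adversity)

# Indirect effects as products of coefficients
print(f"indirect via depression:  {a_dep * b_dep:.3f}")
print(f"indirect via self-esteem: {a_se * b_se:.3f}")
```

In this toy setup only the depression pathway carries a non-trivial indirect effect, mirroring the pattern the abstract reports; a full analysis would add bootstrapped confidence intervals for the products.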
Procedia PDF Downloads 32
490 An A-Star Approach for the Quickest Path Problem with Time Windows
Authors: Christofas Stergianos, Jason Atkin, Herve Morvan
Abstract:
As air traffic increases, more airports are interested in utilizing optimization methods. Many processes happen in parallel at an airport, and complex models are needed in order to provide a reliable solution that can be implemented for ground movement operations. The ground movement of aircraft at an airport, allocating a path for each aircraft to follow in order to reach its destination (e.g. runway or gate), is one process that could be optimized. The Quickest Path Problem with Time Windows (QPPTW) algorithm was developed to provide conflict-free routing of vehicles and has been applied to routing aircraft around an airport. It was subsequently modified to increase its accuracy for airport applications. These modifications take into consideration specific characteristics of the problem, such as: the pushback process, which accounts for the extra time needed to push back an aircraft and turn its engines on; stand holding, where any waiting should be allocated to the stand; and runway sequencing, where the optimized take-off sequence of the aircraft has to be respected. QPPTW searches for the quickest path by expanding the search in all directions, similarly to Dijkstra's algorithm. Finding a way to direct the expansion can potentially assist the search and achieve better performance. We have further modified the QPPTW algorithm to use a heuristic approach to guide the search. This new algorithm is based on the A-star search method but estimates the remaining time (instead of distance) in order to assess how far away the target is. It is important to consider the remaining time needed to reach the target, so that delays caused by other aircraft can be part of the optimization. All other characteristics are still considered, and time windows are still used in order to route multiple aircraft rather than a single aircraft.
In this way the quickest path is found for each aircraft while taking into account the movements of the previously routed aircraft. After running experiments using a week of real aircraft data from Zurich Airport, the new algorithm (A-star QPPTW) was found to route aircraft much more quickly, being especially fast in routing departing aircraft, where pushback delays are significant. On average, A-star QPPTW could route a full day (755 to 837 aircraft movements) 56% faster than the original algorithm. In total, routing a full week of aircraft took only 12 seconds with the new algorithm, 15 seconds faster than the original. For real-time application the algorithm needs to be very fast, and this speed increase will allow us to add additional features and complexity, allowing further integration with other airport processes and leading to more optimized and environmentally friendly airports.
Keywords: a-star search, airport operations, ground movement optimization, routing and scheduling
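The guided search described above can be sketched as a standard A* loop in which edge weights are traversal times and the heuristic estimates the remaining time to the target. This is a simplified illustration on a hypothetical toy taxiway graph; the full QPPTW additionally tracks the time windows reserved by previously routed aircraft:

```python
import heapq

def a_star_time(graph, h_time, start, goal):
    """A* where edge weights are traversal times (s) and h_time[node] is an
    admissible lower bound on the remaining time to the goal."""
    # Entries are (f = g + h, g = elapsed time, node, path so far)
    open_set = [(h_time[start], 0.0, start, [start])]
    best = {}
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return g, path
        if best.get(node, float("inf")) <= g:
            continue  # already expanded via a quicker route
        best[node] = g
        for nbr, dt in graph[node]:
            heapq.heappush(open_set, (g + dt + h_time[nbr], g + dt, nbr, path + [nbr]))
    return None

# Toy taxiway graph: edges are (neighbour, traversal time in seconds)
graph = {
    "gate": [("A", 30), ("B", 90)],
    "A": [("B", 20), ("runway", 120)],
    "B": [("runway", 40)],
    "runway": [],
}
h_time = {"gate": 80, "A": 60, "B": 40, "runway": 0}  # assumed lower bounds
print(a_star_time(graph, h_time, "gate", "runway"))  # → (90.0, ['gate', 'A', 'B', 'runway'])
```

Because the heuristic never overestimates the remaining time, the first time the target is popped the route is optimal, while far fewer nodes are expanded than in an undirected Dijkstra-style search.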
Procedia PDF Downloads 231
489 The Sea Striker: The Relevance of Small Assets Using an Integrated Conception with Operational Performance Computations
Authors: Gaëtan Calvar, Christophe Bouvier, Alexis Blasselle
Abstract:
This paper presents the Sea Striker, a compact hydrofoil designed to address some of the issues raised by recent evolutions in naval missions, threats and operation theatres in modern warfare. Able to perform a wide range of operations, the Sea Striker is a 40-meter stealth surface combatant equipped with a gas turbine and aft and forward foils to reach high speeds. The Sea Striker's stealthiness results from the combination of a composite structure, exterior design, and advanced sensor integration. The ship is fitted with a powerful and adaptable combat system, ensuring a versatile and efficient response to modern threats. Lightly manned with a core crew of 10, the hydrofoil is highly automated and can be remotely piloted for special forces operations or transit. This kind of ship is not new: it has been used in the past by different navies, for example by the US Navy with the USS Pegasus. Nevertheless, recent evolutions in science and technology on the one hand, and the emergence of new missions, threats and operation theatres on the other, put forward this concept as an answer to today's operational challenges. Indeed, even if multiple opinions and analyses can be offered regarding modern warfare and naval surface operations, general observations and tendencies can be drawn, such as the major increase in the types, ranges and, more generally, capabilities of sensors and weapons; the emergence of new versatile and evolving threats and enemies, such as asymmetric groups, drone swarms or hypersonic missiles; and the growing number of operation theatres located in coastal and shallow waters. This research involved a complete study of the ship, together with operational performance computations, to justify the relevance of using ships like the Sea Striker in naval surface operations.
For the selected scenarios, the conception process enabled the performance, namely a “Measure of Efficiency” in the NATO framework, to be measured for two different kinds of models: a centralized, classic model using large and powerful ships, and a distributed model relying on several Sea Strikers. After this stage, a comparison of the two models was performed. Lethal, agile, stealthy, compact and fitted with a complete set of sensors, the Sea Striker is a new major player in modern warfare and constitutes a very attractive option between the naval unit and the combat helicopter, enabling high operational performance at a reduced cost.
Keywords: surface combatant, compact, hydrofoil, stealth, velocity, lethal
Procedia PDF Downloads 117
488 The Application of Raman Spectroscopy in Olive Oil Analysis
Authors: Silvia Portarena, Chiara Anselmi, Chiara Baldacchini, Enrico Brugnoli
Abstract:
Extra virgin olive oil (EVOO) is a complex matrix mainly composed of fatty acids and other minor compounds, among which carotenoids are well known for their antioxidant function, a key mechanism of protection against cancer, cardiovascular diseases, and macular degeneration in humans. EVOO composition in terms of these constituents is generally the result of a complex combination of genetic, agronomic and environmental factors. To selectively improve the quality of EVOOs, the role of each factor in its biochemical composition needs to be investigated. By selecting fruits from four different cultivars similarly grown and harvested, it was demonstrated that Raman spectroscopy, combined with chemometric analysis, is able to discriminate the different cultivars, also as a function of harvest date, based on the relative content and composition of fatty acids and carotenoids. In particular, a correct classification of up to 94.4% of samples, according to cultivar and maturation stage, was obtained. Moreover, using gas chromatography and high-performance liquid chromatography as reference techniques, the Raman spectral features further allowed models to be built, based on partial least squares regression, that were able to predict the relative amounts of the main fatty acids and the main carotenoids in EVOO with high coefficients of determination. Besides genetic factors, climatic parameters such as light exposure, distance from the sea, temperature, and amount of precipitation could have a strong influence on the EVOO composition of both major and minor compounds. This suggests that Raman spectra could act as a specific fingerprint for the geographical discrimination and authentication of EVOO. To understand the influence of environment on EVOO Raman spectra, samples from seven regions along the Italian coasts were selected and analyzed.
In particular, a dual approach was used, combining Raman spectroscopy and isotope ratio mass spectrometry (IRMS) with principal component and linear discriminant analysis. A correct classification of 82% of EVOOs based on their regional geographical origin was obtained. Raman spectra were obtained with a Super Labram spectrometer equipped with an argon laser (514.5 nm wavelength). Analyses of stable isotope content ratios were performed using an isotope ratio mass spectrometer connected to an elemental analyzer and a pyrolysis system. These studies demonstrate that Raman spectroscopy is a valuable and useful technique for the analysis of EVOO. In combination with statistical analysis, it makes possible the assessment of specific sample contents and allows oils to be classified according to their geographical and varietal origin.
Keywords: authentication, chemometrics, olive oil, raman spectroscopy
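The chemometric pipeline described above, dimensionality reduction followed by supervised classification, can be sketched as follows. This illustration uses synthetic "spectra" and a nearest-centroid rule in principal-component space as a simple stand-in for the paper's linear discriminant analysis:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "spectra": 3 cultivars x 20 samples x 50 wavenumber bins, each
# cultivar with a shifted mean spectrum (illustrative data, not real EVOO)
means = rng.normal(size=(3, 50))
X = np.vstack([m + 0.3 * rng.normal(size=(20, 50)) for m in means])
y = np.repeat([0, 1, 2], 20)

# PCA via SVD on mean-centred data, keeping 5 components
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:5].T

# Nearest-centroid classification in PC space (stand-in for LDA)
centroids = np.array([scores[y == k].mean(axis=0) for k in range(3)])
pred = np.argmin(((scores[:, None, :] - centroids) ** 2).sum(-1), axis=1)
print(f"training classification rate: {100 * (pred == y).mean():.1f}%")
```

A real analysis would cross-validate the classification rate and use proper LDA on the retained components, which is how figures such as the 94.4% and 82% rates above are typically obtained.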
Procedia PDF Downloads 332
487 Family Income and Parental Behavior: Maternal Personality as a Moderator
Authors: Robert H. Bradley, Robert F. Corwyn
Abstract:
There is abundant research showing that socio-economic status is implicated in parenting. However, additional factors such as family context, parent personality, parenting history and child behavior also help determine how parents enact the role of caregiver. Each of these factors not only helps determine how a parent will act in a given situation, but each can serve to moderate the influence of the other factors. Personality has long been studied as a factor that influences parental behavior, but it has almost never been considered as a moderator of family contextual factors. For this study, relations between three maternal personality characteristics (agreeableness, extraversion, neuroticism) and four aspects of parenting (harshness, sensitivity, stimulation, learning materials) were examined when children were 6 months, 36 months, and 54 months old and again at 5th grade. Relations between these three aspects of personality and the overall home environment were also examined. A key concern was whether maternal personality characteristics moderated relations between household income and the four aspects of parenting and between household income and the overall home environment. The data for this study were taken from the NICHD Study of Early Child Care and Youth Development (NICHD SECCYD). The total sample consisted of 1364 families living in ten different sites in the United States. However, the samples analyzed included only those with complete data on all four parenting outcomes (i.e., sensitivity, harshness, stimulation, and provision of learning materials), income, maternal education and all three measures of personality (i.e., agreeableness, neuroticism, extraversion) at each age examined. Results from hierarchical regression analysis showed that mothers high in agreeableness were more likely to demonstrate sensitivity and stimulation as well as provide more learning materials to their children but were less likely to manifest harshness. 
Maternal agreeableness also consistently moderated the effects of low income on parental behavior. Mothers high in extraversion were more likely to provide stimulation and learning materials, with extraversion serving as a moderator of low income for both. By contrast, mothers high in neuroticism were less likely to demonstrate positive aspects of parenting and more likely to manifest negative aspects (e.g., harshness). Neuroticism also moderated the influence of low income on parenting, especially for stimulation and learning materials. The most consistent effects of parent personality were on the overall home environment, with significant main and interaction effects observed in 11 of the 12 models tested. These findings suggest that it may behoove professionals who work with parents living in adverse circumstances to consider parental personality, to better target prevention or intervention efforts aimed at supporting parents' efforts to act in ways that benefit children.
Keywords: home environment, household income, learning materials, personality, sensitivity, stimulation
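Moderation of the kind reported above is conventionally tested by adding an income x personality product term in a hierarchical regression. A sketch on synthetic data; the coefficients below are invented for illustration and are not estimates from the NICHD SECCYD:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3000

# Synthetic standardised variables: agreeableness buffers the effect of
# low income on maternal sensitivity (negative interaction coefficient)
income = rng.normal(size=n)
agree = rng.normal(size=n)
sensitivity = 0.3 * income + 0.2 * agree - 0.15 * income * agree + rng.normal(size=n)

# Hierarchical step: main effects plus the income x personality product term
X = np.column_stack([np.ones(n), income, agree, income * agree])
beta, *_ = np.linalg.lstsq(X, sensitivity, rcond=None)
b0, b_inc, b_agr, b_int = beta
print(f"interaction coefficient: {b_int:.3f}")  # moderation if reliably non-zero
```

A reliably non-zero interaction coefficient means the income-sensitivity slope differs across levels of the personality trait, which is the pattern the study probes for each of the four parenting outcomes.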
Procedia PDF Downloads 211
486 The Beneficial Effects of Inhibition of Hepatic Adaptor Protein Phosphotyrosine Interacting with PH Domain and Leucine Zipper 2 on Glucose and Cholesterol Homeostasis
Authors: Xi Chen, King-Yip Cheng
Abstract:
Hypercholesterolemia, characterized by high low-density lipoprotein cholesterol (LDL-C), raises cardiovascular event rates in patients with type 2 diabetes (T2D). Although several drugs, such as statins and PCSK9 inhibitors, are available for the treatment of hypercholesterolemia, they exert detrimental effects on glucose metabolism and hence increase the risk of T2D. On the other hand, the drugs used to treat T2D have minimal effect on improving the lipid profile. Therefore, there is an urgent need to develop treatments that can simultaneously improve glucose and lipid homeostasis. Adaptor protein phosphotyrosine interacting with PH domain and leucine zipper 2 (APPL2) causes insulin resistance in the liver and skeletal muscle by inhibiting insulin and adiponectin actions in animal models. Single-nucleotide polymorphisms in the APPL2 gene have been associated with LDL-C, non-alcoholic fatty liver disease, and coronary artery disease in humans. The aim of this project is to investigate whether an APPL2 antisense oligonucleotide (ASO) can alleviate diet-induced T2D and hypercholesterolemia. A high-fat diet (HFD) was used to induce obesity and insulin resistance in mice. GalNAc-conjugated APPL2 ASO (GalNAc-APPL2-ASO) was used to selectively silence hepatic APPL2 expression in C57BL/6J mice. Glucose, lipid, and energy metabolism were monitored. Immunoblotting and quantitative PCR analysis showed that GalNAc-APPL2-ASO treatment reduced APPL2 expression in the liver but not in other tissues, such as adipose tissue, kidney, muscle, and heart. Glucose tolerance and insulin sensitivity tests revealed that GalNAc-APPL2-ASO progressively improved glucose tolerance and insulin sensitivity. Blood chemistry analysis revealed that the mice treated with GalNAc-APPL2-ASO had significantly lower circulating levels of total cholesterol and LDL cholesterol.
However, there was no difference in circulating levels of high-density lipoprotein (HDL) cholesterol, triglycerides, or free fatty acids between mice treated with GalNAc-APPL2-ASO and GalNAc-Control-ASO. No obvious effect on food intake, body weight, or liver injury markers was found after GalNAc-APPL2-ASO treatment, supporting its tolerability and safety. We showed that selectively silencing hepatic APPL2 alleviated insulin resistance and hypercholesterolemia and improved energy metabolism in a diet-induced obese mouse model, indicating APPL2 as a therapeutic target for metabolic diseases.
Keywords: APPL2, antisense oligonucleotide, hypercholesterolemia, type 2 diabetes
Procedia PDF Downloads 67
485 The Reliability and Shape of the Force-Power-Velocity Relationship of Strength-Trained Males Using an Instrumented Leg Press Machine
Authors: Mark Ashton Newman, Richard Blagrove, Jonathan Folland
Abstract:
The force-velocity (F-V) profile of an individual has been shown to influence success in ballistic movements independently of the individual's maximal power output; therefore, effective and accurate evaluation of an individual's F-V characteristics, and not solely maximal power output, is important. The relatively narrow range of loads typically utilised during force-velocity profiling protocols, due to the difficulty of obtaining force data at high velocities, may bring into question the accuracy of the F-V slope, along with predictions of the maximum force the system can produce at zero velocity (F₀) and the theoretical maximum velocity against no load (V₀). As such, the reliability of the slope of the force-velocity profile, as well as of V₀, has been shown to be relatively poor in comparison with F₀ and maximal power, and it has been recommended to assess velocity at loads closer to both F₀ and V₀. The aim of the present study was to assess the relative and absolute reliability of a novel instrumented leg press machine that enables the assessment of force and velocity data at loads from ≤ 10% of one repetition maximum (1RM) through to 1RM during a ballistic leg press movement. The reliability of maximal and mean force, velocity, and power, as well as the respective force-velocity and power-velocity relationships and the linearity of the force-velocity relationship, were evaluated. Sixteen strength-trained males (23.6 ± 4.1 years; 177.1 ± 7.0 cm; 80.0 ± 10.8 kg) attended four sessions; during the initial visit, participants were familiarised with the leg press, modified to include a mounted force plate (Type SP3949, Force Logic, Berkshire, UK) and a Micro-Epsilon WDS-2500-P96 linear positional transducer (LPT) (Micro-Epsilon, Merseyside, UK). Peak isometric force (IsoMax) and a dynamic 1RM, both from a starting position of 81% leg length, were recorded for the dominant leg.
Visits two to four saw the participants carry out the leg press movement at loads equivalent to ≤ 10%, 30%, 50%, 70%, and 90% of 1RM. IsoMax was recorded during each testing visit prior to the dynamic F-V profiling repetitions. The novel leg press machine used in the present study appears to be a reliable tool for measuring force- and velocity-related variables across a range of loads, including velocities closer to V₀, when compared with some of the findings in the published literature. Both linear and polynomial models demonstrated good to excellent levels of reliability for the F-V slope and F₀, respectively, with reliability for V₀ being good using a linear model but poor using a second-order polynomial model. A polynomial regression model may therefore be most appropriate when using a similar unilateral leg press setup to predict maximal force production capabilities, given only a 5% difference between F₀ and the obtained IsoMax values, with a linear model being best suited to predicting V₀.
Keywords: force-velocity, leg-press, power-velocity, profiling, reliability
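The F₀, V₀, and slope parameters discussed above come from fitting the F-V relationship; under a linear model, peak power falls out analytically as F₀·V₀/4. A sketch with hypothetical force-velocity pairs, not the study's measurements:

```python
import numpy as np

# Hypothetical force-velocity pairs from leg-press trials at ~10-90% 1RM
velocity = np.array([0.25, 0.60, 0.95, 1.30, 1.65])        # m/s
force = np.array([2400.0, 2050.0, 1700.0, 1350.0, 1000.0])  # N

# Linear F-V model: F(v) = F0 + slope * v  (slope is negative)
slope, F0 = np.polyfit(velocity, force, 1)
V0 = -F0 / slope        # velocity-axis intercept (theoretical zero-load velocity)
Pmax = F0 * V0 / 4.0    # peak of the parabolic power-velocity curve

print(f"F0 ~ {F0:.0f} N, V0 ~ {V0:.2f} m/s, Pmax ~ {Pmax:.0f} W")
```

Because F₀ and V₀ are extrapolated intercepts, their precision depends heavily on how far the sampled loads extend toward each axis, which is exactly why profiling at loads near both extremes, as the instrumented leg press permits, improves reliability.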
Procedia PDF Downloads 58