Search results for: logistics network optimization
430 Celebrating Community Heritage through the People’s Collection Wales: A Case Study in the Development of Collecting Traditions and Engagement
Authors: Gruffydd E. Jones
Abstract:
The world’s largest collection of historical, cultural, and heritage material is unarchived and undocumented in the hands of the public. Not only does this material represent the missing collections in heritage sector archives today, but it is also the key to providing a diverse range of communities with the means to express their history in their own words and to celebrate their unique, personal heritage. The People’s Collection Wales (PCW) acts as a platform on which the heritage of Wales and her people can be collated and shared, at the heart of which is a thriving community engagement programme across a network of museums, archives, and libraries. By providing communities with the archival skillset commonly employed throughout the heritage sector, PCW enables local projects, societies, and individuals to express their understanding of local heritage with their own voices, empowering communities to embrace their diverse and complex identities around Wales. Drawing on key examples from the project’s history, this paper will demonstrate the successful way in which museums have been developed as hubs for community engagement where the public was at the heart of collection and documentation activities, informing collection and curatorial policies to benefit both the institution and its local community. This paper will also highlight how collections from marginalised, under-represented, and minority communities have been published and celebrated extensively around Wales, including adoption by the education system in classrooms today. Any activity within the heritage sector, whether of collection, preservation, digitisation, or accessibility, should be considerate of community engagement opportunities not only to remain relevant but in order to develop as community hubs, pivots around which local heritage is supported and preserved. Attention will be drawn to our digitisation workflow, which, through training and support from museums and libraries, has allowed the public not only to become involved but to actively lead the contemporary evolution of documentation strategies in Wales. This paper will demonstrate how the PCW online access archive is promoting museum collections, encouraging user interaction, and providing an invaluable platform on which a broader community can inform, preserve, and celebrate their cultural heritage through their own archival material. The continuing evolution of heritage engagement depends wholly on placing communities at the heart of the sector, recognising their wealth of cultural knowledge, and developing the archival skillset necessary for them to become archival practitioners in their own right.
Keywords: social history, cultural heritage, community heritage, museums, archives, libraries, community engagement, oral history, community archives
Procedia PDF Downloads 94
429 Using Structured Analysis and Design Technique Method for Unmanned Aerial Vehicle Components
Authors: Najeh Lakhoua
Abstract:
Introduction: Scientific developments and techniques for the systemic approach have generated several names for it: systems analysis, structural analysis. The main purpose of these reflections is to find a multi-disciplinary approach which organizes knowledge, creates a universal design language, and controls complex sets. In fact, system analysis is structured sequentially by steps: the observation of the system by various observers in various aspects, the analysis of interactions and regulatory chains, the modeling that takes into account the evolution of the system, and the simulation and real tests in order to obtain consensus. Thus the system approach allows two types of analysis, according to the structure and the function of the system. The purpose of this paper is to present an application of system analysis of Unmanned Aerial Vehicle (UAV) components in order to represent the architecture of this system. Method: Various analysis methods have been proposed in the literature to carry out global analysis from different points of view, such as the SADT method (Structured Analysis and Design Technique) and Petri nets. The methodology adopted in this paper to contribute to the system analysis of an Unmanned Aerial Vehicle is based on the use of SADT. In fact, we present a functional analysis, based on the SADT method, of the UAV components (body, power supply and platform, computing, sensors, actuators, software, loop principles, flight controls and communications). Results: In this part, we present the application of the SADT method for the functional analysis of the UAV components. This SADT model will be composed exclusively of actigrams. It starts with the main function ‘To analyse the UAV components’. Then, this function is broken into sub-functions, and this process is developed until the last decomposition level has been reached (levels A1, A2, A3 and A4). Recall that SADT techniques are semi-formal; however, for the same subject, different correct models can be built without knowing with certitude which model is the right one or, at least, the best. In fact, this kind of model allows users sufficient freedom in its construction, and so the subjective factor introduces a supplementary dimension for its validation. That is why the validation step as a whole necessitates the confrontation of different points of view. Conclusion: In this paper, we presented an application of system analysis of Unmanned Aerial Vehicle components. This application of system analysis is based on the SADT method (Structured Analysis and Design Technique). The functional analysis demonstrated the usefulness of the SADT method and its ability to describe complex dynamic systems.
Keywords: system analysis, unmanned aerial vehicle, functional analysis, architecture
Procedia PDF Downloads 204
428 Integration of “FAIR” Data Principles in Longitudinal Mental Health Research in Africa: Lessons from a Landscape Analysis
Authors: Bylhah Mugotitsa, Jim Todd, Agnes Kiragga, Jay Greenfield, Evans Omondi, Lukoye Atwoli, Reinpeter Momanyi
Abstract:
The INSPIRE network aims to build an open, ethical, sustainable, and FAIR (Findable, Accessible, Interoperable, Reusable) data science platform, particularly for longitudinal mental health (MH) data. While studies have been done at the clinical and population level, there still exist limitations in data and research in LMICs, which pose a risk of underrepresentation of mental disorders. It is vital to examine the existing longitudinal MH data, focusing on how FAIR the datasets are. This landscape analysis aimed to provide both an overall level of evidence on the availability of longitudinal datasets and the degree of consistency in the longitudinal studies conducted. Utilizing prompters proved instrumental in streamlining the analysis process, facilitating access, crafting code snippets, and the categorization and analysis of extensive data repositories related to depression, anxiety, and psychosis in Africa. While leveraging artificial intelligence (AI), we filtered through over 18,000 scientific papers spanning from 1970 to 2023. This AI-driven approach enabled the identification of 228 longitudinal research papers meeting the inclusion criteria. Quality assurance revealed that 10% of articles were incorrectly identified, along with 2 duplicates, and underscored the prevalence of longitudinal MH research in South Africa, focusing on depression. From the analysis, evaluating data and metadata adherence to FAIR principles remains crucial for enhancing the accessibility and quality of MH research in Africa. While AI has the potential to enhance research processes, challenges such as privacy concerns and data security risks must be addressed. Ethical and equity considerations in data sharing and reuse are also vital. There is a need for collaborative efforts across disciplinary and national boundaries to improve the Findability and Accessibility of data. Current efforts should also focus on creating integrated data resources and tools to improve the Interoperability and Reusability of MH data. Practical steps for researchers include careful study planning, data preservation, machine-actionable metadata, and promoting data reuse to advance science and improve equity. Metrics and recognition should be established to incentivize adherence to FAIR principles in MH research.
Keywords: longitudinal mental health research, data sharing, FAIR data principles, Africa, landscape analysis
Procedia PDF Downloads 91
427 The Risk of Prioritizing Management over Education at Japanese Universities
Authors: Masanori Kimura
Abstract:
Due to the decline of the 18-year-old population, Japanese universities have a tendency to convert their form of employment from tenured positions to fixed-term positions for newly hired teachers. The advantage of this is that universities can be more flexible in their employment plans in case they fail to fill the enrollment quotas of prospective students or need to supplement teachers who can engage in other academic fields or research areas where new demand is expected. The most serious disadvantage, however, is that if secure positions cannot be provided to faculty members, there is the possibility that coherence of education and continuity of research supported by the university cannot be achieved. Therefore, the question of this presentation is as follows: Are universities aiming to give first priority to management, or are they trying to prioritize education and research over management? To answer this question, the author examined the number of job offerings for college foreign language teachers posted on the JREC-IN (Japan Research Career Information Network, which is run by the Japan Science and Technology Agency) website from April 2012 to October 2015. The results show that there were 1,002 and 1,056 job offerings for tenured positions and fixed-term contracts respectively, suggesting that, overall, today’s Japanese universities show a tendency to give first priority to management. More detailed examination of the data, however, shows that this tendency varies slightly depending on the type of university. National universities, which are supported by the central government, and state universities, which are supported by local governments, posted more job offerings for tenured positions than for fixed-term contracts: national universities posted 285 and 257 job offerings for tenured positions and fixed-term contracts respectively, and state universities posted 106 and 86 job offerings for tenured positions and fixed-term contracts respectively. Yet the difference in number between the two types of employment status at national and state universities is marginal. As for private universities, they posted 713 job offerings for fixed-term contracts and 616 offerings for tenured positions. Moreover, 73% of the fixed-term contracts were offered for lower-rank positions, including associate professors, lecturers, and so forth. Generally speaking, those positions are offered to younger teachers. Therefore, this result indicates that private universities attempt to cut their budgets yet expect the same educational effect by hiring younger teachers. Although the results have shown that there are some differences in personnel strategies among the three types of universities, the author argues that all three types of universities may lose important human resources that would take a pivotal role at their universities in the future unless they urgently review their employment strategies.
Keywords: higher education, management, employment status, foreign language education
Procedia PDF Downloads 134
426 Modeling the Demand for the Healthcare Services Using Data Analysis Techniques
Authors: Elizaveta S. Prokofyeva, Svetlana V. Maltseva, Roman D. Zaitsev
Abstract:
Rapidly evolving modern data analysis technologies in healthcare play a large role in understanding the operation of the system and its characteristics. Nowadays, one of the key tasks in urban healthcare is to optimize resource allocation. Thus, the application of data analysis in medical institutions to solve optimization problems determines the significance of this study. The purpose of this research was to establish the dependence between the indicators of the effectiveness of the medical institution and its resources. Hospital discharges by diagnosis, hospital days of in-patients, and in-patient average length of stay were selected as the performance indicators and the demand of the medical facility. The hospital beds by type of care, medical technology (magnetic resonance tomography, gamma cameras, angiographic complexes and lithotripters) and physicians characterized the resource provision of medical institutions for the developed models. The data source for the research was an open database of the statistical service Eurostat. The choice of the source is due to the fact that the databases contain complete and open information necessary for research tasks in the field of public health. In addition, the statistical database has a user-friendly interface that allows analytical reports to be built quickly. The study provides information on 28 European countries for the period from 2007 to 2016. For all countries included in the study with the most accurate and complete data for the period under review, predictive models were developed based on historical panel data. An attempt to improve the quality and the interpretation of the models was made by cluster analysis of the investigated set of countries. The main idea was to assess the similarity of the joint behavior of the variables throughout the time period under consideration, to identify groups of similar countries, and to construct separate regression models for them. Therefore, the original time series were used as the objects of clustering, and the k-medoids algorithm was applied. The sampled objects were used as the centers of the clusters obtained, since determining the centroid when working with time series involves additional difficulties. The number of clusters was selected using the silhouette coefficient. After the cluster analysis, it was possible to significantly improve the predictive power of the models: for example, in one of the clusters, the MAPE was only 0.82%, which makes it possible to conclude that this forecast is highly reliable in the short term. The predicted values obtained from the developed models have a relatively low level of error and can be used to make decisions on the resource provision of the hospital by medical personnel. The research displays the strong dependencies between the demand for medical services and the modern medical equipment variable, which highlights the importance of the technological component for the successful development of the medical facility. Currently, data analysis has a huge potential, which makes it possible to significantly improve health services. Medical institutions that are the first to introduce these technologies will certainly have a competitive advantage.
Keywords: data analysis, demand modeling, healthcare, medical facilities
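As a rough illustration of the clustering-then-evaluation step described above, the Python sketch below implements a basic k-medoids routine over country time series and a MAPE calculation. The toy data, the number of clusters, and the Euclidean distance are assumptions for demonstration only, not the Eurostat dataset or the authors' exact pipeline.

```python
import numpy as np

def k_medoids(X, k, n_iter=100, seed=0):
    """Basic PAM-style k-medoids on rows of X (each row = one country's time series)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)   # pairwise distances
    medoids = rng.choice(n, size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(dist[:, medoids], axis=1)               # assign each series to nearest medoid
        new_medoids = medoids.copy()
        for j in range(k):
            members = np.where(labels == j)[0]
            if members.size:
                # medoid = the member minimising total distance to the other members
                new_medoids[j] = members[np.argmin(dist[np.ix_(members, members)].sum(axis=1))]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return labels, medoids

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return 100.0 * np.mean(np.abs((actual - predicted) / actual))

# Toy example: 28 "countries" x 10 yearly observations of a demand indicator
X = np.random.default_rng(1).normal(size=(28, 10)).cumsum(axis=1)
labels, medoids = k_medoids(X, k=3)
print("cluster labels:", labels)
print("MAPE example: %.2f%%" % mape([100, 120, 130], [101, 118, 131]))
```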
Procedia PDF Downloads 144
425 Effects of Forest Therapy on Depression among Healthy Adults
Authors: Insook Lee, Heeseung Choi, Kyung-Sook Bang, Sungjae Kim, Minkyung Song, Buhyun Lee
Abstract:
Background: A clearer and more comprehensive understanding of the effects of forest therapy on depression is needed for further refinement of forest therapy programs. The purpose of this study was to review the literature on forest therapy programs designed to decrease the level of depression among adults and to evaluate current forest therapy programs. Methods: This literature review was conducted using various databases, including PubMed, EMBASE, CINAHL, PsycArticle, KISS, RISS, and DBpia, to identify relevant studies published up to January 2016. The two authors independently screened the full-text articles using the following criteria: 1) intervention studies assessing the effects of forest therapy on depression among healthy adults ages 18 and over; 2) including at least one control group or condition; 3) being peer-reviewed; and 4) being published in either English or Korean. The Scottish Intercollegiate Guideline Network (SIGN) measurement tool was used to assess the risk of bias in each trial. Results: After screening the current literature, a total of 14 articles (English: 6, Korean: 8) were included in the present review. None of the studies used a randomized controlled trial (RCT) design, and the sample sizes ranged from 11 to 300. Walking in the forest and experiencing the forest using the five senses were the key components of the forest therapy included in all studies. The majority of studies used a one-time intervention that usually lasted a few hours or half a day. The most widely used measure for depression was the Profile of Mood States (POMS). Most studies used self-reported, paper-and-pencil tests, and only 5 studies used both paper-and-pencil tests and physiological measures. Regarding the quality assessment based on the SIGN criteria, only 3 articles were rated ‘acceptable’ and the remaining 11 articles were rated ‘low quality.’ Regardless of the diversity in format and content of the forest therapies, most studies showed a significant effect of forest therapy in reducing depression. Discussion: This systematic review showed that forest therapy is one of the emerging and effective intervention approaches for decreasing the level of depression among adults. Limitations of the current programs identified from the review were as follows: 1) small sample sizes; 2) a lack of objective and comprehensive measures for depression; and 3) inadequate information about the research process. Future studies assessing the long-term effect of forest therapy on depression using rigorous study designs are needed.
Keywords: forest therapy, systematic review, depression, adult
Procedia PDF Downloads 292
424 Endotracheal Intubation Self-Confidence: Report of a Realistic Simulation Training
Authors: Cleto J. Sauer Jr., Rita C. Sauer, Chaider G. Andrade, Doris F. Rabelo
Abstract:
Introduction: Endotracheal Intubation (ETI) is a procedure for the clinical management of patients with a severe clinical presentation of COVID-19 disease. Realistic simulation (RS) is an active learning methodology utilized for clinical skills improvement. To improve the ETI skills of public health network physicians from the Recôncavo da Bahia region in Brazil during the COVID-19 outbreak, RS training was planned and carried out. The training scenario included the Nasco Lifeform realistic simulator, and three actions were simulated: the ETI procedure, sedative drugs management, and bougie guide utilization. The training intervention occurred between May and June 2020, as an interinstitutional cooperation between the Health Department of Bahia State and the Federal University of Recôncavo da Bahia. Objective: The main objective is to report the effects on participants' self-confidence perception for the ETI procedure after RS-based training. Methods: This is a descriptive study, with secondary data extracted from questionnaires applied throughout the RS training. Priority workplace, time from last intubation, and knowledge about the bougie were reported on a preparticipation questionnaire. Additionally, participants completed pre- and post-training qualitative self-assessments (10-point Likert scale) regarding self-confidence perception in performing each of the simulated actions. Distribution analysis for qualitative data was performed with the Wilcoxon signed-rank test, and analysis of the self-confidence increase in frequency contingency tables with Fisher's exact test. Results: 36 physicians participated in the training, 25 (69%) from the primary care setting, 25 (69%) had performed ETI over a year ago, and only 4 (11%) had previous knowledge about bougie guide utilization. There was an increase in self-confidence medians for all three simulated actions. Medians (variation) for self-confidence before and after training, for each simulated action, were as follows: ETI [5 (1-9) vs. 8 (6-10) (p < 0.0001)]; sedative drug management [5 (1-9) vs. 8 (4-10) (p < 0.0001)]; bougie guide utilization [2.5 (1-7) vs. 8 (4-10) (p < 0.0001)]. Among those who had performed ETI over a year ago (n = 25), an increase in self-confidence greater than 3 points for ETI was reported by 23 vs. 2 physicians (p = 0.0002), and by 21 vs. 4 (p = 0.03) for sedative drugs management. Conclusions: RS training contributed to an increase in self-confidence in performing ETI. Among participants who had performed ETI over a year ago, there was a significant association between RS training and an increase of more than 3 points in self-confidence, both for ETI and for sedative drug management. Training with the RS methodology is suitable for ETI confidence enhancement during the COVID-19 outbreak.
Keywords: confidence, COVID-19, endotracheal intubation, realistic simulation
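For readers unfamiliar with the two tests named above, the following Python sketch shows how a paired Wilcoxon signed-rank test and a Fisher's exact test on a 2x2 contingency table are typically run with SciPy. The score vectors and table counts are illustrative placeholders loosely modelled on the figures in the abstract, not the study's raw data.

```python
from scipy.stats import wilcoxon, fisher_exact

# Paired pre/post self-confidence scores on a 10-point Likert scale (illustrative)
pre  = [5, 4, 6, 3, 5, 7, 2, 5, 6, 4]
post = [8, 7, 9, 6, 8, 9, 7, 8, 9, 7]
stat, p_wilcoxon = wilcoxon(pre, post)          # non-parametric paired comparison
print(f"Wilcoxon signed-rank: W={stat:.1f}, p={p_wilcoxon:.4f}")

# 2x2 table (placeholder counts): rows = last ETI over a year ago (yes/no),
# columns = self-confidence gain > 3 points (yes/no)
table = [[23, 2],
         [4, 7]]
odds_ratio, p_fisher = fisher_exact(table)
print(f"Fisher's exact test: OR={odds_ratio:.2f}, p={p_fisher:.4f}")
```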
Procedia PDF Downloads 140
423 Poultry in Motion: Text Mining Social Media Data for Avian Influenza Surveillance in the UK
Authors: Samuel Munaf, Kevin Swingler, Franz Brülisauer, Anthony O’Hare, George Gunn, Aaron Reeves
Abstract:
Background: Avian influenza, more commonly known as bird flu, is a viral zoonotic respiratory disease found in various species of poultry as well as in pet and migratory birds. Researchers have purported that the accessibility of health information online, in addition to the low-cost data collection methods the internet provides, has revolutionized the ways in which epidemiological and disease surveillance data are utilized. This paper examines the feasibility of using internet data sources, such as Twitter and livestock forums, for the early detection of avian flu outbreaks, through the use of text mining algorithms and social network analysis. Methods: Social media mining was conducted on Twitter between 01/01/2021 and 31/12/2021 via the Twitter API in Python. The results were filtered first by hashtags (#avianflu, #birdflu) and word occurrences (avian flu, bird flu, H5N1), and then refined further by location to include only those results from within the UK. Analysis was conducted on this text in a time-series manner to determine keyword frequencies, and topic modeling was applied to uncover insights in the text prior to a confirmed outbreak. Further analysis was performed by examining clinical signs (e.g., swollen head, blue comb, dullness) within the time series prior to the avian flu outbreak confirmed by the Animal and Plant Health Agency (APHA). Results: The increase in Google search results and avian flu-related tweets showed a correlation in time with the confirmed cases. Topic modeling uncovered clusters of word occurrences relating to livestock biosecurity, disposal of dead birds, and prevention measures. Conclusions: Text mining social media data can prove to be useful for analysing discussed topics for epidemiological surveillance purposes, especially given the lack of applied research in the veterinary domain. The small sample size of tweets for certain weekly time periods makes it difficult to provide statistically robust results, and the data also contain a great amount of textual noise.
Keywords: veterinary epidemiology, disease surveillance, infodemiology, infoveillance, avian influenza, social media
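A minimal Python sketch of the keyword filtering and weekly frequency step described above is given below. Tweet collection itself (Twitter API credentials, pagination, geo-filtering) is omitted; the small DataFrame stands in for tweets already retrieved and restricted to the UK, and the keyword and clinical-sign lists are taken directly from the abstract.

```python
import pandas as pd

tweets = pd.DataFrame({
    "created_at": ["2021-10-26", "2021-10-27", "2021-11-02", "2021-11-03"],
    "text": [
        "Swollen head and dullness in two hens #birdflu",
        "Reminder to tighten biosecurity on smallholdings",
        "APHA confirms H5N1 case, disposal of dead birds underway #avianflu",
        "Blue comb spotted in the flock, vet called",
    ],
})
tweets["created_at"] = pd.to_datetime(tweets["created_at"])

keywords = ["avian flu", "bird flu", "h5n1", "#avianflu", "#birdflu"]
clinical_signs = ["swollen head", "blue comb", "dullness"]

text = tweets["text"].str.lower()
tweets["keyword_hit"] = text.apply(lambda t: any(k in t for k in keywords))
tweets["sign_hit"] = text.apply(lambda t: any(s in t for s in clinical_signs))

# Weekly time series of keyword and clinical-sign mentions
weekly = (tweets.set_index("created_at")[["keyword_hit", "sign_hit"]]
                .resample("W").sum())
print(weekly)
```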
Procedia PDF Downloads 105
422 Simulation and Characterization of Stretching and Folding in Microchannel Electrokinetic Flows
Authors: Justo Rodriguez, Daming Chen, Amador M. Guzman
Abstract:
The detection, treatment, and control of rapidly propagating, deadly viruses such as COVID-19 require the development of inexpensive, fast, and accurate devices to address the urgent needs of the population. Microfluidics-based sensors are amongst the easiest to use of the different methods and techniques for detection. A micro analyzer is defined as a microfluidics-based sensor composed of a network of microchannels with varying functions. Given their size, portability, and accuracy, they are proving to be more effective and convenient than other solutions. A micro analyzer based on the concept of “Lab on a Chip” presents advantages over other, non-micro devices due to its smaller size and its better ratio between useful area and volume. The integration of multiple processes in a single microdevice reduces both the number of necessary samples and the analysis time, leading to the next generation of analyzers for the health sciences. In some applications, the flow of solution within the microchannels is originated by a pressure gradient, which can produce adverse effects on biological samples. A more efficient and less dangerous way of controlling the flow in a microchannel-based analyzer is to apply an electric field to induce the fluid motion and either enhance or suppress the mixing process. Electrokinetic flows are characterized by no less than two non-dimensional parameters: the electric Rayleigh number and the geometrical aspect ratio. In this research, stable and unstable flows have been studied numerically (and, when possible, will be studied experimentally) in a T-shaped microchannel. Additionally, unstable electrokinetic flows for Rayleigh numbers higher than critical have been characterized. The flow mixing enhancement was quantified in relation to the stretching and folding that fluid particles undergo when they are subjected to supercritical electrokinetic flows. Computational simulations were carried out using a finite element-based program while working with the flow mixing concepts developed by Gollub and collaborators. Hundreds of seeded massless particles were tracked along the microchannel from the entrance to the exit for both stable and unstable flows. After post-processing, their trajectories and the folding and stretching values for the different flows were obtained. Numerical results show that for supercritical electrokinetic flows, the enhancement effects of the folding and stretching processes become more apparent. Consequently, there is an improvement in the mixing process, ultimately leading to a more homogeneous mixture.
Keywords: microchannel, stretching and folding, electrokinetic flow mixing, micro-analyzer
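As a simplified illustration of how stretching can be quantified from tracked particles, the Python sketch below follows the separation of initially adjacent particle pairs from entrance to exit. The synthetic random-walk trajectories and the pairing scheme are assumptions for demonstration only; in the study, the trajectories come from the finite-element simulations and the stretching/folding definitions of Gollub and collaborators.

```python
import numpy as np

rng = np.random.default_rng(0)
n_particles, n_steps = 200, 500
# positions[t, i, :] = (x, y) of particle i at time step t (synthetic stand-in trajectories)
positions = np.cumsum(rng.normal(scale=1e-3, size=(n_steps, n_particles, 2)), axis=0)
positions += np.linspace(0, 1, n_particles)[None, :, None] * 1e-2   # spread the initial seeds

pairs = np.arange(0, n_particles - 1)             # pair particle i with particle i+1
d0 = np.linalg.norm(positions[0, pairs + 1] - positions[0, pairs], axis=1)   # initial separation
dT = np.linalg.norm(positions[-1, pairs + 1] - positions[-1, pairs], axis=1) # final separation

stretching = dT / d0                              # local stretching factor per pair
print("mean stretching factor:", stretching.mean())
print("fraction of pairs stretched >10x:", np.mean(stretching > 10))
```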
Procedia PDF Downloads 126
421 Effect of Several Soil Amendments on Water Quality in Mine Soils: Leaching Columns
Authors: Carmela Monterroso, Marc Romero-Estonllo, Carlos Pascual, Beatriz Rodríguez-Garrido
Abstract:
The mobilization of heavy metals from polluted soils causes their transfer to natural waters, with consequences for ecosystems and human health. Phytostabilization techniques are applied to reduce this mobility through the establishment of a vegetal cover and the application of soil amendments. In this work, the capacity of different organic amendments to improve water quality and reduce the mobility of metals in mine tailings was evaluated. A field pilot test was carried out with leaching columns installed on an old Cu mine site (NW Spain), which forms part of the PhytoSUDOE network of phytomanaged contaminated field sites (PhytoSUDOE/Phy2SUDOE Projects (SOE1/P5/E0189 and SOE4/P5/E1021)). Ten columns (1 meter high by 25 cm in diameter) were packed with untreated mine tailings (control) or with tailings treated with organic amendments. The applied amendments were based on different combinations of municipal wastes, bark chippings, biomass fly ash, and nanoparticles such as aluminum oxides or ferrihydrite-type iron oxides. During the packing of the columns, rhizon samplers were installed at different heights (10, 20, and 50 cm) from the top, and pore water samples were obtained by suction. Additionally, in each column, a bottom leachate sample was collected through a valve installed at the bottom of the column. After packing, the columns were sown with grasses. Water samples were analyzed for: pH and redox potential, using combined electrodes; salinity, by conductivity meter; bicarbonate, by titration; sulfate, nitrate, and chloride, by ion chromatography (Dionex 2000); phosphate, by colorimetry with ammonium molybdate/ascorbic acid; and Ca, Mg, Fe, Al, Mn, Zn, Cu, Cd, and Pb, by flame atomic absorption/emission spectrometry (Perkin Elmer). Porewater and leachate from the control columns (packed with unamended mine tailings) were extremely acidic and had high concentrations of Al, Fe, and Cu. In these columns, no plant development was observed. The application of organic amendments improved soil conditions, which allowed the establishment of a dense cover of grasses in the rest of the columns. The combined effect of soil amendment and plant growth had a positive impact on water quality and reduced the mobility of aluminum and heavy metals.
Keywords: leaching, organic amendments, phytostabilization, polluted soils
Procedia PDF Downloads 110
420 Best Practice for Post-Operative Surgical Site Infection Prevention
Authors: Scott Cavinder
Abstract:
Surgical site infections (SSI) are a known complication of any surgical procedure and are one of the most common nosocomial infections. Globally, it is estimated that 300 million surgical procedures take place annually, with an estimated SSI incidence of 11 of every 100 surgical patients developing an infection within 30 days after surgery. The specific purpose of the project is to address the PICOT (Problem, Intervention, Comparison, Outcome, Time) question: In patients who have undergone cardiothoracic or vascular surgery (P), does implementation of a post-operative care bundle based on current EBP (I), as compared to current clinical agency practice standards (C), result in a decrease of SSI (O) over a 12-week period (T)? Synthesis of Supporting Evidence: A literature search of five databases, including citation chasing, was performed, which yielded fourteen pieces of evidence ranging from high to good quality. Four common themes were identified for the prevention of SSIs, including the use and removal of surgical dressings; the use of topical antibiotics and antiseptics; the implementation of evidence-based care bundles; and the implementation of surveillance through auditing and feedback. The Iowa Model was selected as the framework to help guide this project, as it is a multiphase change process which encourages clinicians to recognize opportunities for improvement in healthcare practice. Practice/Implementation: The process for this project will include recruiting postsurgical participants who have undergone cardiovascular or thoracic surgery, prior to discharge, at a Northwest Indiana hospital. The patients will receive education, verbal instruction, and return demonstration. The patients will be followed for 12 weeks, and their wounds assessed utilizing the National Healthcare Safety Network/Centers for Disease Control (NHSN/CDC) assessment tool and compared to the SSI rate of 2021. Key stakeholders will include two cardiovascular surgeons, four physician assistants, two advanced practice nurses, a medical assistant, and patients. Method of Evaluation: Chi-square analysis will be utilized to establish statistical significance and similarities between the two groups. Main Results/Outcomes: The proposed outcome is the prevention of SSIs in the post-op cardiothoracic and vascular patient. Implication/Recommendation(s): Implementation of standardized post-operative care bundles in the prevention of SSI in cardiovascular and thoracic surgical patients.
Keywords: cardiovascular, evidence based practice, infection, post-operative, prevention, thoracic, surgery
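The planned chi-square comparison between the 2021 SSI rate and the post-bundle cohort could be run along the lines of the Python sketch below. The counts are placeholders for illustration only, not study data.

```python
from scipy.stats import chi2_contingency

#                 SSI   no SSI
table = [[ 6,  94],    # 2021 comparison group (placeholder counts)
         [ 2,  98]]    # post-operative care-bundle group (placeholder counts)
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.3f}")
```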
Procedia PDF Downloads 83
419 Governance Models of Higher Education Institutions
Authors: Zoran Barac, Maja Martinovic
Abstract:
Higher Education Institutions (HEIs) are a special kind of organization, with a unique purpose and combination of actors. From the societal point of view, they are central institutions in society that are involved in the activities of education, research, and innovation. At the same time, their societal function gives rise to complex relationships between the involved actors, ranging from students, faculty and administration, the business community and corporate partners, and government agencies, to the general public. HEIs are also particularly interesting as objects of governance research because of their unique public purpose and combination of stakeholders. Furthermore, they are a special type of institution from an organizational viewpoint. HEIs are often described as “loosely coupled systems” or “organized anarchies”, which implies the challenging nature of their governance models. Governance models of HEIs describe the roles, constellations, and modes of interaction of the involved actors in the process of strategic direction and holistic control of institutions, taking into account each particular context. Many governance models of HEIs are primarily based on the balance of power among the involved actors. Besides the actors’ power and influence, leadership style and environmental contingency can impact the governance model of an HEI. Analyzed through the frameworks of institutional and contingency theories, HEI governance models originate as outcomes of institutional and contingency adaptation. HEIs tend to fit the institutional context, which comprises formal and informal institutional rules. By fitting the institutional context, HEIs converge toward each other in terms of their structures, policies, and practices. On the other hand, the contingency framework implies that there is no governance model that is suitable for all situations. Consequently, the contingency approach begins with identifying contingency variables that might impact a particular governance model. In order to be effective, the governance model should fit the contingency variables. While the institutional context creates converging forces on HEI governance actors and approaches, contingency variables are the causes of divergence of actors’ behavior and governance models. Finally, an HEI governance model is a balanced adaptation of the HEI to the institutional context and contingency variables. It also encompasses the roles, constellations, and modes of interaction of the involved actors influenced by institutional and contingency pressures. The actors’ adaptation to the institutional context brings the benefits of legitimacy and resources. On the other hand, the adaptation of the actors to the contingency variables brings high performance and effectiveness. The HEI governance models outlined and analyzed in this paper are the collegial, bureaucratic, entrepreneurial, network, professional, political, anarchical, cybernetic, trustee, stakeholder, and amalgam models.
Keywords: governance, governance models, higher education institutions, institutional context, situational context
Procedia PDF Downloads 336
418 Rheological Study of Chitosan/Montmorillonite Nanocomposites: The Effect of Chemical Crosslinking
Authors: K. Khouzami, J. Brassinne, C. Branca, E. Van Ruymbeke, B. Nysten, G. D’Angelo
Abstract:
The development of hybrid organic-inorganic nanocomposites has recently attracted great interest. Typically, polymer silicates represent an emerging class of polymeric nanocomposites that offer superior material properties compared to each compound alone. Among these materials, complexes based on silicate clay and polysaccharides are among the most promising nanocomposites. The strong electrostatic interaction between chitosan and montmorillonite can induce what is called a physical hydrogel, in which the coordination bonds or physical crosslinks may associate and dissociate reversibly and on a short timescale. These mechanisms could be the main origin of the uniqueness of their rheological behavior. However, owing to their intrinsically heterogeneous structure and/or the lack of dissipated energy, they are usually brittle, possess poor toughness, and may not have sufficient mechanical strength. Consequently, the properties of these nanocomposites cannot meet the requirements of many applications in several fields. To address the issue of weak mechanical properties, covalent chemical crosslinks can be introduced into the physical hydrogel. In this way, quite homogeneous, dually crosslinked microstructures with high dissipated energy and enhanced mechanical strength can be engineered. In this work, we have prepared a series of chitosan-montmorillonite nanocomposites chemically crosslinked by the addition of poly(ethylene glycol) diglycidyl ether. This study aims to provide a better understanding of the mechanical behavior of dually crosslinked chitosan-based nanocomposites by relating it to their microstructures. In these systems, the variety of microstructures is obtained by modifying the number of crosslinks. As a result, distinctive rheological properties of chemically crosslinked chitosan-montmorillonite nanocomposites are achieved, especially at the highest percentage of clay. Their rheological behavior depends on the clay/chitosan ratio and on the crosslinking. All specimens exhibit a viscous rheological behavior over the frequency range investigated. The flow curves of the nanocomposites show a Newtonian plateau at very low shear rates, accompanied by a quite complicated nonlinear decrease with increasing shear rate. Crosslinking induces a shear-thinning behavior, revealing the formation of network-like structures. Fitting the shear viscosity curves via the Ostwald-de Waele equation disclosed that crosslinking and clay addition strongly affect the pseudoplasticity of the nanocomposites for shear rates γ̇ > 20.
Keywords: chitosan, crosslinking, nanocomposites, rheological properties
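To make the last step concrete, the Python sketch below fits the Ostwald-de Waele (power-law) model, eta = K * gamma_dot^(n-1), to a shear-viscosity curve by linear regression in log-log space. The data points are synthetic stand-ins for a measured flow curve of one nanocomposite, not the study's measurements.

```python
import numpy as np

gamma_dot = np.array([20, 50, 100, 200, 500, 1000], dtype=float)   # shear rate, 1/s
eta = np.array([2.1, 1.4, 1.0, 0.72, 0.45, 0.33])                  # apparent viscosity, Pa.s

# log(eta) = log(K) + (n - 1) * log(gamma_dot)  ->  straight line in log-log space
slope, intercept = np.polyfit(np.log(gamma_dot), np.log(eta), 1)
n = slope + 1.0          # flow behaviour index
K = np.exp(intercept)    # consistency index

print(f"flow behaviour index n = {n:.2f} (n < 1 indicates shear thinning)")
print(f"consistency index K = {K:.2f} Pa.s^n")
```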
Procedia PDF Downloads 148
417 Rapid Soil Classification Using Computer Vision, Electrical Resistivity and Soil Strength
Authors: Eugene Y. J. Aw, J. W. Koh, S. H. Chew, K. E. Chua, Lionel L. J. Ang, Algernon C. S. Hong, Danette S. E. Tan, Grace H. B. Foo, K. Q. Hong, L. M. Cheng, M. L. Leong
Abstract:
This paper presents a novel rapid soil classification technique that combines computer vision with the four-probe soil electrical resistivity method and the cone penetration test (CPT) to improve the accuracy and productivity of the on-site classification of excavated soil. In Singapore, excavated soils from local construction projects are transported to Staging Grounds (SGs) to be reused as fill material for land reclamation. Excavated soils are mainly categorized into two groups (“Good Earth” and “Soft Clay”) based on particle size distribution (PSD) and water content (w) from soil investigation reports and on-site visual surveys, such that proper treatment and usage can be exercised. However, this process is time-consuming and labour-intensive. Thus, a rapid classification method is needed at the SGs. Computer vision, four-probe soil electrical resistivity and CPT were combined into an innovative, non-destructive and instantaneous classification method for this purpose. The computer vision technique comprises soil image acquisition using an industrial-grade camera; image processing and analysis via calculation of Grey Level Co-occurrence Matrix (GLCM) textural parameters; and decision-making using an Artificial Neural Network (ANN). Complementing the computer vision technique, the apparent electrical resistivity of the soil (ρ) is measured using a set of four probes arranged in a Wenner array. It was found from the previous study that the ANN model coupled with ρ can classify soils into “Good Earth” and “Soft Clay” in less than a minute, with an accuracy of 85% based on selected representative soil images. To further improve the technique, the soil strength is measured using a modified mini cone penetrometer, and w is measured using a set of time-domain reflectometry (TDR) probes. A laboratory proof-of-concept was conducted through a series of seven tests with three types of soils – “Good Earth”, “Soft Clay” and an even mix of the two. Validation was performed against the PSD and w of each soil type obtained from conventional laboratory tests. The results show that ρ, w and CPT measurements can be collectively analyzed to classify soils into “Good Earth” or “Soft Clay”. It is also found that these parameters can be integrated with the computer vision technique on-site to complete the rapid soil classification in less than three minutes.
Keywords: computer vision technique, cone penetration test, electrical resistivity, rapid and non-destructive, soil classification
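The sketch below illustrates the image-processing and decision-making chain described above: GLCM textural parameters computed from a greyscale soil image and fed to a small neural-network classifier. It assumes scikit-image (graycomatrix/graycoprops, the names used from version 0.19 onward) and scikit-learn's MLPClassifier; the images and labels are random placeholders, not the SG soil photographs or the authors' trained model.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neural_network import MLPClassifier

def glcm_features(gray_image):
    """Contrast, homogeneity, energy and correlation from a grey-level co-occurrence matrix."""
    glcm = graycomatrix(gray_image, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    return [graycoprops(glcm, prop)[0, 0]
            for prop in ("contrast", "homogeneity", "energy", "correlation")]

rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(40, 64, 64), dtype=np.uint8)   # stand-in soil images
labels = rng.integers(0, 2, size=40)                               # 0 = "Good Earth", 1 = "Soft Clay"
X = np.array([glcm_features(img) for img in images])

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```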
Procedia PDF Downloads 219
416 A Comparison Between Different Discretization Techniques for the Doyle-Fuller-Newman Li+ Battery Model
Authors: Davide Gotti, Milan Prodanovic, Sergio Pinilla, David Muñoz-Torrero
Abstract:
Since its proposal, the Doyle-Fuller-Newman (DFN) lithium-ion battery model has gained popularity in the electrochemical field. In fact, this model provides the user with theoretical support for designing the lithium-ion battery parameters, such as the material particle or the diffusion coefficient adjustment direction. However, the model is mathematically complex, as it is composed of several partial differential equations (PDEs), such as Fick’s law of diffusion and the MacInnes and Ohm’s equations, among other phenomena. Thus, to efficiently use the model in a time-domain simulation environment, the selection of the discretization technique is of pivotal importance. There are several numerical methods available in the literature that can be used to carry out this task. In this study, a comparison between the explicit Euler, Crank-Nicolson, and Chebyshev discretization methods is proposed. These three methods are compared in terms of accuracy, stability, and computational times. Firstly, the explicit Euler discretization technique is analyzed. This method is straightforward to implement and is computationally fast. In this work, the accuracy of the method and its stability properties are shown for the electrolyte diffusion partial differential equation. Subsequently, the Crank-Nicolson method is considered. It represents a combination of the implicit and explicit Euler methods that has the advantage of being second order in time and intrinsically stable, thus overcoming the disadvantages of the simpler explicit Euler method. As shown in the full paper, the Crank-Nicolson method provides accurate results when applied to the DFN model. Its stability does not depend on the integration time step, so it is feasible for both short- and long-term tests. This last remark is particularly important, as this discretization technique would allow the user to implement parameter estimation and optimization techniques, such as system or genetic parameter identification methods, using this model. Finally, the Chebyshev discretization technique is implemented in the DFN model. This discretization method features swift convergence properties and, like other spectral methods used to solve differential equations, achieves the same accuracy with a smaller number of discretization nodes. However, as shown in the literature, these methods are not suitable for handling sharp gradients, which are common during the first instants of the charge and discharge phases of the battery. The numerical results obtained and presented in this study aim to provide guidelines on how to select an adequate discretization technique for the DFN model according to the type of application to be performed, highlighting the pros and cons of the three methods. Specifically, the unsuitability of the simple explicit Euler method for long-term tests will be presented. Afterwards, the Crank-Nicolson and Chebyshev discretization methods will be compared in terms of accuracy and computational times under a wide range of battery operating scenarios. These include both long-term simulations for aging tests and short- and mid-term battery charge/discharge cycles, typically relevant in battery applications like grid primary frequency and inertia control and electric vehicle braking and acceleration.
Keywords: Doyle-Fuller-Newman battery model, partial differential equations, discretization, numerical methods
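To illustrate the contrast between the first two schemes on a DFN-like sub-problem, the Python sketch below time-steps a 1-D Fick diffusion equation, dc/dt = D d²c/dx², with both explicit Euler and Crank-Nicolson. The grid, diffusivity, and fixed-value boundary handling are illustrative assumptions rather than the full electrolyte equation of the DFN model; note that the explicit scheme is stable only when D·dt/dx² ≤ 0.5, whereas Crank-Nicolson carries no such restriction.

```python
import numpy as np

D, L, N = 1e-10, 1e-4, 50                 # diffusivity (m^2/s), domain thickness (m), grid nodes
dx = L / (N - 1)
dt = 0.2 * dx**2 / D                      # explicit Euler is stable only if D*dt/dx^2 <= 0.5
r = D * dt / dx**2

c_explicit = np.ones(N)
c_cn = np.ones(N)
c_explicit[0] = c_cn[0] = 2.0             # fixed-concentration (Dirichlet) boundary at x = 0

# Second-difference operator; first and last rows are zero so both end values stay fixed
A = np.zeros((N, N))
for i in range(1, N - 1):
    A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0

I = np.eye(N)
M_left = I - 0.5 * r * A                  # Crank-Nicolson: (I - r/2 A) c_new = (I + r/2 A) c_old
M_right = I + 0.5 * r * A

for _ in range(1000):
    c_explicit = c_explicit + r * (A @ c_explicit)      # explicit Euler update
    c_cn = np.linalg.solve(M_left, M_right @ c_cn)      # Crank-Nicolson update

print("explicit Euler :", np.round(c_explicit[:5], 4))
print("Crank-Nicolson :", np.round(c_cn[:5], 4))
```

In practice one would pre-factorize the sparse tridiagonal Crank-Nicolson system rather than call a dense solver at every step, but the difference in time-step restrictions is the same.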
Procedia PDF Downloads 24
415 Catalytic Dehydrogenation of Formic Acid into H2/CO2 Gas: A Novel Approach
Authors: Ayman Hijazi, Witold Kwapinski, J. J. Leahy
Abstract:
Finding a sustainable alternative energy source to fossil fuels is an urgent need as various environmental challenges arise around the world. Therefore, formic acid (FA) decomposition has been an attractive field that lies at the center of the biomass platform, comprising a potential pool of hydrogen energy that stands as a new energy vector. Liquid FA features a considerable volumetric energy density of 6.4 MJ/L and a specific energy density of 5.3 MJ/kg, which places it in a prime position as an energy source for transportation infrastructure. Additionally, the increasing research interest in FA decomposition is driven by the need for in-situ H2 production, which plays a key role in the hydrogenation reactions of biomass into higher-value components. It is reported elsewhere in the literature that catalytic decomposition of FA is usually performed in poorly designed setups using simple glassware under magnetic stirring, thus demanding further energy investment to retain the used catalyst. This work suggests an approach that integrates the design of a novel catalyst featuring magnetic properties with a robust setup that minimizes experimental and measurement discrepancies. One of the most prominent active species for the dehydrogenation/hydrogenation of biomass compounds is palladium. Accordingly, we investigate the potential of engrafting palladium metal onto functionalized magnetic nanoparticles as a heterogeneous catalyst to favor the production of CO-free H2 gas from FA. Using an ordinary magnet to collect the spent catalyst makes core-shell magnetic nanoparticles the backbone of the process. Catalytic experiments were performed in a jacketed batch reactor equipped with an overhead stirrer under an inert medium. Through a novel approach, FA is charged into the reactor via a high-pressure positive displacement pump at steady-state conditions. The produced gas (H2 + CO2) was measured by connecting the gas outlet to a measuring system based on the amount of displaced water. The novelty of this work lies in designing a very responsive catalyst, pumping a consistent amount of FA into a sealed reactor running at mild steady-state temperatures, and continuous gas measurement, along with collecting the used catalyst without the need for centrifugation. Catalyst characterization using TEM, XRD, SEM, and a CHN elemental analyzer provided us with details of the catalyst preparation and opened new avenues to alter the nanostructure of the catalyst framework. Consequently, the introduction of amine groups has led to appreciable improvements in terms of dispersion of the doped metals and, eventually, to attaining nearly complete conversion (100%) of FA after 7 hours. The relative importance of the process parameters, such as temperature (35-85°C), stirring speed (150-450 rpm), catalyst loading (50-200 mg), and Pd doping ratio (0.75-1.80 wt.%), on gas yield was assessed by a Taguchi design-of-experiment-based model. Experimental results showed that operating in the lower temperature range (35-50°C) yielded more gas, while the catalyst loading and Pd doping wt.% were found to be the most significant factors, with P-values of 0.026 and 0.031, respectively.
Keywords: formic acid decomposition, green catalysis, hydrogen, mesoporous silica, process optimization, nanoparticles
Procedia PDF Downloads 52
414 Computational Characterization of Electronic Charge Transfer in Interfacial Phospholipid-Water Layers
Authors: Samira Baghbanbari, A. B. P. Lever, Payam S. Shabestari, Donald Weaver
Abstract:
Existing signal transmission models, although undoubtedly useful, have proven insufficient to explain the full complexity of information transfer within the central nervous system. The development of transformative models will necessitate a more comprehensive understanding of neuronal lipid membrane electrophysiology. Pursuant to this goal, the role of highly organized interfacial phospholipid-water layers emerges as a promising case study. A series of phospholipids in neural-glial gap junction interfaces, as well as cholesterol molecules, have been computationally modelled using high-performance density functional theory (DFT) calculations. Subsequent 'charge decomposition analysis' calculations have revealed a net transfer of charge from phospholipid orbitals through the organized interfacial water layer before ultimately finding its way to cholesterol acceptor molecules. The specific pathway of charge transfer from phospholipid via water layers towards cholesterol has been mapped in detail. Cholesterol is an essential membrane component that is overrepresented in neuronal membranes as compared to other mammalian cells; given this relative abundance, its apparent role as an electronic acceptor may prove to be a relevant factor in further signal transmission studies of the central nervous system. The timescales over which this electronic charge transfer occurs have also been evaluated by utilizing a system design that systematically increases the number of water molecules separating lipids and cholesterol. Memory loss through hydrogen-bonded networks in water can occur at femtosecond timescales, whereas existing action potential-based models are limited to micro- or nanosecond scales. As such, the development of future models that attempt to explain faster-timescale signal transmission in the central nervous system may benefit from our work, which provides additional information regarding fast-timescale energy transfer mechanisms occurring through interfacial water. The study includes a dataset of six distinct phospholipids and a collection of cholesterol molecules. Ten optimized geometric characteristics (features) were employed to conduct binary classification through an artificial neural network (ANN), differentiating cholesterol from the various phospholipids. This stems from our understanding that all lipids within the first group function as electronic charge donors, while cholesterol serves as an electronic charge acceptor.
Keywords: charge transfer, signal transmission, phospholipids, water layers, ANN
Procedia PDF Downloads 73
413 Giant Cancer Cell Formation: A Link between Cell Survival and Morphological Changes in Cancer Cells
Authors: Rostyslav Horbay, Nick Korolis, Vahid Anvari, Rostyslav Stoika
Abstract:
Introduction: Giant cancer cells (GCC) are common in all types of cancer, especially after poor therapy. Some specific features of such cells include ~10-fold enlargement, drug resistance, and the ability to propagate similar daughter cells. We used murine NK/Ly lymphoma, an aggressive and fast-growing lymphoma model that has already shown drastic changes in GCC compared to parental cells (chromatin condensation, nuclear fragmentation, tighter OXPHOS/cellular respiration coupling, multidrug resistance). Materials and methods: In this study, we compared morpho-functional changes of GCC after treatment with drugs that predominantly show either a cytostatic or a cytotoxic effect. We studied the effect of a combined cytostatic/cytotoxic drug treatment to determine the correlation between drug efficiency and GCC formation. Doses of the cell-cycle-specific drugs paclitaxel/PTX (G2/M-specific, 50 mg/mouse) and vinblastine/VBL (50 mg/mouse), and of the DNA-targeting agents doxorubicin/DOX (125 ng/mouse) and cisplatin/CP (225 ng/mouse), were applied to C57 black mice. Several tests were chosen to estimate the morphological and physiological state (propidium iodide, Rhodamine-123, DAPI, JC-1, Janus Green, Giemsa staining, and others), covering cell integrity, nuclear fragmentation and chromatin condensation, mitochondrial activity, and other features. Single- and double-factor ANOVA analyses were performed to determine the correlation between the criteria of the applied drugs and the cytomorphological changes. Results: In all cases of treatment, several morphological changes were observed (intracellular vacuolization, membrane blebbing, and an interconnected mitochondrial network). A lower gain in ascites (49.97% compared to the control group) and the longest lifespan (22 ± 9 days) after tumor injection were obtained with single VBL and single DOX injections. Such ascites contained the highest number of GCC (83.7 ± 9.2%) and the lowest cell count (72.7 ± 31.0 mln/ml), with a strong correlation coefficient between increased mitochondrial activity and the percentage of giant NK/Ly cells. A high number of viable GCC (82.1 ± 9.2%) was observed compared to the parental forms (15.4 ± 11.9%), indicating that GCC are more drug resistant than the parental cells. All this indicates that giant cell formation and its link to drug resistance constitute an expanding field in cancer research.
Keywords: ANOVA, cisplatin, doxorubicin, drug resistance, giant cancer cells, NK/Ly lymphoma, paclitaxel, vinblastine
Procedia PDF Downloads 217
412 Social Media Data Analysis for Personality Modelling and Learning Styles Prediction Using Educational Data Mining
Authors: Srushti Patil, Preethi Baligar, Gopalkrishna Joshi, Gururaj N. Bhadri
Abstract:
In designing learning environments, instructional strategies can be tailored to suit the learning style of an individual to ensure effective learning. In this study, the information shared on social media like Facebook is used to predict the learning style of a learner. Previous research studies have shown that Facebook data can be used to predict user personality. Users with a particular personality exhibit an inherent pattern in their digital footprint on Facebook. The proposed work aims to correlate users' personality, predicted from Facebook data, with their learning styles, predicted through questionnaires. For Millennial learners, Facebook has become a primary means of information sharing and interaction with peers. Thus, it can serve as a rich bed for research and direct the design of learning environments. The authors have conducted this study in an undergraduate freshman engineering course. Data from 320 freshman Facebook users were collected. The same users also participated in the learning style and personality prediction survey. The Kolb learning style questionnaire and the Big Five personality inventory were adopted for the survey. The users agreed to participate in this research and signed individual consent forms. A specific page was created on Facebook to collect user data such as personal details, status updates, comments, demographic characteristics, and egocentric network parameters. These data were captured by an application created using a Python program. The data captured from Facebook were subjected to a text analysis process using the Linguistic Inquiry and Word Count (LIWC) dictionary. An analysis of the data collected from the questionnaires reveals each student's personality and learning style. The results obtained from the analysis of the Facebook, learning style, and personality data were then fed into an automatic classifier trained using data mining techniques such as rule-based classifiers and decision trees. This helps to predict the user personality and learning styles by analysing the common patterns. Rule-based classifiers applied for text analysis help to categorize Facebook data into positive, negative, and neutral. In total, two models were trained: one to predict personality from Facebook data, and another to predict learning styles from the personalities. The results show that the classifier model has high accuracy, which makes the proposed method a reliable one for predicting user personality and learning styles.
Keywords: educational data mining, Facebook, learning styles, personality traits
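As a rough sketch of the second classifier described above (personality to learning style), the Python snippet below trains a decision tree on Big Five scores plus one simple text-derived feature and evaluates it by cross-validation. The feature set, label encoding, and data are illustrative assumptions, not the study's LIWC features or survey results.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 320  # same cohort size as the study, but with synthetic values
# columns: openness, conscientiousness, extraversion, agreeableness, neuroticism, positive-word ratio
X = rng.uniform(0, 1, size=(n, 6))
y = rng.integers(0, 4, size=n)   # 0..3 = Kolb styles (diverging, assimilating, converging, accommodating)

tree = DecisionTreeClassifier(max_depth=4, random_state=0)
scores = cross_val_score(tree, X, y, cv=5)
print("5-fold cross-validated accuracy:", scores.mean().round(3))
```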
Procedia PDF Downloads 231
411 Oxidative Stress Related Alteration of Mitochondrial Dynamics in Cellular Models
Authors: Orsolya Horvath, Laszlo Deres, Krisztian Eros, Katalin Ordog, Tamas Habon, Balazs Sumegi, Kalman Toth, Robert Halmosi
Abstract:
Introduction: Oxidative stress induces an imbalance in mitochondrial fusion and fission processes, finally leading to cell death. The two antioxidant molecules, BGP-15 and L2286, have beneficial effects on mitochondrial functions and on the cellular oxidative stress response. In this work, we studied the effects of these compounds on the processes of mitochondrial quality control. Methods: We used H9c2 cardiomyoblasts and isolated neonatal rat cardiomyocytes (NRCM) for the experiments. The concentrations of stressors and antioxidants were determined beforehand with the MTT test. We applied 1-methyl-3-nitro-1-nitrosoguanidine (MNNG) at 125 µM, 400 µM and 800 µM concentrations for 4 and 8 hours on H9c2 cells. H₂O₂ was applied at 150 µM and 300 µM concentrations for 0.5 and 4 hours on both models. L2286 was administered at 10 µM, while BGP-15 at 50 µM doses. Cellular levels of the key proteins playing a role in mitochondrial dynamics were measured in Western blot samples. For the analysis of mitochondrial network dynamics, we applied electron microscopy and immunocytochemistry. Results: Due to MNNG treatment, the levels of the fusion proteins (OPA1, MFN2) decreased, while the level of the fission protein DRP1 rose markedly. The levels of the fusion proteins OPA1 and MFN2 increased in the L2286- and BGP-15-treated groups. During the 8-hour treatment period, the level of DRP1 also increased in the treated cells (p < 0.05). In the H₂O₂-stressed cells, administration of L2286 increased the level of OPA1 in both the H9c2 and NRCM models. MFN2 levels in isolated neonatal rat cardiomyocytes rose considerably due to BGP-15 treatment (p < 0.05). L2286 administration decreased the DRP1 level in H9c2 cells (p < 0.05). We observed that the H₂O₂-induced mitochondrial fragmentation could be decreased by L2286 treatment. Conclusion: Our results indicated that the PARP inhibitor L2286 has a beneficial effect on mitochondrial dynamics under oxidative stress, and also in the case of directly induced DNA damage. We could draw similar conclusions in the case of BGP-15 administration, which, by reducing ROS accumulation, promotes fusion processes and thereby helps preserve cellular viability. Funding: GINOP-2.3.2-15-2016-00049; GINOP-2.3.2-15-2016-00048; GINOP-2.3.3-15-2016-00025; EFOP-3.6.1-16-2016-00004; ÚNKP-17-4-I-PTE-209
Keywords: H9c2, mitochondrial dynamics, neonatal rat cardiomyocytes, oxidative stress
Procedia PDF Downloads 152
410 On-Farm Biopurification Systems: Fungal Bioaugmentation of Biomixtures for Carbofuran Removal
Authors: Carlos E. Rodríguez-Rodríguez, Karla Ruiz-Hidalgo, Kattia Madrigal-Zúñiga, Juan Salvador Chin-Pampillo, Mario Masís-Mora, Elizabeth Carazo-Rojas
Abstract:
One of the main causes of contamination linked to agricultural activities is the spillage and disposal of pesticides, especially during the loading, mixing or cleaning of agricultural spraying equipment. One improvement in the handling of pesticides is the use of biopurification systems (BPS), simple and cheap degradation devices in which the pesticides are biologically degraded at accelerated rates. The biologically active core of a BPS is the biomixture, which is constituted by soil pre-exposed to the target pesticide, a lignocellulosic substrate to promote the activity of ligninolytic fungi and a humic component (peat or compost), mixed at a volumetric proportion of 50:25:25. Considering the known ability of ligninolytic fungi to degrade a wide range of organic pollutants, and the high amount of lignocellulosic waste used in biomixture preparation, the bioaugmentation of biomixtures with these fungi represents an interesting approach for improving biomixtures. The present work aimed at evaluating the effect of the bioaugmentation of rice husk based biomixtures with the fungus Trametes versicolor on the removal of the insecticide/nematicide carbofuran (CFN) and at optimizing the composition of the biomixture to obtain the best performance in terms of CFN removal and mineralization, reduction in the formation of transformation products and decrease in the residual toxicity of the matrix. The evaluation of several lignocellulosic residues (rice husk, wood chips, coconut fiber, sugarcane bagasse or newspaper print) revealed the best colonization by T. versicolor in rice husk. Pre-colonized rice husk was then used in the bioaugmentation of biomixtures also containing soil pre-exposed to CFN and either peat (GTS biomixture) or compost (GCS biomixture). After spiking with 10 mg/kg CFN, the efficiency of the biomixture was evaluated through a multi-component approach that included: monitoring of CFN removal and production of CFN transformation products, mineralization of radioisotopically labeled carbofuran (¹⁴C-CFN) and changes in the toxicity of the matrix after the treatment (Daphnia magna acute immobilization test). Estimated half-lives of CFN in the biomixtures were 3.4 d and 8.1 d in GTS and GCS, respectively. The transformation products 3-hydroxycarbofuran and 3-ketocarbofuran were detected at the moment of CFN application; however, their concentrations continuously decreased thereafter. Mineralization of ¹⁴C-CFN was also faster in GTS than in GCS. The toxicological evaluation showed complete removal of toxicity in the biomixtures after 48 d of treatment. The composition of the GCS biomixture was optimized using a central composite design and response surface methodology. The design variables were the volumetric content of fungally pre-colonized rice husk and the volumetric ratio compost/soil. According to the response models, maximization of the CFN removal and mineralization rates, and minimization of the accumulation of transformation products were obtained with an optimized biomixture of composition 30:43:27 (pre-colonized rice husk:compost:soil), which differs from the 50:25:25 composition commonly employed in BPS. Results suggest that fungal bioaugmentation may enhance the performance of biomixtures in CFN removal. Optimization reveals the importance of assessing new biomixture formulations in order to maximize their performance. Keywords: bioaugmentation, biopurification systems, degradation, fungi, pesticides, toxicity
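As a side note on the reported half-lives (3.4 d and 8.1 d), the following minimal sketch illustrates how a dissipation half-life can be estimated by fitting first-order decay to residue measurements; the sampling times and CFN concentrations below are invented for illustration and are not the study's data.

```python
# Minimal sketch (made-up measurements) of estimating a dissipation half-life by
# fitting first-order decay, C(t) = C0 * exp(-k t), so that t_1/2 = ln(2) / k.
import numpy as np

days = np.array([0, 7, 14, 21, 28, 48], dtype=float)   # sampling times (d), illustrative
cfn = np.array([10.0, 5.6, 3.1, 1.8, 1.0, 0.2])        # CFN residue (mg/kg), illustrative

# Linear regression of ln(C) against t gives slope = -k.
slope, intercept = np.polyfit(days, np.log(cfn), deg=1)
k = -slope
half_life = np.log(2) / k
print(f"first-order rate constant k = {k:.3f} 1/d, half-life = {half_life:.1f} d")
```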
Procedia PDF Downloads 311
409 Analysis of the Homogeneous Turbulence Structure in Uniformly Sheared Bubbly Flow Using First and Second Order Turbulence Closures
Authors: Hela Ayeb Mrabtini, Ghazi Bellakhal, Jamel Chahed
Abstract:
The presence of the dispersed phase in gas-liquid bubbly flow considerably alters the liquid turbulence. The bubbles induce turbulent fluctuations that enhance the global liquid turbulence level and alter the mechanisms of turbulence. RANS modeling of uniformly sheared flows on an isolated sphere centered in a control volume is performed using first and second order turbulence closures. The sphere is placed in the production-dissipation equilibrium zone, where the liquid velocity is set equal to the relative velocity of the bubbles. The void fraction is determined by the ratio between the sphere volume and the control volume. The analysis of the turbulence statistics on the control volume provides numerical results that are interpreted with regard to the effect of the bubble wakes on the turbulence structure in uniformly sheared bubbly flow. We assumed for this purpose that at low void fraction, where there is no hydrodynamic interaction between the bubbles, the single-phase flow simulation on an isolated sphere is representative, on statistical average, of a sphere network. The numerical simulations were first validated against the experimental data of bubbly homogeneous turbulence with constant shear and then extended to produce numerical results for a wide range of shear rates from 0 to 10 s⁻¹. These results are compared with our turbulence closure proposed for gas-liquid bubbly flows. In this closure, the turbulent stress tensor in the liquid is split into a turbulent dissipative part, produced by the gradient of the mean velocity and also containing the turbulence generated in the bubble wakes, and a pseudo-turbulent non-dissipative part induced by the bubble displacements. Each part is determined by a specific transport equation. The simulations of uniformly sheared flows on an isolated sphere reproduce the mechanisms related to the turbulent part, and the numerical results are in perfect accordance with the modeling of the transport equation of the turbulent part. The reduction of the second order turbulence closure provides a description of the modification of the turbulence structure by the presence of the bubbles using a dimensionless number expressed in terms of two time scales characterizing the turbulence induced by the shear and that induced by bubble displacements. The numerical simulations carried out in the framework of a comprehensive analysis reproduce, in particular, the attenuation of the turbulent friction shown in the experimental results of bubbly homogeneous turbulence subjected to a constant shear. Keywords: gas-liquid bubbly flows, homogeneous turbulence, turbulence closure, uniform shear
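A schematic form of the two-part closure described above is sketched below; the notation is illustrative and not the authors' exact formulation.

```latex
% Schematic split of the liquid Reynolds stress into a dissipative turbulent part
% (shear- and wake-generated) and a non-dissipative pseudo-turbulent part
% (bubble-displacement-induced); each part obeys its own transport equation.
\begin{equation}
  \overline{u'_i u'_j}
  \;=\; \underbrace{\overline{u'_i u'_j}^{\,T}}_{\text{turbulent, dissipative (shear and wakes)}}
  \;+\; \underbrace{\overline{u'_i u'_j}^{\,PT}}_{\text{pseudo-turbulent, non-dissipative (bubble displacements)}}
\end{equation}
```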
Procedia PDF Downloads 460
408 Criminal Law and Internet of Things: Challenges and Threats
Authors: Celina Nowak
Abstract:
The development of information and communication technologies (ICT) and the consequent growth of cyberspace have become a reality of modern societies. The newest addition to this complex structure is the Internet of Things (IoT), which has emerged with the appearance of smart devices. IoT creates a new dimension of the network, as communication is no longer the domain of just humans but has also become possible between devices themselves. The possibility of communication between devices, devoid of human intervention and real-time supervision, has generated new societal and legal challenges, some of which may, and certainly will, eventually be connected to criminal law. Legislators on both the national and international level have been struggling to cope with this technologically evolving environment in order to address the new threats created by ICT. There are legal instruments on cybercrime, however imperfect and not of universal scope, sometimes referring to specific types of prohibited behaviors undertaken by criminals, such as money laundering or sex offences. However, criminal law seems largely unprepared for the challenges which may arise because of the development of IoT. This is largely due to the fact that criminal law, on both the national and international level, is still based on the concept of perpetration of an offence by a human being. This is a traditional approach, historically and factually justified. Over time, some legal systems have developed or accepted the possibility of the commission of an offence by a corporation, a legal person. This is in fact a legal fiction, as a legal person cannot commit an offence as such; it needs humans to actually behave in a certain way on its behalf. Yet, legislators have come to understand that corporations have their own interests and may benefit from crime, and therefore need to be penalized. This realization, however, has not been welcomed by all states and still gives rise to doubts of an ontological and theoretical nature in many legal systems. For this reason, in many legislations the liability of legal persons for the commission of an offence has not been recognized as criminal responsibility. With technological progress and the growing use of IoT, the discussions on the criminal responsibility of corporations seem rather inadequate. The world is now facing new challenges and new threats related to 'smart' things. They will eventually have to be addressed by legislators if they want, as they should, to keep up with the pace of technological and societal evolution. This will, however, require a reevaluation and possibly a restructuring of the most fundamental notions of modern criminal law, such as perpetration, guilt, and participation in crime. It remains unclear at this point what norms and legal concepts will or may be established. The main goal of the research is to point out the challenges ahead of national and international legislators in the said context and to attempt to formulate some indications as to the directions of changes, bearing in mind the serious threats to privacy and security related to the use of IoT. Keywords: criminal law, internet of things, privacy, security threats
Procedia PDF Downloads 162
407 Ecolabelling: Normative Power or Corporate Strategy? A Case Study of a Textile Company in Indonesia
Authors: Suci Lestari Yuana, Shofi Fatihatun Sholihah, Derarika Ensta Jesse
Abstract:
Textile is a buyer-driven industry which relies on label trust from consumers. Most textile manufacturers produce textiles and textile products based on consumer demands. A company's policy is highly dependent on the dynamic evolution of consumer behavior. Recently, eco-friendliness has become one of the most important factors for Western consumers when purchasing textiles and textile products (TPT) from a company. In that sense, companies from developing countries are encouraged to follow Western consumers' values. Some examples of ecolabel certificates are ISO (International Standard Organisation), Lembaga Ekolabel Indonesia (Indonesian Ecolabel Institution) and the Global Ecolabel Network (GEN). The submission of a national company to an international standard raises a critical question: whether this reflects the legitimation of global norms into national policy or is actually a practical strategy of the company to gain global consumers. By observing one of the prominent textile companies in Indonesia, this research aims to discuss what factors drive a company to use ecolabels and what the meaning behind this is, namely whether it comes from normative power or from the strategy of the company. This is qualitative research that takes a company in Sukoharjo, Central Java, Indonesia as a case study to explain the practice of ecolabelling by a textile company. In-depth interviews were conducted with the company in order to understand the ecolabelling process. In addition, this research also collected documents related to the company's ecolabelling process and its impact on the company's value. The findings of the project concerned several issues: (1) the role of the media as a source of consumer information, (2) the role of government and non-government actors as normative agents, (3) the role of the company in social responsibility and (4) eco-friendly consciousness as a value of the company. Environmental norms that have been accepted internationally have changed the global industrial process. These environmental norms have also pushed companies around the world, including the company in Sukoharjo, Central Java, Indonesia, to follow them. Neglecting the global norms would leave the company in an isolated and unsustainable market, harming its continuity. So, in a buyer-driven industry, the character of company-consumer relations has brought a fast, dynamic evolution of norms and values. The creation of global norms and values circulates beyond national territories and identities. Keywords: ecolabeling, waste management, CSR, normative power
Procedia PDF Downloads 306
406 Different Data-Driven Bivariate Statistical Approaches to Landslide Susceptibility Mapping (Uzundere, Erzurum, Turkey)
Authors: Azimollah Aleshzadeh, Enver Vural Yavuz
Abstract:
The main goal of this study is to produce landslide susceptibility maps using different data-driven bivariate statistical approaches, namely the entropy weight method (EWM), evidence belief function (EBF), and information content model (ICM), for Uzundere county, Erzurum province, in the north-eastern part of Turkey. Past landslide occurrences were identified and mapped from an interpretation of high-resolution satellite images and earlier reports, as well as by carrying out field surveys. In total, 42 landslide incidence polygons were mapped using ArcGIS 10.4.1 software and randomly split into a construction dataset of 70% (30 landslide incidences) for building the EWM, EBF, and ICM models, while the remaining 30% (12 landslide incidences) were used for verification purposes. Twelve layers of landslide-predisposing parameters were prepared, including total surface radiation, maximum relief, soil groups, standard curvature, distance to stream/river sites, distance to the road network, surface roughness, land use pattern, engineering geological rock group, topographical elevation, the orientation of slope, and terrain slope gradient. The relationships between the landslide-predisposing parameters and the landslide inventory map were determined using the different statistical models (EWM, EBF, and ICM). The model results were validated with the landslide incidences that were not used during model construction. In addition, receiver operating characteristic curves were applied, and the area under the curve (AUC) was determined for the different susceptibility maps using the success (construction data) and prediction (verification data) rate curves. The results revealed that the AUCs for the success rates are 0.7055, 0.7221, and 0.7368, while those for the prediction rates are 0.6811, 0.6997, and 0.7105 for the EWM, EBF, and ICM models, respectively. Consequently, the landslide susceptibility maps were classified into five susceptibility classes: very low, low, moderate, high, and very high. Additionally, the portion of construction and verification landslide incidences in the high and very high landslide susceptibility classes of each map was determined. The results showed that the EWM, EBF, and ICM models produced satisfactory accuracy. The obtained landslide susceptibility maps may be useful for future natural hazard mitigation studies and planning purposes for environmental protection. Keywords: entropy weight method, evidence belief function, information content model, landslide susceptibility mapping
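For readers unfamiliar with the bivariate models named above, the following generic sketch of the entropy weight method (EWM) shows how parameter weights can be derived from a class-by-parameter matrix of landslide frequency ratios; the matrix values are illustrative assumptions, not the study's data, and the formulation is the standard EWM rather than the authors' exact implementation.

```python
# Generic entropy weight method (EWM): rows are mapped classes (e.g. slope or
# land-use classes), columns are landslide-predisposing parameters, and entries
# are landslide frequency ratios (illustrative values).
import numpy as np

freq_ratio = np.array([
    [0.4, 1.2, 0.8],
    [1.1, 0.9, 1.5],
    [2.0, 0.6, 1.0],
    [0.7, 1.8, 0.5],
])

m = freq_ratio.shape[0]
p = freq_ratio / freq_ratio.sum(axis=0)             # normalize each parameter column
plogp = np.where(p > 0, p * np.log(p), 0.0)         # convention: 0 * ln(0) = 0
entropy = -plogp.sum(axis=0) / np.log(m)            # entropy of each parameter
weights = (1 - entropy) / (1 - entropy).sum()       # higher weight = more informative parameter
print("parameter weights:", np.round(weights, 3))
```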
Procedia PDF Downloads 132
405 A Critical Analysis of the Creation of Geoparks in Brazil: Challenges and Possibilities
Authors: Isabella Maria Beil
Abstract:
The International Geosciences and Geoparks Programme (IGGP) was officially created in 2015 by the United Nations Educational, Scientific and Cultural Organization (UNESCO) to enhance the protection of geological heritage and fill the gaps in the World Heritage Convention. According to UNESCO, a Global Geopark is a unified area where sites and landscapes of international geological significance are managed based on a concept of sustainable development. Tourism is seen as a main activity to develop new sources of revenue. Currently (November 2022), UNESCO recognizes 177 Global Geoparks, of which more than 50% are in Europe, 40% in Asia, 6% in Latin America, and the remaining 4% are distributed between Africa and Anglo-Saxon America. This picture shows a very uneven geographical distribution of these areas across the planet. Currently, there are three Geoparks in Brazil; however, the first of them was accepted by the Global Geoparks Network in 2006 and, just fifteen years later, two other Brazilian Geoparks also obtained the UNESCO title. Therefore, this paper aims to provide an overview of the current geopark situation in Brazil and to identify the main challenges faced by the implementation of these areas in the country. To this end, the Brazilian history and its main characteristics regarding the development of geoparks over the years will be briefly presented. Then, the results obtained from interviews with those responsible for each of the current 29 aspiring geoparks in Brazil will be presented. Finally, the main challenges related to the implementation of Geoparks in the country will be listed. Among these challenges, the answers obtained through the interviews revealed conflicts and problems that hinder both the start of the development of a Geopark project and its continuity and implementation. It is clear that the task of getting multiple social actors, or stakeholders, to engage with a Geopark, one of UNESCO's guidelines, is one of its most complex aspects. Therefore, among the main challenges, the difficulty of establishing solid partnerships stands out, which directly reflects divergences between the different social actors and their goals. This difficulty in establishing partnerships happens for a number of reasons. One of them is that the investment in a Geopark project can be high, and investors often expect a short-term financial return. In addition, political support from the public sector is often costly as well, since the possible results and positive influences of a Geopark in a given area will only be experienced during future mandates. These results demonstrate that research on Geoparks goes far beyond the geological perspective linked to its origins and is deeply embedded in political and economic issues. Keywords: Brazil, geoparks, tourism, UNESCO
Procedia PDF Downloads 90
404 A Comparative Study between Japan and the European Union on Software Vulnerability Public Policies
Authors: Stefano Fantin
Abstract:
The present analysis results from the research undertaken in the course of the European-funded project EUNITY, which targets the gaps in research and development on cybersecurity and privacy between Europe and Japan. Under these auspices, the research presents a study of the policy approach of Japan, the EU and a number of Member States of the Union with regard to the handling and discovery of software vulnerabilities, with the aim of identifying methodological differences and similarities. This research builds upon a functional comparative analysis of both public policies and legal instruments from the identified jurisdictions. The result of this analysis is based on semi-structured interviews with EUNITY partners, as well as on the researcher's participation in a recent report from the Center for EU Policy Study on software vulnerability. The European Union presents a rather fragmented legal framework on software vulnerabilities. The presence of a number of different pieces of legislation at the EU level (including the Network and Information Security Directive, the Critical Infrastructure Directive, the Directive on Attacks against Information Systems and the Proposal for a Cybersecurity Act) with no clear focus on such a subject makes it difficult for both national governments and end-users (software owners, researchers and private citizens) to gain a clear understanding of the Union's approach. Additionally, the current data protection reform package (the General Data Protection Regulation) seems to create legal uncertainty around security research. To date, at the Member State level, a few efforts towards transparent practices have been made, namely by the Netherlands, France, and Latvia. This research will explain what policy approach these countries have taken. Japan started implementing a coordinated vulnerability disclosure policy in 2004. To date, two amendments to the framework can be registered (2014 and 2017). The framework is furthermore complemented by a series of instruments allowing researchers to responsibly disclose any new discovery. However, the policy has started to lose its efficiency due to a significant increase in reports made to the authority in charge. To conclude, the research conducted reveals two asymmetric policy approaches, time-wise and content-wise. The analysis will therefore conclude with a series of policy recommendations based on the lessons learned from both regions, towards a common approach to the security of European and Japanese markets, industries and citizens. Keywords: cybersecurity, vulnerability, European Union, Japan
Procedia PDF Downloads 156
403 Hygrothermal Interactions and Energy Consumption in Cold Climate Hospitals: Integrating Numerical Analysis and Case Studies to Investigate and Analyze the Impact of Air Leakage and Vapor Retarding
Authors: Amir E. Amirzadeh, Richard K. Strand
Abstract:
Moisture-induced problems are a significant concern for building owners, architects, construction managers, and building engineers, as they can have substantial impacts on building enclosures' durability and performance. Computational analyses, such as hygrothermal and thermal analysis, can provide valuable information and demonstrate the expected relative performance of building enclosure systems but are not grounded in absolute certainty. This paper evaluates the hygrothermal performance of common enclosure systems in hospitals in cold climates. The study aims to investigate the impact of exterior wall systems on hospitals, focusing on factors such as durability, construction deficiencies, and energy performance. The study primarily examines the impact of air leakage and vapor retarding layers in relation to energy consumption. While these factors have been studied in residential and commercial buildings, there is a lack of information on their impact on hospitals in a holistic context. The study integrates various research studies and professional experience in hospital building design to achieve its objective. The methodology involves surveying and observing exterior wall assemblies, reviewing common exterior wall assemblies and details used in hospital construction, performing simulations and numerical analyses of various variables, validating the model and mechanism using available data from industry and academia, visualizing the outcomes of the analysis, and developing a mechanism to demonstrate the relative performance of exterior wall systems for hospitals under specific conditions. The data sources include case studies from real-world projects and peer-reviewed articles, industry standards, and practices. This research intends to integrate and analyze the in-situ and as-designed performance and durability of building enclosure assemblies with numerical analysis. The study's primary objective is to provide a clear and precise roadmap to better visualize and comprehend the correlation between the durability and performance of common exterior wall systems used in the construction of hospitals and the energy consumption of these buildings under certain static and dynamic conditions. As the construction of new hospitals and renovation of existing ones have grown over the last few years, it is crucial to understand the effect of poor detailing or construction deficiencies on building enclosure systems' performance and durability in healthcare buildings. This study aims to assist stakeholders involved in hospital design, construction, and maintenance in selecting durable and high-performing wall systems. It highlights the importance of early design evaluation, regular quality control during the construction of hospitals, and understanding the potential impacts of improper and inconsistent maintenance and operation practices on occupants, owners, building enclosure systems, and Heating, Ventilation, and Air Conditioning (HVAC) systems, even if they are designed to meet the project requirements. Keywords: hygrothermal analysis, building enclosure, hospitals, energy efficiency, optimization and visualization, uncertainty and decision making
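As a highly simplified illustration of the kind of reasoning behind hygrothermal assessment of wall assemblies, the sketch below computes a steady-state temperature profile through a hypothetical cold-climate hospital wall and flags interfaces that fall below the interior dew point; all layer properties and boundary conditions are assumptions, and this is not a substitute for the transient hygrothermal simulations used in the study.

```python
# Steady-state temperature drop across hypothetical wall layers (interior to exterior),
# compared against an assumed interior dew point as a crude condensation-risk flag.
layers = [                       # (name, R-value in m^2*K/W), assumed values
    ("gypsum board",       0.08),
    ("vapor retarder",     0.00),
    ("mineral wool",       3.50),
    ("exterior sheathing", 0.11),
    ("air film, exterior", 0.03),
]
t_interior, t_exterior = 22.0, -20.0   # deg C, assumed design temperatures
dew_point_interior = 10.0              # deg C, assumed for roughly 45% indoor RH

r_total = sum(r for _, r in layers) + 0.13        # + interior air film
q = (t_interior - t_exterior) / r_total           # heat flux, W/m^2

t = t_interior - q * 0.13                         # temperature after interior air film
for name, r in layers:
    t -= q * r
    risk = "condensation risk" if t < dew_point_interior else "ok"
    print(f"after {name:20s}: {t:6.1f} C  ({risk})")
```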
Procedia PDF Downloads 70
402 Transportation and Urban Land-Use System for the Sustainability of Cities, a Case Study of Muscat
Authors: Bader Eddin Al Asali, N. Srinivasa Reddy
Abstract:
Cities are dynamic in nature and are characterized by a concentration of people, infrastructure, services and markets, which offer opportunities for production and consumption. Often, growth and development in urban areas is not systematic and is directed by a number of factors such as natural growth, land prices, housing availability, job locations (the central business district, CBD), transportation routes, distribution of resources, geographical boundaries, administrative policies, etc. One-sided spatial and geographical development in cities leads to an unequal spatial distribution of population and jobs, resulting in high transportation activity. City development can be measured by parameters such as urban size, urban form, urban shape, and urban structure. Urban size is defined by the population of the city, and urban form is the location and size of the economic activity (CBD) over the geographical space. Urban shape is the geometrical shape of the city over which the population and economic activity are distributed, and urban structure is the transport network within which the population and activity centers are connected by a hierarchy of roads. Among the urban land-use systems, transportation plays a significant role and is one of the largest energy-consuming sectors. Transportation interaction among the land uses is measured in passenger-km and mean trip length, and is often used as a proxy for the measurement of energy consumption in the transportation sector. Among the trips generated in cities, work trips constitute more than 70 percent. Work trips originate from the place of residence and are destined for the place of employment. To understand the role of urban parameters in transportation interaction, theoretical cities of different sizes and urban specifications are generated through a building block exercise using a specially developed interactive C++ programme, and land-use transportation modeling is carried out. The land-use transportation modeling exercise helps in understanding the role of urban parameters and also in classifying the cities by their urban form, structure, and shape. Muscat, the capital city of Oman, which underwent rapid urbanization over the last four decades, is taken as a case study for its classification. Also, a pilot survey is carried out to capture urban travel characteristics. Analysis of land-use transportation modeling with field data classified Muscat as a linear city with a polycentric CBD. Conclusions are drawn and suggestions are given for policy-making for the sustainability of Muscat City. Keywords: land-use transportation, transportation modeling, urban form, urban structure, urban rule parameters
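The two transport interaction measures mentioned above, passenger-km and mean trip length, can be computed directly from a work-trip origin-destination matrix and a zone-to-zone distance matrix, as in the short sketch below; the numbers are illustrative assumptions, not Muscat data.

```python
# Passenger-km and mean trip length from an origin-destination (OD) matrix and
# a zone-to-zone distance matrix (illustrative three-zone example).
import numpy as np

trips = np.array([              # work trips per day between zones (origins x destinations)
    [  0, 400, 250],
    [300,   0, 500],
    [150, 350,   0],
])
dist_km = np.array([            # zone-to-zone network distances (km)
    [ 0.0,  8.0, 15.0],
    [ 8.0,  0.0, 11.0],
    [15.0, 11.0,  0.0],
])

passenger_km = float((trips * dist_km).sum())
mean_trip_length = passenger_km / trips.sum()
print(f"passenger-km per day: {passenger_km:.0f}")
print(f"mean trip length: {mean_trip_length:.1f} km")
```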
Procedia PDF Downloads 270
401 Angiomotin Regulates Integrin Beta 1-Mediated Endothelial Cell Migration and Angiogenesis
Authors: Yuanyuan Zhang, Yujuan Zheng, Giuseppina Barutello, Sumako Kameishi, Kungchun Chiu, Katharina Hennig, Martial Balland, Federica Cavallo, Lars Holmgren
Abstract:
Angiogenesis describes the process by which new blood vessels migrate from pre-existing ones to form lumenized 3D structures and undergo remodeling. During directional migration toward the gradient of pro-angiogenic factors, the endothelial cells, especially the tip cells, need filopodia to sense the environment and exert the pulling force. Of particular interest are the integrin proteins, which play an essential role in focal adhesions, the connections between migrating cells and the extracellular matrix (ECM). Understanding how these biomechanical complexes orchestrate intrinsic and extrinsic forces is important for our understanding of the underlying mechanisms driving angiogenesis. We have previously identified Angiomotin (Amot), a member of the Amot scaffold protein family, as a promoter of endothelial cell migration in vitro and in zebrafish models. Hence, we established inducible endothelial-specific Amot knock-out mice to study normal retinal angiogenesis as well as tumor angiogenesis. We found that the migration ratio of the blood vessel network to the edge was significantly decreased in Amotec- retinas at postnatal day 6 (P6), and almost all Amot-deficient tip cells had lost their migration advantage at P7. Consistent with the dramatic morphological defect of the tip cells, there was a non-autonomous defect in astrocytes, as well as a correspondingly disorganized fibronectin expression pattern at the migration front. Furthermore, the growth of transplanted LLC tumors was inhibited in Amot knockout mice due to reduced vascularization. In the MMTV-PyMT transgenic mouse model, there was a significantly longer period before tumors arose when Amot was specifically knocked out in blood vessels. In vitro evidence showed that Amot bound to beta-actin, Integrin beta 1 (ITGB1), Fibronectin, FAK and Vinculin, major focal adhesion molecules, and that ITGB1 and stress fibers were distinctly induced by Amot transfection. Via traction force microscopy, the total energy (a force indicator) was found to be significantly decreased in Amot knockdown cells. Taken together, we propose that Amot is a novel partner of the ITGB1/Fibronectin protein complex at focal adhesions and is required for force transmission between endothelial cells and the extracellular matrix. Keywords: angiogenesis, angiomotin, endothelial cell migration, focal adhesion, integrin beta 1
Procedia PDF Downloads 238