Search results for: clustering approach
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 14303

1373 High Performance Computing Enhancement of Agent-Based Economic Models

Authors: Amit Gill, Lalith Wijerathne, Sebastian Poledna

Abstract:

This research presents the details of a high performance computing (HPC) extension of agent-based economic models (ABEMs) to simulate hundreds of millions of heterogeneous agents. ABEMs study the economy as a dynamic system of interacting heterogeneous agents and are gaining popularity as an alternative to standard economic models. Over the last decade, ABEMs have been increasingly applied to study various problems related to monetary policy, bank regulations, etc. When it comes to predicting the effects of local economic disruptions, like major disasters, changes in policies, or exogenous shocks, on the economy of a country or region, it is pertinent to study how the disruptions cascade through every single economic entity, affecting its decisions and interactions, and eventually alter macroeconomic parameters. However, such simulations with hundreds of millions of agents are hindered by the lack of HPC-enhanced ABEMs. To address this, a scalable Distributed Memory Parallel (DMP) implementation of ABEMs has been developed using the Message Passing Interface (MPI). A balanced distribution of computational load among MPI-processes (i.e., CPU cores) of computer clusters, while taking all the interactions among agents into account, is a major challenge for scalable DMP implementations. Economic agents interact on several random graphs, some of which are centralized (e.g., credit networks) whereas others are dense with random links (e.g., consumption markets). The agents are partitioned into mutually exclusive subsets based on a representative employer-employee interaction graph, while the remaining graphs are made available at a minimum communication cost. To minimize the number of communications among MPI-processes, real-life solutions like the introduction of recruitment agencies, sales outlets, local banks, and local branches of government in each MPI-process are adopted.
Efficient communication among MPI-processes is achieved by combining MPI derived data types with newer MPI functionality. Most of the communications are overlapped with computations, thereby significantly reducing the communication overhead. The current implementation is capable of simulating a small open economy. As an example, a single time step of a 1:1 scale model of Austria (i.e., about 9 million inhabitants and 600,000 businesses) can be simulated in 15 seconds. The implementation is being further enhanced to simulate a 1:1 model of the Euro-zone (i.e., 322 million agents).
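
The partitioning strategy described above can be illustrated in a few lines. The following is a minimal pure-Python sketch, not the paper's actual code: the function name and the greedy least-loaded heuristic are assumptions made for illustration. Each firm, i.e. an employer together with its employees, is assigned to one rank so that employer-employee interactions never cross an MPI-process boundary.

```python
from collections import defaultdict

def partition_by_employer(employer_of, n_ranks):
    """Greedy load balancing: each firm (an employer plus its employees)
    goes to the currently least-loaded rank, so employer-employee
    interactions stay within one MPI-process.
    `employer_of` maps worker id -> employer id."""
    firms = defaultdict(list)
    for worker, employer in employer_of.items():
        firms[employer].append(worker)
    # Placing the largest firms first improves the greedy balance.
    order = sorted(firms, key=lambda f: len(firms[f]), reverse=True)
    load = [0] * n_ranks
    rank_of = {}
    for firm in order:
        r = load.index(min(load))        # least-loaded rank so far
        rank_of[firm] = r
        for worker in firms[firm]:
            rank_of[worker] = r
        load[r] += 1 + len(firms[firm])  # one employer + its employees
    return rank_of, load
```

In a real DMP run, such a mapping would drive the scatter of agent state to MPI ranks, with the dense market graphs then served by per-rank proxies (sales outlets, local banks) as described above.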

Keywords: agent-based economic model, high performance computing, MPI-communication, MPI-process

Procedia PDF Downloads 127
1372 Payload Bay Berthing of an Underwater Vehicle With Vertically Actuated Thrusters

Authors: Zachary Cooper-Baldock, Paulo E. Santos, Russell S. A. Brinkworth, Karl Sammut

Abstract:

In recent years, large unmanned underwater vehicles such as the Boeing Voyager and Anduril Ghost Shark have been developed. These vessels can be structured to contain onboard internal payload bays. These payload bays can serve a variety of purposes, including the launch and recovery (LAR) of smaller underwater vehicles. The LAR of smaller vessels is extremely important, as it enables transportation over greater distances, increased time on station, data transmission, and operational safety. The larger vessel and its payload bay structure complicate the LAR of UUVs in contrast to static docks affixed to the seafloor, as they actively alter the local flow field. These flow field impacts require analysis to determine whether UUVs can be safely launched and recovered inside the motherships. This research seeks to determine the hydrodynamic forces exerted on a vertically over-actuated, small, unmanned underwater vehicle (OUUV) during an internal LAR manoeuvre and compare this to an under-actuated vessel (UUUV). In this manoeuvre, the OUUV is navigated through the stern wake region of the larger vessel to a set point within the internal payload bay. The manoeuvre is simulated using ANSYS Fluent computational fluid dynamics models, covering the entire recovery of the OUUV and UUUV. The analysis of the OUUV is compared against the UUUV to determine the differences in the exerted forces. Of particular interest are the drag, pressure, turbulence and flow field effects exerted as the OUUV is driven inside the payload bay of the larger vessel. The hydrodynamic forces and flow field disturbances are used to determine the feasibility of making such an approach. From the simulations, it was determined that there were no significant detrimental physical forces, particularly with regard to turbulence. The flow field effects exerted by the OUUV, however, are significant.
The vertical thrusters generate significant wake structures, but their orientation ensures the wake effects are exerted below the vehicle, minimising their impact. It was also seen that the OUUV experiences higher drag forces than the UUUV, which corresponds to an increased energy expenditure. This investigation found no key indicators that recovery via a mothership payload bay is infeasible. The turbulence, drag and pressure phenomena were of a similar magnitude to those of existing static and towed dock structures.
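
The reported link between higher drag and increased energy expenditure follows from the standard quadratic drag relation. A hedged sketch (all coefficients and numbers below are hypothetical, not taken from the simulations):

```python
def drag_force(rho, cd, area, speed):
    """Quadratic drag law: F = 0.5 * rho * Cd * A * v^2, in newtons."""
    return 0.5 * rho * cd * area * speed ** 2

def propulsion_energy(rho, cd, area, speed, duration):
    """Energy (J) needed to hold `speed` against drag for `duration`
    seconds: power P = F * v, energy E = P * t."""
    return drag_force(rho, cd, area, speed) * speed * duration

rho_seawater = 1025.0  # kg/m^3
# Hypothetical coefficients: the over-actuated hull sees more drag,
# so holding the same approach speed costs more energy.
e_uuuv = propulsion_energy(rho_seawater, cd=0.30, area=0.20, speed=1.0, duration=60.0)
e_ouuv = propulsion_energy(rho_seawater, cd=0.45, area=0.20, speed=1.0, duration=60.0)
```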

Keywords: underwater vehicles, submarine, autonomous underwater vehicles, AUV, computational fluid dynamics, flow fields, pressure, turbulence, drag

Procedia PDF Downloads 91
1371 The Performance Evaluation of the Modular Design of Hybrid Wall with Surface Heating and Cooling System

Authors: Selcen Nur Erikci Çelik, Burcu İbaş Parlakyildiz, Gülay Zorer Gedik

Abstract:

Reducing the use of mechanical heating and cooling systems in buildings, which account for approximately 30-40% of total energy consumption worldwide, has a major impact in terms of energy conservation. To create buildings with sustainable, low energy use, structural elements and mechanical systems should be evaluated with a holistic approach. As walls are vertical building elements with large areas (m2), redesigning them as a system is proposed here as a way to reduce building energy consumption. In this study, a hybrid modular wall system providing surface heating and cooling and its integration with the other building elements will be evaluated. The design of the wall element comprises: identification of certain standards in terms of architectural design and size; elaboration according to where the wall elements are used (interior walls, exterior walls); solution of the joints; and obtaining a surface compatible with the conceptual structure. The durability of the product against various forces, its stability and its resistance are substantial enough to govern the establishment of the ready-wall element section and the planning of the structural design. All ready-wall alternatives will be assessed against certain parameters, such as reaching an optimum performance-cost balance and sizes that can be easily processed and handled. Restrictions imposed by building laws, such as zoning dimensions, building function, structural system and wheelbase, should also be evaluated. The wall elements are intended to be designed and constructed according to a certain standardization system.
Performance criteria for the wall elements will be determined for the utilization (operation, maintenance) and renovation phases, and alternative material options, including the interlayer materials, will be evaluated. The design, implementation and technical combination of the modular wall elements, their installation details in the use phase, and the associated energy-saving, heat-saving and environmental benefits will be discussed in detail. As a result, a ready-wall product with surface heating and cooling modules will be created, defined as a hybrid wall, and compared with the conventional system in terms of thermal comfort. After preliminary architectural evaluations, decisions covering the whole architectural design process (pre- and post-design), such as implementation, in-use performance, maintenance and renewal, will be evaluated in the results.
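
As an illustration of how a wall build-up translates into thermal performance, the overall heat transfer coefficient of a layered wall can be estimated from the series thermal resistances. A minimal sketch with hypothetical layer values, not the proposed ready-wall's actual build-up:

```python
def u_value(layers, r_si=0.13, r_se=0.04):
    """Overall heat transfer coefficient (W/m2K) of a layered wall:
    U = 1 / (R_si + sum(thickness / conductivity) + R_se),
    with standard interior and exterior surface resistances."""
    r_total = r_si + r_se + sum(t / k for t, k in layers)
    return 1.0 / r_total

# Hypothetical build-up: plaster, insulation, concrete core, plaster,
# as (thickness in m, conductivity in W/mK) pairs.
wall = [(0.02, 0.87), (0.10, 0.035), (0.15, 2.1), (0.02, 0.87)]
u = u_value(wall)  # dominated by the insulation layer's resistance
```

Comparing such U-values for the hybrid wall and a conventional wall is one simple way to quantify the thermal side of the comparison described above.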

Keywords: modular ready-wall element, hybrid, architectural design, thermal comfort, energy saving

Procedia PDF Downloads 254
1370 Analysis of Thermal Comfort in Educational Buildings Using Computer Simulation: A Case Study in Federal University of Parana, Brazil

Authors: Ana Julia C. Kfouri

Abstract:

A prerequisite of any building design is to provide safety and comfort to the users, taking the climate and its physical and physical-geometrical variables into account. It is also important to highlight the relevance of the right material elements, which mediate between the occupant and the environment and must provide improved thermal comfort conditions with low environmental impact. Furthermore, technology is constantly advancing, as are computational simulations for building projects, and they should be used to develop sustainable buildings and to provide a higher quality of life for their users. In relation to comfort, the more satisfied the building users are, the better their intellectual performance will be. Based on that, the study of thermal comfort in educational buildings is of particular relevance, since the thermal characteristics of these environments are of vital importance to all users. Moreover, educational buildings are large constructions, and when they are poorly planned and executed they have negative impacts on the surrounding environment, as well as on user satisfaction, throughout their whole life cycle. In this line of thought, a detailed case study on the thermal comfort conditions at the Federal University of Parana (UFPR) was conducted to evaluate university classroom conditions. The main goal of the study is to perform a thermal analysis of three classrooms at UFPR in order to address the subjective and physical variables that influence thermal comfort inside the classroom. For the assessment of the subjective components, a questionnaire was applied to evaluate the users' perception of the local thermal conditions. Regarding the physical variables, on-site measurements of air temperature and air humidity were carried out both inside and outside the building, along with meteorological variables such as wind speed and direction, solar radiation and rainfall collected from a weather station.
Then, a computer simulation in the EnergyPlus software was conducted to reproduce the air temperature and air humidity values of the three classrooms studied. The EnergyPlus outputs were analyzed and compared with the on-site measurements in order to draw conclusions about the local thermal conditions. The methodological approach provided a distinct perspective on an educational building, helping to understand the classrooms' thermal performance and the reasons behind it. Finally, the study prompts reflection on the importance of thermal comfort in educational buildings, proposes thermal alternatives for future projects, and discusses the significant contribution of computer simulation to engineering solutions that improve the thermal performance of UFPR's buildings.
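
Comparisons between simulated and measured series of this kind are commonly summarized with error metrics such as the root-mean-square error and the mean bias error. A minimal sketch with hypothetical temperature values, not the study's data:

```python
import math

def rmse(simulated, measured):
    """Root-mean-square error between paired series."""
    return math.sqrt(sum((s - m) ** 2 for s, m in zip(simulated, measured)) / len(measured))

def mean_bias_error(simulated, measured):
    """Positive result means the simulation runs warm on average."""
    return sum(s - m for s, m in zip(simulated, measured)) / len(measured)

# Hypothetical hourly classroom air temperatures (deg C).
measured  = [22.1, 22.4, 23.0, 23.8, 24.5, 24.9]
simulated = [21.8, 22.6, 23.4, 24.1, 24.4, 25.3]
bias = mean_bias_error(simulated, measured)
error = rmse(simulated, measured)
```

The same two metrics applied to humidity series would complete a simple calibration check of the EnergyPlus model against the weather-station and indoor measurements.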

Keywords: computer simulation, educational buildings, EnergyPlus, humidity, temperature, thermal comfort

Procedia PDF Downloads 386
1369 Relationship between Structure of Some Nitroaromatic Pollutants and Their Degradation Kinetic Parameters in UV-VIS/TiO2 System

Authors: I. Nitoi, P. Oancea, M. Raileanu, M. Crisan, L. Constantin, I. Cristea

Abstract:

Hazardous organic compounds like nitroaromatics are frequently found in effluents discharged by the chemical and petroleum industries. Due to their bio-refractory character and high chemical stability, they cannot be efficiently removed by classical biological or physical-chemical treatment processes. In the past decades, semiconductor photocatalysis has been frequently applied for the advanced degradation of toxic pollutants. Among various semiconductors, titania has been a widely studied photocatalyst due to its chemical inertness, low cost, photostability and nontoxicity. Many attempts have been made to improve the optical absorption and photocatalytic activity of TiO2; one feasible approach consists of doping the oxide semiconductor with metals. The degradation of dinitrobenzene (DNB) and dinitrotoluene (DNT) from aqueous solution under UVA-VIS irradiation using heavy-metal-doped titania (0.5% Fe, 1% Co, 1% Ni) was investigated. The photodegradation experiments were carried out using a Heraeus laboratory-scale UV-VIS reactor equipped with a medium-pressure mercury lamp which emits in the range 320-500 nm. Solutions with (0.34-3.14) x 10^-4 M pollutant content were photo-oxidized under the following working conditions: pH = 5-9; photocatalyst dose = 200 mg/L; irradiation time = 30-240 minutes. Prior to irradiation, the photocatalyst powder was added to the samples, and solutions were bubbled with air (50 L/hour), in the dark, for 30 min. The influence of dopant type, pH, pollutant structure and initial pollutant concentration on the degradation efficiency was evaluated in order to establish the optimal working conditions that assure advanced degradation of the substrate. The kinetics of nitroaromatics degradation and organic nitrogen mineralization was assessed, and pseudo-first-order rate constants were calculated. The Fe-doped photocatalyst with the lowest metal content (0.5 wt.%) showed considerably better behaviour with respect to pollutant degradation than the Co- and Ni-doped (1 wt.%) titania catalysts.
For the same working conditions, the degradation efficiency was higher for DNT than for DNB, in accordance with their calculated adsorption constants (Kad), taking into account that the degradation process occurs on the catalyst surface following a Langmuir-Hinshelwood model. The presence of the methyl group in the structure of DNT allows its degradation by both oxidative and reductive pathways, while DNB is converted only by the reductive route, which also explains the higher DNT degradation efficiency. For the highest pollutant concentration tested (3 x 10^-4 M), the optimum working conditions (0.5 wt.% Fe-doped TiO2 loading of 200 mg/L, pH = 7 and 240 min irradiation time) assure advanced nitroaromatics degradation (ηDNB = 89%, ηDNT = 94%) and organic nitrogen mineralization (ηDNB = 44%, ηDNT = 47%).
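
Pseudo-first-order rate constants of the kind mentioned above are obtained from the linearized form ln(C0/C) = k·t. A minimal sketch with synthetic concentration data, not the measured values:

```python
import math

def pseudo_first_order_k(times, concentrations):
    """Least-squares fit of ln(C0/C) = k * t through the origin,
    returning the rate constant k (per minute)."""
    c0 = concentrations[0]
    y = [math.log(c0 / c) for c in concentrations]
    return sum(t * yi for t, yi in zip(times, y)) / sum(t * t for t in times)

# Synthetic decay samples with an ideal rate constant of 0.01 / min.
times = [0.0, 30.0, 60.0, 120.0, 240.0]               # minutes
conc = [3.0e-4 * math.exp(-0.01 * t) for t in times]  # mol/L
k = pseudo_first_order_k(times, conc)
half_life = math.log(2) / k  # minutes
```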

Keywords: hazardous organic compounds, irradiation, nitroaromatics, photocatalysis

Procedia PDF Downloads 317
1368 Stochastic Pi Calculus in Financial Markets: An Alternate Approach to High Frequency Trading

Authors: Jerome Joshi

Abstract:

The paper presents the modelling of financial markets using the Stochastic Pi Calculus model. The Stochastic Pi Calculus model is mainly used for biological applications; however, its features promote its use in financial markets, most prominently in high frequency trading. The trading system can be broadly classified into the exchange, market makers or intermediary traders, and fundamental traders. The exchange is where trades are executed, and the two types of traders act as market participants in the exchange. High frequency trading, with its complex networks and numerous market participants (intermediary and fundamental traders), is difficult to model. Participants seek the advantage of complex trading algorithms and high execution speeds to carry out large volumes of trades. To earn profits from each trade, the trader must be at the top of the order book quite frequently, executing or processing multiple trades simultaneously. This requires highly automated systems as well as the right sentiment to outperform other traders. However, always being at the top of the book is also not best for the trader, since it was the cause of the outbreak of the 'Hot-Potato Effect,' which in turn demands a better and more efficient model. The model should be flexible and have diverse applications; therefore, a model proven in a similar field characterized by such difficulty should be chosen. It should also be flexible in its simulation so that it can be extended and adapted for future research, and equipped with tools suited to the field of finance. In this case, the Stochastic Pi Calculus model seems an ideal fit for financial applications, owing to its proven record in the field of biology.
It is an extension of the original Pi Calculus model and acts as a solution and an alternative to the previously flawed algorithms, provided its application is further extended. This model focuses on solving the problem which led to the 'Flash Crash,' namely the 'Hot-Potato Effect.' The model consists of small sub-systems, which can be integrated to form a large system. It is designed in such a way that the behavior of 'noise traders' is treated as a random process, or noise, in the system. While modelling, to get a better understanding of the problem, a broader picture is taken into consideration, covering the trader, the system, and the market participants. The paper goes on to explain trading in exchanges, types of traders, high frequency trading, the 'Flash Crash,' the 'Hot-Potato Effect,' the evaluation of orders and time delay in further detail. In the future, there is a need to focus on the calibration of the modules so that they interact seamlessly with one another. This model, with its application extended, would provide a basis for further research in the fields of finance and computing.
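
Stochastic pi calculus models are typically executed with Gillespie's stochastic simulation algorithm over rate-labelled channels. A toy sketch follows; the channel rates and the order-book interpretation are illustrative assumptions, not the paper's model:

```python
import random

def gillespie(rng, state, reactions, horizon):
    """Gillespie's stochastic simulation algorithm, the standard way
    stochastic pi calculus channels are executed: the waiting time is
    exponential in the total rate, and a channel fires with probability
    proportional to its rate. `reactions` is a list of
    (rate_fn, effect_fn) pairs."""
    t = 0.0
    while True:
        rates = [rate(state) for rate, _ in reactions]
        total = sum(rates)
        if total == 0:
            return state
        t += rng.expovariate(total)
        if t > horizon:
            return state
        pick, acc = rng.uniform(0, total), 0.0
        for (_, effect), r in zip(reactions, rates):
            acc += r
            if pick <= acc:
                effect(state)
                break

# Toy order-book channels: orders arrive at a fixed rate and each
# resting order is matched at a per-order rate (rates hypothetical).
def arrive(s): s["resting"] += 1
def match(s):  s["resting"] -= 1; s["fills"] += 1

state = gillespie(random.Random(42), {"resting": 0, "fills": 0},
                  [(lambda s: 5.0, arrive),
                   (lambda s: 1.0 * s["resting"], match)],
                  horizon=10.0)
```

Composing many such small channel systems is what makes the calculus attractive for modelling the interactions among an exchange, market makers and fundamental traders.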

Keywords: concurrent computing, high frequency trading, financial markets, stochastic pi calculus

Procedia PDF Downloads 77
1367 Nuclear Materials and Nuclear Security in India: A Brief Overview

Authors: Debalina Ghoshal

Abstract:

Nuclear security is the 'prevention and detection of, and response to, unauthorised removal, sabotage, unauthorised access, illegal transfer or other malicious acts involving nuclear or radiological material or their associated facilities.' Ever since the end of the Cold War, nuclear materials security has remained a concern for global security, and with the increase in terrorist attacks, in India especially, the security of nuclear materials remains a priority. Therefore, India has made continued efforts to tighten security around its nuclear materials to prevent nuclear theft and radiological terrorism. Nuclear security is distinct from nuclear safety. Physical security is also a serious concern, and India has been careful about the physical security of its nuclear materials. This is all the more important since India is expanding its nuclear power capability to generate electricity for economic development. As India targets 60,000 MW of electricity production by 2030, it has a range of reactors to help it achieve this goal: indigenous Pressurised Heavy Water Reactors, now standardized at 700 MW per reactor, Light Water Reactors, and the indigenous Fast Breeder Reactors that can generate more fuel for the future and enable the country to utilise its abundant thorium resource. Nuclear materials security can be enhanced in two important ways. One is through proliferation-resistant technologies and diplomatic efforts in support of non-proliferation initiatives. The other is by developing technical means to prevent any leakage of nuclear materials into the hands of asymmetric organisations. New Delhi has already implemented IAEA Safeguards on its civilian nuclear installations. Moreover, India has ratified the IAEA Additional Protocol in order to enhance the transparency of its nuclear material and strengthen nuclear security.
India is a party to the IAEA conventions on nuclear safety and security, in particular the 1980 Convention on the Physical Protection of Nuclear Material and its 2005 amendment, and the 2006 Code of Conduct on the Safety and Security of Radioactive Sources, which enable the country to provide for the highest international standards of nuclear and radiological safety and security. India's nuclear security approach is driven by five key components: governance, nuclear security practice and culture, institutions, technology and international cooperation. However, there is still scope for further improvement in nuclear materials and nuclear security. According to the NTI Report, 'India's improvement reflects its first contribution to the IAEA Nuclear Security Fund etc. in the future, India's nuclear materials security conditions could be further improved by strengthening its laws and regulations for security and control of materials, particularly for control and accounting of materials, mitigating the insider threat, and for the physical security of materials during transport. India's nuclear materials security conditions also remain adversely affected due to its continued increase in its quantities of nuclear material, and high levels of corruption among public officials.' This paper briefly reviews the progress India has made in nuclear and nuclear materials security and the steps ahead for India to strengthen it further.

Keywords: India, nuclear security, nuclear materials, non-proliferation

Procedia PDF Downloads 352
1366 Evaluation of Low-Global Warming Potential Refrigerants in Vapor Compression Heat Pumps

Authors: Hamed Jafargholi

Abstract:

Global warming presents an immense environmental risk, causing detrimental impacts on ecological systems and endangering coastal areas. Implementing efficient measures to minimize greenhouse gas emissions and the use of fossil fuels is essential to reducing global warming. Vapor compression heat pumps provide a practical method for harnessing energy from waste heat sources and reducing energy consumption. However, the traditional working fluids used in these heat pumps generally have a significant global warming potential (GWP) and might cause severe greenhouse effects if released. The emphasis on low-GWP (below 150) refrigerants aims to advance vapor compression heat pump technology. A classification system for vapor compression heat pumps is offered, with boundaries based on the required heat temperature and advancements in heat pump technology: a heat pump can be classified as a low temperature heat pump (LTHP), medium temperature heat pump (MTHP), high temperature heat pump (HTHP), or ultra-high temperature heat pump (UHTHP). The HTHP/UHTHP border is 160 °C, and the MTHP/HTHP and LTHP/MTHP limits are 100 °C and 60 °C, respectively. The refrigerant is one of the most important parts of a vapor compression heat pump system. Presently, refrigerants are chosen mainly on the basis of ozone depletion potential (ODP) and GWP, with the GWP as low as possible and the ODP zero. Pure low-GWP refrigerants, such as natural refrigerants (R718 and R744), hydrocarbons (R290, R600), hydrofluorocarbons (R152a and R161), hydrofluoroolefins (R1234yf, R1234ze(E)), and hydrochlorofluoroolefin (R1233zd(E)), were selected as candidates for vapor compression heat pump systems based on these selection principles. The performance, characteristics, and potential uses of these low-GWP refrigerants in heat pump systems are investigated in this paper.
As vapor compression heat pumps with pure low-GWP refrigerants become more common, more and more low-grade heat can be recovered, meaning that energy consumption would decrease. The research outputs showed that R718 is appropriate for UHTHP applications, R1233zd(E) for HTHP applications, R600, R152a, R161 and R1234ze(E) for MTHP applications, and R744, R290 and R1234yf for LTHP applications. The selection of an appropriate refrigerant should, in fact, take both the environmental and the thermodynamic point of view into consideration; depending on the situation, a trade-off between the two should constantly be considered. Under current European Union regulations, the environmental perspective now carries far more weight than it did previously. This will promote sustainable energy consumption and social development in addition to assisting in the reduction of greenhouse gas emissions and the management of global warming.
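
The temperature classification and the refrigerant assignments reported above can be captured directly. A minimal sketch using the boundaries and candidates stated in the abstract (whether a boundary value itself, e.g. exactly 60 °C, falls in the lower or upper class is an assumption here):

```python
def classify_heat_pump(supply_temp_c):
    """Classify by heat supply temperature using the boundaries in the
    abstract: 60 (LTHP/MTHP), 100 (MTHP/HTHP) and 160 degC (HTHP/UHTHP)."""
    if supply_temp_c < 60:
        return "LTHP"
    if supply_temp_c < 100:
        return "MTHP"
    if supply_temp_c < 160:
        return "HTHP"
    return "UHTHP"

# Candidate refrigerants per class, as reported above.
candidates = {
    "LTHP": ["R744", "R290", "R1234yf"],
    "MTHP": ["R600", "R152a", "R161", "R1234ze(E)"],
    "HTHP": ["R1233zd(E)"],
    "UHTHP": ["R718"],
}
```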

Keywords: vapor compression, global warming potential, heat pumps, greenhouse

Procedia PDF Downloads 33
1365 A Comparison of Tsunami Impact to Sydney Harbour, Australia at Different Tidal Stages

Authors: Olivia A. Wilson, Hannah E. Power, Murray Kendall

Abstract:

Sydney Harbour is an iconic location with a dense population and low-lying development. On the east coast of Australia, facing the Pacific Ocean, it is exposed to several tsunamigenic trenches. This paper presents a component of the most detailed assessment to date of the potential for earthquake-generated tsunami impact on Sydney Harbour. Models in this study use dynamic tides to account for tide-tsunami interaction. Sydney Harbour's tidal range is 1.5 m, and the spring tides from January 2015 used in the modelling for this study are close to the full tidal range. The tsunami wave trains modelled include hypothetical tsunamis generated by earthquakes of magnitude 7.5, 8.0, 8.5, and 9.0 Mw from the Puysegur and New Hebrides trenches, as well as representations of the historical 1960 Chilean and 2011 Tohoku events. All wave trains are modelled so that the peak wave coincides with both a low tide and a high tide. A single wave train, representing a 9.0 Mw earthquake at the Puysegur trench, is modelled with peak waves coinciding with every hour across a 12-hour tidal phase. Using the hydrodynamic model ANUGA, results are compared according to the impact parameters of inundation area, depth variation and current speeds. Results show that both maximum inundation area and depth variation are tide dependent. Maximum inundation area increases when the peak coincides with a higher tide; however, hazardous inundation is only observed for the larger waves modelled: NH90high and P90high. The maximum and minimum depths are deeper on higher tides and shallower on lower tides. The difference between maximum and minimum depths varies across tidal phases, although the differences are slight. Maximum current speeds are shown to be a significant hazard for Sydney Harbour; however, they do not show consistent patterns according to tide-tsunami phasing.
The maximum current speed hazard is shown to be greater in specific locations such as Spit Bridge, a narrow channel with extensive marine infrastructure. The results presented for Sydney Harbour are novel, and the conclusions are consistent with previous modelling efforts in the greater area. It is shown that tide must be a consideration for both tsunami modelling and emergency management planning. Modelling with peak tsunami waves coinciding with a high tide would be a conservative approach; however, it must be considered that maximum current speeds may be higher on other tides.
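
To first order, the tide-tsunami phasing studied above can be illustrated by superposing a tidal signal and a tsunami pulse. The study itself models the interaction dynamically in ANUGA; this linear superposition and all amplitudes below are only illustrative:

```python
import math

def water_level(t_hours, peak_hour, tidal_range=1.5, period=12.4,
                tsunami_amp=1.0, pulse_width=0.5):
    """Illustrative water level (m): a semidiurnal tide plus a Gaussian
    tsunami pulse centred on `peak_hour`."""
    tide = (tidal_range / 2.0) * math.cos(2.0 * math.pi * t_hours / period)
    pulse = tsunami_amp * math.exp(-((t_hours - peak_hour) / pulse_width) ** 2)
    return tide + pulse

def peak_level(peak_hour, period=12.4):
    """Maximum combined level over one tidal cycle (0.01 h grid)."""
    times = [i / 100.0 for i in range(int(period * 100) + 1)]
    return max(water_level(t, peak_hour) for t in times)

high_tide_arrival = peak_level(0.0)  # pulse lands on the tidal maximum
low_tide_arrival = peak_level(6.2)   # pulse lands on the tidal minimum
```

Even this crude superposition reproduces the conservative-planning point above: arrival at high tide produces the larger combined peak level.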

Keywords: emergency management, Sydney, tide-tsunami interaction, tsunami impact

Procedia PDF Downloads 242
1364 A Proposal for an Excessivist Social Welfare Ordering

Authors: V. De Sandi

Abstract:

In this paper, we characterize a class of rank-weighted social welfare orderings that we call 'Excessivist.' The Excessivist Social Welfare Ordering (eSWO) judges incomes above a fixed threshold θ as detrimental to society. To accomplish this, the identification of a richness or affluence line is necessary; we employ a fixed, exogenous line of excess. We define an eSWF in the form of a weighted sum of individuals' incomes. This requires introducing n+1 vectors of weights, one for each possible number of individuals below the threshold. To do this, the paper introduces a slight modification of the class of rank-weighted social welfare functions. Indeed, in our excessivist social welfare ordering, we allow the weights to be both positive (for individuals below the line) and negative (for individuals above). We then introduce ethical concerns through an axiomatic approach. The following axioms are required: continuity above and below the threshold (Ca, Cb), anonymity (A), absolute aversion to excessive richness (AER), Pigou-Dalton positive-weights-preserving transfer (PDwpT), sign-rank-preserving full comparability (SwpFC) and strong Pareto below the threshold (SPb). Ca and Cb require that small changes in two income distributions above and below θ do not lead to changes in their ordering. AER says that if two distributions are identical in every respect except for one individual above the threshold, who is richer in the first, then the second should be preferred by society. This means that we do not care about the waste of resources above the threshold; the priority is the reduction of excessive income. According to PDwpT, a transfer from a better-off individual to a worse-off individual, regardless of their positions relative to the threshold and without reversing their ranks, leads to an improved distribution if the number of individuals below the threshold is the same after the transfer or has increased.
SPb holds only for individuals below the threshold. The weakening of strong Pareto and our ethics need to be justified; we support them through the notion of comparative egalitarianism and income as a source of power. SwpFC is necessary to ensure that, following a positive affine transformation, an individual does not become excessively rich in only one distribution, thereby reversing the ordering of the distributions. Given the axioms above, we can characterize the class of eSWOs, obtaining the following result through proofs by contradiction and exhaustion: Theorem 1. A social welfare ordering satisfies the axioms of continuity above and below the threshold, anonymity, sign-rank-preserving full comparability, absolute aversion to excessive richness, Pigou-Dalton positive-weights-preserving transfer, and strong Pareto below the threshold, if and only if it is an Excessivist social welfare ordering. A discussion of the implementation of different threshold lines, reviewing the primary contributions in this field, follows. What the commonly implemented social welfare functions have been overlooking is the concern for extreme richness at the top. The characterization of the Excessivist Social Welfare Ordering, given the axioms above, aims to fill this gap.
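
A minimal numerical sketch of evaluating such an ordering follows; the Gini-style weight scheme and the single negative weight are illustrative choices, not the paper's characterized weight vectors:

```python
def gini_weights(n):
    """Illustrative positive rank weights 2(n - i) - 1, decreasing in
    rank so the poorest individual is weighted the most."""
    return [2 * (n - i) - 1 for i in range(n)]

def excessivist_welfare(incomes, theta, below_weights=gini_weights,
                        above_weight=-1.0):
    """Rank-weighted welfare sum: incomes at or below theta enter with
    positive rank weights, incomes above theta with a negative weight,
    so excess richness lowers welfare."""
    ranked = sorted(incomes)
    below = [y for y in ranked if y <= theta]
    above = [y for y in ranked if y > theta]
    w = below_weights(len(below))
    return sum(wi * y for wi, y in zip(w, below)) + above_weight * sum(above)
```

Note how the AER axiom shows up numerically: shrinking the income of the individual above the threshold raises the welfare value, all else equal.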

Keywords: comparative egalitarianism, excess income, inequality aversion, social welfare ordering

Procedia PDF Downloads 63
1363 Degradation Kinetics of Cardiovascular Implants Employing Full Blood and Extra-Corporeal Circulation Principles: Mimicking the Human Circulation In vitro

Authors: Sara R. Knigge, Sugat R. Tuladhar, Hans-Klaus Höffler, Tobias Schilling, Tim Kaufeld, Axel Haverich

Abstract:

Tissue-engineered (TE) heart valves based on degradable electrospun fiber scaffolds represent a promising approach to overcome the known limitations of mechanical or biological prostheses. However, the mechanical stress in the high-pressure system of the human circulation is a severe challenge for the delicate materials. Hence, the prediction of the scaffolds' in vivo degradation kinetics must be as accurate as possible to prevent fatal events in future animal or even clinical trials. Therefore, this study investigates whether long-term testing in full blood provides more meaningful results regarding the degradation behavior than conventional tests in simulated body fluid (SBF) or phosphate-buffered saline (PBS). Fiber mats were produced from a polycaprolactone (PCL)/tetrafluoroethylene solution by electrospinning. The morphology of the fiber mats was characterized via scanning electron microscopy (SEM). A maximally physiological degradation environment was established using a test set-up with porcine full blood. The set-up consists of a reaction vessel, an oxygenator unit, and a roller pump. The blood parameters (pO2, pCO2, temperature, and pH) were monitored with an online test system. All tests were also carried out in the test circuit with SBF and PBS to compare the conventional degradation media with the novel full blood setting. The polymer's degradation is quantified by SEM image analysis, differential scanning calorimetry (DSC), and Raman spectroscopy. Tensile and cyclic loading tests were performed to evaluate the mechanical integrity of the scaffold. Preliminary results indicate that PCL degraded more slowly in full blood than in SBF and PBS. The uptake of water is more pronounced in the full blood group. Also, PCL preserved its mechanical integrity longer when degraded in full blood. Protein adsorption increased during the degradation process. Red blood cells, platelets, and their aggregates adhered to the PCL.
Presumably, the degradation led to a more hydrophilic polymeric surface, which promoted protein adsorption and blood cell adhesion. Testing degradable implants in full blood allows for developing more reliable scaffold materials in the future. Material tests in small and large animal trials can thereby be focused on candidates that have proven to function well in an in-vivo-like setting.

Keywords: Electrospun scaffold, full blood degradation test, long-term polymer degradation, tissue engineered aortic heart valve

Procedia PDF Downloads 150
1362 A Crowdsourced Homeless Data Collection System and Its Econometric Analysis: Strengthening Inclusive Public Administration Policies

Authors: Praniil Nagaraj

Abstract:

This paper proposes a method to collect homeless data using crowdsourcing and presents an approach to analyze the data, demonstrating its potential to strengthen existing and future policies aimed at promoting socio-economic equilibrium. This paper's contributions can be categorized into three main areas. Firstly, a unique method for collecting homeless data is introduced, utilizing a user-friendly smartphone app (currently available for Android). The app enables the general public to quickly record information about homeless individuals, including the number of people and details about their living conditions. The collected data, including date, time, and location, is anonymized and securely transmitted to the cloud. It is anticipated that an increasing number of users motivated to contribute to society will adopt the app, thus expanding the data collection efforts. Duplicate data is addressed through simple classification methods, and historical data is utilized to fill in missing information. The second contribution of this paper is the description of data analysis techniques applied to the collected data. By combining this new data with existing information, statistical regression analysis is employed to gain insights into various aspects, such as distinguishing between unsheltered and sheltered homeless populations, as well as examining their correlation with factors like unemployment rates, housing affordability, and labor demand. Initial data is collected in San Francisco, while pre-existing information is drawn from three cities: San Francisco, New York City, and Washington D.C., facilitating simulations. The third contribution focuses on demonstrating the practical implications of the data processing results. The challenges faced by key stakeholders, including charitable organizations and local city governments, are taken into consideration. Two case studies are presented as examples.
The first case study explores improving the efficiency of food and necessities distribution, as well as medical assistance, driven by charitable organizations. The second case study examines the correlation between micro-geographic budget expenditure by local city governments and homeless information to justify budget allocation and expenditures. The ultimate objective of this endeavor is to enable the continuous enhancement of the quality of life for the underprivileged. It is hoped that through increased crowdsourcing of data from the public, the Generosity Curve and the Need Curve will intersect, leading to a better world for all.

Keywords: crowdsourcing, homelessness, socio-economic policies, statistical analysis

Procedia PDF Downloads 44
1361 Health-Related Quality of Life of Caregivers of Institution-Reared Children in Metro Manila: Effects of Role Overload and Role Distress

Authors: Ian Christopher Rocha

Abstract:

This study aimed to determine the association of the quality of life (QOL) of the caregivers of children in need of special protection (CNSP) in child-caring institutions in Metro Manila with the levels of their role overload (RO) and role distress (RD). The CNSP in this study covered the orphaned, abandoned, abused, neglected, exploited, and mentally-challenged children. In this study, the domains of QOL included physical health (PH), psychological health, social health (SH), and living conditions (LC). It also intended to ascertain the association of their personal and work-related characteristics with their RO and RD levels. The respondents of this study were 130 CNSP caregivers in 17 residential child-rearing institutions in Metro Manila. Purposive non-probability sampling was used. Using a quantitative methodological approach, the survey method was utilized to gather data with the use of a self-administered structured questionnaire. Data were analyzed using both descriptive and inferential statistics. Results revealed that the level of RO, the level of RD, and the QOL of the CNSP caregivers were all moderate. Data also suggested that there were significant positive relationships between the RO level and the caregivers’ characteristics, such as age, number of trainings, and years of service in the institution. At the same time, the findings revealed that there were significant positive relationships between the RD level and the caregivers’ characteristics, such as age and hours of care rendered to their care recipients. In addition, the findings suggested that all domains of their QOL obtained significant relationships with their RO level. For the correlations between their RO level and the QOL domains, the PH and the LC obtained moderate negative correlations with the RO level, while the rest of the domains obtained weak negative correlations with the RO level.
For the correlations between their RD level and the QOL domains, all domains except SH obtained strong negative correlations with the level of RD. The SH domain showed a moderate negative correlation with the RD level. In conclusion, caregivers who are older experience higher levels of RO and RD; caregivers who have more trainings and years of service experience higher levels of RO; and caregivers who render longer hours of care experience higher levels of RD. In addition, the study affirmed that if the levels of RO and RD are high, the QOL is low, and vice versa. Therefore, the RO and RD levels are reliable predictors of the caregivers’ QOL. In relation to this, the caregiving situation in the Philippines was revealed to be unique and distinct from that in other countries, because the levels of RO and RD and the QOL of Filipino CNSP caregivers were all moderate, in contrast with their foreign counterparts, who experience high caregiving RO and RD leading to low QOL.

Keywords: quality of life, caregivers, children in need of special protection, physical health, psychological health, social health, living conditions, role overload, role distress

Procedia PDF Downloads 211
1360 Corpus Linguistics as a Tool for Translation Studies Analysis: A Bilingual Parallel Corpus of Students’ Translations

Authors: Juan-Pedro Rica-Peromingo

Abstract:

Nowadays, corpus linguistics has become a key research methodology for Translation Studies, which broadens the scope of cross-linguistic studies. In the case of the study presented here, the approach used focuses on learners with little or no experience in order to study, at an early stage, general mistakes and errors and the correct or incorrect use of translation strategies, and to improve the translational competence of the students. Led by Sylviane Granger and Marie-Aude Lefer of the Centre for English Corpus Linguistics of the University of Louvain, the MUST corpus (MUltilingual Student Translation Corpus) is an international project which brings together partners from universities in Europe and worldwide and connects Learner Corpus Research (LCR) and Translation Studies (TS). It aims to build a corpus of translations carried out by students, including both direct (L2 > L1) and indirect (L1 > L2) translations, from a great variety of text types, genres, and registers in a wide variety of languages: audiovisual translations (including dubbing and subtitling for the hearing and for the deaf populations), scientific, humanistic, literary, economic, and legal translation texts. This paper focuses on the work carried out by the Spanish team from the Complutense University (UCMA), which is part of the MUST project, and it describes the specific features of the corpus built by its members. All the texts used by UCMA are either direct or indirect translations between English and Spanish. Students’ profiles comprise translation trainees, foreign language students with a major in English, engineers studying EFL, and MA students, all of them with different English levels (from B1 to C1); for some of the students, this would be their first experience with translation. The MUST corpus is searchable via Hypal4MUST, a web-based interface developed by Adam Obrusnik from Masaryk University (Czech Republic), which includes a translation-oriented annotation system (TAS).
A distinctive feature of the interface is that it allows source texts and target texts to be aligned, so that both language structures and the translation strategies used by students can be observed and compared in detail. The initial data obtained point to the kinds of difficulties encountered by the students and reveal the most frequent strategies implemented by the learners according to their level of English, their translation experience, and the text genres. We have also found common errors in the graduate and postgraduate university students’ translations: transfer errors, lexical errors, grammatical errors, text-specific translation errors, and culture-related errors have been identified. Analyzing all these parameters will provide more material for better solutions to improve the quality of teaching and of the translations produced by the students.

Keywords: corpus studies, students’ corpus, the MUST corpus, translation studies

Procedia PDF Downloads 147
1359 Mental Health Monitoring System as an Effort for Prevention and Handling of Psychological Problems in Students

Authors: Arif Tri Setyanto, Aditya Nanda Priyatama, Nugraha Arif Karyanta, Fadjri Kirana A., Afia Fitriani, Rini Setyowati, Moh. Abdul Hakim

Abstract:

The Basic Health Research Report by the Ministry of Health (2018) shows an increase in the prevalence of mental health disorders in the adolescent and early adult age ranges. Supporting this finding, data from the psychological examinations of the student health service unit at one state university recorded 115 cases of moderate and severe mental health problems in the period 2016-2019. More specifically, the highest number of cases was experienced by clients in the age range of 21-23 years, equivalent to the middle through final semesters of study. The distribution of cases indicates the range of psychological problems experienced by students. A total of 29%, or 33 students, experienced anxiety disorders; 25%, or 29 students, experienced problems ranging from mild to severe; other classifications of disorders experienced included adjustment disorders, family problems, academic problems, mood disorders, self-concept disorders, personality disorders, cognitive disorders, and others such as trauma and sexual disorders. Various mental health disorders have a significant impact on the academic life of students, such as low GPA, exceeding the maximum period of study, dropping out, disruption of social life on campus, and even suicide. Based on literature reviews and best practices from universities in various countries, one of the effective ways to prevent and treat student mental health disorders is to implement a mental health monitoring system in universities. This study uses a participatory action research approach, with a sample of 423 from a total population of 32,112 students. The scales used in this study are the Beck Depression Inventory (BDI) to measure depression and the Taylor Manifest Anxiety Scale (TMAS) to measure anxiety levels. This study aims to (1) develop a digital-based monitoring system that maps students' mental health status into healthy, at-risk, or disordered categories, especially with regard to indications of symptoms of depression and anxiety disorders, and (2) implement the mental health monitoring system in universities at the beginning and end of each semester. The results of the analysis show that, among the 423 respondents, the main problems faced by students concerned coursework, such as theses and academic assignments. Based on the scoring and categorization of the Beck Depression Inventory (BDI), 191 students experienced symptoms of depression: 24.35%, or 103 students, experienced mild depression; 14.42% (61 students) moderate depression; and 6.38% (27 students) severe or extreme depression. Furthermore, as many as 80.38% (340 students) experienced anxiety in the high category. This article also reviews the student mental health service system on campus.
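The scoring-and-categorization step above can be sketched as follows. The cutoffs used here are the commonly cited BDI-II thresholds, introduced as an assumption since the abstract does not state the study's exact cut scores; the percentage check simply reproduces the reported figures from the stated counts:

```python
def bdi_category(score: int) -> str:
    """Map a BDI total score to a severity band.

    Thresholds are the commonly cited BDI-II cutoffs, used here as an
    assumption; the study does not report its exact cut scores.
    """
    if score <= 13:
        return "minimal"
    if score <= 19:
        return "mild"
    if score <= 28:
        return "moderate"
    return "severe"

# Reproducing the reported percentages from the stated counts (n = 423):
n = 423
counts = {"mild": 103, "moderate": 61, "severe": 27}
percentages = {band: round(100 * c / n, 2) for band, c in counts.items()}
print(percentages)  # matches the 24.35% / 14.42% / 6.38% reported above
```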

Keywords: monitoring system, mental health, psychological problems, students

Procedia PDF Downloads 111
1358 A Preliminary in vitro Investigation of the Acetylcholinesterase and α-Amylase Inhibition Potential of Pomegranate Peel Extracts

Authors: Zoi Konsoula

Abstract:

The increasing prevalence of Alzheimer’s disease (AD) and diabetes mellitus (DM) makes them major global health problems. Recently, the inhibition of key enzyme activities has been considered a potential treatment for both diseases. Specifically, inhibition of acetylcholinesterase (AChE), the key enzyme involved in the breakdown of the neurotransmitter acetylcholine, is a promising approach for the treatment of AD, while inhibition of α-amylase retards the hydrolysis of carbohydrates and, thus, reduces hyperglycemia. Unfortunately, commercially available AChE and α-amylase inhibitors are reported to possess side effects. Consequently, there is a need to develop safe and effective treatments for both diseases. In the present study, pomegranate peel (PP) was extracted using various solvents of increasing polarity, while two extraction methods were employed, conventional maceration and ultrasound-assisted extraction (UAE). The concentration of bioactive phytoconstituents, such as total phenolics (TPC) and total flavonoids (TFC), in the prepared extracts was evaluated by the Folin-Ciocalteu and the aluminum-flavonoid complex method, respectively. Furthermore, the anti-neurodegenerative and anti-hyperglycemic activity of all extracts was determined using AChE and α-amylase inhibitory activity assays, respectively. The inhibitory activity of the extracts against AChE and α-amylase was characterized by estimating their IC₅₀ values from dose-response curves, while galanthamine and acarbose were used as positive controls, respectively. Finally, the kinetics of AChE and α-amylase in the presence of the most potent inhibitory extracts were determined by the Lineweaver-Burk plot. The methanolic extract prepared using the UAE contained the highest amount of phytoconstituents, followed by the respective ethanolic extract.
All extracts inhibited acetylcholinesterase in a dose-dependent manner, and the increased anticholinesterase activity of the methanolic (IC₅₀ = 32 μg/mL) and ethanolic (IC₅₀ = 42 μg/mL) extracts was positively correlated with their TPC content. Furthermore, the activity of the aforementioned extracts was comparable to that of galanthamine. Similar results were obtained in the case of α-amylase; however, all extracts showed a lower inhibitory effect on the carbohydrate-hydrolyzing enzyme than on AChE, since the IC₅₀ values ranged from 84 to 100 μg/mL. Also, the α-amylase inhibitory effect of the extracts was lower than that of acarbose. Finally, the methanolic and ethanolic extracts prepared by UAE inhibited both enzymes in a mixed (competitive/noncompetitive) manner, since the Kₘ value of both enzymes increased in the presence of the extracts, while the Vₘₐₓ value decreased. The results of the present study indicate that PP may be a useful source of active compounds for the management of AD and DM. Moreover, taking into consideration that PP is an agro-industrial waste product, its valorization could not only result in economic efficiency but also reduce environmental pollution.
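Estimating an IC₅₀ from a dose-response curve can be sketched as follows. The dose and inhibition values are illustrative assumptions, not the study's measurements, and simple log-linear interpolation stands in for whatever curve fitting was actually used:

```python
import math

# Hypothetical dose-response points for one extract (% AChE inhibition at each
# dose in µg/mL); the numbers are illustrative, not the study's measurements.
doses = [10, 20, 40, 80, 160]
inhibition = [22, 38, 55, 71, 84]

def ic50_by_interpolation(doses, inhibition):
    """Estimate IC50 by log-linear interpolation between the two doses that
    bracket 50% inhibition -- one simple reading of a dose-response curve."""
    pairs = list(zip(doses, inhibition))
    for (d0, i0), (d1, i1) in zip(pairs, pairs[1:]):
        if i0 <= 50 <= i1:
            frac = (50 - i0) / (i1 - i0)
            log_d = math.log10(d0) + frac * (math.log10(d1) - math.log10(d0))
            return 10 ** log_d
    raise ValueError("50% inhibition is not bracketed by the tested doses")

ic50 = ic50_by_interpolation(doses, inhibition)
print(f"IC50 = {ic50:.1f} µg/mL")
```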

Keywords: acetylcholinesterase, Alzheimer’s disease, α-amylase, diabetes mellitus, pomegranate

Procedia PDF Downloads 122
1357 A Comparison of Proxemics and Postural Head Movements during Pop Music versus Matched Music Videos

Authors: Harry J. Witchel, James Ackah, Carlos P. Santos, Nachiappan Chockalingam, Carina E. I. Westling

Abstract:

Introduction: Proxemics is the study of how people perceive and use space. It is commonly proposed that when people like or engage with a person/object, they will move slightly closer to it, often quite subtly and subconsciously. Music videos are known to add entertainment value to a pop song. Our hypothesis was that adding an appropriately matched video to a pop song would lead to a net approach of the head towards the monitor screen compared to simply listening to an audio-only version of the song. Methods: We presented two musical stimuli, in a counterbalanced order, to 27 participants (age 21.00 ± 2.89 years, 15 female) seated in front of a 47.5 x 27 cm monitor; all stimuli were based on music videos by the band OK Go: Here It Goes Again (HIGA; boredom rating (0-100) = 15.00 ± 4.76, mean ± SEM, standard error of the mean) and Do What You Want (DWYW; boredom rating = 23.93 ± 5.98), which did not differ in the boredom elicited (P = 0.21, rank-sum test). Each participant experienced each song only once, one song (counterbalanced) as audio-only and the other as a music video. Movement was measured by video tracking using Kinovea 0.8, recording from a lateral aspect; before beginning, each participant had a reflective motion-tracking marker placed on the outer canthus of the left eye. Analysis of the Kinovea X-Y coordinate output in comma-separated-values format was performed in Matlab, as were non-parametric statistical tests. Results: We found that the audio-only stimuli (combined for both HIGA and DWYW, mean ± SEM, 35.71 ± 5.36) were significantly more boring than the music video versions (19.46 ± 3.83, P = 0.0066, Wilcoxon signed-rank test (WSRT), Cohen's d = 0.658, N = 28). We also found that participants' heads moved around twice as much during the audio-only versions (speed = 0.590 ± 0.095 mm/sec) as during the video versions (0.301 ± 0.063 mm/sec, P = 0.00077, WSRT).
However, the participants' mean head-to-screen distances were not detectably smaller (i.e. head closer to the screen) during the music videos (74.4 ± 1.8 cm) compared to the audio-only stimuli (73.9 ± 1.8 cm, P = 0.37, WSRT). If anything, during the audio-only condition, they were slightly closer. Interestingly, the ranges of the head-to-screen distances were smaller during the music video (8.6 ± 1.4 cm) compared to the audio-only (12.9 ± 1.7 cm, P = 0.0057, WSRT), the standard deviations were also smaller (P = 0.0027, WSRT), and their heads were held 7 mm higher (video 116.1 ± 0.8 vs. audio-only 116.8 ± 0.8 cm above floor, P = 0.049, WSRT). Discussion: As predicted, sitting and listening to experimenter-selected pop music was more boring than when the music was accompanied by a matched, professionally-made video. However, we did not find that the proxemics of the situation led to approaching the screen. Instead, adding video led to efforts to control the head to a more central and upright viewing position and to suppress head fidgeting.
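The head-movement measures reported above (mean speed and the range of head-to-screen distance) can be sketched from tracked coordinates. The positions, sampling rate, and function names below are hypothetical; the real pipeline used Kinovea CSV exports analysed in Matlab:

```python
import math

# Hypothetical marker positions (x = head-to-screen axis, y = height, in cm)
# sampled at 25 Hz; the real study tracked the outer canthus of the left eye
# with Kinovea and analysed the exported X-Y coordinates in Matlab.
fps = 25.0
positions = [(74.0, 116.0), (74.1, 116.0), (74.1, 116.1), (74.3, 116.1)]

def mean_speed(points, fps):
    """Mean head speed: total path length divided by elapsed time."""
    path = sum(math.dist(a, b) for a, b in zip(points, points[1:]))
    return path / ((len(points) - 1) / fps)

def distance_range(points):
    """Range of the head-to-screen coordinate, the dispersion measure that
    differed between the audio-only and video conditions."""
    xs = [x for x, _ in points]
    return max(xs) - min(xs)

print(f"speed={mean_speed(positions, fps):.2f} cm/s, "
      f"range={distance_range(positions):.2f} cm")
```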

Keywords: boredom, engagement, music videos, posture, proxemics

Procedia PDF Downloads 167
1356 Light-Controlled Gene Expression in Yeast

Authors: Peter M. Kusen, Georg Wandrey, Christopher Probst, Dietrich Kohlheyer, Jochen Büchs, Jörg Pietruszka

Abstract:

Light as a stimulus provides the capability to develop regulation techniques for customizable gene expression. A great advantage is the extremely flexible and accurate dosing that can be performed in a non-invasive and sterile manner, even for high-throughput technologies. Therefore, light regulation in a multiwell microbioreactor system was realized, providing the opportunity to control gene expression with outstanding complexity. A light-regulated gene expression system in Saccharomyces cerevisiae was designed applying the strategy of caged compounds. These are photo-labile protected, and therefore biologically inactive, regulator molecules which can be reactivated by irradiation with certain light conditions. The “caging” of a repressor molecule which is consumed after deprotection was essential to create a flexible expression system. Thereby, gene expression could be transiently repressed by irradiation and the subsequent release of the active repressor molecule. Afterwards, the repressor molecule is consumed by the yeast cells, leading to reactivation of gene expression. A yeast strain harboring a construct with the corresponding repressible promoter in combination with a fluorescent marker protein was applied in a Photo-BioLector platform, which allows individual irradiation as well as online fluorescence and growth detection. This device was used to precisely control the repression duration by adjusting the amount of released repressor via different irradiation times. With the presented screening platform, the regulation of complex expression procedures was achieved by the combination of several repression/derepression intervals. In particular, a stepwise increase of temporally constant expression levels was demonstrated, which could be used to study concentration-dependent effects on cell functions.
Also, linear expression rates with variable slopes could be shown, representing a possible solution for challenging protein productions in which excessive production rates lead to misfolding or intoxication. Finally, the very flexible regulation enabled accurate control over expression induction, although a repressible promoter was used. Summing up, the continuous online regulation of gene expression has the potential to synchronize gene expression levels to optimize metabolic flux, artificial enzyme cascades, growth rates for co-cultivations, and many other applications that depend on complex expression regulation. The developed light-regulated expression platform represents an innovative screening approach to find optimization potential for production processes.

Keywords: caged-compounds, gene expression regulation, optogenetics, photo-labile protecting group

Procedia PDF Downloads 326
1355 The Communication of Audit Report: Key Audit Matters in United Kingdom

Authors: L. Sierra, N. Gambetta, M. A. Garcia-Benau, M. Orta

Abstract:

Financial scandals and financial crises have led to an international debate on the value of auditing. In recent years there have been significant legislative reforms aiming to increase markets’ confidence in audit services. In particular, there has been a significant debate on the need to improve the communication of auditors with audit report users as a way to improve its informative value and, thus, audit quality. The International Auditing and Assurance Standards Board (IAASB) has proposed changes to the audit report standards. The International Standard on Auditing 701, Communicating Key Audit Matters (KAM) in the Independent Auditor's Report, has introduced new concepts that go beyond the auditor's opinion and requires the auditor to disclose the risks that, from the auditor's point of view, are most significant in the audited company's information. Focusing on the companies included in the Financial Times Stock Exchange 100 index, this study analyses the determinants of the number of KAM disclosed by the auditor in the audit report and, moreover, the determinants of the different types of KAM reported during the period 2013-2015. To test the hypotheses in the empirical research, two different models have been used. The first one is a linear regression model to identify the client’s characteristics, industry sector, and auditor’s characteristics that are related to the number of KAM disclosed in the audit report. Secondly, a logistic regression model is used to identify the determinants of the number of each KAM type disclosed in the audit report; in line with the risk-based approach to auditing financial statements, we categorized the KAM into two groups: entity-level KAM and account-level KAM. Regarding the impact of the auditor’s characteristics on KAM disclosure, the results show that PwC tends to report a larger number of KAM, while KPMG tends to report fewer KAM in the audit report.
Further, PwC reports a larger number of entity-level risk KAM, while KPMG reports fewer account-level risk KAM. The results also show that companies paying higher fees tend to have more entity-level risk KAM and fewer account-level risk KAM. The materiality level is positively related to the number of account-level risk KAM. Additionally, the results show that the relationship between the client’s characteristics and the number of KAM is more evident for account-level risk KAM than for entity-level risk KAM. A highly leveraged company carries a great deal of risk, but because of this, it is usually subject to strong monitoring by capital providers, resulting in fewer account-level risk KAM. The results reveal that the number of account-level risk KAM is strongly related to the industry sector in which the company operates. This study helps to understand the UK audit market, provides information to auditors, and, finally, opens new research avenues for academia.
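The study's second model, a logistic regression of KAM type on client characteristics, can be sketched in miniature. The leverage values, outcomes, and fitting routine below are hypothetical illustrations, not the FTSE 100 sample or the paper's fitted model:

```python
import math

# Hypothetical client data: leverage ratio and whether the audit report
# contains an account-level risk KAM (1 = yes). Illustrative only.
leverage = [0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
has_account_kam = [1, 1, 1, 0, 1, 0, 0]

def fit_logistic(xs, ys, lr=0.5, steps=5000):
    """Fit P(y=1|x) = 1/(1+exp(-(b0+b1*x))) by batch gradient descent."""
    b0 = b1 = 0.0
    for _ in range(steps):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += p - y
            g1 += (p - y) * x
        b0 -= lr * g0 / len(xs)
        b1 -= lr * g1 / len(xs)
    return b0, b1

b0, b1 = fit_logistic(leverage, has_account_kam)
# A negative slope (b1 < 0) would mirror the reported association between
# high leverage and fewer account-level risk KAM.
print(f"b0={b0:.2f}, b1={b1:.2f}")
```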

Keywords: FTSE 100, ISA 701, key audit matters, auditor’s characteristics, client’s characteristics

Procedia PDF Downloads 231
1354 Development of an Appropriate Method for the Determination of Multiple Mycotoxins in Pork Processing Products by UHPLC-TCFLD

Authors: Jason Gica, Yi-Hsieng Samuel Wu, Deng-Jye Yang, Yi-Chen Chen

Abstract:

Mycotoxins, harmful secondary metabolites produced by certain fungi species, pose significant risks to animals and humans worldwide. Their stable properties lead to contamination during grain harvesting, transportation, and storage, as well as in processed food products. The prevalence of mycotoxin contamination has attracted significant attention due to its adverse impact on food safety and global trade. The secondary contamination pathway from animal products has been identified as an important route of exposure, posing health risks for livestock and humans consuming contaminated products. Pork, one of the most highly consumed meat products in Taiwan according to the National Food Consumption Database, plays a critical role in the nation's diet and economy. Given its substantial consumption, pork processing products are a significant component of the food supply chain and a potential source of mycotoxin contamination. This study is paramount for formulating effective regulations and strategies to mitigate mycotoxin-related risks in the food supply chain. By establishing a reliable analytical method, this research contributes to safeguarding public health and enhancing the quality of pork processing products. The findings will serve as valuable guidance for policymakers, food industries, and consumers to ensure a safer food supply chain in the face of emerging mycotoxin challenges. An innovative and efficient analytical approach is proposed using ultra-high-performance liquid chromatography coupled with a temperature-controlled fluorescence light detector (UHPLC-TCFLD) to determine multiple mycotoxins in pork meat samples, due to its exceptional capacity to detect multiple mycotoxins at the lowest levels of concentration, making it highly sensitive and reliable for comprehensive mycotoxin analysis.
Additionally, its ability to simultaneously detect multiple mycotoxins in a single run significantly reduces the time and resources required for analysis, making it a cost-effective solution for monitoring mycotoxin contamination in pork processing products. The research aims to optimize an efficient QuEChERS-based mycotoxin extraction method and to rigorously validate its accuracy and precision. The results will provide crucial insights into mycotoxin levels in pork processing products.

Keywords: multiple-mycotoxin analysis, pork processing products, QuEChERS, UHPLC-TCFLD, validation

Procedia PDF Downloads 69
1353 The Effect of Data Integration to the Smart City

Authors: Richard Byrne, Emma Mulliner

Abstract:

Smart cities are a vision for the future that is increasingly becoming a reality. While a key concept of the smart city is the ability to capture, communicate, and process data that has long been produced through the day-to-day activities of the city, many of the assessment models in place neglect this fact and focus instead on ‘smartness’ concepts. Although it is true that technology often provides the opportunity to capture and communicate data in more effective ways, there are also human processes involved that are just as important. The growing importance of the use and ownership of data in society can be seen by all, with companies such as Facebook and Google increasingly coming under the microscope; why, however, is the same scrutiny not applied to cities? The research area is therefore of great importance to the future of our cities here and now, while the findings will be of just as great importance to our children in the future. This research aims to understand the influence data is having on organisations operating throughout the smart cities sector and employs a mixed-method research approach in order to best answer the following question: Would a data-based evaluation model for smart cities be more appropriate than a smart-based model in assessing the development of the smart city? A comprehensive literature review concluded that there was a requirement for a data-driven assessment model for smart cities. This was followed by a documentary analysis to understand the root source of data integration in the smart city. A content analysis of city data platforms enquired into the alternative approaches employed by cities throughout the UK and draws on best practice from New York to compare and contrast.
Grounded in theory, the research findings to this point formulated a qualitative analysis framework comprising: the changing environment influenced by data, the value of data in the smart city, the data ecosystem of the smart city, and the organisational response to the data-orientated environment. The framework was applied to analyse primary data collected through interviews with both public and private organisations operating throughout the smart cities sector. The work to date represents the first stage of data collection and will be built upon by a quantitative research investigation into the feasibility of data network effects in the smart city. An analysis of the benefits of data interoperability in supporting services to the smart city in the areas of health and transport will conclude the research, to achieve the aim of inductively forming a framework that can be applied to future smart city policy. To conclude, the research recognises the influence of technological perspectives in the development of smart cities to date and highlights this as a challenge to introducing theory applied with a planning dimension. The primary researcher has utilised their experience working in the public sector throughout the investigation to reflect upon what is perceived as a gap in practice between where we are today and where we need to be tomorrow.

Keywords: data, planning, policy development, smart cities

Procedia PDF Downloads 310
1352 Effect of Chronic Exposure to Diazinon on Glucose Homeostasis and Oxidative Stress in Pancreas of Rats and the Potential Role of Mesna in Ameliorating This Effect

Authors: Azza El-Medany, Jamila El-Medany

Abstract:

Residential and agricultural pesticide use is widespread in the world. Their extensive and indiscriminate use, together with their ability to interact with biological systems other than their primary targets, constitutes a health hazard to both humans and animals. The toxic effects of pesticides include alterations in metabolism, yet little is known about the capacity of organophosphates to cause pancreatic toxicity. The primary goal of this work is to study the effects of chronic exposure to Diazinon, an organophosphate used in agriculture, on pancreatic tissues and to evaluate the ameliorating effect of Mesna, as an antioxidant, on this toxicity. Forty adult male rats, weighing 300-350 g, were classified into three groups. The control group (10 rats) received corn oil at a dose of 10 mg/kg/day by gavage once a day for 2 months. The Diazinon group (15 rats) received Diazinon at a dose of 10 mg/kg/day dissolved in corn oil by gavage once a day for 2 months. The treated group (15 rats) received Mesna at 180 mg/kg once a week by gavage, 15 minutes before the administration of Diazinon, for 2 months. At the end of the experiment, the animals were anesthetized, blood samples were taken by cardiac puncture for glucose and insulin assays, and the pancreas was removed and divided into 3 portions: the first portion for histopathological study; the second for ultrastructural study; and the third for biochemical study using ELISA kits, including the determination of malondialdehyde (MDA), tumor necrosis factor α (TNF-α), myeloperoxidase activity (MPO), and interleukin 1β (IL-1β). A significant increase in the levels of MDA, TNF-α, MPO activity, IL-1β, and serum glucose was observed in the group intoxicated with Diazinon, while a significant reduction was noticed in GSH and in serum insulin levels. After treatment with Mesna, a significant reduction was observed in the previously mentioned parameters, except that there was a significant rise in GSH and in insulin levels.
Histopathological and ultrastructural studies showed destruction of pancreatic tissue, with β cells the most affected among the injured islets as compared with the control group. The current study attempts to shed light on the effects of chronic exposure to pesticides on vital organs such as the pancreas, and on the role the oxidative stress they induce may play in their toxicity. It also shows the role of antioxidant drugs in ameliorating or preventing this toxicity, a promising approach that may be considered as a complementary treatment of pesticide toxicity.

Keywords: Diazinon, reduced glutathione, myeloperoxidase activity, tumor necrosis factor α, Mesna

Procedia PDF Downloads 242
1351 Cross-Sectional Analysis of the Health Product E-Commerce Market in Singapore

Authors: Andrew Green, Jiaming Liu, Kellathur Srinivasan, Raymond Chua

Abstract:

Introduction: The size of Singapore’s online health product (HP) market (e-commerce) is largely unknown. However, it is recognized that a large majority of products come from overseas and are thus unregulated. As buying HP from unauthorized sources significantly compromises public health safety, understanding e-commerce users’ demographics and their perceptions of online HP purchasing is a pivotal first step towards recommendations for Singapore’s pharmacovigilance efforts. Objective: To assess the prevalence of online HP purchasing behaviour among Singaporean e-commerce users. Methodology: This is a cross-sectional study targeting Singaporean e-commerce users recruited from various local websites and online forums. Participants were not randomized into study arms but were instead stratified by age using a random sampling method. A self-administered anonymous questionnaire was used to explore participants' demographics, online HP purchasing behaviour, knowledge, and attitudes. The association of different variables with online HP purchasing behaviour was analysed using logistic regression. Main outcome measures: Prevalence of HP e-commerce users in Singapore (%) and variables that contribute to the prevalence (adjusted prevalence ratio). Results: The study recruited 372 complete and valid responses. The prevalence of online HP consumers among e-commerce users in Singapore is estimated to be 55.9% (1.7 million consumers). Online purchasing of complementary HP (46.9%) was the most prevalent, followed by medical devices (21.6%) and Western medicine (20.5%). Multivariate analysis showed that age is an independent variable that correlates with the likelihood of buying HP online. The prevalence of HP e-commerce users is highest in the 35-44 age group (64.1%) and lowest in the 16-24 age group (36.4%).
The HP most commonly bought online are vitamins and minerals (21.5%), non-herbal (15.9%), herbal (13.9%), weight loss (8.7%) and sports (8.4%) supplements. While the top 3 products are distributed equally between the genders, there is a skew towards female respondents for weight loss supplements (12.4% in females vs. 4.9% in males) and towards males for sports supplements (13.2% in males vs. 3.7% in females). Even though online consumers tend to be in the younger age brackets, our study found that up to 72.0% of HP bought online are bought for others (the buyer’s family and/or friends). Multivariate analysis showed a statistically significant association between purchasing HP online and the perceptions that 'the internet is safe' (adjusted Prevalence Ratio=1.15, CI 1.03-1.28), 'buying HP online is time-saving' (PR=1.17, CI 1.01-1.36), and 'recognition of the HP brand' (PR=1.21, CI 1.06-1.40). Conclusions: This study has provided prevalence data for the online HP market in Singapore and has allowed the country’s regulatory body to formulate a targeted pharmacovigilance approach to this growing problem.
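The adjusted prevalence ratios above come from regression modelling on the survey data, but the basic quantity can be illustrated with a crude ratio of two proportions and a Wald-type confidence interval computed on the log scale. The sketch below is ours, uses made-up counts rather than the study's raw data, and the function name is illustrative:

```python
import math

def prevalence_ratio(a, n1, b, n0):
    """Crude prevalence ratio of an outcome between an exposed group
    (a cases out of n1) and an unexposed group (b cases out of n0),
    with a 95% Wald confidence interval on the log scale."""
    p1, p0 = a / n1, b / n0
    pr = p1 / p0
    # Standard error of log(PR) for two independent proportions
    se = math.sqrt((1 - p1) / a + (1 - p0) / b)
    lo = math.exp(math.log(pr) - 1.96 * se)
    hi = math.exp(math.log(pr) + 1.96 * se)
    return pr, lo, hi

# Illustrative counts only (not the study's data):
pr, lo, hi = prevalence_ratio(120, 200, 90, 172)
print(f"PR = {pr:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```

A confidence interval that crosses 1 (as in this toy example) would indicate no significant association; the intervals reported above, e.g. 1.03-1.28, exclude 1.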

Keywords: e-commerce, pharmaceuticals, pharmacovigilance, Singapore

Procedia PDF Downloads 363
1350 Simo-syl: A Computer-Based Tool to Identify Language Fragilities in Italian Pre-Schoolers

Authors: Marinella Majorano, Rachele Ferrari, Tamara Bastianello

Abstract:

Recent technological advances allow innovative, multimedia screen-based assessment tools to be applied to test children's language and early literacy skills, monitor their growth over the preschool years, and test their readiness for primary school. A computer-based assessment tool offers several advantages over paper-based tools. Firstly, computer-based tools that use games, videos, and audio may be more motivating and engaging for children, especially those with language difficulties. Secondly, computer-based assessments are generally less time-consuming than traditional paper-based assessments: this makes them less demanding for children and gives clinicians, researchers, and teachers the opportunity to test children multiple times over the same school year and thus to monitor their language growth more systematically. Finally, while paper-based tools require offline coding, computer-based tools can often calculate scores automatically, producing less subjective evaluations of the assessed skills and providing immediate feedback. Nonetheless, using computer-based assessment tools to test meta-phonological and language skills in children is not yet common practice in Italy. The present contribution aims to estimate the internal consistency of a computer-based assessment (i.e., the Simo-syl assessment). Sixty-three Italian pre-schoolers aged between 4;10 and 5;9 years were tested at the beginning of the last year of preschool with paper-based standardised tools for their lexical (Peabody Picture Vocabulary Test), morpho-syntactical (Grammar Repetition Test for Children), meta-phonological (Meta-Phonological skills Evaluation test), and phono-articulatory skills (non-word repetition). The same children were tested with the Simo-syl assessment on their phonological and meta-phonological skills (e.g., recognising syllables and vowels and reading syllables and words).
The internal consistency of the computer-based tool was acceptable (Cronbach's alpha = .799). Children's scores in the paper-based assessments were correlated with their scores in each task of the computer-based assessment. Significant and positive correlations emerged between all the tasks of the computer-based assessment and the scores obtained in the CMF (r = .287 - .311, p < .05) and in the correct sentences in the RCGB (r = .360 - .481, p < .01); the non-word repetition standardised test correlated significantly with the reading tasks only (r = .329 - .350, p < .05). Further tasks should be included in the current version of Simo-syl to achieve a comprehensive and multi-dimensional approach to assessing children. Nevertheless, such a tool offers teachers a good opportunity to identify language-related problems early, even in the school environment.
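Cronbach's alpha, the internal-consistency statistic reported above, is straightforward to compute from raw item scores: it compares the sum of the per-item variances with the variance of the children's total scores. A minimal sketch of the standard formula, with toy data rather than actual Simo-syl scores:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for k item-score columns.
    `items` is a list of k lists, each holding one task's scores
    for the same n children (population variances, standard formula)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-child total
    item_var = sum(pvariance(col) for col in items)
    total_var = pvariance(totals)
    return k / (k - 1) * (1 - item_var / total_var)

# Toy scores: 3 tasks x 5 children (illustrative only)
tasks = [
    [2, 4, 3, 5, 1],
    [3, 4, 2, 5, 2],
    [2, 5, 3, 4, 1],
]
print(round(cronbach_alpha(tasks), 3))
```

Values above roughly .7-.8, like the .799 reported here, are conventionally read as acceptable internal consistency.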

Keywords: assessment, computer-based, early identification, language-related skills

Procedia PDF Downloads 183
1349 Optimization of the Administration of Intravenous Medication by Reduction of the Residual Volume, Taking User-Friendliness, Cost Efficiency, and Safety into Account

Authors: A. Poukens, I. Sluyts, A. Krings, J. Swartenbroekx, D. Geeroms, J. Poukens

Abstract:

Introduction and Objectives: It has been known for many years that when intravenous medication is administered, a rather significant part of the infusion solution planned to be administered, the residual volume (the volume that remains in the IV line and/or infusion bag), does not reach the patient and is wasted. This can result in underdosing and a diminished therapeutic effect. Despite its important impact on the patient, the reduction of residual volume lacks attention. An optimized and clearly stated protocol concerning the reduction of residual volume in an IV line is necessary for each hospital. As described in the author's Master's thesis for the degree of Master in Hospital Pharmacy, the administration of intravenous medication can be optimized by reducing the residual volume, taking effectiveness, user-friendliness, cost efficiency, and safety into account. Material and Methods: Using a literature study and an online questionnaire sent to all Flemish hospitals and hospitals in the Netherlands (province of Limburg), current flush methods were mapped out. In laboratory research, possible flush methods aiming to reduce the residual volume were measured. Furthermore, a self-developed experimental method to reduce the residual volume was added to the study. The current flush methods and the self-developed experimental method were compared based on cost efficiency, user-friendliness, and safety. Results: There is a major difference between Flemish hospitals and hospitals in the Netherlands (province of Limburg) concerning the approach to and method of flushing IV lines after administration of intravenous medication. The residual volumes were measured, and laboratory research showed that if flushing was done with at least one times the equivalent of the residual volume, 95 percent of the glucose was flushed through.
Based on the comparison, it became clear that flushing with a pre-filled syringe would be the most cost-efficient, user-friendly, and safest method. According to the laboratory research, the self-developed experimental method is feasible and has the advantage that the remaining fraction of the medication can be administered to the patient at unchanged concentration, without dilution. Furthermore, this technique can be applied regardless of the size of the residual volume. Conclusion and Recommendations: It is advisable to revise the current infusion systems and flushing methods in most hospitals. Aside from educating hospital staff and aligning on a uniform, substantiated protocol, an optimized and clear policy on the reduction of residual volume is necessary for each hospital. It is recommended to flush all IV lines with rinsing fluid of at least the equivalent of the residual volume. Further laboratory and clinical research on the self-developed experimental method is needed before it can be implemented clinically in a broader setting.
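For intuition only, a toy washout model can show why the flushing volume matters. Under an idealized assumption of continuous, perfect mixing in the line, flushing with one residual volume delivers only about 63% of the drug left in the line; the roughly 95% measured in the laboratory study above therefore suggests that flow in an IV line behaves closer to plug flow than to full mixing. This model is our illustrative assumption, not the authors' measurement method:

```python
import math

def fraction_delivered(flush_volumes):
    """Toy continuously-stirred washout model: fraction of the drug
    originally in the line that has reached the patient after
    flushing with `flush_volumes` times the residual volume."""
    return 1 - math.exp(-flush_volumes)

for n in (1, 2, 3):
    print(n, round(fraction_delivered(n), 2))
```

Even in this pessimistic mixing regime, three residual volumes would deliver about 95% of the drug, which is consistent with the general recommendation to flush with at least the residual volume.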

Keywords: intravenous medication, infusion therapy, IV flushing, residual volume

Procedia PDF Downloads 135
1348 An Analysis of Possible Implications of Patent Term Extension in Pharmaceutical Sector on Indian Consumers

Authors: Anandkumar Rshindhe

Abstract:

Patents are considered a beneficial monopoly in India: a mechanism by which the inventor is encouraged to invent and also to make a new, useful technology available to society at large. The patent system does not protect the invention itself but the claims (rights) that the patentee has identified in relation to the invention. The patentee is thus granted a monopoly to the extent of the rights recognised in the form of utilities, while all other utilities of the invention remain available to the public. Both the inventor and the public at large, the ultimate consumer, therefore benefit. But developing any such technology is not free of cost; inventors invest heavily in bringing out new technologies. The pharmaceutical industry is one such example: pharmaceutical companies conduct extensive research and invest a great deal of money, time, and labour in arriving at these inventions. Once an invention is made or a process identified, inventors approach the patent system to protect their rights in the form of claims over the invention. The patent system takes its own time in recognising the invention as a patent. Even after the grant of a patent, pharmaceutical companies need to comply with many other legal formalities before launching it as a drug (medicine) on the market. Thus, a major portion of the patent term is unproductive for the patentee, and whatever limited period remains may be insufficient to recover the cost of the invention; as a result, the price of the patented product is raised considerably, just to recover that cost. This is ultimately a burden on the consumer, who pays more only because the legislature has failed to provide for the delay and loss caused to the patentee. This problem can be effectively remedied by patent term extension, which gives the inventor more time to recover the cost of invention.
The end product can thus be cheaper than it would be without patent term extension. The basic question that arises is this: when the patent period granted to a patentee is only 20 years, a major portion of which is spent complying with the legal formalities necessary before the medicine is available on the market, can the company recover its research investment within the limited period of monopoly that remains? Further, the Indian Patents Act contains provisions obliging the patentee to make the patented invention available in India at a reasonably affordable price. In light of these questions, is extending the patent term a proper solution and a necessary requirement to protect the interests of the patentee as well as the ultimate consumer? The basic objective of this paper is to examine the implications of extending the patent term for Indian consumers: whether it benefits the patentee and the consumer, or imposes a hardship on the generic industry and the consumer.

Keywords: patent term extension, consumer interest, generic drug industry, pharmaceutical industries

Procedia PDF Downloads 451
1347 Effects of the Natural Compound on SARS-CoV-2 Spike Protein-Mediated Metabolic Alteration in THP-1 Cells Explored by the ¹H-NMR-Based Metabolomics Approach

Authors: Gyaltsen Dakpa, K. J. Senthil Kumar, Nai-Wen Tsao, Sheng-Yang Wang

Abstract:

Context: Coronavirus disease 2019 (COVID-19) is a severe respiratory illness caused by the SARS-CoV-2 virus. One of the hallmarks of COVID-19 is a change in metabolism, which can lead to increased severity and mortality. The mechanism of SARS-CoV-2-mediated perturbation of metabolic pathways has yet to be fully understood. Research Aim: This study aimed to investigate the metabolic alteration caused by SARS-CoV-2 spike protein in phorbol 12-myristate 13-acetate (PMA)-induced human monocytes (THP-1) and to examine the regulatory effect of natural compounds such as Antcin A on this alteration. Methodology: The study used a combination of proton nuclear magnetic resonance (¹H-NMR) and MetaboAnalyst 5.0 software. THP-1 cells were treated with SARS-CoV-2 spike protein or a control, and the metabolomic profiles of the cells were compared. Antcin A was also added to the cells to assess its regulatory effect on the spike protein-induced metabolic alteration. Findings: Treatment with SARS-CoV-2 spike protein significantly altered the metabolomic profiles of THP-1 cells. Eight metabolites, including glycerol-phosphocholine, glycine, canadine, sarcosine, phosphoenolpyruvic acid, glutamine, glutamate, and N,N-dimethylglycine, differed significantly between the control and spike-protein treatment groups. Antcin A significantly reversed the changes in these metabolites. In addition, treatment with Antcin A significantly inhibited the spike protein-mediated up-regulation of TLR-4 and ACE2 receptors. Theoretical Importance: These findings suggest that SARS-CoV-2 spike protein can cause significant metabolic alterations in THP-1 cells, and that Antcin A, a natural compound, has the potential to reverse these alterations and may be a candidate for developing preventive or therapeutic agents for COVID-19.
Data Collection: The data for this study were collected from THP-1 cells treated with SARS-CoV-2 spike protein or a control. The metabolomic profiles of the cells were then compared using ¹H-NMR and MetaboAnalyst 5.0 software. Analysis Procedures: The software was used to identify and quantify the cells' metabolites and to compare the control and spike-protein treatment groups. Questions Addressed: The study asked whether SARS-CoV-2 spike protein causes metabolic alterations in THP-1 cells and whether Antcin A can reverse these alterations. Conclusion: SARS-CoV-2 spike protein causes significant metabolic alterations in THP-1 cells; Antcin A has the potential to reverse these alterations and may be a candidate for developing preventive or therapeutic agents for COVID-19.
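The between-group metabolite comparisons reported above rest on standard two-sample statistics. As an illustration of the kind of per-metabolite test such pipelines apply, here is a minimal Welch's t-statistic (unequal variances) on fabricated relative-concentration values, not the study's data:

```python
from statistics import mean, variance

def welch_t(x, y):
    """Welch's t statistic and degrees of freedom for two independent
    samples, e.g. one metabolite's levels in control vs. treated cells."""
    nx, ny = len(x), len(y)
    vx, vy = variance(x) / nx, variance(y) / ny  # variance of each mean
    t = (mean(x) - mean(y)) / (vx + vy) ** 0.5
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = (vx + vy) ** 2 / (vx**2 / (nx - 1) + vy**2 / (ny - 1))
    return t, df

# Illustrative relative concentrations (not measured data):
control = [1.02, 0.95, 1.10, 0.99, 1.05]
spike = [1.35, 1.28, 1.41, 1.30, 1.38]
t, df = welch_t(control, spike)
print(round(t, 2), round(df, 1))
```

In practice, the resulting t and df would be converted to a p-value and corrected for multiple testing across all detected metabolites.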

Keywords: SARS-CoV-2-spike, ¹H-NMR, metabolomics, antcin-A, taiwanofungus camphoratus

Procedia PDF Downloads 71
1346 Structural Invertibility and Optimal Sensor Node Placement for Error and Input Reconstruction in Dynamic Systems

Authors: Maik Kschischo, Dominik Kahl, Philipp Wendland, Andreas Weber

Abstract:

Understanding and modelling real-world complex dynamic systems in biology, engineering, and other fields is often made difficult by incomplete knowledge about the interactions between system states and by unknown disturbances to the system. In fact, most real-world dynamic networks are open systems receiving unknown inputs from their environment. To understand a system and to estimate the state dynamics, these inputs need to be reconstructed from output measurements. Reconstructing the input of a dynamic system from its measured outputs is an ill-posed problem if only a limited number of states is directly measurable. A first requirement for solving this problem is the invertibility of the input-output map. In our work, we exploit the fact that invertibility of a dynamic system is a structural property, which depends only on the network topology. Therefore, it is possible to check for invertibility using a structural invertibility algorithm which counts the number of node-disjoint paths linking inputs and outputs. The algorithm is efficient even for large networks with up to a million nodes. To understand the structural features influencing the invertibility of a complex dynamic network, we analyze synthetic and real networks using the structural invertibility algorithm. We find that invertibility largely depends on the degree distribution and that dense random networks are easier to invert than sparse inhomogeneous networks. We show that real networks are often very difficult to invert unless the sensor nodes are carefully chosen. To overcome this problem, we present a sensor node placement algorithm that achieves invertibility with a minimum set of measured states. This greedy algorithm is very fast and guaranteed to find an optimal sensor node set if one exists. Our results provide a practical approach to experimental design for open dynamic systems.
Since invertibility is a necessary condition for unknown input observers and data assimilation filters to work, it can be used as a preprocessing step to check whether these input reconstruction algorithms can be successful. If not, we can suggest additional measurements providing sufficient information for input reconstruction. Invertibility is also important for systems design and model building. Dynamic models are always incomplete, and synthetic systems act in an environment where they receive inputs or even attack signals from their exterior. Being able to monitor these inputs is an important design requirement, which can be achieved with our algorithms for invertibility analysis and sensor node placement.
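The path-counting idea behind the structural invertibility check can be sketched with a standard construction: split every node into an "in" and an "out" half joined by a capacity-one edge, then run unit-capacity max flow from the inputs to the outputs; by Menger's theorem the flow value equals the number of node-disjoint input-output paths. The following is our illustrative sketch of that construction, not the authors' implementation:

```python
from collections import defaultdict, deque

def max_node_disjoint_paths(edges, sources, targets):
    """Count node-disjoint directed paths from `sources` to `targets`
    via max flow on a node-split graph (each node carries at most one path)."""
    cap = defaultdict(int)
    adj = defaultdict(set)

    def add(u, v, c):
        cap[(u, v)] += c
        adj[u].add(v)
        adj[v].add(u)  # residual arc

    nodes = {u for u, v in edges} | {v for u, v in edges}
    for v in nodes:
        add((v, 'in'), (v, 'out'), 1)   # node capacity 1
    for u, v in edges:
        add((u, 'out'), (v, 'in'), 1)
    S, T = 'S', 'T'
    for s in sources:
        add(S, (s, 'in'), 1)
    for t in targets:
        add((t, 'out'), T, 1)

    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph
        parent = {S: None}
        q = deque([S])
        while q and T not in parent:
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if T not in parent:
            return flow
        v = T
        while parent[v] is not None:  # push one unit of flow back along the path
            u = parent[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u
        flow += 1

# Two inputs and two outputs, but every path squeezes through node 'c',
# so only one node-disjoint path exists:
edges = [('a', 'c'), ('b', 'c'), ('c', 'd'), ('c', 'e')]
print(max_node_disjoint_paths(edges, ['a', 'b'], ['d', 'e']))
```

A network with two inputs is structurally invertible only if two node-disjoint paths reach the sensors; in the example above, placing both sensors downstream of the bottleneck 'c' leaves the inputs indistinguishable.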

Keywords: data-driven dynamic systems, inversion of dynamic systems, observability, experimental design, sensor node placement

Procedia PDF Downloads 150
1345 Artificial Intelligence and Robotics in the Eye of Private Law with Special Regards to Intellectual Property and Liability Issues

Authors: Barna Arnold Keserű

Abstract:

In the last few years (what many scholars call the big data era), artificial intelligence (hereinafter AI) has received more and more attention from the public and from the different branches of science. What was previously mere science fiction is now becoming reality. AI and robotics often walk hand in hand, which changes not only business and industrial life but also has a serious impact on the legal system. The author's main research focuses on these impacts in the field of private law, with special regard to liability and intellectual property issues. Many questions arise in these areas in connection with AI and robotics, where the boundaries are not sufficiently clear and different needs are articulated by different stakeholders. Recognizing the urgent need for reflection, the Committee on Legal Affairs of the European Parliament adopted a Motion for a European Parliament Resolution, A8-0005/2017 (of January 27th, 2017), in order to make recommendations to the Commission on civil law rules on robotics and AI. This document identifies some crucial uses of AI and/or robotics, e.g. autonomous vehicles, the replacement of human jobs in industry, and smart applications and machines, and it aims to give recommendations for the safe and beneficial use of AI and robotics. However, as the document says, there are no legal provisions that specifically apply to robotics or AI in IP law; existing legal regimes and doctrines can be readily applied to robotics, although some aspects appear to call for specific consideration, and the document calls on the Commission to support a horizontal and technologically neutral approach to intellectual property applicable to the various sectors in which robotics could be employed. AI can generate content worthy of copyright protection, but the question arises: who is the author, and who owns the copyright?
The AI itself cannot be deemed the author, because that would mean it is legally equal to human persons. But there is the programmer who created the basic code of the AI, the undertaking that sells the AI as a product, and the user who gives the AI the inputs with which it creates something new. Or perhaps AI-generated content is so far removed from any human that there is no human author, and such content belongs to the public domain. The same questions can be asked in connection with patents. The research aims to answer these questions within the current legal framework and tries to illuminate future possibilities for adapting these frameworks to socio-economic needs. Here, proper license agreements along the multilevel chain from the programmer to the end user become very important, because AI is intellectual property in itself that creates further intellectual property. This can collide with data-protection and property rules as well. The problems are similar in the field of liability. We can apply various existing forms of liability when AI or AI-led robotics cause damage, but it is uncertain whether the result complies with economic and developmental interests.

Keywords: artificial intelligence, intellectual property, liability, robotics

Procedia PDF Downloads 203
1344 Working Towards More Sustainable Food Waste: A Circularity Perspective

Authors: Rocío González-Sánchez, Sara Alonso-Muñoz

Abstract:

Food waste reflects inefficient management of the final stages of the food supply chain. Among the Sustainable Development Goals (SDGs) of the United Nations, SDG 12.3 proposes to halve per capita food waste at the retail and consumer level and to reduce food losses. In the linear system, food waste is disposed of and, to a lesser extent, recovered or reused after consumption. With its negative effect on stocks, the current food consumption system is based on 'produce, take and dispose', which puts huge pressure on raw materials and energy resources. A greater focus on the circular management of food waste would therefore mitigate its environmental, economic, and social impact, following a Triple Bottom Line (TBL) approach, and consequently contribute to the fulfilment of the SDGs. A mixed methodology is used. A total sample of 311 publications was retrieved from the Web of Science database. Firstly, a bibliometric analysis was performed with the SciMAT and VOSviewer software to visualise scientific maps based on co-occurrence analysis of keywords and co-citation analysis of journals. This reveals the knowledge structure of the field and helps detect research issues. Secondly, a systematic literature review was conducted of the most influential articles of 2020 and 2021, the most representative period under study. Thirdly, to support the development of this field, a research agenda is proposed based on the gaps identified in circular economy and food waste management. Results reveal that the main topics relate to waste valorisation, the application of the waste-to-energy circular model, and the anaerobic digestion process for replacing fossil fuels. The use of food waste as a source of clean energy is receiving greater attention in the literature, while studies on stakeholders' awareness and training are lacking.
In addition, better data availability would facilitate the implementation of circular principles for food waste recovery, management, and valorisation. The research agenda suggests that circularity networks with suppliers and customers need to be deepened. Technological tools for implementing sustainable business models, and greater emphasis on social aspects through educational campaigns, are also required. This paper contributes to the application of circularity to food waste management by abandoning inefficient linear models, and by shedding light on trending topics in the field it guides scholars towards future research opportunities.
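The keyword co-occurrence analysis underlying the science maps can be illustrated with a few lines of counting code; tools such as VOSviewer and SciMAT build their maps on matrices of exactly this kind. The records below are toy examples, not the study's corpus:

```python
from collections import Counter
from itertools import combinations

def keyword_cooccurrence(records):
    """Count how often each pair of author keywords appears together in
    one publication record -- the raw matrix behind co-occurrence maps."""
    pairs = Counter()
    for keywords in records:
        # sort so each unordered pair gets one canonical key
        for a, b in combinations(sorted(set(keywords)), 2):
            pairs[(a, b)] += 1
    return pairs

# Toy publication records (illustrative only):
records = [
    ['circular economy', 'food waste', 'valorisation'],
    ['circular economy', 'food waste', 'anaerobic digestion'],
    ['food waste', 'waste-to-energy'],
]
cooc = keyword_cooccurrence(records)
print(cooc[('circular economy', 'food waste')])
```

Clustering such a co-occurrence matrix is what groups keywords into the thematic clusters (e.g. valorisation, waste-to-energy) reported above.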

Keywords: bibliometric analysis, circular economy, food waste management, future research lines

Procedia PDF Downloads 112