Search results for: search algorithms
256 Ethicality of Algorithmic Pricing and Consumers’ Resistance
Authors: Zainab Atia, Hongwei He, Panagiotis Sarantopoulos
Abstract:
Over the past few years, firms have deployed increasingly sophisticated algorithms, which have become pervasive in modern society. With the wide availability of consumer data, the ability to track consumers through algorithmic pricing has become an integral option for online retail platforms. As more companies transform their businesses around these technological advances, pricing algorithms have attracted attention and seen wide adoption, bringing both benefits and challenges. With the overall aim of increasing profits, algorithmic pricing has become a sound option for suppliers, enabling them to cut costs, offer better services, improve efficiency and product availability, and enhance the overall consumer experience. The adoption of algorithms in retail has been studied in literature across varied fields, including marketing, computer science, engineering, economics, and public policy. What is more alarming today, however, is the limited comprehensive understanding of this technology's ethical influence on consumers' perceptions and behaviours. Indeed, because of ethical concerns about algorithms, consumers are in some instances reluctant to share their personal data with retailers, which reduces retention and leads to negative consumer outcomes. This, in turn, raises the question of whether firms can foster consumer acceptance of such technologies while minimizing the ethical transgressions that accompany their deployment. Given the modest amount of recent research in marketing and consumer behavior, the current research advances the literature on algorithmic pricing, pricing ethics, consumers' perceptions, and price fairness.
With its empirical focus, this paper contributes to the literature by applying the distinction between the two common types of algorithmic pricing, dynamic and personalized, while measuring their relative effects on consumers' behavioural outcomes. From a managerial perspective, this research offers significant implications for providing a better human-machine interactive environment (whether online or offline) to improve both businesses' overall performance and consumers' wellbeing. By adopting more transparent pricing systems, businesses can strengthen their ethical strategies, fostering consumer loyalty and extending post-purchase behaviour. Thus, by striking the right balance of pricing measures, whether dynamic or personalized (or both), managers can approach consumers more ethically while treating their expectations and responses as critical.
Keywords: algorithmic pricing, dynamic pricing, personalized pricing, price ethicality
Procedia PDF Downloads 91
255 Acute Antihyperglycemic Activity of a Selected Medicinal Plant Extract Mixture in Streptozotocin-Induced Diabetic Rats
Authors: D. S. N. K. Liyanagamage, V. Karunaratne, A. P. Attanayake, S. Jayasinghe
Abstract:
Diabetes mellitus is an ever-increasing global health problem that causes disability and untimely death. Current treatments using synthetic drugs have caused numerous adverse effects and complications, prompting research efforts in search of safe and effective alternative treatments for diabetes mellitus. Although effective traditional Ayurvedic remedies exist, a lack of scientific exploration means they have not been proven beneficial for common use. Hence, the aim of this study is to evaluate a traditional remedy made of a mixture of plant components, namely leaves of Murraya koenigii L. Spreng (Rutaceae), cloves of Allium sativum L. (Amaryllidaceae), fruits of Garcinia quaesita Pierre (Clusiaceae), and seeds of Piper nigrum L. (Piperaceae), used for the treatment of diabetes. We report herein the preliminary results of an in vivo study of the anti-hyperglycaemic activity of extracts of the above plant mixture in Wistar rats. A mixture made of equal weights (100 g) of the above-mentioned medicinal plant parts was extracted into cold water, hot water (3 h reflux), and a water:acetone mixture (1:1) separately. Male Wistar rats were divided into six groups, each receiving a different treatment. Diabetes mellitus was induced in groups two through six by intraperitoneal administration of streptozotocin at a dose of 70 mg/kg. Group one (N=6) served as the healthy untreated control and group two (N=6) as the diabetic untreated control; both groups received distilled water. The cold water, hot water, and water:acetone plant extracts were orally administered to the diabetic rats in groups three, four, and five, respectively, at doses of 0.5 g/kg (n=6), 1.0 g/kg (n=6), and 1.5 g/kg (n=6) within each group. Glibenclamide (0.5 mg/kg) was administered to the diabetic rats in group six (N=6), which served as the positive control.
The acute anti-hyperglycaemic effect was evaluated over a four-hour period using the total area under the curve (TAUC) method. The results of the test groups were compared with the diabetic untreated control. The TAUC values of healthy and diabetic rats were 23.16 ± 2.5 mmol/L.h and 58.31 ± 3.0 mmol/L.h, respectively. A significant dose-dependent improvement in acute anti-hyperglycaemic activity was observed for the water:acetone extract (25%), hot water extract (20%), and cold water extract (15%) compared to the diabetic untreated control rats in terms of glucose tolerance (P < 0.05). The results therefore suggest that the plant mixture has a potent antihyperglycemic effect, validating its use in Ayurvedic medicine for the management of diabetes mellitus. Future studies will focus on determining the long-term in vivo anti-diabetic mechanisms and isolating the bioactive compounds responsible for the anti-diabetic activity.
Keywords: acute antihyperglycemic activity, herbal mixture, oral glucose tolerance test, Sri Lankan medicinal plant extracts
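The total area under the curve (TAUC) comparison described above can be sketched with the trapezoidal rule over a glucose-time profile. The function and the profile below are illustrative assumptions, not the study's data:

```python
def tauc(points):
    """Total area under the curve (TAUC) by the trapezoidal rule.

    points: (time_h, glucose_mmol_per_L) pairs sorted by time.
    Returns the area in mmol/L.h.
    """
    area = 0.0
    for (t0, g0), (t1, g1) in zip(points, points[1:]):
        area += (g0 + g1) / 2.0 * (t1 - t0)
    return area

# Illustrative 4-hour glucose tolerance profile (not the study's data)
profile = [(0, 5.0), (0.5, 8.0), (1, 9.5), (2, 7.5), (4, 5.5)]
print(tauc(profile))  # area in mmol/L.h
```

A lower TAUC for a treated group than for the untreated diabetic control then indicates improved glucose tolerance, which is how the percentage improvements above are read.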
Procedia PDF Downloads 179
254 An Emergentist Defense of Incompatibility between Morally Significant Freedom and Causal Determinism
Authors: Lubos Rojka
Abstract:
The common perception of morally responsible behavior is that it presupposes freedom of choice, and that free decisions and actions are determined not by natural events but by a person. In other words, the moral agent has the ability and possibility of doing otherwise when making morally responsible decisions, and natural causal determinism cannot fully account for morally significant freedom. The incompatibility between a person's morally significant freedom and causal determinism appears to be the natural position. Nevertheless, some of the most influential philosophical theories of moral responsibility are compatibilist or semi-compatibilist, and they exclude the requirement of alternative possibilities, contradicting the claims of classical incompatibilism. Compatibilists often employ Frankfurt-style thought experiments to prove their theory. The goal of this paper is to examine the role of imaginary Frankfurt-style examples in compatibilist accounts. More specifically, the compatibilist accounts defended by John Martin Fischer and Michael McKenna will be set within the broader understanding of a person elaborated by Harry Frankfurt, Robert Kane, and Walter Glannon. Deeper analysis reveals that the exclusion of alternative possibilities on the basis of Frankfurt-style examples is problematic and misleading. A more comprehensive account of moral responsibility and morally significant (source) freedom requires higher-order, complex theories of human will and consciousness, in which rational and self-creative abilities, together with a real possibility to choose otherwise at least on some occasions during a lifetime, are necessary. Theoretical moral reasons and their logical relations seem to require a sort of higher-order agent-causal incompatibilism. The capacity for theoretical or abstract moral reasoning requires complex (strongly emergent) mental and conscious properties, among them an effective free will together with first- and second-order desires.
Such a hierarchical theoretical model unifies reasons-responsiveness, mesh theory, and emergentism. It is incompatible with physical causal determinism, because such determinism allows only non-systematic processes that may be hard to predict, not complex (strongly emergent) systems. An agent's effective will and conscious reflectivity are the starting point of a morally responsible action, which explains why a decision is 'up to the subject'. A free decision does not always have a complete causal history. This kind of emergentist source hyper-incompatibilism seems to be the most promising direction in the search for an adequate explanation of moral responsibility in the traditional (merit-based) sense. Physical causal determinism as a universal theory would exclude morally significant freedom and responsibility in the traditional sense, because it would exclude the emergence of, and supervenience by, the essential complex properties of human consciousness.
Keywords: consciousness, free will, determinism, emergence, moral responsibility
Procedia PDF Downloads 164
253 Stochastic Pi Calculus in Financial Markets: An Alternate Approach to High Frequency Trading
Authors: Jerome Joshi
Abstract:
The paper presents the modelling of financial markets using the stochastic pi calculus. The stochastic pi calculus is mainly used for biological applications; however, its features also promote its use in financial markets, most prominently in high frequency trading. A trading system can be broadly divided into the exchange, market makers (or intermediary traders), and fundamental traders. The exchange is where trades are executed, and the two types of traders act as market participants in the exchange. High frequency trading, with its complex networks and numerous market participants (intermediary and fundamental traders), is difficult to model. Participants rely on complex trading algorithms and high execution speeds to carry out large volumes of trades. To earn a profit from each trade, a trader must be at the top of the order book quite frequently, executing or processing multiple trades simultaneously. This requires highly automated systems as well as the right sentiment to outperform other traders. However, always being at the top of the book is not best for the trader either, since it was a cause of the 'Hot-Potato Effect', which in turn demands a better and more efficient model. The desired model should be flexible and have diverse applications, so a model proven in a similar field characterized by comparable difficulty should be chosen. It should also be flexible in its simulation so that it can be further extended and adapted for future research, and equipped with tools that allow it to be used effectively in finance. In this respect the stochastic pi calculus seems an ideal fit for financial applications, owing to its track record in biology.
It is an extension of the original pi calculus and, with its application suitably extended, acts as a solution and an alternative to the previously flawed approach. The model focuses on solving the problem that led to the 'Flash Crash', namely the 'Hot-Potato Effect'. The model consists of small sub-systems that can be integrated to form a large system. It is designed in such a way that the behavior of 'noise traders' is treated as a random process, or noise, in the system. While modelling, to gain a better understanding of the problem, a broader picture is considered, covering the trader, the system, and the market participants. The paper goes on to explain trading in exchanges, types of traders, high frequency trading, the 'Flash Crash', the 'Hot-Potato Effect', evaluation of orders, and time delay in further detail. Future work should focus on calibrating the modules so that they interact correctly with one another. This model, with its application extended, would provide a basis for further research in the fields of finance and computing.
Keywords: concurrent computing, high frequency trading, financial markets, stochastic pi calculus
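The idea of modelling traders as concurrent stochastic processes can be sketched with a Gillespie-style simulation, which matches the execution semantics of the stochastic pi calculus for a fixed set of enabled channels: waiting times are exponential, and each event's channel is chosen with probability proportional to its rate. The channel names and rates below are invented for illustration and are not taken from the paper:

```python
import random

# Interaction channels and rates (per unit time); values are illustrative
RATES = {
    "fundamental_buy": 1.0,  # a fundamental trader submits an order
    "mm_quote": 5.0,         # a market maker refreshes its quotes
    "mm_pass": 2.0,          # market makers pass inventory among themselves
}

def simulate(t_end, seed=42):
    """Gillespie-style run: exponential waiting times between events, with
    each event's channel chosen with probability proportional to its rate."""
    random.seed(seed)
    total = sum(RATES.values())
    t = 0.0
    counts = {name: 0 for name in RATES}
    while True:
        t += random.expovariate(total)
        if t >= t_end:
            return counts
        r = random.uniform(0.0, total)
        for name, rate in RATES.items():
            r -= rate
            if r <= 0.0:
                counts[name] += 1
                break

counts = simulate(100.0)
print(counts)
```

In a fuller model, each channel would synchronize two process terms (for instance a market maker and the exchange), and an overheated "mm_pass" rate relative to "fundamental_buy" is one way the Hot-Potato Effect could be made visible in simulation.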
Procedia PDF Downloads 77
252 The Effect of Law on Society
Authors: Rezki Omar
Abstract:
Growing global openness has contributed dramatically to reordering priorities that were long neglected within the community at the level of thought and consciousness, chief among them legal and human rights awareness, which has emerged after a long delay in the process of building awareness of human rights; the road ahead is undoubtedly long and arduous. As is obvious to any observer of public affairs, specialist or otherwise, there is unprecedented growth and development in the legal scene and the legal movement. Many people, when dealing with everyday matters, seek and try as much as possible to know their natural rights and the duties they must legally comply with in whatever issue concerns them. Yet any such attempt runs into weakness, lack of self-reliance, and obstacles during the search, owing to the difficulty of obtaining legal information on a particular issue, which in some cases is incomplete or legally insufficient. The relationship of law to society is fundamentally a close one: there is no law without society, and a society without law is impossible, whether at the level of domestic relations or international law. 'There is a close link between law and society. The law remains influenced by the society in which it grew, just as the law affects the society it governs; the relationship between community and law is one of mutual influence and impact.' Among the most important objectives of law is protecting the members of society, and its role is based on distributing rights and duties fairly and protecting the public interest of citizens. For some sociologists, the word 'community' is limited to a group distinguished from others by its cultural unity.
In the recent period, a set of regulations has been issued across the various branches of law, differing in kind and importance. Here it is important to study the interaction between law and society: how can laws be made effective in the community, and vice versa? The law, as a social phenomenon, cannot be understood or analyzed without taking into account the extent of its influence on, vulnerability to, and acceptance within the community. It must be remembered that it was developed to address the problems faced by citizens. Excessive and amplified sanctions contradict this: fundamentally reforming the offender matters more than anything resembling retribution and revenge, as if the process allowed for no human mistakes. Michel Foucault believes that 'tightening laws and regulations against criminals will not reduce the crime rate in the community; instead, one must activate the society's system of moral values, which is a greater deterrent, together with the threat of scandal at the social level.' Besson likewise advises legislators regarding the law: 'The only way to reduce the crime rate is to strengthen the ethical system of the society, especially in societies that grant sanctity to conscience; then you will not be forced to issue harsh sentences against criminals.' In summary, it is necessary to combine the enactment of laws with the activation of moral and educational values on the ground, and to understand the root causes of social problems, so that the equation is complete and the law, which was drafted to serve citizens, does not harm them.
Keywords: legislators, distinguish, awareness, insufficient
Procedia PDF Downloads 493
251 Drivers of Satisfaction and Dissatisfaction in Camping Tourism: A Case Study from Croatia
Authors: Darko Prebežac, Josip Mikulić, Maja Šerić, Damir Krešić
Abstract:
Camping tourism is recognized as a growing segment of the broader tourism industry, currently evolving from an inexpensive, temporary sojourn in a rural environment into a highly fragmented niche tourism sector. The trend among publicly managed campgrounds seems to be moving away from rustic campgrounds that provide only a tent pad and a fire ring toward more developed facilities that offer a range of amenities, while campers still search for unique experiences that go beyond the opportunity to experience nature and social interaction. In addition, while camping styles and options have changed significantly in recent years, coastal camping in particular has become valorized, as it is regarded with a heightened sense of nostalgia. Alongside this growing interest in camping tourism, a demand for quality servicing infrastructure has emerged to satisfy the wide variety of needs, wants, and expectations of an increasingly demanding traveling public. However, camping activity in general, and the quality of the camping experience and campers' satisfaction in particular, remain an under-researched area of the tourism and consumer behavior literature. In this line, very few studies have addressed the issue of quality product/service provision in satisfying nature-based tourists and in driving their future behavior with respect to potential re-visitation and recommendation intentions. The present study thus aims to investigate the drivers of positive and negative campsite experiences using the case of Croatia. Owing to its well-preserved nature and indented coastline, camping tourism has a long tradition in Croatia and represents one of its most important and most developed tourism products. During the last decade, the number of tourist overnights in Croatian camps increased by 26%, amounting to 16.5 million in 2014.
Moreover, according to Eurostat, the market share of campsites in the EU is around 14%, indicating that the market share of Croatian campsites is almost double the EU average. Currently, there are a total of 250 camps in Croatia with approximately 75.8 thousand accommodation units. It is further noteworthy that Croatian camps have higher average occupancy rates and a higher average length of stay compared to the national average across all types of accommodation. In order to explore the main drivers of positive and negative campsite experiences, this study uses principal components analysis (PCA) and impact-asymmetry analysis (IAA). Using PCA, the main dimensions of the campsite experience are first extracted in an exploratory manner. Using IAA, the extracted factors are then investigated for their potential to create customer delight and/or frustration. The results provide valuable insight to both researchers and practitioners regarding the understanding of campsite satisfaction.
Keywords: camping tourism, campsite, impact-asymmetry analysis, satisfaction
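As a rough sketch of the PCA step described above (extracting campsite-experience dimensions from attribute ratings in an exploratory manner), the following generates synthetic ratings with two built-in latent factors and checks how much variance the first two components capture. The attribute names and factor structure are assumptions for illustration, not the survey's results:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # synthetic respondents; ratings are simulated, not survey data
facilities = rng.normal(0.0, 1.0, n)  # latent "facilities" factor
nature = rng.normal(0.0, 1.0, n)      # latent "nature" factor
X = np.column_stack([
    facilities + rng.normal(0.0, 0.3, n),  # sanitary facilities
    facilities + rng.normal(0.0, 0.3, n),  # pitch quality
    facilities + rng.normal(0.0, 0.3, n),  # staff service
    nature + rng.normal(0.0, 0.3, n),      # scenery
    nature + rng.normal(0.0, 0.3, n),      # quietness
    nature + rng.normal(0.0, 0.3, n),      # beach access
])

# Principal components: eigenvalues of the correlation matrix, descending
Z = (X - X.mean(axis=0)) / X.std(axis=0)
eigvals = np.sort(np.linalg.eigvalsh(np.corrcoef(Z, rowvar=False)))[::-1]
share = eigvals[:2].sum() / eigvals.sum()
print(f"variance explained by the first two components: {share:.2f}")
```

The extracted component scores would then feed the IAA step, where each dimension's asymmetric impact on overall satisfaction (delight versus frustration potential) is estimated.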
Procedia PDF Downloads 186
250 Nephroprotective Effect of Aqueous Extract of Plectranthus amboinicus (Roxb.) Leaves in Adriamycin-Induced Acute Renal Failure in Wistar Rats: A Biochemical and Histopathological Assessment
Authors: Ampe Mohottige Sachinthi Sandaruwani Amarasiri, Anoja Priyadarshani Attanayake, Kamani Ayoma Perera Wijewardana Jayatilaka, Lakmini Kumari Boralugoda Mudduwa
Abstract:
The search for alternative pharmacological therapies based on natural extracts for renal failure has become an urgent need, due to the paucity of effective pharmacotherapy. The current study was undertaken to evaluate the acute nephroprotective effect of an aqueous leaf extract of Plectranthus amboinicus (Roxb.) (family: Lamiaceae), a medicinal plant used in traditional Ayurvedic medicine for the management of renal diseases in Sri Lanka. The study was performed in adriamycin (ADR)-induced nephrotoxic Wistar rats. Wistar rats were randomly divided into four groups, each with six rats. A single dose of ADR (20 mg/kg body wt., ip) was used for the induction of nephrotoxicity in all groups of rats except group one. The treatments were started 24 hours after the induction of nephrotoxicity and continued for three days. Groups one and two served as healthy and nephrotoxic controls and were administered equivalent volumes of normal saline (0.9% NaCl) orally. Nephrotoxic rats in groups three and four were administered the lyophilized powder of the aqueous extract of P. amboinicus (400 mg/kg body wt.; the equivalent human therapeutic dose) and the standard drug fosinopril sodium (0.09 mg/kg body wt.), respectively. Urine and blood samples were collected from the rats in each group at the end of the intervention period for the estimation of selected renal parameters. H and E stained sections of the kidney tissues were examined for histopathological changes. Rats treated with the plant extract showed significant improvements in biochemical parameters and histopathological changes compared to the ADR-induced nephrotoxic group. The elevated serum concentrations of creatinine and β2-microglobulin were decreased by 38% and 66%, respectively, in plant-extract-treated nephrotoxic rats (p < 0.05). In addition, serum concentrations of total protein and albumin were significantly increased by 25% and 14%, respectively, in rats treated with P. amboinicus (p < 0.05).
The results for β2-microglobulin and serum total protein demonstrated a significantly greater reduction in the elevated values in rats administered the plant extract (400 mg/kg) compared to fosinopril (0.09 mg/kg). Urinary protein loss in 24-hour urine samples was significantly decreased in rats treated with both fosinopril (86%) and P. amboinicus (56%) at the end of the intervention (p < 0.01). Accordingly, an attenuation of morphological destruction was observed in the H and E stained sections of the kidney following treatment with the plant extract and fosinopril. The results of the present study revealed that the aqueous leaf extract of P. amboinicus possesses significant nephroprotective activity at the equivalent therapeutic dose of 400 mg/kg against adriamycin-induced acute nephrotoxicity.
Keywords: biochemical assessment, histopathological assessment, nephroprotective activity, Plectranthus amboinicus
Procedia PDF Downloads 146
249 The Beauty and the Cruel: The Price of Ethics
Authors: Camila Lee Park, Mauro Fracarolli Nunes
Abstract:
Understood as the preference for products and services that do not involve moral dilemmas, ethical consumption has been increasingly discussed by scholars, practitioners, and consumers. Among its diverse trends, the defense of animal rights and welfare seems to have gained particular momentum in past decades. Not surprisingly, companies, governments, ideologues, and virtually any institution or group interested in (re)shaping society invest in building narratives oriented toward influencing consumption behavior. The animal rights movement, for example, is devoted to eliminating the use of animals in science, as well as commercial animal agriculture and hunting activities. Although advances in ethical consumption may be observed in practice, it still seems more popular as rhetoric. Diverse scholars have addressed the disparities between self-professed ethical consumers and their actual purchase patterns, with the differences being attributed to factors such as price sensitivity, lack of information, quality, cynicism, and limited availability. The gap is also linked to the 'consumer sovereignty myth', according to which consumers are only able to choose from a pre-determined range of options set before products reach them. On the other hand, academics also argue that ethical consumption behavior is more likely to occur when it amounts to compliance with social norms. As sustainability becomes a permanent issue, customers may tend to adhere to ethical consumption, whether because of an individual value or a social one. Regardless of these efforts, the actual value attributed to ethical businesses remains unclear. Likewise, the power of stakeholders' initiatives to influence corporate strategies is dubious. In search of new perspectives on these matters, the present study concentrates on the following research questions: Do customers value products/companies that respect animal rights?
If so, does such enhanced value convert into actions on the part of the companies? Broadly, we aim to understand whether customers' perceptions hold performative traits, i.e., whether they are capable of triggering or contributing to changes in organizational behaviour around respect for animal rights. In addressing these issues, two preliminary behavioral vignette-based experiments were conducted, assessing the perspectives of 307 participants. Building on a case from the cosmetics industry, social, emotional, and functional values were hypothesized as directly impacting positive word-of-mouth, which, in turn, would carry direct effects on purchase intention. A first structural equation model was analyzed with the combined samples of studies I and II. Results suggest that emotional value strongly impacts both positive word-of-mouth and purchase intention. The data confirm the initial expectation that customers value products and companies that comply with ethical postures concerning animals, especially if social-oriented practices are also present.
Keywords: animal rights, business ethics, emotional value, ethical consumption
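The hypothesized mediation path (emotional value to positive word-of-mouth to purchase intention) can be sketched as two standardized regressions on simulated data. The path strengths below are assumptions for illustration, not the study's estimates:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 307  # matches the reported sample size; the data itself is simulated
emotional = rng.normal(0.0, 1.0, n)
# Assumed path strengths (illustrative, not the study's estimates)
wom = 0.6 * emotional + rng.normal(0.0, 0.8, n)
intention = 0.5 * wom + 0.2 * emotional + rng.normal(0.0, 0.8, n)

def ols_beta(x, y):
    """Standardized slope of a simple OLS regression of y on x."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return float(x @ y / (x @ x))

a = ols_beta(emotional, wom)  # emotional value -> word-of-mouth
b = ols_beta(wom, intention)  # word-of-mouth -> purchase intention
print(f"paths a={a:.2f}, b={b:.2f}, indirect effect a*b={a * b:.2f}")
```

A full structural equation model would estimate all paths jointly with latent constructs; the product a*b is the usual back-of-the-envelope reading of the indirect effect through word-of-mouth.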
Procedia PDF Downloads 119
248 Bi-objective Network Optimization in Disaster Relief Logistics
Authors: Katharina Eberhardt, Florian Klaus Kaiser, Frank Schultmann
Abstract:
Last-mile distribution is one of the most critical parts of a disaster relief operation. Various uncertainties, such as infrastructure conditions, resource availability, and fluctuating beneficiary demand, render last-mile distribution challenging, and the need to balance critical performance criteria such as response time, demand fulfillment, and cost-effectiveness further complicates the task. The occurrence of disasters cannot be controlled, and their magnitude is often challenging to assess. In summary, these uncertainties create a need for additional flexibility, agility, and preparedness in logistics operations. As a result, strategic planning and efficient network design are critical for an effective and efficient response. Furthermore, the increasing frequency of disasters and the rising cost of logistical operations amplify the need for robust and resilient solutions in this area. Therefore, we formulate a scenario-based bi-objective optimization model that integrates the pre-positioning, allocation, and distribution of relief supplies, extending the general form of a covering location problem. The proposed model aims to minimize the underlying logistics costs while maximizing demand coverage. Using a set of disruption scenarios, the model allows decision-makers to identify optimal network solutions that address the risk of disruptions. We provide an empirical case study of the public authorities' emergency food storage strategy in Germany to illustrate the applicability of the model and provide implications for decision-makers in a real-world setting. We also conduct a sensitivity analysis focusing on the impact of varying stockpile capacities, single-site outages, and limited transportation capacities on the objective value. The results show that the stockpiling strategy needs to be consistent with the optimal number of depots and the inventory levels that minimize costs and maximize demand satisfaction.
The strategy has potential for optimization, as network coverage is insufficient and relies on very high transportation and personnel capacity levels. As such, the model provides decision support for public authorities in determining an efficient stockpiling strategy and distribution network, and offers recommendations for increased resilience. However, certain factors have yet to be considered and should be addressed in future work, such as additional network constraints and heuristic algorithms.
Keywords: humanitarian logistics, bi-objective optimization, pre-positioning, last mile distribution, decision support, disaster relief networks
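The bi-objective covering-location core of the model above can be illustrated on a toy instance by enumerating depot subsets and keeping the Pareto front over cost and demand covered. All site names, costs, demands, and coverage sets below are invented; a realistic instance would use a MILP solver and the scenario structure described in the abstract:

```python
from itertools import combinations

# Toy instance (invented): depot opening costs, regional demand, and which
# regions each depot can reach within the response-time limit
depot_cost = {"A": 4, "B": 3, "C": 5, "D": 2}
demand = {"r1": 10, "r2": 20, "r3": 15, "r4": 5}
covers = {"A": {"r1", "r2"}, "B": {"r2", "r3"}, "C": {"r3", "r4"}, "D": {"r4"}}

def evaluate(sites):
    """Return (total cost, demand covered) for a set of opened depots."""
    cost = sum(depot_cost[s] for s in sites)
    covered = set().union(*(covers[s] for s in sites)) if sites else set()
    return cost, sum(demand[r] for r in covered)

def pareto_front():
    """Enumerate every depot subset and keep the non-dominated ones:
    no other subset is at least as cheap AND covers at least as much."""
    sols = [(evaluate(sites), sites)
            for k in range(len(depot_cost) + 1)
            for sites in combinations(depot_cost, k)]
    front = [(c, m, sites) for (c, m), sites in sols
             if not any(c2 <= c and m2 >= m and (c2, m2) != (c, m)
                        for (c2, m2), _ in sols)]
    return sorted(front)

for cost, met, sites in pareto_front():
    print(cost, met, sites)
```

Presenting the whole front, rather than a single optimum, is what lets decision-makers trade off cost against demand coverage, which is the point of the bi-objective formulation.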
Procedia PDF Downloads 79
247 A Literature Review Evaluating the Use of Online Problem-Based Learning and Case-Based Learning Within Dental Education
Authors: Thomas Turner
Abstract:
Due to the Covid-19 pandemic, alternative ways of delivering dental education were required. As a result, many institutions moved teaching online. The impact of this is poorly understood: are online problem-based learning (PBL) and case-based learning (CBL) effective, and are they suitable in the post-pandemic era? PBL and CBL are both types of interactive, group-based learning that are growing in popularity within many dental schools. PBL was first introduced in the 1960s and can be defined as learning that occurs through collaborative work to resolve a problem, whereas CBL encourages learning from clinical cases, encourages the application of knowledge, and helps prepare learners for clinical practice. The aim of this review was to evaluate the use of online PBL and CBL. A literature search was conducted using the CINAHL, Embase, PubMed, and Web of Science databases; further literature was identified from reference lists. Only studies from dental education were included. Seven suitable studies were identified. One study found high learner and facilitator satisfaction with online CBL. Interestingly, one study found that learners preferred CBL over PBL within an online format. Another study found that, in the context of distance learning, learners preferred a hybrid curriculum including PBL over a traditional approach. A further study pointed to the limitations of PBL in an online format, such as reduced interaction, potentially hindering the development of communication skills, and the increased time and technology support required. An audience response system was also developed for use within CBL and had a high satisfaction rate. Interestingly, one study found that achievement of learning outcomes was correlated with the number of student and staff inputs within an online format, whereas another study found that the quantity of learner interactions was important to group performance while the quantity of facilitator interactions was not.
This review identified generally favourable evidence for the benefits of online PBL and CBL. However, there is limited high-quality evidence evaluating these teaching methods within dental education, and there appears to be little evidence comparing online and face-to-face versions of these sessions. The importance of the quantity of learner interactions is evident; however, the importance of the quantity of facilitator interactions appears questionable. An element of this may come down to the quality of interactions, rather than just their quantity. Limitations of online learning regarding technological issues and the time required for a session are also highlighted; however, as learners and facilitators become familiar with online formats, these may become less of an issue. It is also important that learners are encouraged to interact and communicate during these sessions, to allow for the development of communication skills. Interestingly, CBL appeared to be preferred to PBL in an online format. This may reflect the simpler nature of CBL; however, further research is required to explore this finding. Online CBL and PBL appear promising, but further research is required before online formats of these sessions are widely adopted in the post-pandemic era.
Keywords: case-based learning, online, problem-based learning, remote, virtual
Procedia PDF Downloads 77
246 Sustainability in Space: Material Efficiency in Space Missions
Authors: Hamda M. Al-Ali
Abstract:
Questions ranging from the history of the solar system to whether other planets show any signs of life have always been at the core of human space exploration. This has driven humans to explore whether other planets such as Mars could support human life. Therefore, many space missions to other planets have been designed and conducted to examine the feasibility of human survival on them. However, space missions are expensive and consume large amounts of various resources. To overcome these problems, material efficiency should be maximized through the use of reusable launch vehicles (RLVs) rather than disposable and expendable ones. Material efficiency is defined as a way to achieve service requirements using fewer materials, thereby reducing CO2 emissions from industrial processes. Materials such as aluminum-lithium alloys, steel, Kevlar, and reinforced carbon-carbon composites used in the manufacturing of spacecraft could be reused in closed-loop cycles, either directly or after adding a protective coat. Material efficiency is a fundamental principle of a circular economy, which aims to cut back waste and reduce pollution by maximizing material efficiency so that businesses can succeed and endure. Five strategies are proposed to improve material efficiency in the space industry, including waste minimization, the introduction of Key Performance Indicators (KPIs) to measure material efficiency, and the introduction of policies and legislation to improve material efficiency in the space sector. Another strategy to boost material efficiency is maximizing resource and energy efficiency through material reusability. Furthermore, the environmental effects associated with the rapid growth in the number of space missions include black carbon emissions that contribute to climate change. The levels of emissions must be tracked and tackled to ensure the safe utilization of space in the future.
The aim of this research paper is to examine and suggest effective methods to improve material efficiency in space missions so that space and Earth become more environmentally and economically sustainable. The objectives used to fulfill this aim are to identify the materials used in space missions that are suitable for reuse in closed-loop cycles, considering material efficiency indicators and circular economy concepts. An explanation of how spacecraft materials could be reused is given, and strategies are proposed to maximize material efficiency in order to make RLVs possible, so that access to space becomes affordable and reliable. The economic viability of RLVs is also examined to show the extent to which their use reduces space mission costs. The environmental and economic implications of the increase in the number of space missions resulting from the use of RLVs are also discussed. These research questions are studied through detailed critical analysis of the literature, such as published reports, books, scientific articles, and journals. A combination of keywords such as material efficiency, circular economy, RLVs, and spacecraft materials was used to search for appropriate literature.
Keywords: access to space, circular economy, material efficiency, reusable launch vehicles, spacecraft materials
245 Event Data Representation Based on Time Stamp for Pedestrian Detection
Authors: Yuta Nakano, Kozo Kajiwara, Atsushi Hori, Takeshi Fujita
Abstract:
In association with the wave of electric vehicles (EVs), low energy consumption systems have become more and more important. One of the key technologies to realize low energy consumption is the dynamic vision sensor (DVS), also called an event sensor or neuromorphic vision sensor. This sensor has several attractive features, such as high temporal resolution (up to 1 Mframe/s) and a high dynamic range (120 dB). However, the property that contributes most to low energy consumption is its sparsity; to be more specific, this sensor only captures pixels whose intensity changes. In other words, there is no signal in areas without any intensity change, so this sensor is more energy efficient than conventional sensors such as RGB cameras because redundant data can be removed. On the other hand, the data are difficult to handle because the data format is completely different from an RGB image: acquired signals are asynchronous and sparse, and each signal is composed of an x-y coordinate, a polarity (two values: +1 or -1) and a timestamp; it does not include intensity such as RGB values. Since existing algorithms cannot be used straightforwardly, a new processing algorithm has to be designed to cope with DVS data. To overcome the difficulties caused by these differences in data format, most prior art constructs frame data and feeds it to deep learning models such as convolutional neural networks (CNNs) for object detection and recognition purposes. However, even with frame data, it is still difficult to achieve good performance due to the lack of intensity information. Although polarity is often used as intensity instead of RGB pixel values, polarity information is clearly not rich enough. Considering this context, we propose using the timestamp information as the data representation fed to deep learning.
Concretely, we first build frame data divided by a certain time period and then assign an intensity value according to the timestamp of each signal within the frame; for example, a higher value is given to a more recent signal. We expected that this data representation could capture features, especially of moving objects, because the timestamp encodes the direction and speed of movement. Using this proposed method, we made our own dataset with a DVS fixed on a parked car in order to develop an application for a surveillance system that can detect persons around the car. We consider the DVS one of the ideal sensors for surveillance purposes because it can run for a long time with low energy consumption in a static scene. For comparison purposes, we reproduced a state-of-the-art method as a benchmark, which builds frames in the same way as ours but feeds polarity information to the CNN. We then measured the object detection performance of the benchmark and of our method on the same dataset. As a result, our method achieved an F1 score up to 7 points higher than the benchmark.
Keywords: event camera, dynamic vision sensor, deep learning, data representation, object recognition, low energy consumption
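The frame construction described in this abstract can be sketched as follows. This is a minimal illustration only: the event tuple layout (x, y, polarity, timestamp) and the linear recency weighting are assumptions for the sketch, not the authors' exact encoding.

```python
import numpy as np

def events_to_timestamp_frame(events, width, height, t_start, t_end):
    """Encode a slice of DVS events as a 2D frame whose pixel values
    reflect event recency (newer events get higher intensity).

    `events` is assumed to be an iterable of (x, y, polarity, timestamp)
    tuples; this layout is an illustrative assumption.
    """
    frame = np.zeros((height, width), dtype=np.float32)
    duration = max(t_end - t_start, 1e-9)
    for x, y, _pol, t in events:
        if t_start <= t < t_end:
            # Normalize the timestamp into [0, 1): recent events score higher,
            # so moving objects leave a gradient encoding direction and speed.
            frame[int(y), int(x)] = (t - t_start) / duration
    return frame
```

Such a frame can then be fed to a CNN in place of a polarity-valued frame.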
244 Customized Temperature Sensors for Sustainable Home Appliances
Authors: Merve Yünlü, Nihat Kandemir, Aylin Ersoy
Abstract:
Temperature sensors are used in home appliances not only to monitor the basic functions of the machine but also to minimize energy consumption and ensure safe operation. In parallel with the development of smart home applications and IoT algorithms, these sensors produce important data, such as the frequency of use of the machine and user preferences, and compile data critical to diagnostic processes for fault detection throughout an appliance's operational lifespan. Commercially available thin-film resistive temperature sensors have a well-established manufacturing procedure that allows them to operate over a wide temperature range. However, these sensors are over-designed for white goods applications: their operating range is between -70°C and 850°C, while home appliance applications only require between 23°C and 500°C. To ensure the operation of commercial sensors over this wide temperature range, a platinum coating of approximately 1 micron thickness is usually applied to the wafer. However, the use of platinum and the high coating thickness extend the sensor production process time and therefore increase sensor costs. In this study, an attempt was made to develop a low-cost temperature sensor design and production method that meets the technical requirements of white goods applications. For this purpose, a custom design was made, and the design parameters (length, width, trim points, and thin-film deposition thickness) were optimized using statistical methods to achieve the desired resistivity value. To develop the thin-film resistive temperature sensors, a one-side-polished sapphire wafer was used. To enhance adhesion and insulation, 100 nm of silicon dioxide was deposited by the inductively coupled plasma chemical vapor deposition technique. The lithography process was performed with a direct laser writer.
The lift-off process was performed after e-beam evaporation of a 10 nm titanium and a 280 nm platinum layer. Standard four-point-probe sheet resistance measurements were done at room temperature. Resistance measurements were made with a probe station before and after annealing at 600°C in a rapid thermal processing machine. The temperature dependence between 25 and 300°C was also tested. As a result of this study, a temperature sensor has been developed that has a lower coating thickness than commercial sensors but can produce reliable data over the white goods application temperature range. A relatively simplified but optimized production method has also been developed to produce this sensor.
Keywords: thin film resistive sensor, temperature sensor, household appliance, sustainability, energy efficiency
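The geometric trade-off optimized above (length, width, film thickness versus target resistance) follows the standard thin-film relation R = ρL/(Wt). A minimal sketch: the platinum resistivity below is a textbook room-temperature figure and the trace dimensions are hypothetical, not values from the study.

```python
def film_resistance(resistivity_ohm_m, length_m, width_m, thickness_m):
    """Resistance of a straight thin-film resistive trace, R = rho * L / (W * t).

    Illustrates why a thinner film (smaller t) raises resistance for the
    same footprint, letting the designer shorten or narrow the trace.
    """
    return resistivity_ohm_m * length_m / (width_m * thickness_m)

# Hypothetical example: a 280 nm platinum film (rho ~ 1.06e-7 ohm*m),
# 10 mm trace length, 20 um trace width.
r = film_resistance(1.06e-7, 10e-3, 20e-6, 280e-9)  # ~189 ohm
```

Trim points, mentioned in the abstract, then fine-tune the effective length L after deposition.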
243 Difficulties for Implementation of Telenursing: An Experience Report
Authors: Jacqueline A. G. Sachett, Cláudia S. Nogueira, Diana C. P. Lima, Jessica T. S. Oliveira, Guilherme K. M. Salazar, Lílian K. Aguiar
Abstract:
The Amazon Telehealth Center (Polo Telemedicine Amazon) offers several tools for professionals working in primary health care, such as second formative opinions, teleconsulting, and training across different areas, including medicine, dentistry, nursing, and physiotherapy. These activities follow a monthly schedule and are freely accessible to the registered municipalities of Amazonas. On this premise, and in partnership with the University of the State of Amazonas (UEA), the center promotes the teaching-research-extension triad, aiming to contribute to the enrichment and acquisition of knowledge through educational practices carried out by teleconference. Nursing has joined this effort as a collaborator in the ongoing project, contributing to the education and training of the professionals who are part of the health system throughout the Amazon. The aim of this study is to report the experience of a nursing student of Amazonas State University in the extension project underway at Polo Telemedicine Amazon. This was a descriptive study of the experience-report type, based on the extension project 'Telenursing: teleconsulting and second formative opinion for FHS professionals in the state of Amazonas', held at Polo Telemedicine Amazon through an agreement with the UEA and funded by the Amazonas Research Foundation from July 2012 to July 2016. Initially, an active search was made for Family Health Strategy professionals in order to train the teams to use the virtual clinic and the virtual learning environment, the focus of this tool's design. The election period was an aggravating factor for the implementation of the teleconsulting proposal: the change of managers in each municipality required a stoppage until the new managers assumed their positions, after which new training had to be arranged.
The first videoconference took place on March 14, 2013, for learning and training in the use of the Virtual Learning Environment and the Virtual Clinic, with the participation of the municipalities of Novo Aripuanã, São Paulo de Olivença and Manacapuru. Throughout the project, a literature review was carried out on what is being done and produced at the national level on the subject. To date, the telenursing project has received twenty-five (25) consultancy requests, all sent by nursing professionals, and all have been answered. The lived experience, particularly with videoconferencing, revealed a number of difficulties, such as fluctuation in the number of participants in the activities, difficulty for participants in reconciling the opening hours of their units with the videoconference schedule, transmission problems and schedule changes. It was concluded that establishing the connection between the telehealth points is one of the main factors in the implementation of telenursing and that this resource is still new for nursing. However, effective training and updating may give this professional category the means to provide quality health care in the Amazon.
Keywords: Amazon, teleconsulting, telehealth, telenursing
242 Music Genre Classification Based on Non-Negative Matrix Factorization Features
Authors: Soyon Kim, Edward Kim
Abstract:
In order to retrieve information from the massive stream of songs in the music industry, music search by title, lyrics, artist, mood, and genre has become more important. Despite the subjectivity and controversy over the definition of music genres across different nations and cultures, automatic genre classification systems that facilitate the process of music categorization have been developed. Manual genre selection by music producers provides the reference data for designing automatic genre classification systems. In this paper, an automatic music genre classification system utilizing non-negative matrix factorization (NMF) is proposed. Short-term characteristics of the music signal can be captured with timbre features such as the mel-frequency cepstral coefficient (MFCC), decorrelated filter bank (DFB), octave-based spectral contrast (OSC), and octave band sum (OBS). Long-term time-varying characteristics of the music signal can be summarized with (1) statistical features such as the mean, variance, minimum, and maximum of the timbre features and (2) modulation spectrum features such as the spectral flatness measure, spectral crest measure, spectral peak, spectral valley, and spectral contrast of the timbre features. We propose using NMF-based feature vectors together with these conventional long-term feature vectors for genre classification. In the training stage, NMF basis vectors were extracted for each genre class. The NMF features were calculated in the log spectral magnitude domain (NMF-LSM) as well as in the basic feature vector domain (NMF-BFV). For NMF-LSM, the entire full-band spectrum was used; for NMF-BFV, only the low-band spectrum was used, since the high-frequency modulation spectrum of the basic feature vectors did not contain important information for genre classification.
In the test stage, using the set of pre-trained NMF basis vectors, the genre classification system extracted the NMF weighting values for each genre as the NMF feature vectors. A support vector machine (SVM) was used as the classifier. The GTZAN multi-genre music database, composed of 10 genres with 100 songs each, was used for training and testing. To increase the reliability of the experiments, 10-fold cross-validation was used. For a given input song, an extracted NMF-LSM feature vector was composed of 10 weighting values corresponding to the classification probabilities for the 10 genres. An NMF-BFV feature vector also had a dimensionality of 10. Combined with the basic long-term features, i.e., the statistical and modulation spectrum features, the NMF features provided increased accuracy with only a slight increase in feature dimensionality. The conventional basic features by themselves yielded 84.0% accuracy, but the basic features with NMF-LSM and with NMF-BFV provided 85.1% and 84.2% accuracy, respectively. The basic features required a dimensionality of 460, whereas NMF-LSM and NMF-BFV each required a dimensionality of only 10. Combining the basic features, NMF-LSM and NMF-BFV with an SVM using a radial basis function (RBF) kernel produced a significantly higher classification accuracy of 88.3% with a feature dimensionality of 480.
Keywords: mel-frequency cepstral coefficient (MFCC), music genre classification, non-negative matrix factorization (NMF), support vector machine (SVM)
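The per-genre NMF idea can be sketched roughly with scikit-learn as follows. The toy random "spectra", the component count, and the reconstruction-error scoring that yields one value per genre are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.decomposition import NMF

# Learn a small non-negative basis per genre from (non-negative) training
# spectra; at test time, score a song by how well each genre basis
# reconstructs it, giving one value per genre as the feature vector.
rng = np.random.default_rng(0)
genres = ["rock", "jazz", "classical"]
train = {g: rng.random((100, 40)) for g in genres}  # toy spectra: frames x bins

bases = {}
for g in genres:
    model = NMF(n_components=5, init="random", random_state=0, max_iter=500)
    model.fit(train[g])
    bases[g] = model  # holds the genre's basis vectors in components_

def nmf_feature_vector(song_spectra):
    """One value per genre: negated reconstruction error under that genre's
    basis (higher means the genre's basis fits the song better)."""
    feats = []
    for g in genres:
        w = bases[g].transform(song_spectra)     # weights on the genre basis
        recon = w @ bases[g].components_
        feats.append(-np.linalg.norm(song_spectra - recon))
    return np.array(feats)

vec = nmf_feature_vector(rng.random((50, 40)))
```

In the paper's setting, such per-genre values (or the NMF weights themselves) are concatenated with the basic long-term features and passed to the SVM.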
241 Compass Bar: A Visualization Technique for Out-of-View-Objects in Head-Mounted Displays
Authors: Alessandro Evangelista, Vito M. Manghisi, Michele Gattullo, Enricoandrea Laviola
Abstract:
In this work, we propose a custom visualization technique for out-of-view objects in virtual and augmented reality applications using head-mounted displays. In the last two decades, augmented reality (AR) and virtual reality (VR) technologies have experienced remarkable growth in applications for navigation, interaction, and collaboration in different types of environments, real or virtual. Both kinds of environment can be very complex, as they can include many virtual objects located in different places. Given the natural limitation of the human field of view (about 210° horizontal and 150° vertical), humans cannot perceive objects outside this angular range. Moreover, despite recent technological advances in AR and VR head-mounted displays (HMDs), these devices still suffer from a limited field of view, especially optical see-through displays, greatly amplifying the challenge of visualizing out-of-view objects. This problem is not negligible when the user needs to be aware of the number and position of out-of-view objects in the environment, for instance, during a maintenance operation on a construction site where virtual objects serve to improve awareness of dangers. Providing such information can enhance comprehension of the scene, enable fast navigation and focused search, and improve users' safety. In our research, we investigated how to represent out-of-view objects in HMD user interfaces (UIs). Inspired by commercial video games such as Call of Duty: Modern Warfare, we designed a customized compass. Using the Unity 3D graphics engine, we implemented a custom solution that can be used in both AR and VR environments. The Compass Bar consists of a graduated bar (in degrees) at the top center of the UI. The values of the bar range from -180 (far left) to +180 (far right), with zero placed in front of the user. Two vertical lines on the bar show the amplitude of the user's field of view.
Every virtual object within the scene is represented on the compass bar as a color-coded proxy icon (a circular ring with a colored dot at its center). To provide the user with information about distance, we implemented an algorithm that increases the size of the inner dot as the user approaches the virtual object (i.e., when the user reaches the object, the dot fills the ring). This visualization technique for out-of-view objects has several advantages. It allows users to be quickly aware of the number and position of the virtual objects in the environment. For instance, if the compass bar displays a proxy icon at about +90, users will immediately know that the virtual object is to their right, and so on. Furthermore, with qualitative information about distance, users can optimize their speed, thus gaining effectiveness in their work. Given the small size and position of the Compass Bar, our solution also helps lessen the occlusion problem, thus increasing user acceptance and engagement. As soon as lockdown measures allow, we will carry out user tests comparing this solution with other state-of-the-art solutions such as 3D Radar, SidebARs and EyeSee360.
Keywords: augmented reality, situation awareness, virtual reality, visualization design
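The core mapping behind the Compass Bar, a signed horizontal angle from the user's facing direction to each object, together with the distance-driven dot size, can be sketched as follows. This is an illustrative sketch of the idea described above, not the authors' Unity implementation; the flat 2D ground-plane coordinates and the linear dot scaling are assumptions.

```python
import math

def compass_bar_angle(user_pos, user_forward, obj_pos):
    """Signed horizontal angle in degrees, wrapped to [-180, 180), from the
    user's facing direction to an object: the proxy icon's bar position.
    Positions and the forward vector are (x, z) pairs on the ground plane.
    """
    dx = obj_pos[0] - user_pos[0]
    dz = obj_pos[1] - user_pos[1]
    obj_heading = math.atan2(dx, dz)                    # 0 = along +z
    fwd_heading = math.atan2(user_forward[0], user_forward[1])
    angle = math.degrees(obj_heading - fwd_heading)
    return (angle + 180.0) % 360.0 - 180.0              # wrap to [-180, 180)

def proxy_dot_scale(distance, max_distance=20.0):
    """Inner-dot fill fraction: grows as the user approaches the object and
    fills the ring (1.0) on arrival. max_distance is a hypothetical cutoff."""
    return 1.0 - min(distance, max_distance) / max_distance
```

For example, an object directly to the right of a user facing +z maps to +90 on the bar, matching the behaviour described in the abstract.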
240 Enhance Concurrent Design Approach through a Design Methodology Based on an Artificial Intelligence Framework: Guiding Group Decision Making to Balanced Preliminary Design Solution
Authors: Loris Franchi, Daniele Calvi, Sabrina Corpino
Abstract:
This paper presents a design methodology in which stakeholders are assisted in exploring a so-called negotiation space, aiming at maximizing both group social welfare and each stakeholder's perceived utility. The outcome is fewer design iterations needed for design convergence together with higher solution effectiveness. During the early stage of a space project, not only the knowledge about the system but often also the decision outcomes are unknown. The scenario is exacerbated by the fact that decisions taken in this stage imply delayed costs. Hence, it is necessary to have a clear definition of the problem under analysis, especially in its initial formulation. This can be obtained through a robust generation and exploration of design alternatives. The process must take into account that design usually involves various individuals whose decisions affect one another, so effective coordination among these decision-makers is critical: finding a mutually agreed solution reduces the iterations involved in the design process. To handle this scenario, the paper proposes a design methodology that aims to speed up the process of raising the mission concept's maturity level. This is obtained through a guided exploration of the negotiation space, which involves the autonomous exploration and optimization of trade opportunities among stakeholders via artificial intelligence algorithms. The negotiation space is generated via a multidisciplinary collaborative optimization method, infused with game theory and multi-attribute utility theory. In particular, game theory models the negotiation process so as to reach equilibria among stakeholder needs. Because of the huge dimension of the negotiation space, a collaborative optimization framework with an evolutionary algorithm has been integrated in order to guide the game process in efficiently and rapidly searching for the Pareto equilibria among stakeholders.
Finally, the concept of utility constitutes the mechanism to bridge the language barrier between experts of different backgrounds and differing needs, using the elicited and modeled needs to evaluate a multitude of alternatives. To highlight the benefits of the proposed methodology, the paper presents the design of a CubeSat mission for the observation of the lunar radiation environment. The derived solution is able to balance all stakeholders' needs while guaranteeing the effectiveness of the selected mission concept thanks to its robustness to changes in stakeholder valuations. The benefits provided by the proposed design methodology are highlighted, and further developments are proposed.
Keywords: concurrent engineering, artificial intelligence, negotiation in engineering design, multidisciplinary optimization
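The Pareto-screening step at the heart of the negotiation-space search can be illustrated as follows. The utility matrix and the brute-force dominance test are a minimal sketch of the concept, not the paper's evolutionary framework.

```python
import numpy as np

def pareto_front(utilities):
    """Return the indices of non-dominated design alternatives.

    `utilities` is an (n_designs, n_stakeholders) array of per-stakeholder
    utility values (higher is better). A design is dominated when another
    design is at least as good for every stakeholder and strictly better
    for at least one.
    """
    n = utilities.shape[0]
    keep = []
    for i in range(n):
        dominated = any(
            np.all(utilities[j] >= utilities[i]) and np.any(utilities[j] > utilities[i])
            for j in range(n) if j != i
        )
        if not dominated:
            keep.append(i)
    return keep
```

The surviving alternatives are the candidates over which a group-welfare criterion (or the game-theoretic negotiation) would then select the final concept.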
239 Effects of Virtual Reality Treadmill Training on Gait and Balance Performance of Patients with Stroke: Review
Authors: Hanan Algarni
Abstract:
Background: Impairment of walking and balance skills has a negative impact on functional independence and community participation after stroke. Gait recovery is considered a primary goal in rehabilitation by both patients and physiotherapists. Treadmill training coupled with virtual reality technology is a newly emerging approach that offers patients feedback and open and random skills practice while walking and interacting with virtual environmental scenes. Objectives: To synthesize the evidence on the effects of VR treadmill training, primarily on gait speed and balance, and secondarily on functional independence and community participation in stroke patients. Methods: A systematic review was conducted; the search strategy included the electronic databases MEDLINE, AMED, Cochrane, CINAHL, EMBASE, PEDro, Web of Science, and unpublished literature. Inclusion criteria: Participants: adults >18 years, stroke, ambulatory, without severe visual or cognitive impairments. Intervention: VR treadmill training alone or with physiotherapy. Comparator: any other intervention. Outcomes: gait speed, balance, function, community participation. Characteristics of the included studies were extracted for analysis. Risk-of-bias assessment was performed using Cochrane's RoB tool. A narrative synthesis of findings was undertaken, and a summary of findings for each outcome was reported using GRADEpro. Results: Four studies were included, involving 84 stroke participants with chronic hemiparesis. Intervention intensity ranged from 6 to 12 sessions of 20 minutes to 1 hour each. Three studies investigated the effects on gait speed and balance, two studies investigated functional outcomes, and one study assessed community participation. The risk-of-bias assessment showed 50% unclear risk of selection bias and 25% unclear risk of detection bias across the studies. Heterogeneity was identified in the intervention effects at post-training and follow-up.
Outcome measures, training intensity and durations also varied across the studies; the grade of evidence was low for balance, moderate for speed and function outcomes, and high for community participation. However, it is important to note that grading was done on a small number of studies for each outcome. Conclusions: The summary of findings suggests positive and statistically significant effects (p<0.05) of VR treadmill training compared to other interventions on gait speed, dynamic balance skills, function and participation directly after training. However, the effects were not sustained at follow-up (2 weeks to 1 month) in two studies, and the other studies did not perform follow-up measurements. More RCTs with larger sample sizes and higher methodological quality are required to examine the long-term effects of VR treadmill training on functional independence and community participation after stroke, in order to draw conclusions and produce stronger, more robust evidence.
Keywords: virtual reality, treadmill, stroke, gait rehabilitation
238 Beyond Geometry: The Importance of Surface Properties in Space Syntax Research
Authors: Christoph Opperer
Abstract:
Space syntax is a theory and method for analyzing the spatial layout of buildings and urban environments to understand how they can influence patterns of human movement, social interaction, and behavior. While direct visibility is a key factor in space syntax research, important visual information such as light, color, and texture is typically not considered, even though psychological studies have shown a strong correlation with the human perceptual experience of physical space, with light and color, for example, playing a crucial role in shaping the perception of spaciousness. Furthermore, these surface properties are often the visual features that are most salient and responsible for drawing attention to certain elements within the environment. This paper explores the potential of integrating these factors into general space syntax methods and visibility-based analysis of space, particularly for architectural spatial layouts. To this end, we use a combination of geometric (isovist) and topological (visibility graph) approaches together with image-based methods, allowing a comprehensive exploration of the relationship between spatial geometry, visual aesthetics, and human experience. Custom-coded ray-tracing techniques are employed to generate spherical panorama images, encoding three-dimensional spatial data in the form of two-dimensional images. These images are then processed through computer vision algorithms to generate saliency maps, which serve as a visual representation of the areas most likely to attract human attention based on their visual properties. The maps are subsequently used to weight the vertices of isovists and the visibility graph, placing greater emphasis on areas with high saliency. Compared to traditional methods, our weighted visibility analysis introduces an additional layer of information density by assigning different weights or importance levels to various aspects within the field of view.
This extends general space syntax measures to provide a more nuanced understanding of visibility patterns that better reflect the dynamics of human attention and perception. Furthermore, by drawing parallels to traditional isovist and VGA analysis, our weighted approach emphasizes a crucial distinction, which has been pointed out by Ervin and Steinitz: the difference between what is possible to see and what is likely to be seen. Therefore, this paper emphasizes the importance of including surface properties in visibility-based analysis to gain deeper insights into how people interact with their surroundings and to establish a stronger connection with human attention and perception.
Keywords: space syntax, visibility analysis, isovist, visibility graph, visual features, human perception, saliency detection, raytracing, spherical images
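The saliency-weighting idea can be illustrated on the simplest space syntax measure, the polar-form isovist area. A minimal sketch under stated assumptions: the per-ray saliency values stand in for the saliency-map lookups described above, and uniform saliency recovers the ordinary unweighted measure.

```python
import numpy as np

def weighted_isovist_area(ray_lengths, saliency, d_theta):
    """Isovist 'area' with each angular sample weighted by the visual
    saliency of the surface it hits.

    With saliency = 1 everywhere this reduces to the standard polar-form
    isovist area A = 0.5 * sum(r_i^2 * d_theta); salient regions of the
    field of view simply count for more.
    """
    ray_lengths = np.asarray(ray_lengths, dtype=float)
    saliency = np.asarray(saliency, dtype=float)
    return 0.5 * np.sum(saliency * ray_lengths**2 * d_theta)

# 360 rays of length 1 with uniform saliency: the weighted area of a unit
# circle, which should come out to approximately pi.
n = 360
a = weighted_isovist_area(np.ones(n), np.ones(n), 2 * np.pi / n)
```

The same per-vertex weighting applies to visibility-graph measures, where each graph vertex is scaled by the saliency of its location before aggregation.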
237 Constructing and Circulating Knowledge in Continuous Education: A Study of Norwegian Educational-Psychological Counsellors' Reflection Logs in Post-Graduate Education
Authors: Moen Torill, Rismark Marit, Astrid M. Solvberg
Abstract:
In Norway, every municipality shall provide an educational-psychological service (EPS) to support kindergartens and schools in their work with children and youths with special needs. The EPS focuses its work on individuals, aiming to identify special needs and to advise teachers and parents when they ask for it. In addition, the service also gives priority to prevention and system intervention in kindergartens and schools. To support these demanding tasks, university courses have been established to sustain EPS counsellors' continuous learning. There is, however, a need for more in-depth and systematic knowledge of how counsellors experience the courses they attend. In this study, EPS counsellors' reflection logs during a particular course are investigated. The research question is: what are the content and priorities of the reflections communicated in the logs produced by the educational-psychological counsellors during a post-graduate course? The investigated course is a credit course organized over a one-year period in two one-semester modules. The 55 students enrolled in the course work as EPS counsellors in various municipalities across Norway. At the end of each day throughout the course period, the participants wrote reflection logs about what they had experienced during the day. The data material consists of 165 pages of typed text. The collaborating researchers studied the material to ascertain, differentiate and understand the meaning of the content of each log. The analysis also involved the search for similarities in content and the development of analytical categories describing the focus and primary concerns of each of the written logs. This involved constant 'critical and sustained discussions' for the mutual construction of meaning between the co-researchers as the categories developed. The process was inspired by Grounded Theory.
This means that the concepts developed during the analysis derived from the data material and were not chosen prior to the investigation. The analysis revealed that the concept 'useful' frequently appeared in the participants' reflections, and, as such, 'useful' serves as the core category. The core category is described through three major categories: (1) knowledge sharing with colleagues (concerning direct and indirect work with students with special needs) is useful; (2) reflection on models and theoretical concepts (concerning students with special needs) is useful; (3) reflection on the role of the EPS counsellor is useful. In all the categories, the notion of usefulness occurs in the participants' emphasis on, and acknowledgement of, the immediate and direct link between the university course content and their daily work practice. Even if each category has an importance and value of its own, it is crucial that they are understood in connection with one another and as interwoven; it is this connectedness that gives the core category its overarching explanatory power. The knowledge from this study may be a relevant contribution to designing new courses supporting continuing professional development for EPS counsellors, whether post-graduate university courses or local courses at EPS offices, in Norway or in other countries.
Keywords: constructing and circulating knowledge, educational-psychological counsellor, higher education, professional development
236 Threats to the Business Value: The Case of Mechanical Engineering Companies in the Czech Republic
Authors: Maria Reznakova, Michala Strnadova, Lukas Reznak
Abstract:
Successful achievement of strategic goals requires an effective performance management system, i.e. determining appropriate indicators that measure the rate of goal achievement. Assuming that the goal of the owners is to grow the assets they have invested, it is vital to identify the key performance indicators that contribute to value creation. These indicators are known as value drivers. Based on the literature search undertaken, a value driver is defined as any factor that affects the value of an enterprise. The important factors are then monitored by both financial and non-financial indicators. Financial performance indicators are most useful in strategic management, since they indicate whether a company's strategy implementation and execution are contributing to bottom-line improvement. Non-financial indicators are mainly used for short-term decisions. The identification of value drivers, however, is problematic for companies that are not publicly traded. Therefore, financial ratios continue to be used to measure the performance of companies, despite considerable criticism. The main drawback of such indicators is that they are calculated from accounting data, while accounting rules may differ considerably across environments. For successful enterprise performance management it is vital to avoid factors that may reduce (or even destroy) enterprise value. Among the known factors reducing enterprise value are a lack of capital, the lack of a strategic management system and poor production quality. In order to gain further insight into the topic, the paper presents the results of research identifying factors that adversely affect the performance of mechanical engineering enterprises in the Czech Republic. The research methodology covers both the qualitative and the quantitative aspects of the topic.
The qualitative data were obtained from a questionnaire survey of the enterprises' senior management, while the quantitative financial data were obtained from the AMADEUS database (Analyse Major Databases from European Sources). The questionnaire prompted managers to list factors that negatively affect the business performance of their enterprises. The range of potential factors was based on secondary research: an analysis of previously undertaken questionnaire surveys and of studies published in the scientific literature. The survey results were evaluated both in general, by average scores, and by detailed sub-analyses of additional criteria, including company-specific characteristics such as size and ownership structure. The evaluation also included a comparison of the managers' opinions with the performance of their enterprises, measured by return on equity and return on assets ratios. The comparisons were tested by a series of non-parametric tests of statistical significance. The results of the analyses show that the factors most detrimental to enterprise performance include the incompetence of responsible employees and disregard for customers' requirements.
Keywords: business value, financial ratios, performance measurement, value drivers
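The kind of non-parametric comparison described above can be sketched as follows. The severity scores and the grouping by enterprise size are hypothetical stand-ins for the survey data, and the Mann-Whitney U test is one common choice of non-parametric significance test; the abstract does not specify which tests were used.

```python
from scipy.stats import mannwhitneyu

# Hypothetical severity scores (1-5) that managers of small vs. large
# enterprises assigned to one factor, e.g. "incompetence of responsible
# employees"; the real data came from the questionnaire survey.
small = [4, 5, 4, 3, 5, 4, 4, 5, 3, 4]
large = [3, 2, 3, 4, 2, 3, 3, 2, 4, 3]

# Two-sided test: do the two groups rate the factor differently?
stat, p = mannwhitneyu(small, large, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")
```

A rank-based test is a natural fit here because Likert-style scores are ordinal, so mean-based tests like the t-test are harder to justify.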
235 The Relationship between Wasting and Stunting in Young Children: A Systematic Review
Authors: Susan Thurstans, Natalie Sessions, Carmel Dolan, Kate Sadler, Bernardette Cichon, Shelia Isanaka, Dominique Roberfroid, Heather Stobagh, Patrick Webb, Tanya Khara
Abstract:
For many years, wasting and stunting have been viewed as separate conditions without clear evidence supporting this distinction. In 2014, the Emergency Nutrition Network (ENN) examined the relationship between wasting and stunting and published a report highlighting the evidence for linkages between the two forms of undernutrition. This systematic review aimed to update the evidence generated since the 2014 report to better understand the implications for improving child nutrition, health and survival. Following PRISMA guidelines, the review was conducted using search terms describing the relationship between wasting and stunting. Studies of children under five from low- and middle-income countries that assessed both ponderal growth/wasting and linear growth/stunting, as well as the association between the two, were included. Risk of bias was assessed in all included studies using SIGN checklists. 45 studies met the inclusion criteria: 39 peer-reviewed studies, 1 manual chapter, 3 pre-print publications and 2 published reports. The review found a strong association between the two conditions, whereby episodes of wasting contribute to stunting and, to a lesser extent, stunting leads to wasting. Possible interconnected physiological processes and common risk factors drive an accumulation of vulnerabilities. Peak incidence of both wasting and stunting was found to be between birth and three months. A significant proportion of children experience concurrent wasting and stunting: country-level data suggest that up to 8% of children under 5 may be both wasted and stunted at the same time, and global estimates translate to around 16 million children. Children with concurrent wasting and stunting have an elevated risk of mortality compared to children with one deficit alone. These children should therefore be considered a high-risk group in the targeting of treatment.
Wasting, stunting and concurrent wasting and stunting appear to be more prevalent in boys than girls, and concurrent wasting and stunting appears to peak between 12 and 30 months of age, with younger children the most affected. Seasonal patterns in the prevalence of both wasting and stunting are seen in longitudinal and cross-sectional data; in particular, season of birth has been shown to affect a child's subsequent experience of wasting and stunting. Evidence suggests that mid-upper-arm circumference combined with weight-for-age Z-score might effectively identify children most at risk of near-term mortality, including those concurrently wasted and stunted. Wasting and stunting frequently occur in the same child, either simultaneously or at different moments through their life course. Evidence suggests a process of accumulation of nutritional deficits, and therefore of risk, over the life course of a child, demonstrating the need for a more integrated approach to prevention and treatment strategies to interrupt this process. To achieve this, undernutrition policies, programmes, financing and research must become more unified.
Keywords: concurrent wasting and stunting, review, risk factors, undernutrition
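The combined MUAC plus weight-for-age screening idea can be illustrated with a short sketch. The cutoffs below (MUAC < 115 mm, WAZ < -3) are commonly cited severe-undernutrition thresholds used here only for illustration; the review does not prescribe specific values, and programmes would apply their own protocol cutoffs.

```python
# Flag children at elevated near-term mortality risk using mid-upper-arm
# circumference (MUAC, mm) and weight-for-age Z-score (WAZ). A child is
# flagged if EITHER indicator crosses its illustrative severe threshold,
# so children who are concurrently wasted and stunted are captured even
# when one measure alone would miss them.

def high_risk(muac_mm: float, waz: float) -> bool:
    return muac_mm < 115 or waz < -3.0

children = [
    {"id": "A", "muac_mm": 110, "waz": -2.1},   # low MUAC only
    {"id": "B", "muac_mm": 128, "waz": -3.4},   # low WAZ only
    {"id": "C", "muac_mm": 131, "waz": -1.0},   # neither
]
flagged = [c["id"] for c in children if high_risk(c["muac_mm"], c["waz"])]
print(flagged)  # ['A', 'B']
```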
234 Anti-Hyperglycemic Effects and Chemical Analysis of Allium sativum Bulbs Growing in Sudan
Authors: Ikram Mohamed Eltayeb Elsiddig, Yacouba Amina Djamila, Amna El Hassan Hamad
Abstract:
Hyperglycemia and diabetes have long been treated with medicinal plants, which tend to cause fewer side effects than synthetic drugs. The search for more effective and safer plant-derived anti-diabetic agents has therefore become an area of active research. A. sativum, belonging to the family Liliaceae, is well known for its medicinal uses in African traditional medicine, where it is used to treat many human conditions, mainly diabetes, high cholesterol, and high blood pressure. The present study was carried out to investigate the anti-hyperglycemic effect of extracts of A. sativum bulbs growing in Sudan on glucose-loaded Wistar albino rats. Fresh A. sativum bulbs were collected from a local vegetable market in Khartoum, Sudan, identified and authenticated by a taxonomist, then dried and extracted with solvents of increasing polarity (petroleum ether, chloroform, ethyl acetate and methanol) using a Soxhlet apparatus. The effect of the extracts on glucose uptake was evaluated using isolated rat hemidiaphragms after loading fasting rats with glucose, and the anti-hyperglycemic effect was investigated in glucose-loaded Wistar albino rats. The effects were compared to control rats administered the vehicle and to a standard group administered Metformin. The most active extract was analyzed chemically by GC-MS against the NIST library. The results showed a significant anti-diabetic effect of the extracts of A. sativum bulbs growing in Sudan. In addition, the hypoglycemic activity of the A. sativum extracts was found to decrease as the polarity of the extraction solvent increased, suggesting that the substances responsible for the activity are of low polarity and that their concentration decreases as polarity increases.
The petroleum ether extract possessed anti-hyperglycemic activity more significant than the other extracts and the Metformin standard drug, with p-value 0.000** at 400 mg/kg at 1, 2 and 4 hours, and p-values 0.019*, 0.015* and 0.010* at 200 mg/kg at 1, 2 and 4 hours, respectively. GC-MS analysis of the petroleum ether extract, which showed the highest anti-diabetic activity, revealed the presence of methyl linolate (42.75%), hexadecanoic acid methyl ester (10.54%), methyl α-linolenate (8.36%), dotriacontane (6.83%), tetrapentacontane (6.33%), methyl 18-methylnonadecanoate (4.8%), phenol,2,2'-methylenebis[6-(1,1-dimethylethyl)-4-methyl] (3.25%), methyl 20-methyl-heneicosanoate (2.70%), pentatriacontane (2.13%) and many other minor compounds. Most of these compounds are well known for their anti-diabetic activity. The study concluded that A. sativum bulb extracts enhance the reuptake of glucose in the isolated rat hemidiaphragm and have an anti-hyperglycemic effect when evaluated in glucose-loaded albino rats, with the petroleum ether extract more significantly active than the Metformin standard drug.
Keywords: Allium, anti-hyperglycemic, bulbs, sativum
233 Evaluating the Benefits of Intelligent Acoustic Technology in Classrooms: A Case Study
Authors: Megan Burfoot, Ali GhaffarianHoseini, Nicola Naismith, Amirhosein GhaffarianHoseini
Abstract:
Intelligent Acoustic Technology (IAT) is a novel architectural device used in buildings to automatically vary the acoustic conditions of a space. IAT is realized by integrating two components: Variable Acoustic Technology (VAT) and an intelligent system. The VAT passively alters the Reverberation Time (RT) by changing the total sound absorption in a room; in doing so, the sound strength and clarity are altered. The intelligent system detects sound waves in real time to identify the aural situation, and the RT is adjusted accordingly based on pre-programmed algorithms. IAT, the synthesis of these two components, can dramatically improve acoustic comfort, as the acoustic condition is automatically optimized for any detected aural situation. This paper presents an evaluation of the improvements in acoustic comfort in an existing tertiary classroom at Auckland University of Technology in New Zealand. This is a pilot case study, the first of its kind attempting to quantify the benefits of IAT. Naturally, the potential acoustic improvements from IAT can be actualized by installing only the VAT component and adjusting it manually rather than utilizing an intelligent system. Such a simplified methodology is adopted for this case study to understand the potential significance of IAT without adopting a time- and cost-intensive strategy. For this study, the VAT is built by overlaying reflective, rotating louvers over sound absorption panels. RTs are measured according to international standards before and after installing the VAT in the classroom. The louvers are rotated manually in increments by the experimenter and further RT measurements are recorded. The results are compared with recommended guidelines and reference values from national standards for spaces intended for speech and communication.
The results obtained from the measurements are used to quantify the potential improvements in classroom acoustic comfort, were IAT to be used. This evaluation reveals poor acoustic conditions in the classroom, caused by high RTs. The poor acoustics are also largely attributed to the classroom's inability to vary acoustic parameters for changing aural situations: the classroom experiences one static acoustic state, neglecting the nature of classrooms as flexible, dynamic spaces. Evidently, when using VAT the classroom can achieve a wide range of RTs; acoustic requirements for varying teaching approaches are satisfied, and acoustic comfort is improved. Having quantified the benefits of using VAT, we can confidently suggest that the same benefits are achievable with IAT. Nevertheless, it is encouraged that future studies continue this line of research toward the eventual development of IAT and its acceptance into mainstream architecture.
Keywords: acoustic comfort, classroom acoustics, intelligent acoustics, variable acoustics
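The effect of rotating reflective louvers over absorption panels on RT can be illustrated with Sabine's formula, RT60 = 0.161 V / A. All room and material values below are hypothetical, chosen only to show how the exposed-absorber fraction drives RT60; they are not the measured classroom values from the study.

```python
# Sabine's formula: RT60 = 0.161 * V / A, where V is room volume (m^3)
# and A is total absorption (m^2 sabins). Rotating reflective louvers
# over absorptive panels varies the exposed absorptive area, and thus A.

V = 240.0                 # hypothetical classroom volume, m^3
base_absorption = 18.0    # absorption of walls, floor, occupants (m^2 sabins)
panel_area = 20.0         # area covered by the VAT panels, m^2
alpha_panel = 0.85        # absorption coefficient of exposed panels
alpha_louver = 0.10       # absorption coefficient of closed reflective louvers

def rt60(open_fraction: float) -> float:
    """RT60 for a louver opening (0 = fully reflective, 1 = fully absorptive)."""
    alpha = open_fraction * alpha_panel + (1 - open_fraction) * alpha_louver
    A = base_absorption + panel_area * alpha
    return 0.161 * V / A

for f in (0.0, 0.5, 1.0):
    print(f"louvers {f:.0%} open: RT60 = {rt60(f):.2f} s")
```

Even with these illustrative numbers, the sketch shows the key point of VAT: a single rotating surface sweeps the room through a continuous range of RTs between a "live" and a "dead" state.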
232 Risk Assessment of Flood Defences by Utilising Condition Grade Based Probabilistic Approach
Authors: M. Bahari Mehrabani, Hua-Peng Chen
Abstract:
Management and maintenance of coastal defence structures during the expected life cycle have become a real challenge for decision makers and engineers. Accurate evaluation of the current condition and future performance of flood defence structures is essential for effective practical maintenance strategies on the basis of available field inspection data. Moreover, as coastal defence structures age, it becomes more challenging to implement maintenance and management plans to avoid structural failure. Therefore, condition inspection data are essential for assessing damage and forecasting deterioration of ageing flood defence structures in order to keep the structures in an acceptable condition. The inspection data for flood defence structures are often collected using discrete visual condition rating schemes. In order to evaluate the future condition of the structure, a probabilistic deterioration model needs to be utilised. However, existing deterioration models may not provide a reliable prediction of performance deterioration over a long period due to uncertainties. To tackle this limitation, a time-dependent condition-based model associated with a transition probability needs to be developed on the basis of a condition grade scheme for flood defences. This paper presents a probabilistic method for predicting future performance deterioration of coastal flood defence structures based on condition grading inspection data and deterioration curves estimated by expert judgement. In condition-based deterioration modelling, the main task is to estimate transition probability matrices. The deterioration process of the structure related to the transition states is modelled as a Markov chain process, and a reliability-based approach is used to estimate the probability of structural failure. Visual inspection data according to the United Kingdom Condition Assessment Manual are used to obtain the initial condition grade curve of the coastal flood defences.
The initial curves are then modified to develop transition probabilities through non-linear regression-based optimisation algorithms. Monte Carlo simulations are then used to evaluate the future performance of the structure on the basis of the estimated transition probabilities. Finally, a case study is given to demonstrate the applicability of the proposed method under no-maintenance and medium-maintenance scenarios. Results show that the proposed method can provide an effective predictive model for various situations in terms of available condition grading data. The proposed model also provides useful information on the time-dependent probability of failure of coastal flood defences.
Keywords: condition grading, flood defense, performance assessment, stochastic deterioration modelling
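A minimal sketch of the condition-grade Markov model and Monte Carlo failure estimate described above. The five-grade scheme follows the text, but the annual transition matrix is a hypothetical stand-in for one fitted to inspection data by the regression-based optimisation the paper describes.

```python
import numpy as np

# Condition grades 1 (as new) .. 5 (failed); grade 5 is absorbing.
# Each row gives the annual probabilities of staying or deteriorating
# under a no-maintenance scenario (illustrative values only).
P = np.array([
    [0.90, 0.08, 0.02, 0.00, 0.00],
    [0.00, 0.88, 0.09, 0.03, 0.00],
    [0.00, 0.00, 0.85, 0.11, 0.04],
    [0.00, 0.00, 0.00, 0.82, 0.18],
    [0.00, 0.00, 0.00, 0.00, 1.00],
])

def failure_probability(years: int, n_sims: int = 20_000, seed: int = 1) -> float:
    """Monte Carlo estimate of P(reaching grade 5 within `years`), from grade 1."""
    rng = np.random.default_rng(seed)
    failed = 0
    for _ in range(n_sims):
        state = 0                          # start at grade 1 (index 0)
        for _ in range(years):
            state = rng.choice(5, p=P[state])
        failed += state == 4               # grade 5 (index 4) is failure
    return failed / n_sims

print(f"P(failure within 30 years) ~ {failure_probability(30):.3f}")
```

For a fixed matrix the same quantity is available exactly as the (1, 5) entry of the 30-step matrix power; the simulation form is shown because it extends directly to maintenance interventions that reset or modify the state mid-trajectory.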
231 The Hidden Mechanism beyond Ginger (Zingiber officinale Rosc.) Potent in vivo and in vitro Anti-Inflammatory Activity
Authors: Shahira M. Ezzat, Marwa I. Ezzat, Mona M. Okba, Esther T. Menze, Ashraf B. Abdel-Naim, Shahnas O. Mohamed
Abstract:
Background: In order to decrease the burden of the high cost of synthetic drugs, it is important to focus on phytopharmaceuticals. The aim of our study was to search for the mechanism behind the anti-inflammatory potential of ginger (Zingiber officinale Roscoe) and to correlate it to its phytochemicals. Methods: Various extracts, viz. water, 50%, 70%, 80%, and 90% ethanol, were prepared from ginger rhizomes. Fractionation of the aqueous extract (AE) was accomplished using Diaion HP-20. The in vitro anti-inflammatory activity of the different extracts and isolated compounds was evaluated by protein denaturation inhibition, membrane stabilization, protease inhibition, and anti-lipoxygenase assays. The in vivo anti-inflammatory activity of the AE was estimated by assessment of rat paw oedema after carrageenan injection. Prostaglandin E2 (PGE2) levels, certain inflammation markers (TNF-α, IL-6, IL-1α, IL-1β, INFr, MCP-1, MIP, RANTES, and NOx) and MPO activity in the paw oedema exudates were measured. Total antioxidant capacity (TAC) was also determined. Histopathological alterations of paw tissues were scored. Results: All the tested extracts showed significant (p < 0.1) anti-inflammatory activities. The highest percentage inhibition of heat-induced albumin denaturation (66%) was exhibited by the 50% ethanol extract (250 μg/ml). The 70% and 90% ethanol extracts (500 μg/ml) were more potent membrane stabilizers (34.5% and 37%, respectively) than diclofenac (33%). The 80% and 90% ethanol extracts (500 μg/ml) showed maximum protease inhibition (56%). The strongest anti-lipoxygenase activity was observed for the AE, which showed more significant lipoxygenase inhibition than diclofenac (58% vs. 52%) at the same concentration (125 μg/ml). Fractionation of the AE yielded four main fractions (Fr I-IV), which showed significant in vitro anti-inflammatory activity.
Purification of Fr-III and IV led to the isolation of 6-paradol (G1), 6-shogaol (G2), methyl 6-gingerol (G3), 5-gingerol (G4), 6-gingerol (G5), 8-gingerol (G6), 10-gingerol (G7), and 1-dehydro-6-gingerol (G8). G2 (62.5 μg/ml), G1 (250 μg/ml), and G8 (250 μg/ml) exhibited potent anti-inflammatory activity in all studied assays, while G4 and G5 exhibited moderate activity. In vivo administration of the AE ameliorated rat paw oedema in a dose-dependent manner. The AE (at 200 mg/kg) produced a significant reduction (60%) in PGE2 production. At doses of 25-200 mg/kg, the AE significantly reduced all measured inflammatory markers except IL-1α, and at 25 mg/kg it was superior to indomethacin in reducing IL-1β. Treatment of animals with the AE (100, 200 mg/kg) or indomethacin (10 mg/kg) significantly reduced TNF-α, IL-6, MCP-1, and RANTES levels and MPO activity, by about (31, 57 and 32%), (65, 60 and 57%), (27, 41 and 28%), (23, 32 and 23%) and (66, 67 and 67%), respectively. The AE at 100 and 200 mg/kg was equipotent to indomethacin in reducing the NOₓ level and in increasing the TAC. Histopathological examination revealed very few infiltrating inflammatory cells and little oedema after administration of the AE (200 mg/kg) prior to carrageenan injection. Conclusion: Ginger's anti-inflammatory activity is mediated by inhibiting macrophage and neutrophil activation as well as negatively affecting monocyte and leukocyte migration. Moreover, it produced a dose-dependent decrease in pro-inflammatory cytokines and chemokines and replenished the total antioxidant capacity. We strongly recommend future investigation of the signal transduction pathways potentially involved.
Keywords: anti-lipoxygenase activity, inflammatory markers, 1-dehydro-6-gingerol, 6-shogaol
230 Learning to Translate by Learning to Communicate to an Entailment Classifier
Authors: Szymon Rutkowski, Tomasz Korbak
Abstract:
We present a reinforcement-learning-based method for training neural machine translation models without parallel corpora. The standard encoder-decoder approach to machine translation suffers from two problems we aim to address. First, it needs parallel corpora, which are scarce, especially for low-resource languages. Second, it lacks psychological plausibility as a learning procedure: learning a foreign language is about learning to communicate useful information, not merely learning to transduce from one language's 'encoding' to another. We instead pose the problem of learning to translate as learning a policy in a communication game between two agents: the translator and the classifier. The classifier is trained beforehand on a natural language inference task (determining the entailment relation between a premise and a hypothesis) in the target language. The translator produces a sequence of actions that correspond to generating translations of both the hypothesis and the premise, which are then passed to the classifier. The translator is rewarded for the classifier's performance in determining entailment between the sentences the translator has rendered into the classifier's language. The translator's performance thus reflects its ability to communicate useful information to the classifier. In effect, we train a machine translation model without the need for parallel corpora altogether. While similar reinforcement learning formulations for zero-shot translation have been proposed before, we introduce a number of improvements. While prior research aimed at grounding the translation task in the physical world by evaluating agents on an image captioning task, we found that using a linguistic task is more sample-efficient. Natural language inference (also known as recognizing textual entailment) captures semantic properties of sentence pairs that are poorly correlated with semantic similarity, thus enforcing a basic understanding of the role played by compositionality.
It has been shown that models trained to recognize textual entailment produce high-quality general-purpose sentence embeddings transferrable to other tasks. We use the Stanford Natural Language Inference (SNLI) dataset as well as analogous datasets for French (XNLI) and Polish (CDSCorpus). Textual entailment corpora can be obtained relatively easily for any language, which makes our approach more extensible to low-resource languages than traditional approaches based on parallel corpora. We evaluated a number of reinforcement learning algorithms (including policy gradients and actor-critic) for optimizing the translator's policy and found that our attempts yield promising improvements over previous approaches to reinforcement-learning-based zero-shot machine translation.
Keywords: agent-based language learning, low-resource translation, natural language inference, neural machine translation, reinforcement learning
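The policy-gradient training loop can be illustrated on a toy communication game. The three-token 'languages' and the reward function below are hypothetical simplifications: the reward stands in for the entailment classifier's accuracy, and the translator is reduced to a per-token softmax policy trained with REINFORCE and a constant baseline. This is a sketch of the learning signal, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 3 source tokens, 3 target tokens. The "classifier" rewards the
# mapping 0->0, 1->1, 2->2, standing in for entailment-classification accuracy
# on translated premise/hypothesis pairs.
def classifier_reward(src: int, tgt: int) -> float:
    return 1.0 if src == tgt else 0.0

logits = np.zeros((3, 3))  # translator policy parameters, one row per source token

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

lr = 0.5
for step in range(500):
    src = rng.integers(3)
    probs = softmax(logits[src])
    tgt = rng.choice(3, p=probs)        # sample a "translation"
    r = classifier_reward(src, tgt)
    baseline = 1 / 3                    # constant baseline reduces gradient variance
    grad = -probs                       # d log pi(tgt|src) / d logits ...
    grad[tgt] += 1.0                    # ... is (one_hot(tgt) - probs)
    logits[src] += lr * (r - baseline) * grad   # REINFORCE update

print([int(softmax(logits[s]).argmax()) for s in range(3)])
```

After training, the policy concentrates on the rewarded mapping: the translator has learned a 'dictionary' purely from the classifier's downstream success, with no parallel (src, tgt) pairs ever observed.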
229 The Impact of the Covid-19 Crisis on the Information Behavior in the B2B Buying Process
Authors: Stehr Melanie
Abstract:
The availability of apposite information is essential for the decision-making process of organizational buyers. Due to the constraints of the Covid-19 crisis, information channels that emphasize face-to-face contact (e.g. sales visits, trade shows) became unavailable, and usage of digitally-driven information channels (e.g. videoconferencing, platforms) skyrocketed. This paper explores the question of in which areas the pandemic-induced shift in the use of information channels could be sustainable and in which areas it is a temporary phenomenon. While information and buying behavior in B2C purchases has been studied regularly over the last decade, the last fundamental model of organizational buying behavior in B2B was introduced by Johnston and Lewin (1996), before the advent of the internet. Subsequently, research efforts in B2B marketing shifted from organizational buyers and their decision and information behavior to the business relationships between sellers and buyers. This study builds on the extensive literature on situational factors influencing organizational buying and information behavior and uses the economics of information theory as a theoretical framework. The research focuses on the German woodworking industry, which before the Covid-19 crisis was characterized by a rather low level of digitization of information channels. By focusing on an industry with traditional communication structures, a shift in information behavior induced by an exogenous shock is considered a ripe research setting. The study is exploratory in nature. The primary data source is 40 in-depth interviews based on the repertory grid method. Thus, 120 typical buying situations in the woodworking industry, and the information and channels relevant to them, are identified. The results are combined into clusters, each of which shows similar information behavior in the procurement process.
In the next step, the clusters are analyzed in terms of pre- and post-Covid-19 crisis behavior, identifying stable and dynamic aspects of information behavior. Initial results show that, for example, clusters representing search goods with low risk and complexity suggest a sustainable rise in the use of digitally-driven information channels. However, in clusters containing trust goods with high significance and novelty, an increased return to face-to-face information channels can be expected after the Covid-19 crisis. The results are interesting from both a scientific and a practical point of view. This study is one of the first to apply the economics of information theory to organizational buyers and their decision and information behavior in the digital information age. The focus on the dynamic aspects of information behavior after an exogenous shock in particular might contribute new impulses to theoretical debates related to the economics of information theory. For practitioners, especially suppliers' marketing managers and intermediaries such as publishers or trade show organizers in the woodworking industry, the study shows wide-ranging starting points for a future-oriented segmentation of their marketing programs by highlighting the dynamic and stable preferences of the elaborated clusters in the choice of information channels.
Keywords: B2B buying process, crisis, economics of information theory, information channel
228 Internal Concept of Integrated Health by Agrarian Society in Malagasy Highlands for the Last Century
Authors: O. R. Razanakoto, L. Temple
Abstract:
Living in a least developed country, Malagasy society has a weak capacity to internalize progress, including health concerns. From the arrival in the fifteenth century of the Arabic script known as Sorabe, which was mainly reserved for the aristocracy, until the colonial era beginning at the end of the nineteenth century, which popularized the current Western script, manuscripts dealing with scientific or academic matters were established only slowly. As a result, the way of life of Malagasy communities is not yet well documented enough to allow a precise understanding of the major concerns, reasoning, and purposes of the farmers who compose them. A question therefore arises from the literature: how does a Malagasy society dominated by agrarian communities conceive of the conservation of its wellbeing? This study aims to establish the scope and limits of the 'One Health' concept, or Health Integrated Approach (HIA), which is evolving at a global scale, with regard to the specific context of local Malagasy smallholder farms. It is expected to identify how this society has represented the risks of, and mechanisms linking, human health, animal health, plant health, and ecosystem health over the last 100 years. To do so, a framework for conducting systematic reviews in agricultural research was deployed to access the available literature. This task was coupled with the reading of articles that are not indexed by online scientific search engines but that record part of the history of agriculture and farmers in Madagascar. The literature review documented the interactions between human illnesses, those affecting animals and plants (bred or wild), and any unexpected events (ecological or economic) that have modified the equilibrium of the ecosystem or disturbed the livelihoods of agrarian communities.
In addition, drivers that may either accentuate or attenuate the devastating effects of these illnesses and changes were revealed. The study established that the causes of human health concerns are not only physiological. Among the factors that regulate global health, the food system and contemporary medicine have helped improve life expectancy in Madagascar from 55 to 63 years over the last 50 years. However, threats to global health still occur. New human or animal illnesses and new livestock or plant pathologies and pests may appear, while ancient illnesses thought to have disappeared may return. This study highlighted how important the risks associated with unmanaged externalities that weaken community life are. Many risks, and also solutions, come from abroad and have long-term effects even when they occur as one-off events. Thus, a constructivist strategy, built on the record of local facts, is suggested for the global 'One Health' concept. This approach should facilitate the exploration of methodological pathways and the identification of relevant indicators for research related to HIA.
Keywords: agrarian system, health integrated approach, history, Madagascar, resilience, risk
227 Performance Evaluation of Fingerprint, Auto-Pin and Password-Based Security Systems in Cloud Computing Environment
Authors: Emmanuel Ogala
Abstract:
Cloud computing has been envisioned as the next-generation architecture of the Information Technology (IT) enterprise. In contrast to traditional solutions, where IT services are under physical, logical and personnel controls, cloud computing moves the application software and databases to large data centres, where the management of the data and services may not be fully trustworthy. This is due to the fact that the systems are open to the whole world: as legitimate users access the system, many others are trying, day in and day out, to gain unauthorized access. This research contributes to the improvement of cloud computing security for better operation. The work is motivated by two problems. First, the observed ease of access to cloud computing resources and the complexity of attacks on the vital cloud computing data system NIC require that dynamic security mechanisms evolve to stay capable of preventing illegitimate access. Second, there is a lack of a good methodology for the performance testing and evaluation of biometric security algorithms for securing records in a cloud computing environment. The aim of this research was to evaluate the performance of an integrated security system (ISS) for securing exam records in a cloud computing environment. We designed and implemented an ISS combining three security mechanisms, biometric (fingerprint), auto-PIN and password, into one stream of access control, and used it to secure examination records at Kogi State University, Anyigba. In conclusion, the system we built has been able to defeat the guessing abilities of hackers who guess users' passwords or PINs. We are certain of this because the added security mechanism (fingerprint) requires the presence of the user before login access can be granted, based on the placement of his or her finger on the fingerprint biometric scanner for capture and verification of the user's authenticity.
The study adopted a quantitative design with an object-oriented design methodology. In the analysis and design, PHP, HTML5, CSS, JavaScript, and Web 2.0 technologies were used to implement the ISS model for the cloud computing environment: PHP, HTML5 and CSS were used in conjunction with Visual Studio front-end design tools, MySQL and Access 7.0 were used for the back-end engine, and JavaScript was used for object arrangement and validation of user input for security checks. Finally, the performance of the developed framework was evaluated by comparison with two other existing security systems (auto-PIN and password) within the university, and the results showed that the developed approach (fingerprint) overcomes the two main weaknesses of the existing systems and will work well if fully implemented.
Keywords: performance evaluation, fingerprint, auto-pin, password-based, security systems, cloud computing environment
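The three-factor access-control idea can be sketched as follows. This is a minimal illustration, not the authors' PHP implementation: it assumes PBKDF2 hashing for the knowledge factors and uses an exact-match stand-in for fingerprint template matching (real biometric systems use fuzzy matching against a stored template); all names and values are hypothetical.

```python
import hashlib
import hmac
import os

def hash_secret(secret: str, salt: bytes) -> bytes:
    # PBKDF2 slows brute-force guessing of passwords and PINs.
    return hashlib.pbkdf2_hmac("sha256", secret.encode(), salt, 100_000)

class AccessControl:
    def __init__(self, password: str, pin: str, fingerprint_template: str):
        self.salt = os.urandom(16)
        self.pw = hash_secret(password, self.salt)
        self.pin = hash_secret(pin, self.salt)
        self.template = fingerprint_template   # stand-in for a stored template

    def verify(self, password: str, pin: str, scanned_template: str) -> bool:
        ok_pw = hmac.compare_digest(self.pw, hash_secret(password, self.salt))
        ok_pin = hmac.compare_digest(self.pin, hash_secret(pin, self.salt))
        ok_fp = scanned_template == self.template  # real systems match fuzzily
        return ok_pw and ok_pin and ok_fp          # all three factors must pass

acl = AccessControl("s3cret", "4821", "FP-TEMPLATE-001")
print(acl.verify("s3cret", "4821", "FP-TEMPLATE-001"))   # True
print(acl.verify("guessed", "4821", "FP-TEMPLATE-001"))  # False
```

The key property argued in the abstract falls out of the final conjunction: even a correctly guessed password and PIN cannot grant access without the biometric factor, which requires the user's physical presence.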