Search results for: dynamic capability approach
10215 Innovative Design of Spherical Robot with Hydraulic Actuator
Authors: Roya Khajepour, Alireza B. Novinzadeh
Abstract:
In this paper, the spherical robot is modeled using the bond graph approach. This breed of robot is typically employed in exploration missions to unknown territories. Its motion mechanism is based on the convection of a fluid in a set of three toroidal (donut-shaped) vessels arranged orthogonally in space. The robot is a non-linear, non-holonomic system. This paper utilizes the bond graph technique to derive the torque generation mechanism in a spherical robot and then describes the motion of the sphere due to the exerted torque components.
Keywords: spherical robot, bond graph, modeling, torque
Procedia PDF Downloads 350

10214 Pros and Cons of Agriculture Investment in Gambella Region, Ethiopia
Authors: Azeb Degife
Abstract:
Over the past few years, the volume of international investment in agricultural land has increased globally. In recent times, the Ethiopian government has used agricultural investment as one of the most important and effective strategies for economic growth, food security and poverty reduction in rural areas. Since the mid-2000s, the government has awarded millions of hectares of its most fertile land to rich countries and some of the world's wealthiest people for the export of various kinds of crops, often in long-term leases and at bargain prices. This study focuses on the pros and cons of large-scale agricultural investment in the Gambella region, Ethiopia. The main results were generated from both primary and secondary data sources. Primary data were obtained through interviews, direct observation and focus group discussions (FGDs). The secondary data were obtained from published documents and reports from governmental and non-governmental institutions. The findings of the study demonstrate that agricultural investment has advantages on the socio-economic side and disadvantages on the socio-environmental side. The main benefits of agricultural investment in the region are infrastructural development and the generation of employment for local people. Further, the Ethiopian government also earns foreign currency from the agricultural investment opportunities. On the other hand, the Gambella people are strongly tied to the land and the rivers that run through the region, and large-scale agricultural investment by foreign and local investors on an industrial scale now deprives people of the livelihoods and natural resources of the region. Generally, the negative effects of agricultural investment include increasing food insecurity and the displacement of smallholder farmers and pastoralists. Moreover, agricultural investment has strong adverse environmental impacts on natural resources such as land, water, forests and biodiversity.
Therefore, the Ethiopian government's strategy needs to focus on an integration approach and sustainable agricultural growth.
Keywords: agriculture investment, cons, displacement, Gambella, integration approach, pros, socio-economic, socio-environmental
Procedia PDF Downloads 342

10213 Endocrine Disruptors Effects on the 20-Hydroxyecdysone Concentration and the Vitellogenin Gene Expression in Gammarus sp.
Authors: Eric Gismondi, Aurelie Bigot-Clivot
Abstract:
Endocrine disruptors (EDCs) are well known to disrupt the development and reproduction of exposed organisms. Although this has been studied in vertebrate models, the limited knowledge of the endocrine system of invertebrates makes the evaluation of EDC effects difficult. However, invertebrates represent the major part of aquatic ecosystems; amphipods of the Gammaridea, for example, are crucial for ecosystem functioning (e.g., litter degradation, food resource). Moreover, gammarids are hosts of parasites such as vertically transmitted microsporidia (VT microsporidia), which could be confounding factors in the assessment of EDC effects. Indeed, some VT microsporidia could exert endocrine effects of their own in the host; for example, a feminization of juvenile males, which become phenotypic females, has been observed. This work evaluated the impact of ethinylestradiol (EE₂, estrogenic), cyproterone acetate (CPA, anti-androgenic), 4-hydroxytamoxifen (4HT, anti-estrogenic) and 17α-methyltestosterone (17MT, androgenic) on the 20-hydroxyecdysone concentration (20HE, molt process) and the vitellogenin gene expression (reproduction) in the freshwater amphipod Gammarus pulex after a 96 h laboratory exposure. In addition, the presence of VT microsporidia was verified in order to analyze the effect of this confounding factor. The results of this study showed that, although the endocrine systems of invertebrates and vertebrates are different, EDCs demonstrated in vertebrates can also affect biological functions under hormonal control in invertebrates. Indeed, the molt process of crustaceans was disrupted at its first stage (the 20HE concentration) and could therefore affect population dynamics in the long term. In addition, it was observed that G. pulex was impacted differently according to gender and parasitism, which underlines the importance of taking these confounding factors into account to better evaluate the EDC impact on invertebrate populations.
Keywords: endocrine disruption, gammarus sp., molt, parasitism
Procedia PDF Downloads 164

10212 Experimental Research on Neck Thinning Dynamics of Droplets in Cross Junction Microchannels
Authors: Yilin Ma, Zhaomiao Liu, Xiang Wang, Yan Pang
Abstract:
Microscale droplets play an increasingly important role in various applications, including medical diagnostics, material synthesis, chemical engineering, and cell research, owing to their high surface-to-volume ratio and tiny scale, which can significantly improve reaction rates, enhance heat transfer efficiency, enable high-throughput parallel studies, and reduce reagent usage. As a mature technique for manipulating small amounts of liquid, droplet microfluidics can precisely control droplet parameters such as size, uniformity, and structure, and has thus been widely adopted in engineering and scientific research in multiple fields. The necking process of droplets in cross-junction microchannels is investigated experimentally and theoretically, and the dynamic mechanisms of neck thinning in two different regimes are revealed. According to the evolution of the minimum neck width and the thinning rate, the necking process is further divided into different stages, and the main driving force of each stage is identified. The effects of the flow rates and the cross-sectional aspect ratio on the necking process, as well as on the neck profile at different stages, are described in detail. The distinct features of the two regimes in the squeezing stage are well captured by theoretical estimates of the effective flow rate, and the variations of the actual flow rates in different channels are reasonably reflected by the channel width ratio. In the collapsing stage, a quantitative relation between the minimum neck width and the remaining time is constructed to identify the physical mechanism.
Keywords: cross junction, neck thinning, force analysis, inertial mechanism
Procedia PDF Downloads 110

10211 Application of Water Soluble Polymers in Chemical Enhanced Oil Recovery
Authors: M. Shahzad Kamal, Abdullah S. Sultan, Usamah A. Al-Mubaiyedh, Ibnelwaleed A. Hussein
Abstract:
Oil recovery from reservoirs using conventional techniques such as water flooding is less than 20%. Enhanced oil recovery (EOR) techniques are applied to recover additional oil. Surfactant-polymer flooding is a promising EOR technique used to recover residual oil from reservoirs. Water soluble polymers are used to increase the viscosity of the displacing fluid, while surfactants increase the capillary number by reducing the interfacial tension between the oil and the displacing fluid. Hydrolyzed polyacrylamide (HPAM) is widely used in polymer flooding applications due to its low cost and other desirable properties. HPAM works well in low-temperature, low-salinity environments; in the presence of salts its viscosity decreases due to the charge screening effect, and at high temperatures it can precipitate. Various strategies have been adopted to extend the application of water soluble polymers to high-temperature, high-salinity (HTHS) reservoirs. These include the addition of monomers to the acrylamide chain that can protect it against thermal hydrolysis. In this work, the rheological properties of various water soluble polymers were investigated to find suitable polymer and surfactant-polymer systems for HTHS reservoirs. The polymer concentration ranged from 0.1 to 1% (w/v). The effects of temperature, salinity and polymer concentration were investigated using both steady shear and dynamic measurements. An acrylamido tertiary butyl sulfonate-based copolymer showed better performance under HTHS conditions than HPAM. Moreover, a thermoviscosifying polymer showed excellent rheological properties, with an increase in viscosity observed with increasing temperature; this property is highly desirable for EOR applications.
Keywords: rheology, polyacrylamide, salinity, enhanced oil recovery, polymer flooding
Procedia PDF Downloads 411

10210 Touching Interaction: An NFC-RFID Combination
Authors: Eduardo Álvarez, Gerardo Quiroga, Jorge Orozco, Gabriel Chavira
Abstract:
AmI proposes a new way of thinking about computers, following the ideas of Mark Weiser's ubiquitous computing vision. Central to these is the Disappearing Computer initiative, with users immersed in intelligent environments. Hence, technologies need to be adapted so that they are capable of replacing the traditional inputs to the system by embedding them in everyday artifacts. In this work, we present an approach that uses Radio Frequency Identification (RFID) and Near Field Communication (NFC) technologies; with the latter, a new form of interaction by contact appears. We compare both technologies by analyzing their requirements and advantages, and we propose using a combination of RFID and NFC.
Keywords: touching interaction, ambient intelligence, ubiquitous computing, interaction, NFC and RFID
Procedia PDF Downloads 505

10209 A Three-Modal Authentication Method for Industrial Robots
Authors: Luo Jiaoyang, Yu Hongyang
Abstract:
In this paper, we explore a method that can be used in the working scene of intelligent industrial robots to confirm the identity of operators, ensuring that the robot executes instructions in a sufficiently safe environment. The approach uses three information modalities: visible light, depth, and sound. We explored a variety of fusion modes for the three modalities and finally used a joint feature learning method, which improves the performance of the model under noise compared with the single-modal case; even at the maximum noise level in the experiment, it maintains an accuracy rate of more than 90%.
Keywords: multimodal, kinect, machine learning, distance image
Procedia PDF Downloads 79

10208 Exploring Leadership Adaptability in the Private Healthcare Organizations in the UK in Times of Crises
Authors: Sade Ogundipe
Abstract:
The private healthcare sector in the United Kingdom has experienced unprecedented challenges during times of crisis, necessitating effective leadership adaptability. This qualitative study delves into the dynamic landscape of leadership within the sector, particularly during crises, employing the lenses of complexity theory and institutional theory to unravel the intricate mechanisms at play. Through in-depth interviews with 25 leaders at various levels of the UK private healthcare sector, this research explores how leaders in UK private healthcare organizations navigate complex and often chaotic environments, shedding light on their adaptive strategies and decision-making processes during crises. Complexity theory is used to analyze the complex, volatile nature of healthcare crises, emphasizing the need for adaptive leadership in such contexts. Institutional theory, on the other hand, provides insights into how external and internal institutional pressures influence leadership behavior. Findings from this study highlight the multifaceted nature of leadership adaptability, emphasizing the significance of leaders' abilities to embrace uncertainty, engage in sensemaking, and leverage the institutional environment to enact meaningful change. Furthermore, this research sheds light on the challenges and opportunities that leaders face when adapting to crises within the UK private healthcare sector. The study's insights contribute to the growing body of literature on leadership in healthcare, offering practical implications for leaders, policymakers, and stakeholders within the UK private healthcare sector.
By employing the dual perspectives of complexity theory and institutional theory, this research provides a holistic understanding of leadership adaptability in the face of crises, offering valuable guidance for enhancing the resilience and effectiveness of healthcare leadership within this vital sector.
Keywords: leadership, adaptability, decision-making, complexity, complexity theory, institutional theory, organizational complexity, complex adaptive system (CAS), crises, healthcare
Procedia PDF Downloads 50

10207 Non-Destructive Static Damage Detection of Structures Using Genetic Algorithm
Authors: Amir Abbas Fatemi, Zahra Tabrizian, Kabir Sadeghi
Abstract:
To find the location and severity of damage occurring in a structure, changes in its static and dynamic characteristics can be used. Non-destructive techniques are more common, economical, and reliable for detecting global or local damage in structures. This paper presents a non-destructive method for structural damage detection and assessment using a genetic algorithm (GA) and static data. A set of static forces is applied to some degrees of freedom (DOFs), and the static responses (displacements) are measured at another set of DOFs. An analytical model of the truss structure is developed based on the available specification and the properties derived from the static data. Damage in a structure changes its stiffness, so the method determines damage from changes in the structural stiffness parameters. The changes in static response caused by structural damage are used to form a set of simultaneous equations. Genetic algorithms are powerful tools for solving large optimization problems; here, the optimization minimizes an objective function involving the difference between the static load vectors of the damaged and healthy structures. Several damage detection scenarios are defined (single damage and multiple damage). Static damage identification methods have many advantages, but some difficulties still exist, so it is important to achieve the best damage identification; if the best result is obtained, the method can be considered reliable. The strategy is applied to a plane truss. Numerical results demonstrate the ability of the method to detect damage in the given structures, and the figures show that damage detection in multiple-damage scenarios gives efficient answers. Even the existence of noise in the measurements does not reduce the accuracy of the damage detection method for these structures.
Keywords: damage detection, finite element method, static data, non-destructive, genetic algorithm
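The optimization described above can be illustrated with a minimal sketch. The chain-of-springs "structure", the load, and the GA parameters below are assumptions for demonstration only, not the paper's plane truss or finite element model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the truss: a chain of five axial elements in series,
# fixed at one end, loaded by a static force F at the free end. Element i
# has stiffness k_i = K0 * (1 - d_i), where d_i in [0, 1) is the damage
# severity to be identified from measured static displacements.
K0, F, N = 1000.0, 50.0, 5

def static_displacements(damage):
    # Node displacements of a series chain: u_j = F * sum_{i<=j} 1 / k_i
    k = K0 * (1.0 - np.asarray(damage))
    return F * np.cumsum(1.0 / k)

true_damage = np.array([0.0, 0.3, 0.0, 0.0, 0.15])  # the "damaged" structure
u_measured = static_displacements(true_damage)

def objective(damage):
    # Difference between measured and predicted static responses
    return np.sum((u_measured - static_displacements(damage)) ** 2)

def tournament(pop, scores, k=3):
    idx = rng.choice(len(pop), size=k, replace=False)
    return pop[idx[np.argmin(scores[idx])]]

# Minimal real-coded GA: elitism, tournament selection, blend crossover,
# Gaussian mutation. Parameters are illustrative, not tuned.
pop = rng.uniform(0.0, 0.5, size=(60, N))
for generation in range(300):
    scores = np.array([objective(ind) for ind in pop])
    children = [pop[scores.argmin()].copy()]         # keep the elite
    while len(children) < len(pop):
        p1, p2 = tournament(pop, scores), tournament(pop, scores)
        w = rng.uniform(size=N)
        child = w * p1 + (1.0 - w) * p2              # blend crossover
        mask = rng.random(N) < 0.1                   # mutation
        child[mask] = np.clip(
            child[mask] + rng.normal(0.0, 0.05, mask.sum()), 0.0, 0.9)
        children.append(child)
    pop = np.array(children)

best = pop[np.array([objective(ind) for ind in pop]).argmin()]
print(np.round(best, 2))   # should lie close to true_damage
```

The same loop applies to a real truss once `static_displacements` is replaced by a finite element solve for the measured DOFs.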
Procedia PDF Downloads 237

10206 Practice Patterns of Physiotherapists for Learners with Disabilities at Special Schools: A Scoping Review
Authors: Lubisi L. V., Madumo M. B., Mudau N. P., Makhuvele L., Sibuyi M. M.
Abstract:
Background and Aims: Some learners with disabilities can be integrated into mainstream schools, while others are accommodated in special schools based on the support needs they require. These needs, among others, pertain to access to high-intensity therapeutic support by physiotherapists, occupational therapists, and speech therapists. However, access to physiotherapists in low- and middle-income countries is limited, and this creates a knowledge gap in identifying, to the best of our knowledge, best practice patterns aligned with physiotherapy at special schools. This gap compromises the quality of the support rendered towards strengthening rehabilitation and optimising the participation of learners with disabilities in special schools. The aim of the scoping review was to map the evidence on practice patterns employed by physiotherapists at special schools for learners with disabilities. Methods: The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were followed. Key terms regarding physiotherapy practice patterns for learners with disabilities at special schools were used to search the literature in the databases. Literature was sourced from Google Scholar, EBSCO, PEDro, PubMed, and ResearchGate from 2013 to 2023. A total of 28 articles were initially retrieved; after a process of screening and exclusion, nine articles were included. All the researchers reviewed the articles for eligibility. Articles were initially screened based on their titles, followed by full text. Articles written in English or translated into English that mentioned physical/physiotherapy interventions in special schools, both published and unpublished, were included. A qualitative data extraction template was developed, and an inductive approach to thematic data analysis was used for the included articles to see which themes emerged. Results: Three themes emerged after inductive thematic data analysis: 1. Collaboration with educators, parents, and therapists; 2. Family-centred approach; 3. Telehealth. Conclusion: Collaboration is key in delivering therapeutic support to learners with disabilities at special schools. Physiotherapists need to be collaborators at the interprofessional and transprofessional levels. In addition, they need to explore technology to work remotely, especially when learners are physically absent from school.
Keywords: learners with disabilities, special school, physiotherapists, therapeutic support
Procedia PDF Downloads 75

10205 Validity of Universe Structure Conception as Nested Vortexes
Authors: Khaled M. Nabil
Abstract:
This paper introduces the nested vortexes conception of the universe structure and interprets the physical phenomena according to this conception. The paper first reviews recent physics theories, on both microscopic and macroscopic scales, to collect evidence that space is not empty. These theories, however, describe the properties of the space medium without determining its structure. Determining the structure of the space medium is essential to understanding the mechanisms that lead to its properties. Without determining the space medium's structure, many phenomena, such as electric and magnetic fields, gravity, or wave-particle duality, remain uninterpreted. Thus, this paper introduces a conception of the structure of the universe. It assumes that the universe is a medium of ultra-tiny homogeneous particles that are still undiscovered. As in any medium with certain movements, vortexes have occurred, possibly because of a great asymmetric explosion. A vortex condenses the ultra-tiny particles in its center, forming a bigger particle; the bigger particles, in turn, can be trapped in a bigger vortex and condense in its center, forming a much bigger particle, and so on. This conception describes galaxies, stars, and protons as particles at different levels. The existence of the particles' vortexes implies that the constancy-of-the-speed-of-light postulate does not hold. This conception shows that vortex motion dynamics agree with the motion of all the universe's particles at any level. An experiment was carried out to detect the orbiting effect of the aggregated vortexes of the aligned atoms of a permanent magnet. Based on the described particle structure, the gravity force of a particle and the attraction between particles, as well as charge, electric and magnetic fields, and quantum mechanical characteristics, are interpreted. In this way, all the aforementioned physical phenomena are addressed.
Keywords: astrophysics, cosmology, particles' structure model, particles' forces
Procedia PDF Downloads 119

10204 Dynamic Determination of Spare Engine Requirements for Air Fighters Integrating Feedback of Operational Information
Authors: Tae Bo Jeon
Abstract:
The Korean air force is undertaking a large project to replace hundreds of aging fighters such as the F-4, F-5, and KF-16. The task is to develop and produce domestic fighters, each equipped with two complete engines. A large number of engines, however, will be purchased as products from a foreign engine maker. In addition to the fighters themselves, securing the proper number of spare engines plays a significant role in maintaining combat readiness and, because of their high cost, in effectively managing the national defense budget. In this paper, we present a model that dynamically updates spare engine requirements. Currently, the military administration purchases all the fighters, engines, and spare engines at the acquisition stage and runs no additional procurement processes during the 30-40 year life cycle. Assuming that a procurement procedure during the operational stage is established, our model starts from an initial estimate of spare engine requirements based on limited information. The model then performs military missions and repair/maintenance work when necessary. During operation, detailed field information (aircraft repair and test, engine repair, planned maintenance, administration time, the transportation pipeline between base, field, and depot, etc.) should be considered for the actual engine requirements. At the end of each year, the performance measure is recorded; the model proceeds to the next year when the measure exceeds the set threshold, and otherwise additional engines are bought and added to the current system. We repeat the process over the life cycle and compare the results. The proposed model is seen to generate far better results, adding spare engines appropriately and thus avoiding possible undesirable situations. Our model may well be applied to future air force operations.
Keywords: DMSMS, operational availability, METRIC, PRS
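The yearly review-and-purchase loop described above can be sketched as follows. This is an illustrative simplification with hypothetical numbers (fleet size, failure rate, an assumed aging effect, and a Little's-law pipeline estimate), not the paper's METRIC-based model:

```python
# Illustrative sketch of the yearly spare-engine update loop. All the
# numbers (fleet size, failure rate, aging effect, turnaround time) are
# assumptions for demonstration, not the paper's data.
FLEET = 60            # fighters, two complete engines each
LIFE_YEARS = 30       # life cycle considered
FAIL_RATE = 0.15      # engine removals per engine per year, at year 1
REPAIR_YEARS = 0.5    # average depot repair turnaround time
TARGET_AVAIL = 0.95   # operational availability threshold

spares = 8            # initial estimate based on limited information
purchases = []

for year in range(1, LIFE_YEARS + 1):
    engines_installed = FLEET * 2
    fail_rate = FAIL_RATE * (1 + 0.05 * year)    # assumed fleet aging
    # Expected number of engines in the repair pipeline (Little's law):
    in_repair = engines_installed * fail_rate * REPAIR_YEARS
    shortage = max(0.0, in_repair - spares)      # engines short after spares
    availability = 1.0 - shortage / engines_installed
    if availability < TARGET_AVAIL:              # performance below threshold
        spares += 1                              # buy one more spare engine
        purchases.append(year)

print(spares, purchases)
```

In this toy run, the loop tops up the spare pool gradually as the assumed failure rate grows with fleet age, which is the feedback behaviour the model aims at; the paper's version replaces the pipeline estimate with detailed field information.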
Procedia PDF Downloads 172

10203 A New Criterion Using Pose and Shape of Objects for Collision Risk Estimation
Authors: DoHyeung Kim, DaeHee Seo, ByungDoo Kim, ByungGil Lee
Abstract:
As much recent research in the aviation and maritime fields shows, strong doubts have been raised concerning the reliability of collision risk estimation. It has been shown that using only the position and velocity of objects can lead to imprecise results. In this paper, therefore, a new approach to collision risk estimation using the pose and shape of objects is proposed. Simulation results are presented validating the accuracy of the new criterion when adapted to a fuzzy-logic-based collision risk algorithm.
Keywords: collision risk, pose, shape, fuzzy logic
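The abstract does not give the fuzzy rules themselves. A conventional baseline that such criteria typically extend computes risk from DCPA (distance at closest point of approach) and TCPA (time to CPA); a minimal sketch, with assumed membership breakpoints and without the pose/shape refinement, might look like:

```python
# Hedged sketch of a conventional fuzzy collision-risk estimate from DCPA
# and TCPA. The membership breakpoints below are illustrative assumptions;
# the paper's contribution (pose and shape) is not reproduced here.

def ramp_down(x, lo, hi):
    """Membership that is 1 below lo, 0 above hi, linear in between."""
    if x <= lo:
        return 1.0
    if x >= hi:
        return 0.0
    return (hi - x) / (hi - lo)

def collision_risk(dcpa_nm, tcpa_min):
    danger_dist = ramp_down(dcpa_nm, 0.5, 2.0)    # "dangerously close"
    danger_time = ramp_down(tcpa_min, 2.0, 15.0)  # "dangerously soon"
    # Mamdani-style AND (min) for "close AND soon -> high risk", combined
    # by OR (max) with a weaker rule so either factor alone raises some risk.
    return max(min(danger_dist, danger_time),
               0.3 * max(danger_dist, danger_time))

print(collision_risk(0.3, 1.0))   # near head-on encounter -> 1.0
print(collision_risk(3.0, 30.0))  # distant, slow encounter -> 0.0
```

A pose/shape-aware criterion would replace the point-object DCPA with the closest approach between the objects' oriented outlines.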
Procedia PDF Downloads 529

10202 A Study of the Carbon Footprint from a Liquid Silicone Rubber Compounding Facility in Malaysia
Authors: Q. R. Cheah, Y. F. Tan
Abstract:
In modern times, the push for a low carbon footprint entails achieving carbon neutrality as a goal for future generations. One possible step towards carbon footprint reduction is the use of more durable materials with longer lifespans, for example, silicone data cables, which show at least double the lifespan of similar plastic products. By having greater durability and longer lifespans, silicone data cables can reduce the amount of waste produced compared to plastics. Furthermore, silicone products do not produce micro-contamination harmful to the ocean. Every year the electronics industry produces an estimated 5 billion USB Type-C and Lightning data cables for tablets and mobile phone devices. Material usage for the outer jacketing is 6 to 12 grams per meter. Tests show that the lifespan of a silicone data cable can be double that of a plastic one due to greater durability. This can save at least 40,000 tonnes of material a year on the outer jacketing of data cables alone. The facility in this study specialises in the compounding of liquid silicone rubber (LSR) for the extrusion process that produces the jacketing of silicone data cables. This study analyses the carbon emissions of the facility, which is presently capable of producing more than 1,000 tonnes of LSR annually. It uses guidelines from the World Business Council for Sustainable Development (WBCSD) and the World Resources Institute (WRI) to define the boundaries of the scope. The scopes of emissions are defined as: 1. emissions from operations owned or controlled by the reporting company; 2. emissions from the generation of purchased or acquired energy, such as electricity, steam, heating, or cooling, consumed by the reporting company; and 3. all other indirect emissions occurring in the value chain of the reporting company, including both upstream and downstream emissions.
As the study is limited to the compounding facility, the system boundary definition according to the GHG Protocol is cradle-to-gate rather than cradle-to-grave. Malaysia's present electricity generation mix was also used, in which natural gas and coal account for the bulk of emissions. Calculations show that the LSR produced for the silicone data cable with high fire-retardant capability has scope 1 emissions of 0.82 kg CO2/kg, scope 2 emissions of 0.87 kg CO2/kg, and scope 3 emissions of 2.76 kg CO2/kg, for a total product carbon footprint of 4.45 kg CO2/kg. This total product carbon footprint (cradle-to-gate) is comparable, per tonne of material, to the industry average and to plastic materials. Although the per-tonne emission is comparable to that of plastic, the greater durability and longer lifespan allow significantly reduced use of LSR material. Suggestions to reduce the calculated product carbon footprint within the scopes of emissions involve: 1. incorporating the recycling of factory silicone waste into operations; 2. using green renewable energy for external electricity sources; and 3. sourcing eco-friendly raw materials with low GHG emissions.
Keywords: carbon footprint, liquid silicone rubber, silicone data cable, Malaysia facility
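The scope figures above can be checked with simple arithmetic. The per-kilogram values are those reported in the abstract; the 1,000-tonne annual throughput is the stated facility capacity, so the annual total is only an illustrative figure at full capacity:

```python
# Per-kilogram scope emissions as reported in the abstract (kg CO2 / kg LSR)
scopes = {"scope 1": 0.82, "scope 2": 0.87, "scope 3": 2.76}

total_per_kg = sum(scopes.values())
print(total_per_kg)                      # approximately 4.45 kg CO2 per kg

annual_lsr_kg = 1_000 * 1_000            # 1,000 tonnes in kilograms
annual_footprint_t = total_per_kg * annual_lsr_kg / 1_000
print(annual_footprint_t)                # about 4,450 t CO2/year at capacity
```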
Procedia PDF Downloads 97

10201 Effect of Strains and Temperature on the Twinning Behavior of High Purity Titanium Compressed by Split Hopkinson Pressure Bar
Authors: Ping Zhou, Dawu Xiao, Chunli Jiang, Ge Sang
Abstract:
Deformation twinning plays an important role in the mechanical properties of Ti, which has a high specific strength and excellent corrosion resistance. To investigate the twinning behavior of Ti under high-strain-rate compression, the split Hopkinson pressure bar (SHPB) was adopted to deform samples to different strains at room temperature. In addition, twinning behavior at temperatures of 373 K, 573 K and 873 K was also investigated. Cylindrical samples of 99.995% purity were annealed at 1073 K for 1 hour in vacuum before compression. All the deformation twins were identified by electron backscatter diffraction (EBSD) techniques. The mechanical behavior showed three-stage work hardening in the stress-strain curves for samples deformed at 573 K and 873 K, while only two stages were observed for those deformed at room temperature. For samples compressed at room temperature, the predominant twin types are {10-12}<10-11> (E1), {11-21}<11-26> (E2) and {11-21}<11-23> (C1). Secondary and tertiary twinning was observed inside some E1, E2 and C1 twins. Most of the E2 twin boundaries acted as nucleation sites for E1. The twin densities increase remarkably with increasing strain. For samples compressed at higher temperatures, migration of the E1, E2 and C1 twin boundaries was observed. All the twin lamellae shortened with increasing temperature and had nearly disappeared at 873 K, except for some remaining E1 twins. Polygonization of grain boundaries was observed above 573 K. With increasing temperature, the microstructure tended toward a texture with c-axes parallel to the compression direction. Factors affecting dynamic recovery and recrystallization are discussed.
Keywords: deformation twins, EBSD, mechanical behavior, high strain rate, titanium
Procedia PDF Downloads 261

10200 Nonlinear Multivariable Analysis of CO2 Emissions in China
Authors: Hsiao-Tien Pao, Yi-Ying Li, Hsin-Chia Fu
Abstract:
This paper addresses the impacts of energy consumption, economic growth, financial development, and population size on environmental degradation using grey relational analysis (GRA) for China, where foreign direct investment (FDI) inflow is the proxy variable for financial development. The more recent historical data, from the period 2004-2011, are used because very old data may not be suitable for the analysis of rapidly developing countries. The results of the GRA indicate that the linkage effects of energy consumption-emissions and GDP-emissions are ranked first and second, respectively. These reveal that energy consumption and economic growth are strongly correlated with emissions. Higher economic growth requires more energy consumption and increases environmental pollution. Likewise, more efficient energy use requires a higher level of economic development. Therefore, policies that improve energy efficiency and create a low-carbon economy can reduce emissions without hurting economic growth. The FDI-emissions linkage is ranked third, which indicates that China does not apply weak environmental regulations to attract inward FDI. Furthermore, China's government should strengthen environmental policy when attracting inward FDI. The population-emissions linkage effect is ranked fourth, implying that population size does not directly affect CO2 emissions, even though China has the world's largest population and Chinese people are very economical in their use of energy-related products. Overall, energy conservation, efficiency improvement, demand management, and financial development, which aim at curtailing the waste of energy and reducing both energy consumption and emissions without loss of the country's competitiveness, can be adopted by developing economies. GRA is one of the best ways to build a dynamic analysis model from limited data.
Keywords: China, CO₂ emissions, foreign direct investment, grey relational analysis
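For readers unfamiliar with GRA, the grade computation can be sketched as follows. The input series are invented for illustration (the paper's actual 2004-2011 data are not reproduced); the toy energy series is deliberately constructed to track the emissions series, so its grade comes out close to 1:

```python
import numpy as np

# Illustrative grey relational analysis (GRA) with made-up numbers.
# Rows: years. Columns: CO2 emissions (reference series), energy
# consumption, GDP, FDI inflows, population.
data = np.array([
    [5.2, 1.90, 2.2, 0.60, 1.29],
    [5.9, 2.18, 2.6, 0.72, 1.30],
    [6.5, 2.42, 3.1, 0.83, 1.31],
    [7.2, 2.70, 3.7, 0.92, 1.32],
])

# 1. Normalise each series to [0, 1] so that units do not dominate.
norm = (data - data.min(axis=0)) / (data.max(axis=0) - data.min(axis=0))
ref, factors = norm[:, 0], norm[:, 1:]

# 2. Grey relational coefficients, distinguishing coefficient zeta = 0.5.
delta = np.abs(factors - ref[:, None])
zeta = 0.5
coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

# 3. Grey relational grade: the mean coefficient per factor. A higher grade
#    means a stronger linkage with the reference (emissions) series.
grades = coeff.mean(axis=0)
for name, grade in zip(["energy", "GDP", "FDI", "population"], grades):
    print(f"{name}: {grade:.3f}")
```

With the paper's real series, this ranking comes out as energy first, GDP second, FDI third, and population fourth; the toy data above only demonstrate the mechanics.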
Procedia PDF Downloads 403

10199 A Semi-supervised Classification Approach for Trend Following Investment Strategy
Authors: Rodrigo Arnaldo Scarpel
Abstract:
Trend following is a widely accepted investment strategy that adopts a rule-based trading mechanism rather than striving to predict market direction or relying on information gathering to decide when to buy and when to sell a stock. Thus, in trend following, one must respond to the market movements that have recently happened and that are currently happening, rather than to what will happen. The optimum in a trend following strategy is to catch a bull market at its early stage, ride the trend, and liquidate the position at the first evidence of the subsequent bear market. To apply the trend following strategy, one needs to find the trend and identify trade signals. In order to avoid false signals, i.e., to identify short-, mid- and long-term fluctuations and to separate noise from real changes in the trend, most academic works rely on moving averages and other technical analysis indicators, such as the moving average convergence divergence (MACD) and the relative strength index (RSI), to uncover intelligible stock trading rules following the trend following philosophy. Recently, some works have applied machine learning techniques for trading rule discovery. In those works, the process of rule construction is based on evolutionary learning, which aims to adapt the rules to the current environment and searches for the globally optimal rules in the search space. In this work, instead of focusing on machine learning techniques for creating trading rules, a time series trend classification employing a semi-supervised approach is used to identify early both the beginning and the end of upward and downward trends. Such a classification model can be employed to identify trade signals, and the decision-making procedure is that if an up-trend (down-trend) is identified, a buy (sell) signal is generated.
Semi-supervised learning is used for model training when only part of the data is labeled, and semi-supervised classification aims to train a classifier from both the labeled and unlabeled data such that it is better than a supervised classifier trained only on the labeled data. For illustrating the proposed approach, daily trade information was employed, including the open, high, low and closing values and volume from January 1, 2000 to December 31, 2022, of the São Paulo Exchange Composite index (IBOVESPA). Through this time period, consistent changes in price, upwards or downwards, were visually identified for assigning labels, leaving the rest of the days (when there is no consistent change in price) unlabeled. For training the classification model, a pseudo-label semi-supervised learning strategy was used, employing different technical analysis indicators. In this learning strategy, the core idea is to use unlabeled data to generate pseudo-labels for supervised training. For evaluating the achieved results, the annualized return and excess return and the Sortino and Sharpe indicators were considered. Through the evaluated time period, the obtained results were very consistent and can be considered promising for generating the intended trading signals.
Keywords: evolutionary learning, semi-supervised classification, time series data, trading signals generation
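The pseudo-label loop described above can be sketched as follows. The classifier (a nearest-centroid rule on a single feature), the toy "indicator" values, and the confidence threshold are all assumptions for illustration — NOT the paper's actual features or model.

```python
# Minimal sketch of the pseudo-label strategy: train on the few labeled points,
# label the most confident unlabeled points, and retrain. The nearest-centroid
# classifier and the toy indicator values are illustrative assumptions only.

def centroids(points):
    """Per-class mean of 1-D feature values: {label: centroid}."""
    sums, counts = {}, {}
    for x, y in points:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(cents, x):
    """Nearest-centroid label plus a confidence (margin between distances)."""
    dists = sorted((abs(x - c), y) for y, c in cents.items())
    (d0, y0), (d1, _) = dists[0], dists[1]
    return y0, d1 - d0

# Toy indicator values: positive ~ up-trend, negative ~ down-trend.
labeled = [(2.0, "up"), (1.5, "up"), (-1.8, "down"), (-2.2, "down")]
unlabeled = [1.7, -1.9, 0.1, 2.3]

cents = centroids(labeled)
# Pseudo-label only the confident unlabeled points, then retrain.
for x in unlabeled:
    y, margin = predict(cents, x)
    if margin > 1.0:                      # confidence threshold (an assumption)
        labeled.append((x, y))
cents = centroids(labeled)

print(predict(cents, 2.1)[0])  # expect "up" for a strongly positive indicator
```

The ambiguous point near zero (0.1) stays unlabeled, mirroring the days with no consistent price change in the study.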
Procedia PDF Downloads 89
10198 Regenerating Historic Buildings: Policy Gaps
Authors: Joseph Falzon, Margaret Nelson
Abstract:
Background: Policy makers at European Union (EU) and national levels address the re-use of historic buildings, calling for sustainable practices and approaches. The implementation stages of policy are crucial so that EU and national strategic objectives for historic building sustainability are achieved. Governance remains one of the key objectives to ensure resource sustainability. Objective: The aim of the research was to critically examine policies for the regeneration and adaptive re-use of historic buildings at the EU and national levels, and to analyse gaps between EU and national legislation and policies, taking Malta as a case study. The impact of policies on the regeneration and re-use of historic buildings was also studied. Research Design: Six semi-structured interviews with stakeholders, including architects, investors and community representatives, informed the research. All interviews were audio recorded and transcribed in the English language. Thematic analysis utilising Atlas.ti was conducted for the semi-structured interviews. All phases of the study were governed by research ethics. Findings: Findings were grouped into main themes: resources, experiences and governance. Other key issues included the identification of gaps in policies, key lessons and the quality of regeneration. The abandonment of heritage buildings was discussed, the main reasons for which were attributed to governance-related issues, both from the policy-making perspective and from the attitudes of certain officials representing the authorities. The role of the authorities, co-ordination between government entities, fairness in decision making, enforcement and management drew strong criticism from stakeholders, along with time factors due to the lengthy procedures taken by authorities. Even within the same stakeholder groups, an array of different perspectives on the policies was presented. Rather than the policy itself, it is the interpretation of policy that presented certain gaps.
Interpretations depend highly on the stakeholders putting forward certain arguments. All stakeholders acknowledged the value of heritage in regeneration. Conclusion: Active stakeholder involvement is essential in policy framework development. Research-informed policies and the streamlining of policies are necessary. National authorities need to shift from a segmented approach to a holistic approach.
Keywords: adaptive re-use, historic buildings, policy, sustainable
Procedia PDF Downloads 393
10197 Antibacterial Activity of Calendula officinalis Extract Loaded Chitosan Nanoparticles
Authors: Sanjay Singh, Swati Jaiswal, Prashant Mishra
Abstract:
Nanoparticle-based formulations of drug delivery systems have shown their potential in improving the performance of existing drugs and have opened avenues for new therapies. Calendula extract is a low-cost, wide-spectrum bioactive material that has been used for the long-term therapy of various infections. Aim: The aim of this study was to develop Calendula officinalis extract based nanoformulations and to study the antibacterial activity of either Calendula extract loaded chitosan nanoparticles or Calendula extract coated silver nanoparticles for increased bioavailability and a long-term effect. Methods: Chitosan nanoparticles were prepared by the process of ionotropic gelation, based on the interaction between the negative groups of tripolyphosphate (TPP) and the positively charged amino groups of chitosan. The size of the Calendula extract-loaded chitosan particles was determined using dynamic light scattering and scanning electron microscopy. The antibacterial activities of these formulations were determined based on minimum inhibitory concentration and time-kill studies. In addition, silver nanoparticles were synthesized in the presence of Calendula extract and characterized by UV-visible spectroscopy, DLS and XRD. Experiments were conducted in 96-well plates against two Gram-positive bacteria, Staphylococcus aureus and Bacillus subtilis, and two Gram-negative bacteria, Escherichia coli and Pseudomonas aeruginosa. Results: Results demonstrated time-dependent antibacterial activity against the different microbes studied. Both Calendula extract and Calendula extract loaded chitosan nanoparticles showed good antimicrobial activity against both Gram-positive and Gram-negative bacteria. Conclusion: Calendula extract loaded chitosan nanoparticles and Calendula extract coated silver nanoparticles are potential antibacterials for their long-term antibacterial effects.
Keywords: antibacterial, Calendula extract, chitosan nanoparticles, silver nanoparticles
Procedia PDF Downloads 345
10196 Experience of the Formation of Professional Competence of Students of IT-Specialties
Authors: B. I. Zhumagaliyev, L. Sh. Balgabayeva, G. S. Nabiyeva, B. A. Tulegenova, P. Oralkhan, B. S. Kalenova, S. S. Akhmetov
Abstract:
The article describes an approach to building research competence in Bachelor's and Master's students, which is now an important attribute of a modern specialist in the field of engineering. It provides an example of methodical teaching with a research aspect, including the formulation of the problem, the method of conducting experiments, and the analysis of the results. Implementation of these methods allows students to consolidate their knowledge and skills better while at the same time gaining research experience. Conveying this knowledge requires some training in both the subject area and teaching methods.
Keywords: professional competence, model of IT-specialties, teaching methods, educational technology, decision making
Procedia PDF Downloads 437
10195 Co-Operation in Hungarian Agriculture
Authors: Eszter Hamza
Abstract:
The competitiveness of economic operators is based on interoperability, which is relatively low in Hungary. The development of co-operation is a high priority in the Common Agricultural Policy 2014-2020. The aim of the paper is to assess co-operations in Hungarian agriculture and to estimate the economic outputs and benefits of co-operations, based on statistical data processing and the literature. A further objective is to explore the potential of agricultural co-operation with the help of interviews and a questionnaire survey. The research seeks to answer questions as to what fundamental factors play a role in the development of co-operation, what the motivations of the actors are, and what the key success factors and pitfalls are. The results were analysed using econometric methods. In Hungarian agriculture we can find several forms of co-operation: cooperatives, producer groups (PG) and producer organizations (PO), machinery cooperatives, integrator companies, product boards and interbranch organisations. Despite the many forms of agricultural co-operation, their economic weight is significantly lower in Hungary than in western European countries. Considering their agricultural importance, the integrator companies carry the most weight among the co-operation forms. Hungarian farmers are linked to co-operations or organizations mostly in relation to procurement and sales. Less than 30 percent of the surveyed farmers are members of a producer organization or cooperative. The trust level is low among farmers. The main obstacles to the development of formalized co-operation are producers' risk aversion and the black economy in agriculture. Producers often prefer informal co-operation to long-term contractual relationships. The Hungarian agricultural co-operations are characterized not by dynamic development but by slow qualitative change.
For the future, one breakout point could be the association of producer groups and organizations, which, in addition to the benefits of market concentration, can act more effectively in the dissemination of knowledge, the operation of advisory networks and innovation.
Keywords: agriculture, co-operation, producer organisation, trust level
Procedia PDF Downloads 394
10194 Hidden Oscillations in the Mathematical Model of the Optical Binary Phase Shift Keying (BPSK) Costas Loop
Authors: N. V. Kuznetsov, O. A. Kuznetsova, G. A. Leonov, M. V. Yuldashev, R. V. Yuldashev
Abstract:
Nonlinear analysis of phase-locked loop (PLL)-based circuits is a challenging task; thus, simulation is widely used for their study. In this work, we consider a mathematical model of the optical Costas loop and demonstrate the limitations of the simulation approach related to the existence of so-called hidden oscillations in the phase space of the model.
Keywords: optical Costas loop, mathematical model, simulation, hidden oscillation
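To make the simulation issue concrete, the sketch below integrates a first-order baseband PLL phase model — a deliberately simplified stand-in, NOT the paper's optical Costas loop model, and with assumed gain, detuning and step size. It illustrates the basic point that the qualitative outcome of a simulation run depends on the chosen parameters, which is why parameter sweeps alone can miss regimes such as hidden oscillations.

```python
# Illustrative sketch only: Euler integration of a first-order baseband PLL
# phase model, d(theta)/dt = w - K*sin(theta). A lock exists only when
# |w| <= K; outside that range the phase error slips cycles indefinitely.

import math

def simulate_phase_error(w, K, theta0=0.0, dt=1e-3, steps=200_000):
    """Integrate the phase-error equation; return its value at the final step."""
    theta = theta0
    for _ in range(steps):
        theta += dt * (w - K * math.sin(theta))
    return theta

# Detuning well inside the lock range: phase error settles near asin(w/K).
locked = simulate_phase_error(w=0.5, K=1.0)
print(abs(locked - math.asin(0.5)) < 1e-3)   # expect True: loop locks

# Detuning outside the lock range: no equilibrium, the phase error keeps
# growing (cycle slipping) -- a regime a few "typical" runs could miss.
unlocked = simulate_phase_error(w=1.5, K=1.0)
print(unlocked > 2 * math.pi)                # expect True: loop never locks
```

The full Costas loop model studied in the paper is higher-order and can additionally host hidden attractors that coexist with the locked state, which is precisely what plain simulation from standard initial conditions tends to overlook.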
Procedia PDF Downloads 440
10193 Exploring the Role of Media Activity Theory as a Conceptual Basis for Advancing Journalism Education: A Comprehensive Analysis of Its Impact on News Production and Consumption in the Digital Age
Authors: Shohnaza Uzokova Beknazarovna
Abstract:
This research study provides a comprehensive exploration of the Theory of Media Activity and its relevance as a conceptual framework for journalism education. The author offers a thorough review of existing literature on media activity theory, emphasizing its potential to enhance the understanding of the evolving media landscape and its implications for journalism practice. Through a combination of theoretical analysis and practical examples, the paper elucidates the ways in which the Theory of Media Activity can inform and enrich journalism education, particularly in relation to the interactive and participatory nature of contemporary media. The author presents a compelling argument for the integration of media activity theory into journalism curricula, emphasizing its capacity to equip students with a nuanced understanding of the reciprocal relationship between media producers and consumers. Furthermore, the paper discusses the implications of technological advancements on media production and consumption, highlighting the need for journalism educators to prepare students to navigate and contribute to the future of journalism in a rapidly changing media environment. Overall, this research paper offers valuable insights into the potential benefits of embracing the Theory of Media Activity as a foundational framework for journalism education. Its thorough analysis and practical implications make it a valuable resource for educators, researchers, and practitioners seeking to enhance journalism pedagogy in response to the dynamic nature of contemporary media.
Keywords: theory of media activity, journalism education, media landscape, media production, media consumption, interactive media, participatory media, technological advancements, media producers, media consumers, journalism practice, contemporary media environment, journalism pedagogy, media theory, media studies
Procedia PDF Downloads 47
10192 Comparative Analysis of Reinforcement Learning Algorithms for Autonomous Driving
Authors: Migena Mana, Ahmed Khalid Syed, Abdul Malik, Nikhil Cherian
Abstract:
In recent years, advancements in deep learning have enabled researchers to tackle the problem of self-driving cars. Car companies use huge datasets to train their deep learning models to make autonomous cars a reality. However, this approach has a certain drawback: the state space of possible actions for a car is so huge that there cannot be a dataset for every possible road scenario. To overcome this problem, the concept of reinforcement learning (RL) is investigated in this research. Since the problem of autonomous driving can be modeled in a simulation, it lends itself naturally to the domain of reinforcement learning. The advantage of this approach is that we can model different and complex road scenarios in a simulation without having to deploy in the real world. The autonomous agent can learn to drive by finding the optimal policy, and this learned model can then be easily deployed in a real-world setting. In this project, we focus on three RL algorithms: Q-learning, Deep Deterministic Policy Gradient (DDPG), and Proximal Policy Optimization (PPO). To model the environment, we have used TORCS (The Open Racing Car Simulator), which provides a strong foundation to test our models. The inputs to the algorithms are the sensor data provided by the simulator, such as velocity, distance from the side pavement, etc. The outcome of this research project is a comparative analysis of these algorithms. Based on the comparison, the PPO algorithm gives the best results. When using the PPO algorithm, the reward is greater, and the acceleration, steering angle and braking are more stable compared to the other algorithms, which means that the agent learns to drive in a better and more efficient way in this case. Additionally, we have compiled a dataset taken from the training of the agent with the DDPG and PPO algorithms. It contains all the steps of the agent during one full training run in the form: (all input values, acceleration, steering angle, brake, loss, reward).
This study can serve as a base for further complex road scenarios. Furthermore, it can be extended into the field of computer vision, using images to find the best policy.
Keywords: autonomous driving, DDPG (deep deterministic policy gradient), PPO (proximal policy optimization), reinforcement learning
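The first of the three compared algorithms, tabular Q-learning, can be sketched self-containedly. The environment here is a made-up 5-cell corridor, NOT TORCS, and all hyperparameters are illustrative assumptions; DDPG and PPO require neural network function approximators and are beyond a short sketch.

```python
# Minimal sketch of tabular Q-learning on a toy corridor environment: the
# agent starts at cell 0 and is rewarded for reaching the rightmost cell.

import random

random.seed(0)

N_STATES, ACTIONS = 5, (1, -1)           # move right / move left
GOAL = N_STATES - 1
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1    # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Toy dynamics: reward 1.0 only on reaching the goal cell."""
    nxt = min(max(state + action, 0), GOAL)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

for _ in range(500):                      # training episodes
    s, done = 0, False
    while not done:
        # Epsilon-greedy action selection.
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r, done = step(s, a)
        # Q-learning update: bootstrap from the best next-state action value.
        best_next = max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

greedy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)]
print(greedy)  # expect the learned policy to always move right: [1, 1, 1, 1]
```

After training, the greedy policy heads straight for the goal, and the Q-values decay geometrically (by γ) with distance from the reward, which is the tabular analogue of what the deep RL agents learn from the simulator's sensor inputs.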
Procedia PDF Downloads 147
10191 A Framework Based Blockchain for the Development of a Social Economy Platform
Authors: Hasna Elalaoui Elabdallaoui, Abdelaziz Elfazziki, Mohamed Sadgal
Abstract:
Outlines: The social economy is a moral approach to solidarity applied to the development of projects. To reconcile economic activity and social equity, crowdfunding is an alternative means of financing social projects. Several collaborative blockchain platforms exist. Blockchain eliminates the need for a central authority or an unaccountable middleman. Also, the costs of a successful crowdfunding campaign are reduced, since there is no commission to be paid to an intermediary. It improves the transparency of record keeping and avoids delegating authority to officials who may be prone to corruption. Objectives: The objectives are: to define a software infrastructure for the participatory financing of projects within a social and solidarity economy, allowing transparent, secure and fair management; and to provide a financial mechanism that improves financial inclusion. Methodology: The proposed methodology is: a literature review of crowdfunding platforms, a literature review of financing mechanisms, requirements analysis and project definition, a business plan, the platform development process and implementation technology, and testing an MVP. Contributions: The solution consists of proposing a new approach to crowdfunding based on Islamic financing, namely the principle of Musharaka, a financial innovation that integrates ethics and the social dimension into contemporary banking practices. Conclusion: Crowdfunding platforms need to secure projects and allow only quality projects, but also to offer a wide range of options to funders. Thus, a framework based on blockchain technology and Islamic financing is proposed to manage this arbitration between the quality and quantity of options. The proposed financing system, "Musharaka", is a mode of financing that prohibits interest and uncertainty.
The implementation is offered on the secure Ethereum platform: investors sign and initiate transactions for contributions using their digital-signature wallets, managed by a cryptographic algorithm and smart contracts. Our proposal is illustrated by a crop irrigation project in the Marrakech region.
Keywords: social economy, Musharaka, blockchain, smart contract, crowdfunding
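The core Musharaka rule — profit and loss shared pro rata to each partner's contribution, with no fixed interest — can be sketched in plain Python. This is a hedged illustration of the financing principle only, NOT the paper's actual Ethereum smart contract; the campaign name and amounts are invented.

```python
# Hedged sketch of the Musharaka profit-and-loss-sharing idea: each partner's
# return rises and falls with the project itself, which is what distinguishes
# this mode of financing from an interest-bearing loan.

def musharaka_shares(contributions, outcome):
    """Split a project outcome (profit or loss) pro rata to contributions."""
    total = sum(contributions.values())
    return {name: round(outcome * amount / total, 2)
            for name, amount in contributions.items()}

# Hypothetical crop-irrigation campaign with three funders (amounts in tokens).
contributions = {"funder_a": 5000, "funder_b": 3000, "funder_c": 2000}

profit_split = musharaka_shares(contributions, outcome=1000)   # good harvest
loss_split = musharaka_shares(contributions, outcome=-400)     # poor harvest

print(profit_split)  # {'funder_a': 500.0, 'funder_b': 300.0, 'funder_c': 200.0}
print(loss_split)    # {'funder_a': -200.0, 'funder_b': -120.0, 'funder_c': -80.0}
```

In the proposed platform this distribution logic would live in a smart contract, so the pro-rata split is enforced transparently on-chain rather than by an intermediary.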
Procedia PDF Downloads 77
10190 Damage Identification in Reinforced Concrete Beams Using Modal Parameters and Their Formulation
Authors: Ali Al-Ghalib, Fouad Mohammad
Abstract:
The identification of damage in reinforced concrete structures subjected to incremental cracking using vibration data is recognized as a challenging topic in the published and heavily cited literature. Therefore, this paper attempts to shed light on the extent of dynamic methods when applied to reinforced concrete beams with various scenarios of defects. For this purpose, three different reinforced concrete beams are tested through the course of the study. The three beams are loaded statically to failure in incremental successive load cycles and later rehabilitated. After each static load stage, the beams are tested under free-free support conditions using experimental modal analysis. The beams were all of the same length and cross-sectional dimensions (2.0 × 0.14 × 0.09) m, but they differed in concrete compressive strength and the type of damage presented. The experimental modal parameters, as damage identification parameters, proved computationally expensive and time consuming and to require substantial inputs and considerable expertise. Nonetheless, they proved plausible for the condition monitoring of the current case study as well as for tracking structural changes in the course of progressive loads. It was accentuated that a satisfactory localization and quantification of structural changes (Levels 2 and 3 of the damage identification problem) can only be achieved reasonably by considering the frequencies and mode shapes of a system in a proper analytical model. A convenient post-analysis process of the various datasets of vibration measurements for the three beams is conducted in order to extract, check and correlate the basic modal parameters, namely natural frequency, modal damping and mode shapes.
The results of the extracted modal parameters and their combination are utilized and discussed in this research as quantification parameters.
Keywords: experimental modal analysis, damage identification, structural health monitoring, reinforced concrete beam
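The most basic of the extracted modal parameters, the natural frequency, can be illustrated with a short sketch. The free-decay signal below is synthetic (a single damped sine with assumed frequency, damping and sampling rate), NOT the paper's beam data; a real modal test would fit measured accelerations with FFT-based curve fitting.

```python
# Illustrative sketch: estimating a natural frequency from a free-vibration
# record by counting upward zero crossings of a synthetic damped sine.

import math

FS = 1000.0                 # sampling rate, Hz (assumed)
F_NATURAL = 12.0            # "true" natural frequency of the synthetic signal
ZETA = 0.02                 # damping ratio (assumed small)

# Synthetic free-decay response: exponential envelope times a 12 Hz sine.
w = 2 * math.pi * F_NATURAL
signal = [math.exp(-ZETA * w * n / FS) * math.sin(w * n / FS)
          for n in range(2000)]           # 2 s record

# Indices of the sample just after each upward zero crossing; the spacing of
# these crossings gives the oscillation frequency.
idx = [i + 1 for i, (a, b) in enumerate(zip(signal, signal[1:])) if a < 0 <= b]
f_est = (len(idx) - 1) * FS / (idx[-1] - idx[0])

print(round(f_est, 1))      # close to 12.0 Hz for this lightly damped signal
```

In the paper's setting, a drop in this estimated frequency between load stages is the global indicator of stiffness loss; localization (Level 2) then additionally requires the mode shapes.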
Procedia PDF Downloads 263
10189 Indoor Air Pollution and Reduced Lung Function in Biomass Exposed Women: A Cross Sectional Study in Pune District, India
Authors: Rasmila Kawan, Sanjay Juvekar, Sandeep Salvi, Gufran Beig, Rainer Sauerborn
Abstract:
Background: Indoor air pollution, especially from the use of biomass fuels, remains a potentially large global health threat. The inefficient use of such fuels in poorly ventilated conditions results in high levels of indoor air pollution, most seriously affecting women and young children. Objectives: The main aim of this study was to measure and compare the lung function of women exposed to biomass fuels and to LPG and to relate it to the indoor emissions, measured using a structured questionnaire, a spirometer and filter-based low-volume samplers, respectively. Methodology: This cross-sectional comparative study was conducted among women (aged > 18 years) living in rural villages of Pune district who had not been diagnosed with chronic pulmonary diseases or any other respiratory diseases and who had used biomass fuels or LPG for cooking for a minimum period of 5 years. Data collection was done from April to June 2017, in the dry season. Spirometry was performed using the portable, battery-operated ultrasound Easy One spirometer (Spirobank II, NDD Medical Technologies, Zurich, Switzerland) to determine lung function. The primary outcome variable was forced expiratory volume in 1 second (FEV1). The secondary outcome was chronic obstructive pulmonary disease (post-bronchodilator FEV1 / forced vital capacity (FVC) < 70%), as defined by the Global Initiative for Chronic Obstructive Lung Disease. Potential confounders such as age, height, weight, smoking history, occupation and educational status were considered. Results: Preliminary results showed that the women using biomass fuels had comparatively reduced lung function (FEV1/FVC = 85% ± 5.13) relative to the LPG users (FEV1/FVC = 86.40% ± 5.32). The mean PM 2.5 mass concentration was 274.34 ± 314.90 in the biomass users' kitchens and 85.04 ± 97.82 in the LPG users' kitchens.
The black carbon amount was found to be higher for the biomass users (46.71 ± 46.59 µg/m³) than for the LPG users (11.08 ± 22.97 µg/m³). Most of the houses used a separate kitchen. Almost all the houses that used a clean fuel like LPG still had a minimum amount of particulate matter 2.5, which might be due to background pollution and cross ventilation from the houses using biomass fuels. Conclusions: There is an urgent need to adopt various strategies to improve indoor air quality. There is a lack of knowledge of the current state of climate-active pollutant emissions from different stove designs, and the major deficiencies that need to be tackled remain to be identified. Moreover, the advancement of research tools, measuring techniques in particular, is critical for researchers in developing countries to improve their capability to study these emissions and to address the growing climate change and public health concerns.
Keywords: black carbon, biomass fuels, indoor air pollution, lung function, particulate matter
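The study's secondary-outcome rule — COPD flagged when the post-bronchodilator FEV1/FVC ratio falls below 70%, the fixed-ratio criterion quoted in the abstract — reduces to a one-line check. The volumes below are invented example values, not study data.

```python
# Small sketch of the fixed-ratio COPD criterion used as the secondary outcome.

def fev1_fvc_ratio(fev1_l, fvc_l):
    """FEV1/FVC expressed as a percentage."""
    return 100.0 * fev1_l / fvc_l

def copd_by_fixed_ratio(post_bd_fev1_l, post_bd_fvc_l, cutoff=70.0):
    """True if the post-bronchodilator ratio is below the 70% cutoff."""
    return fev1_fvc_ratio(post_bd_fev1_l, post_bd_fvc_l) < cutoff

# Hypothetical post-bronchodilator volumes (litres) for two participants.
print(round(fev1_fvc_ratio(2.6, 3.1), 1))   # 83.9 -> no airflow obstruction
print(copd_by_fixed_ratio(2.6, 3.1))        # False
print(copd_by_fixed_ratio(1.8, 3.0))        # 60.0% ratio -> True
```

Note that the group means reported above (85% and 86.4%) both sit well above the 70% cutoff, so the observed biomass-related reduction is a shift in lung function rather than widespread COPD.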
Procedia PDF Downloads 174
10188 Ranking Theory-The Paradigm Shift in Statistical Approach to the Issue of Ranking in a Sports League
Authors: E. Gouya Bozorg
Abstract:
The ranking of sports teams, in particular soccer teams, is of primary importance in professional sports. However, it is still based on classical statistics and on models outside the area of mathematics proper. Rigorous mathematics, and then statistics, despite the expectations held of them, have not been able to engage effectively with the issue of ranking, a shortcoming that requires serious examination. The purpose of this study is to change the approach so as to get closer to mathematics proper for use in ranking. We recommend using theoretical mathematics as a good option, because it can hermeneutically obtain the theoretical concepts and criteria needed for ranking from the everyday language of a league. We have proposed a framework that puts the issue of ranking into a new space, which we have applied to soccer as a case study. This is an experimental and theoretical study of the issue of ranking in a professional soccer league, based on theoretical mathematics followed by theoretical statistics. First, we give the theoretical definition of the constant number Є = 1.33, the 'golden number' of a soccer league. Then, we define the 'efficiency of a team' by this number and the formula μ = (Pts / (k·Є)) − 1, in which Pts is the number of points obtained by a team in k games played. Moreover, the k·Є index has been used to show the theoretical median line in the league table and to compare top teams and bottom teams. A theoretical coefficient σ = 1 / (1 + (Ptx / Ptxn)) has also been defined: in every match between teams x and xn, with respect to the teams' abilities and their points Ptx and Ptxn, it gives a performance point, resulting in a special ranking for the league. It has been particularly useful in evaluating the performance of weaker teams. The current theory has been examined against the statistical data of 4 major European leagues during the period 1998-2014.
The results of this study showed that the issue of ranking depends on appropriate theoretical indicators of a league. These indicators allowed us to find different forms of ranking of the teams in a league, including the 'special table' of a league. Furthermore, on this basis the issue of a team's record has been revised and amended. In addition, the theory of ranking introduced here can be used to compare and classify different leagues and tournaments. Experimental results obtained from the archival statistics of major professional leagues in the world over the past two decades have confirmed the theory.
Keywords: efficiency of a team, ranking, special table, theoretical mathematics
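The two formulas quoted in the abstract can be transcribed directly. The golden number Є = 1.33 and both formulas come from the text; the team records and fixtures below are invented for illustration, and the reading of σ as the coefficient awarded to team x is our assumption.

```python
# Direct transcription of the abstract's formulas with made-up team records.

E = 1.33  # the abstract's league constant ("golden number")

def efficiency(pts, k):
    """Team efficiency: mu = (Pts / (k * E)) - 1."""
    return pts / (k * E) - 1

def performance_coefficient(pt_x, pt_xn):
    """Match coefficient: sigma = 1 / (1 + Pt_x / Pt_xn)."""
    return 1 / (1 + pt_x / pt_xn)

# A team with 40 points from 30 games sits just above the theoretical
# median line k*E = 30 * 1.33 = 39.9 points, so mu is slightly positive.
mu = efficiency(pts=40, k=30)
print(round(mu, 4))          # ~0.0025

# Under our reading, a weaker team (20 pts) facing a stronger one (60 pts)
# receives the larger coefficient -- consistent with the abstract's remark
# that sigma is especially useful for evaluating weaker teams.
print(round(performance_coefficient(20, 60), 3))  # 0.75
print(round(performance_coefficient(60, 20), 3))  # 0.25
```

Teams with μ > 0 lie above the theoretical median line of the table and teams with μ < 0 below it, which is how the index separates top from bottom teams.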
Procedia PDF Downloads 418
10187 Clinical and Analytical Performance of Glial Fibrillary Acidic Protein and Ubiquitin C-Terminal Hydrolase L1 Biomarkers for Traumatic Brain Injury in the Alinity Traumatic Brain Injury Test
Authors: Raj Chandran, Saul Datwyler, Jaime Marino, Daniel West, Karla Grasso, Adam Buss, Hina Syed, Zina Al Sahouri, Jennifer Yen, Krista Caudle, Beth McQuiston
Abstract:
The Alinity i TBI test is Therapeutic Goods Administration (TGA) registered and is a panel of in vitro diagnostic chemiluminescent microparticle immunoassays for the measurement of glial fibrillary acidic protein (GFAP) and ubiquitin C-terminal hydrolase L1 (UCH-L1) in plasma and serum. The performance of the Alinity i TBI test was evaluated in a multi-center pivotal study to demonstrate its capability to assist in determining the need for a CT scan of the head in adult subjects (age 18+) presenting with suspected mild traumatic brain injury (TBI) with a Glasgow Coma Scale score of 13 to 15. TBI has been recognized as an important cause of death and disability and is a growing public health problem; an estimated 69 million people globally experience a TBI annually. Blood-based biomarkers such as GFAP and UCH-L1 have shown utility in predicting acute traumatic intracranial injury on head CT scans after TBI. A pivotal study using prospectively collected archived (frozen) plasma specimens was conducted to establish the clinical performance of the TBI test on the Alinity i system. The specimens were originally collected in a prospective, multi-center clinical study, and testing of the specimens was performed at three clinical sites in the United States. Performance characteristics such as detection limits, imprecision, linearity, measuring interval, expected values and interferences were established following Clinical and Laboratory Standards Institute (CLSI) guidance. Of the 1899 mild TBI subjects, 120 had positive head CT scan results; 116 of the 120 specimens had a positive TBI interpretation (sensitivity 96.7%; 95% CI: 91.7%, 98.7%). Of the 1779 subjects with negative CT scan results, 713 had a negative TBI interpretation (specificity 40.1%; 95% CI: 37.8%, 42.4%). The negative predictive value (NPV) of the test was 99.4% (713/717; 95% CI: 98.6%, 99.8%).
The analytical measuring interval (AMI) extends from the lower limit of quantitation (LoQ) to the upper LoQ and is determined by the range that demonstrates acceptable performance for linearity, imprecision and bias. The AMI is 6.1 to 42,000 pg/mL for GFAP and 26.3 to 25,000 pg/mL for UCH-L1. Overall within-laboratory imprecision (20-day) ranged from 3.7 to 5.9% CV for GFAP and 3.0 to 6.0% CV for UCH-L1 when including lot and instrument variances. The clinical performance results of the Alinity i TBI test demonstrated high sensitivity and a high NPV, supporting its utility in assisting the determination of the need for a head CT scan in subjects presenting to the emergency department with suspected mild TBI. The GFAP and UCH-L1 assays show robust analytical performance across a broad concentration range and may serve as a valuable tool to help evaluate TBI patients across the spectrum of mild to severe injury.
Keywords: biomarker, diagnostic, neurology, TBI
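The quoted diagnostic metrics follow from the reported 2×2 counts (120 CT-positive subjects of whom 116 tested positive; 1779 CT-negative subjects of whom 713 tested negative). The short sketch below recomputes them with the standard definitions, confirming the abstract's figures.

```python
# Recomputing the abstract's diagnostic metrics from its reported counts.

from fractions import Fraction

tp = 116            # test positive, CT positive
fn = 120 - 116      # test negative, CT positive
tn = 713            # test negative, CT negative
fp = 1779 - 713     # test positive, CT negative

sensitivity = Fraction(tp, tp + fn)       # TP / (TP + FN)
specificity = Fraction(tn, tn + fp)       # TN / (TN + FP)
npv = Fraction(tn, tn + fn)               # TN / (TN + FN)

print(round(float(sensitivity) * 100, 1))  # 96.7, as reported
print(round(float(specificity) * 100, 1))  # 40.1, as reported
print(round(float(npv) * 100, 1))          # 99.4, as reported
```

The high-sensitivity, low-specificity profile is by design for a rule-out test: the clinically important figure is the NPV, since a negative result is used to argue against the need for a CT scan.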
Procedia PDF Downloads 66
10186 Model Averaging for Poisson Regression
Authors: Zhou Jianhong
Abstract:
Model averaging is a desirable approach to dealing with model uncertainty, which, however, has rarely been explored for Poisson regression. In this paper, we propose a model averaging procedure based on an unbiased estimator of the expected Kullback-Leibler distance for Poisson regression. A simulation study shows that the proposed model averaging estimator outperforms some other commonly used model selection and model averaging estimators in some situations. Our proposed methods are further applied to a real data example, and the advantage of this method is demonstrated again.
Keywords: model averaging, Poisson regression, Kullback-Leibler distance, statistics
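The general idea can be illustrated with a self-contained sketch. The paper derives its weights from an unbiased estimator of the expected Kullback-Leibler distance; as a stand-in, the sketch below uses Akaike weights (AIC is itself a KL-motivated criterion), with two candidate Poisson models whose MLEs are closed-form sample means. The candidate models and the counts are invented for illustration and are NOT the paper's procedure.

```python
# Illustrative sketch of model averaging for Poisson counts using AIC weights
# as a KL-motivated stand-in for the paper's unbiased-KL criterion.

import math

def poisson_loglik(counts, rate):
    """Sum of Poisson log pmf values at a fixed rate."""
    return sum(y * math.log(rate) - rate - math.lgamma(y + 1) for y in counts)

group_a = [2, 3, 2, 4, 3]
group_b = [7, 8, 6, 9, 7]
all_counts = group_a + group_b

# Candidate 1: one common rate (MLE = overall mean), 1 parameter.
rate_all = sum(all_counts) / len(all_counts)
aic1 = -2 * poisson_loglik(all_counts, rate_all) + 2 * 1

# Candidate 2: separate rate per group (MLE = group means), 2 parameters.
rate_a = sum(group_a) / len(group_a)
rate_b = sum(group_b) / len(group_b)
ll2 = poisson_loglik(group_a, rate_a) + poisson_loglik(group_b, rate_b)
aic2 = -2 * ll2 + 2 * 2

# Akaike weights, then a model-averaged prediction for a group-a observation.
w1, w2 = math.exp(-0.5 * aic1), math.exp(-0.5 * aic2)
w1, w2 = w1 / (w1 + w2), w2 / (w1 + w2)
averaged_rate_a = w1 * rate_all + w2 * rate_a

print(round(w2, 3))              # near 1: the two-rate model clearly fits better
print(round(averaged_rate_a, 2)) # close to the group-a mean of 2.8
```

Rather than discarding the weaker model outright (model selection), the averaged prediction lets every candidate contribute in proportion to its estimated closeness to the truth, which is the behaviour the paper's KL-based weights refine.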
Procedia PDF Downloads 520