Search results for: performance management in the public sector
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 26260

1450 Event Data Representation Based on Time Stamp for Pedestrian Detection

Authors: Yuta Nakano, Kozo Kajiwara, Atsushi Hori, Takeshi Fujita

Abstract:

In association with the wave of electric vehicles (EVs), low-energy-consumption systems have become more and more important. One of the key technologies for realizing low energy consumption is the dynamic vision sensor (DVS), also known as an event sensor or neuromorphic vision sensor. This sensor has several notable features, such as high temporal resolution (up to 1 Mframe/s) and a high dynamic range (120 dB). However, the property that contributes most to low energy consumption is its sparsity: the sensor only captures pixels whose intensity changes, so no signal is produced in areas without intensity change. It is therefore more energy efficient than conventional sensors such as RGB cameras, because redundant data are removed at the source. On the other hand, the data are difficult to handle because the format differs completely from an RGB image: the acquired signals are asynchronous and sparse, and each signal consists of an x-y coordinate, a polarity (two values, +1 or -1), and a timestamp; it carries no intensity such as RGB values. Existing algorithms therefore cannot be applied directly, and new processing algorithms must be designed for DVS data. To overcome these format differences, most prior art builds frame data and feeds it to deep learning models such as Convolutional Neural Networks (CNNs) for object detection and recognition. Even so, good performance remains difficult to achieve because intensity information is lacking. Polarity is often used in place of intensity, but polarity information is clearly not rich enough. In this context, we propose using the timestamp information as the data representation fed to the deep learning model.
Concretely, we first build frame data divided into fixed time periods and then assign an intensity value according to the timestamp within each frame; for example, a high value is given to a recent signal. We expected this representation to capture features, especially of moving objects, because the timestamps encode movement direction and speed. Using the proposed method, we built our own dataset with a DVS mounted on a parked car, to develop a surveillance application that detects persons around the car. We consider the DVS an ideal sensor for surveillance because it can run for a long time with low energy consumption in static scenes. For comparison, we reproduced a state-of-the-art method as a benchmark, which builds frames in the same way as ours but feeds polarity information to the CNN. We then measured the object detection performance of the benchmark and our method on the same dataset. Our method achieved an F1 score up to 7 points higher than the benchmark.
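As a rough illustration of the representation described above, the sketch below builds one frame from a list of (x, y, polarity, timestamp) events, mapping recency linearly onto intensity. The frame size, window length, and linear recency mapping are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def events_to_timestamp_frame(events, frame_shape, t_start, t_end):
    """Build one frame from DVS events, encoding recency as intensity.

    events: iterable of (x, y, polarity, timestamp) tuples; timestamps in
    the half-open window [t_start, t_end). Newer events receive values
    closer to 1.0, so the frame reflects movement direction and speed.
    """
    frame = np.zeros(frame_shape, dtype=np.float32)
    duration = float(t_end - t_start)
    for x, y, _pol, t in events:
        if t_start <= t < t_end:
            # Linear recency: the most recent event gets the highest value.
            frame[int(y), int(x)] = (t - t_start) / duration
    return frame

# Toy example: three events; the latest at (x=1, y=2) overwrites the first.
events = [(1, 2, +1, 0.0), (3, 0, -1, 0.5), (1, 2, +1, 0.9)]
frame = events_to_timestamp_frame(events, (4, 4), t_start=0.0, t_end=1.0)
```

A stack of such frames, one per time period, could then be fed to a CNN in place of polarity frames.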

Keywords: event camera, dynamic vision sensor, deep learning, data representation, object recognition, low energy consumption

Procedia PDF Downloads 90
1449 Enhancing Air Quality: Investigating Filter Lifespan and Byproducts in Air Purification Solutions

Authors: Freja Rydahl Rasmussen, Naja Villadsen, Stig Koust

Abstract:

Air purifiers are now widely used in households, schools, institutions, and hospitals, as they tackle the pressing issue of indoor air pollution. With their ability to enhance indoor air quality and create healthier environments, air purifiers are particularly vital when ventilation options are limited. These devices incorporate a diverse array of technologies, including HEPA filters, activated carbon filters, UV-C light, photocatalytic oxidation, and ionizers, each designed to combat specific pollutants and improve air quality within enclosed spaces. However, the safety of air purifiers has not been investigated thoroughly, and many questions still arise when applying them. Certain air purification technologies, such as UV-C light or ionization, can unintentionally generate undesirable byproducts that negatively affect indoor air quality and health. It is well established that these technologies can inadvertently generate nanoparticles or convert common gaseous compounds into harmful ones, thus exacerbating air pollution. The formation of byproducts can vary across products, however, necessitating further investigation. There is particular concern about the formation of the carcinogenic substance formaldehyde from common gases like acetone. Many air purifiers use mechanical filtration to remove particles, dust, and pollen from the air. Filters need to be replaced periodically for optimal efficiency, resulting in an additional cost for end-users. Currently, there are no guidelines for filter lifespan, and replacement recommendations rely solely on manufacturers. A market screening revealed that manufacturers' recommended lifespans vary greatly (from 1 month to 10 years), and general recommendations are needed to guide consumers. Activated carbon filters are used to adsorb various chemicals that can pose health risks or cause unwanted odors.
These filters have a certain capacity before becoming saturated. If they are not replaced in a timely manner, the adsorbed substances are likely to be released from the filter through off-gassing, or adsorption efficiency is lost. The goal of this study is to investigate filter lifespan as well as the potentially harmful effects of air purifiers. Understanding the lifespan of filters used in air purifiers and the potential formation of harmful byproducts is essential for ensuring optimal performance, guiding consumers in their purchasing decisions, and establishing industry standards for safer and more effective air purification solutions. At this time, a selection of air purifiers has been chosen and test methods have been established. The tests will be conducted over the following three months, and the results will be presented thereafter.

Keywords: air purifiers, activated carbon filters, byproducts, clean air, indoor air quality

Procedia PDF Downloads 65
1448 The Role of Parental Stress and Emotion Regulation in Responding to Children’s Expression of Negative Emotion

Authors: Lizel Bertie, Kim Johnston

Abstract:

Parental emotion regulation plays a central role in the socialisation of emotion, especially when teaching young children to cope with negative emotions. Despite evidence showing that non-supportive parental responses to children’s expression of negative emotions have implications for the social and emotional development of the child, few studies have investigated risk factors that impact parental emotion socialisation processes. The current study aimed to explore the extent to which parental stress contributes both to difficulties in parental emotion regulation and to non-supportive parental responses to children’s expression of negative emotions. In addition, the study examined whether parental use of expressive suppression as an emotion regulation strategy facilitates the influence of parental stress on non-supportive responses, by testing these relations in a mediation model. A sample of 140 Australian adults, who identified as parents of children aged 5 to 10 years, completed an online questionnaire. The measures assessed recent symptoms of depression, anxiety, and stress; the use of expressive suppression as an emotion regulation strategy; and hypothetical parental responses to scenarios involving children’s expression of negative emotions. A mediated regression indicated that parents who reported higher levels of stress also reported higher levels of expressive suppression and increased use of non-supportive responses to young children’s expression of negative emotions. These findings suggest that parents who experience heightened symptoms of stress are more likely both to suppress their emotions in parent-child interaction and to engage in non-supportive responses. Furthermore, higher use of expressive suppression strongly predicted the use of non-supportive responses, even in the presence of parental stress.
Contrary to expectation, no indirect effect of stress on non-supportive responses was observed via expressive suppression. The findings suggest that parental stress may be a particularly salient manifestation of psychological distress in a sub-clinical population of parents while contributing to impaired parental responses. As such, the study supports targeting overarching factors such as difficulties in parental emotion regulation and stress management, not only as an intervention for parental psychological distress but also for the detection and prevention of maladaptive parenting practices.
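The mediated-regression logic described above (stress to suppression, path a; suppression to non-supportive responses controlling for stress, path b; indirect effect a*b) can be sketched on synthetic data. The model form and the simulated effect sizes below are illustrative assumptions, not the study's estimates.

```python
import numpy as np

def mediation_paths(x, m, y):
    """Estimate the a*b indirect effect in a simple mediation model:
    path a from regressing m on x, path b from regressing y on x and m."""
    x, m, y = (np.asarray(v, dtype=float) for v in (x, m, y))
    X1 = np.column_stack([np.ones_like(x), x])
    a = np.linalg.lstsq(X1, m, rcond=None)[0][1]          # x -> m
    X2 = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(X2, y, rcond=None)[0][2]          # m -> y, net of x
    return a, b, a * b

# Synthetic illustration: stress raises suppression (a = 0.5), suppression
# raises non-supportive responses (b = 2.0), so the indirect effect is 1.0.
rng = np.random.default_rng(0)
stress = rng.normal(size=200)
suppression = 0.5 * stress + rng.normal(scale=0.1, size=200)
nonsupport = 2.0 * suppression + 0.3 * stress + rng.normal(scale=0.1, size=200)
a, b, indirect = mediation_paths(stress, suppression, nonsupport)
```

Significance of the indirect effect would, in practice, be assessed with bootstrapped confidence intervals rather than point estimates alone.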

Keywords: emotion regulation, emotion socialisation, expressive suppression, non-supportive responses, parental stress

Procedia PDF Downloads 156
1447 Investigating the Relationship between Moral Hazard and Corporate Governance with Earnings Forecast Quality in the Tehran Stock Exchange

Authors: Fatemeh Rouhi, Hadi Nassiri

Abstract:

Earnings forecasts are a key element in economic decisions, but conflicts of interest in financial reporting, complexity, and a lack of direct access to information have led to information asymmetry between individuals within the organization and external investors and creditors. This asymmetry gives rise to adverse selection and moral hazard, making it difficult for users to assess reported data directly when making investment decisions. In this regard, corporate governance disclosure plays the role of trustee: it comprises controls and procedures intended to ensure that management does not act in its own interests but moves in the direction of maximizing shareholder and company value. Given the importance of earnings forecasts in the capital market and the need to identify the factors influencing them, this study attempts to establish the relationship between moral hazard and corporate governance and the earnings forecast quality of companies operating in the capital market. Drawing on the theoretical bases of the research, two main hypotheses and several sub-hypotheses are presented and examined using available models with the panel data method, and conclusions are drawn at the 95% confidence level according to the significance of the model and of each independent variable. In examining the models, the Chow test was first used to decide between the panel data and pooled methods; the Hausman test was then applied to choose between random effects and fixed effects. The findings show that, because most of the variables positively associate moral hazard with earnings forecast quality, earnings forecast quality increases with moral hazard for companies listed on the Tehran Stock Exchange.
Among the corporate governance variables, board independence has a significant relationship with earnings forecast accuracy and earnings forecast bias, but the relationship between board size and earnings forecast quality is not statistically significant.
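The fixed-effects side of the panel estimation mentioned above can be illustrated with a minimal within-estimator on synthetic data. The firms, variables, and numbers below are invented for demonstration; this is not the study's data, nor its full Chow/Hausman selection procedure.

```python
import numpy as np

def fixed_effects_beta(y, X, entity):
    """Within (fixed-effects) estimator: demean y and X within each entity,
    then run OLS on the demeaned data, absorbing firm-level effects."""
    y = np.asarray(y, dtype=float).copy()
    X = np.asarray(X, dtype=float).copy()
    entity = np.asarray(entity)
    for e in np.unique(entity):
        m = entity == e
        y[m] -= y[m].mean()
        X[m] -= X[m].mean(axis=0)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Synthetic panel: two firms with different intercepts but a common slope of 2.
entity = np.array([0, 0, 0, 1, 1, 1])
x = np.array([[1.0], [2.0], [3.0], [1.0], [2.0], [3.0]])
firm_effect = np.array([10.0, -5.0])[entity]
y = firm_effect + 2.0 * x[:, 0]
beta = fixed_effects_beta(y, x, entity)
```

The within transformation removes the firm-specific intercepts, which is why the slope is recovered despite the very different firm levels.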

Keywords: corporate governance, earning forecast quality, moral hazard, financial sciences

Procedia PDF Downloads 315
1446 Risk Factors Affecting Construction Project Cost in Oman

Authors: Omar Amoudi, Latifa Al Brashdi

Abstract:

Construction projects are always subject to risks and uncertainties due to their unique and dynamic nature, outdoor work environment, the wide range of skills employed, and the various parties involved, in addition to the state of the construction business environment at large. Together, these risks and uncertainties affect project objectives and lead to cost overruns, delay, and poor quality. Construction projects in Oman often experience cost overruns and delay. Managing these risks and reducing their impact on construction cost requires first identifying them and then analyzing their severity for project cost, to gain a deep understanding of them. This, in turn, assists construction managers in managing and tackling these risks. This paper aims to investigate the main risk factors that affect construction project cost in the Sultanate of Oman. To achieve this aim, a literature review was carried out, from which thirty-three risk factors affecting construction cost were identified. A questionnaire survey was then designed and distributed among construction professionals (clients, contractors, and consultants) to obtain their opinions on the probability of occurrence of each risk factor and its possible impact on construction project cost. The collected data were analyzed qualitatively and in several ways. The severity of each risk factor was obtained by multiplying its probability of occurrence by its impact. The findings reveal that the most significant risk factors, with high severity impact on construction project cost, are: Change of Oil Price, Delay of Materials and Equipment Delivery, Changes in Laws and Regulations, Improper Budgeting and Contingencies, Lack of Skilled Workforce and Personnel, Delays Caused by Contractor, Delays of Owner Payments, Delays Caused by Client, and Funding Risk.
The results can be used as a basis for construction managers to make informed decisions and to produce risk response procedures and strategies that tackle these risks and reduce their negative impacts on construction project cost.
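The severity calculation described above (probability of occurrence multiplied by impact) can be sketched as follows; the 1-5 scores attached to the named factors are hypothetical, not the survey's actual values.

```python
# Hypothetical (probability, impact) scores on a 1-5 scale, for illustration
# only; the study's actual questionnaire responses are not reproduced here.
risks = {
    "Change of Oil Price": (4, 5),
    "Delay of Materials and Equipment Delivery": (4, 4),
    "Improper Budgeting and Contingencies": (3, 4),
    "Delays Caused by Contractor": (3, 3),
}

def severity(probability, impact):
    """Severity of a risk factor = probability of occurrence x impact,
    the scoring rule described in the abstract."""
    return probability * impact

# Rank the factors from most to least severe.
ranked = sorted(risks, key=lambda r: severity(*risks[r]), reverse=True)
```

With survey data, each score would be an average over respondents rather than a single judgment.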

Keywords: construction cost, construction projects, Oman, risk factors, risk management

Procedia PDF Downloads 332
1445 A Topology-Based Dynamic Repair Strategy for Enhancing Urban Road Network Resilience under Flooding

Authors: Xuhui Lin, Qiuchen Lu, Yi An, Tao Yang

Abstract:

As global climate change intensifies, extreme weather events such as floods increasingly threaten urban infrastructure, making the vulnerability of urban road networks a pressing issue. Existing static repair strategies fail to adapt to the rapid changes in road network conditions during flood events, leading to inefficient resource allocation and suboptimal recovery. The main research gap lies in the lack of repair strategies that consider both the dynamic characteristics of networks and the progression of flood propagation. This paper proposes a topology-based dynamic repair strategy that adjusts repair priorities based on real-time changes in flood propagation and traffic demand. Specifically, a novel method is developed to assess and enhance the resilience of urban road networks during flood events. The method combines road network topological analysis, flood propagation modelling, and traffic flow simulation, introducing a local importance metric to dynamically evaluate the significance of road segments across different spatial and temporal scales. Using London's road network and rainfall data as a case study, the effectiveness of this dynamic strategy is compared to traditional and Transport for London (TFL) strategies. The most significant highlight of the research is that the dynamic strategy substantially reduced the number of stranded vehicles across different traffic demand periods, improving efficiency by up to 35.2%. The advantage of this method lies in its ability to adapt in real-time to changes in network conditions, enabling more precise resource allocation and more efficient repair processes. This dynamic strategy offers significant value to urban planners, traffic management departments, and emergency response teams, helping them better respond to extreme weather events like floods, enhance overall urban resilience, and reduce economic losses and social impacts.
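One greedy step of a topology-based dynamic repair choice can be sketched on a toy network: repair the flooded segment whose restoration most reduces total travel cost over the current origin-destination demands. This is a simplified stand-in for the paper's local importance metric and flood propagation model, not its actual method; the graph and demands are invented.

```python
from collections import deque

def shortest_len(adj, blocked, s, t):
    """Breadth-first shortest path length, treating edges in `blocked`
    (a set of frozenset pairs) as flooded and therefore impassable."""
    if s == t:
        return 0
    seen = {s}
    queue = deque([(s, 0)])
    while queue:
        u, d = queue.popleft()
        for v in adj[u]:
            if frozenset((u, v)) in blocked or v in seen:
                continue
            if v == t:
                return d + 1
            seen.add(v)
            queue.append((v, d + 1))
    return float("inf")

def best_repair(adj, flooded, demands):
    """One greedy step of a dynamic strategy: repair the flooded segment
    whose restoration minimises total travel cost over current demands."""
    def total_cost(blocked):
        return sum(shortest_len(adj, blocked, s, t) for s, t in demands)
    return min(flooded, key=lambda e: total_cost(flooded - {e}))

# Toy network: a square with one diagonal; two segments are flooded.
adj = {0: [1, 2, 3], 1: [0, 3], 2: [0, 3], 3: [0, 1, 2]}
flooded = {frozenset((0, 3)), frozenset((2, 3))}
demands = [(0, 3), (1, 2)]
choice = best_repair(adj, flooded, demands)
```

Re-running this choice after each repair, as flooding and demand evolve, is what makes the strategy dynamic rather than a fixed priority list.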

Keywords: urban resilience, road networks, flood response, dynamic repair strategy, topological analysis

Procedia PDF Downloads 26
1444 An A-Star Approach for the Quickest Path Problem with Time Windows

Authors: Christofas Stergianos, Jason Atkin, Herve Morvan

Abstract:

As air traffic increases, more airports are interested in utilizing optimization methods. Many processes happen in parallel at an airport, and complex models are needed in order to obtain a reliable solution that can be implemented for ground movement operations. The ground movement of aircraft at an airport, that is, allocating a path for each aircraft to follow to reach its destination (e.g. runway or gate), is one process that can be optimized. The Quickest Path Problem with Time Windows (QPPTW) algorithm was developed to provide conflict-free routing of vehicles and has been applied to routing aircraft around an airport. It was subsequently modified to increase its accuracy for airport applications. These modifications take into account specific characteristics of the problem, such as: the pushback process, which accounts for the extra time needed to push back an aircraft and start its engines; stand holding, where any waiting should be allocated to the stand; and runway sequencing, where the take-off sequence of aircraft is optimized and must be respected. QPPTW searches for the quickest path by expanding the search in all directions, similarly to Dijkstra's algorithm. Directing this expansion can assist the search and achieve better performance. We have therefore further modified the QPPTW algorithm to use a heuristic approach to guide the search. The new algorithm is based on the A-star search method but estimates the remaining time (instead of distance) to assess how far away the target is. It is important to consider the remaining time needed to reach the target, so that delays caused by other aircraft can be part of the optimization. All other characteristics are still considered, and time windows are still used so that multiple aircraft, rather than a single aircraft, can be routed.
In this way, the quickest path is found for each aircraft while taking into account the movements of previously routed aircraft. In experiments using a week of real aircraft data from Zurich Airport, the new algorithm (A-star QPPTW) routed aircraft much more quickly, and was especially fast in routing departing aircraft, where pushback delays are significant. On average, A-star QPPTW routed a full day (755 to 837 aircraft movements) 56% faster than the original algorithm. In total, routing a full week of aircraft took only 12 seconds with the new algorithm, 15 seconds faster than the original. For real-time application the algorithm needs to be very fast, and this speed increase will allow us to add features and complexity, allowing further integration with other airport processes and leading to more optimized and environmentally friendly airports.

Keywords: a-star search, airport operations, ground movement optimization, routing and scheduling

Procedia PDF Downloads 224
1443 The Application of Raman Spectroscopy in Olive Oil Analysis

Authors: Silvia Portarena, Chiara Anselmi, Chiara Baldacchini, Enrico Brugnoli

Abstract:

Extra virgin olive oil (EVOO) is a complex matrix mainly composed of fatty acids and other minor compounds, among which carotenoids are well known for their antioxidative function, a key mechanism of protection against cancer, cardiovascular diseases, and macular degeneration in humans. EVOO composition in terms of such constituents is generally the result of a complex combination of genetic, agronomic, and environmental factors. To selectively improve the quality of EVOOs, the role of each factor in the biochemical composition needs to be investigated. By selecting fruits from four different cultivars grown and harvested under similar conditions, it was demonstrated that Raman spectroscopy, combined with chemometric analysis, can discriminate the different cultivars, also as a function of harvest date, based on the relative content and composition of fatty acids and carotenoids. In particular, correct classification of up to 94.4% of samples, according to cultivar and maturation stage, was obtained. Moreover, using gas chromatography and high-performance liquid chromatography as reference techniques, the Raman spectral features further allowed models to be built, based on partial least squares regression, that predicted the relative amounts of the main fatty acids and the main carotenoids in EVOO with high coefficients of determination. Besides genetic factors, climatic parameters such as light exposure, distance from the sea, temperature, and amount of precipitation can strongly influence EVOO composition in both major and minor compounds. This suggests that Raman spectra could act as a specific fingerprint for the geographical discrimination and authentication of EVOO. To understand the influence of environment on EVOO Raman spectra, samples from seven regions along the Italian coasts were selected and analyzed.
In particular, a dual approach was used, combining Raman spectroscopy and isotope ratio mass spectrometry (IRMS) with principal component and linear discriminant analysis. Correct classification of 82% of EVOO samples according to their regional geographical origin was obtained. Raman spectra were acquired with a Super Labram spectrometer equipped with an argon laser (514.5 nm wavelength). Stable isotope content ratios were analyzed using an isotope ratio mass spectrometer connected to an elemental analyzer and to a pyrolysis system. These studies demonstrate that Raman spectroscopy is a valuable and useful technique for the analysis of EVOO. In combination with statistical analysis, it makes the assessment of specific sample contents possible and allows oils to be classified according to their geographical and varietal origin.
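A stripped-down version of the chemometric classification step, using PCA scores and a nearest-centroid rule as a simple stand-in for the discriminant analysis, could look like the following. The synthetic three-channel "spectra" and cultivar labels are illustrative only.

```python
import numpy as np

def pca_scores(spectra, n_components=2):
    """Project mean-centred spectra onto their first principal components
    (the dimensionality-reduction step preceding the discriminant analysis)."""
    X = spectra - spectra.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:n_components].T

def nearest_centroid(train_scores, labels, test_scores):
    """Assign each test spectrum to the class whose centroid is closest
    in score space; a minimal substitute for linear discriminant analysis."""
    classes = sorted(set(labels))
    labels = np.asarray(labels)
    centroids = np.array([train_scores[labels == c].mean(axis=0)
                          for c in classes])
    dists = np.linalg.norm(test_scores[:, None] - centroids[None], axis=2)
    return [classes[i] for i in dists.argmin(axis=1)]

# Four synthetic 3-channel "spectra" from two hypothetical cultivars.
spectra = np.array([[1.0, 0.0, 0.0],
                    [1.1, 0.0, 0.0],
                    [0.0, 1.0, 0.0],
                    [0.0, 1.1, 0.0]])
labels = ["A", "A", "B", "B"]
scores = pca_scores(spectra)
predicted = nearest_centroid(scores, labels, scores)
```

In practice, classification accuracy would be estimated on held-out spectra, not on the training samples as in this toy demonstration.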

Keywords: authentication, chemometrics, olive oil, Raman spectroscopy

Procedia PDF Downloads 328
1442 Sizing Residential Solar Power Systems Based on Site-Specific Energy Statistics

Authors: Maria Arechavaleta, Mark Halpin

Abstract:

In the United States, costs of solar energy systems have declined to the point that they are viable options for most consumers. However, there are no consistent procedures for specifying sufficient systems. The factors that must be considered are energy consumption, potential solar energy production, and cost. The traditional method of specifying solar energy systems is based on assumed daily levels of available solar energy and average amounts of daily energy consumption. The mismatches between energy production and consumption are usually mitigated using battery energy storage systems, and energy use is curtailed when necessary. The main consumer decision that drives the total system cost is how much unserved (or curtailed) energy is acceptable. Of course, additional solar conversion equipment can be installed to provide greater peak energy production, and extra energy storage capability can be added to mitigate longer-lasting periods of low solar energy production. Each option increases total cost and provides a benefit that is difficult to quantify accurately. An approach to quantify the cost-benefit of adding additional resources, either production or storage or both, based on the statistical concepts of loss-of-energy probability and expected unserved energy, is presented in this paper. Relatively simple calculations, based on site-specific energy availability and consumption data, can be used to show the value of each additional increment of production or storage. With this incremental benefit-cost information, consumers can select the best overall performance combination for their application at a cost they are comfortable paying. The approach is based on a statistical analysis of energy consumption and production characteristics over time. The characteristics take the form of curves, with each point on a curve representing an energy consumption or production value over a period of time; a one-minute period is used for the work in this paper.
These curves are measured at the consumer location under the conditions that exist at the site and the duration of the measurements is a minimum of one week. While greater accuracy could be obtained with longer recording periods, the examples in this paper are based on a single week for demonstration purposes. The weekly consumption and production curves are overlaid on each other and the mismatches are used to size the battery energy storage system. Loss-of-energy probability and expected unserved energy indices are calculated in addition to the total system cost. These indices allow the consumer to recognize and quantify the benefit (probably a reduction in energy consumption curtailment) available for a given increase in cost. Consumers can then make informed decisions that are accurate for their location and conditions and which are consistent with their available funds.
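The reliability indices described above can be computed directly from the per-minute curves: the sketch below walks production and consumption through a simple battery model and tallies loss-of-energy probability and expected unserved energy. The toy six-minute curves, the battery size, and the full initial state of charge are assumptions for illustration.

```python
def storage_simulation(production, consumption, capacity):
    """Walk per-minute production/consumption curves (kWh per minute)
    through a battery of the given capacity (kWh), starting full, and
    tally the reliability indices described in the abstract."""
    soc = capacity                 # state of charge (assumed full at start)
    unserved_minutes = 0
    unserved_energy = 0.0
    for p, c in zip(production, consumption):
        soc = min(capacity, soc + p - c)
        if soc < 0:
            unserved_minutes += 1
            unserved_energy += -soc   # shortfall this minute
            soc = 0.0
    loep = unserved_minutes / len(production)   # loss-of-energy probability
    return loep, unserved_energy                # (LOEP, expected unserved energy)

# Toy 6-minute curves with an evening-style shortfall and a 0.05 kWh battery.
production  = [0.02, 0.02, 0.0, 0.0, 0.0, 0.03]
consumption = [0.01, 0.01, 0.03, 0.03, 0.03, 0.01]
loep, eue = storage_simulation(production, consumption, 0.05)
```

Sweeping `capacity` (or scaling `production`) and re-running the simulation gives exactly the incremental benefit-cost curve the consumer needs: each increment's cost versus its reduction in LOEP and unserved energy.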

Keywords: battery energy storage systems, loss of load probability, residential renewable energy, solar energy systems

Procedia PDF Downloads 230
1441 Argos System: Improvements and Future of the Constellation

Authors: Sophie Baudel, Aline Duplaa, Jean Muller, Stephan Lauriol, Yann Bernard

Abstract:

Argos is the main satellite telemetry system used by the wildlife research community. Since its creation in 1978, it has been used for animal tracking and scientific data collection all around the world, to analyze and understand animal migrations and behavior. Marine mammal biology is one of the major disciplines that has benefited from Argos telemetry, and conversely, the marine mammal biologists' community has contributed greatly to the growth and development of Argos use cases. The Argos constellation, with 6 satellites in orbit in 2017 (Argos 2 payloads on NOAA 15 and NOAA 18; Argos 3 payloads on NOAA 19, SARAL, METOP A and METOP B), is being extended in the following years with an Argos 3 payload on METOP C (launch in October 2018) and Argos 4 payloads on Oceansat 3 (launch in 2019), CDARS in December 2021 (to be confirmed), METOP SG B1 in December 2022, and METOP SG B2 in 2029. Argos 4 will allow more frequency bands (600 kHz for Argos4NG, instead of 110 kHz for Argos 3), a new modulation dedicated to animal (sea turtle) tracking allowing very-low-power transmitters (50 to 100 mW) with very low data rates (124 bps), enhancement of high data rates (1200-4800 bps), and improved downlink performance, all contributing to enhanced system capacity (50,000 active beacons per month instead of 20,000 today). In parallel with this 'institutional Argos' constellation, and in the context of a miniaturization trend in the space industry aimed at reducing costs and multiplying satellites to serve more and more societal needs, the French Space Agency CNES, which designs the Argos payloads, is innovating and launching the Argos ANGELS project (Argos NEO Generic Economic Light Satellites). ANGELS will lead to a nanosatellite prototype with an Argos NEO instrument (30 cm x 30 cm x 20 cm) to be launched in 2019. In the meantime, the design of the renewal of the Argos constellation, called Argos for Next Generations (Argos4NG), is on track and will be operational in 2022.
Based on Argos 4 and benefitting from feedback from the ANGELS project, this constellation will allow revisit times of fewer than 20 minutes on average between two satellite passes and will also bring more frequency bands to improve the overall capacity of the system. The presentation will give an overview of the Argos system, present and future, and the new capacities coming with it. In addition, use cases of two Argos hardware modules will be presented: the goniometer pathfinder, which allows recovering Argos beacons at sea or on the ground within a 100 km-radius horizon-free circle around the beacon location, and the new Argos 4 chipset called 'Artic', already available and tested by several manufacturers.

Keywords: Argos satellite telemetry, marine protected areas, oceanography, maritime services

Procedia PDF Downloads 169
1440 Assessment of Groundwater Aquifer Impact from Artificial Lagoons and the Reuse of Wastewater in Qatar

Authors: H. Aljabiry, L. Bailey, S. Young

Abstract:

Qatar is a desert country with an average temperature of 37 °C, exceeding 40 °C during summer. Precipitation is uncommon and falls mostly in winter. Qatar depends on desalination for drinking water, and on groundwater and recycled water for irrigation. Water consumption and network leakage per capita in Qatar are among the highest in the world; re-use of treated wastewater is extremely limited, with only 14% of treated wastewater being used for irrigation. This has led the country to dispose of unwanted water from various sources in lagoons situated around the country, causing concern over possible environmental pollution. Accordingly, the hypothesis underpinning this research is that the quality and quantity of water in the lagoons are having an impact on the groundwater reservoirs in Qatar. Lagoons (n = 14) and wells (n = 55) were sampled in both summer and winter 2018. Water, adjoining soil, and plant samples were analysed for multiple elements by inductively coupled plasma mass spectrometry; organic and inorganic carbon were measured with a CN analyser, and the major anions were determined by ion chromatography. Salinization was seen in both the lagoons and the wells, with good correlations between Cl⁻, Na⁺, Li, SO₄, S, Sr, Ca, and Ti (p-value < 0.05). Associations among the heavy metals Ni, Cu, and Ag, and among V, Cr, Mo, and Cd, were observed, attributable to contamination from anthropogenic activities such as wastewater disposal or the spread of contaminated dust. However, no individual element exceeded the Qatari regulatory limits. Moreover, gypsum saturation was observed in both the lagoon and well water samples. Lagoon and well waters were found to be saline, of Ca²⁺, Cl⁻, SO₄²⁻ type, evidencing both gypsum dissolution and salinization in the system.
Moreover, maps produced by inverse distance weighting showed increasing nitrate levels in the groundwater in winter and decreased chloride and sulphate levels, indicating a recharge effect after winter rain events. While E. coli and faecal bacteria were found in most of the lagoons, biological analysis of the wells still needs to be conducted to understand biological contamination from lagoon water infiltration. In conclusion, while the lagoons and the wells showed similar results, more sampling is needed to understand the impact of the lagoons on the groundwater.
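The inverse-distance-weighting step behind the maps can be sketched as follows; the well coordinates and nitrate values are hypothetical, and a real interpolation would use all sampled wells on a map grid.

```python
def idw(samples, x, y, power=2):
    """Inverse-distance-weighted estimate at (x, y) from (xi, yi, value)
    samples: nearer wells dominate the interpolated concentration."""
    num = den = 0.0
    for xi, yi, v in samples:
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        if d2 == 0:
            return v  # query point coincides with a sampled well
        w = 1.0 / d2 ** (power / 2)
        num += w * v
        den += w
    return num / den

# Hypothetical winter nitrate readings (mg/L) at two well locations.
wells = [(0.0, 0.0, 10.0), (2.0, 0.0, 30.0)]
mid = idw(wells, 1.0, 0.0)   # equidistant point: simple average
```

Evaluating `idw` over a grid of (x, y) points for each season produces the seasonal concentration maps described above.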

Keywords: groundwater quality, lagoon, treated wastewater, water management, wastewater treatment, wetlands

Procedia PDF Downloads 129
1439 Computational System for Monitoring the Ecosystem of the Endangered White Fish (Chirostoma estor estor) in Patzcuaro Lake, Mexico

Authors: Cesar Augusto Hoil Rosas, José Luis Vázquez Burgos, José Juan Carbajal Hernandez

Abstract:

The white fish (Chirostoma estor estor) is an endemic species that inhabits Patzcuaro Lake, located in Michoacan, Mexico, and is an important source of gastronomic and cultural wealth for the area. It has undergone an immense depopulation due to overfishing, contamination, and eutrophication of the lake water, which may result in the extinction of this important species. This work proposes a new computational model for monitoring and assessing critical environmental parameters of the white fish ecosystem. Following an Analytic Hierarchy Process, a mathematical model is built that assigns a weight to each environmental parameter according to its importance for water quality in the ecosystem. An advanced system for the monitoring, analysis, and control of water quality is then developed in the LabVIEW virtual environment. As a result, we obtain a global score that indicates the condition level of the water quality in the Chirostoma estor ecosystem (excellent, good, regular, or poor), supporting effective decision-making about the environmental parameters that affect proper culture of the white fish, such as temperature, pH, and dissolved oxygen. In situ evaluations show regular conditions for successful reproduction and growth rates of this species, with water quality tending toward regular levels. This system emerges as a suitable tool for water management, where future laws for white fish fishery regulation could reduce the mortality rate in the early stages of development of the species, which represent the most critical phase. This can guarantee better population sizes than those currently obtained in aquaculture. The main benefit will be a contribution to maintaining the cultural and gastronomic wealth of the area and of its inhabitants, since the white fish is an important food source and economic income for the region, but the species is endangered.
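The AHP-based weighting described above can be sketched with a minimal priority-vector calculation followed by the weighted global score; the pairwise judgments and per-parameter sub-scores below are hypothetical, not the model's calibrated values.

```python
def ahp_weights(pairwise):
    """Approximate AHP priority vector: normalise each column of the
    pairwise-comparison matrix, then average across each row."""
    n = len(pairwise)
    col_sums = [sum(row[j] for row in pairwise) for j in range(n)]
    norm = [[pairwise[i][j] / col_sums[j] for j in range(n)] for i in range(n)]
    return [sum(norm[i]) / n for i in range(n)]

def quality_score(weights, subscores):
    """Global water-quality score: weighted sum of per-parameter
    sub-scores (each on a 0-100 quality scale)."""
    return sum(w * s for w, s in zip(weights, subscores))

# Hypothetical judgments for (dissolved oxygen, temperature, pH):
# DO judged 3x as important as temperature, 5x as important as pH.
pairwise = [[1.0, 3.0, 5.0],
            [1 / 3, 1.0, 2.0],
            [1 / 5, 1 / 2, 1.0]]
weights = ahp_weights(pairwise)
score = quality_score(weights, [80, 70, 90])
```

Mapping `score` onto bands (e.g. excellent / good / regular / poor) yields the kind of condition label the system reports; a full AHP would also check the consistency ratio of the judgments.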

Keywords: Chirostoma estor estor, computational system, LabVIEW, white fish

Procedia PDF Downloads 320
1438 Developing a Wearable EMG Sensor for Parkinson's Disease (PD) Monitoring and Treatment

Authors: Bulcha Belay Etana

Abstract:

Electromyography (EMG) is used to measure the electrical activity of muscles for various health monitoring applications using surface or needle electrodes. Recent developments in electromyogram signal acquisition using textile electrodes open the door to wearable health monitoring, enabling patients to monitor and manage their health outside traditional healthcare facilities. The aim of this research is therefore to develop and analyze wearable textile electrodes for the acquisition of electromyography signals from Parkinson's patients and to apply an appropriate thermal stimulus to relieve muscle cramping. To achieve this, textile electrodes were sewn with silver-coated thread in an overlapping zigzag pattern into an inextensible fabric, and stainless-steel knitted textile electrodes attached to a sleeve were prepared; their electrical characteristics, including signal-to-noise ratio, were compared with those of traditional electrodes. To relieve muscle cramping, a heating element made of stainless-steel conductive yarn sewn onto a cotton fabric was developed, coupled with a vibration system. The system was integrated using a microcontroller and a Myoware muscle sensor, so that muscle cramping measured by the system activates the heating elements and vibration motors. The optimum treatment temperature was taken to be 35.5 °C, so a temperature measurement system was incorporated to deactivate the heating system when the temperature reaches this threshold and the signals indicating muscle cramping have subsided. The textile electrode exhibited a signal-to-noise ratio of 6.38 dB, compared with 7.05 dB for the traditional electrode. The rise time of the developed heating element was about 6 minutes to reach the optimum temperature using a 9 V power supply.
Treating muscle cramping in Parkinson's patients with simultaneous heat and muscle vibration, together with a wearable electromyography signal acquisition system, will improve patients' quality of life and enable better chronic pain management.
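The control logic described above (cramp detected via EMG activates heat and vibration; the heater cuts out at 35.5 °C) can be illustrated as one loop iteration. This is a Python sketch of the microcontroller logic, not the firmware itself; the sensor reads are stubbed, and the EMG cramp threshold is a hypothetical ADC value.

```python
# Illustrative control loop for the cramp-relief system. The 35.5 °C
# cutoff is the threshold given in the abstract; the EMG thresholds
# below are made-up placeholders for the Myoware reading.

CRAMP_EMG_THRESHOLD = 600   # hypothetical Myoware ADC reading
HEAT_CUTOFF_C = 35.5        # deactivate heater at this temperature

def control_step(emg_reading, temperature_c, cramping):
    """One loop iteration: returns (heater_on, vibration_on, cramping)."""
    if emg_reading > CRAMP_EMG_THRESHOLD:
        cramping = True          # cramp detected: start treatment
    elif emg_reading < CRAMP_EMG_THRESHOLD * 0.5:
        cramping = False         # cramp signals subsided: stop treatment
    heater_on = cramping and temperature_c < HEAT_CUTOFF_C
    vibration_on = cramping
    return heater_on, vibration_on, cramping

# Cramp onset below the cutoff -> heat and vibration both on.
print(control_step(700, 30.0, False))   # (True, True, True)
# Still cramping but heater reached 35.5 °C -> vibration only.
print(control_step(700, 35.6, True))    # (False, True, True)
```

On the actual device these decisions would drive GPIO pins for the heating element and vibration motors.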

Keywords: electromyography, heating textile, vibration therapy, Parkinson's disease, wearable electronic textile

Procedia PDF Downloads 129
1437 Quantitative Analysis of Three Sustainability Pillars for Water Tradeoff Projects in Amazon

Authors: Taha Anjamrooz, Sareh Rajabi, Hasan Mahmmud, Ghassan Abulebdeh

Abstract:

Water availability and water demand are not uniformly distributed in time and space. Numerous extra-large water diversion projects have been launched in Amazon to alleviate water scarcity. This research uses statistical analysis to examine the temporal and spatial features of 40 extra-large water diversion projects in Amazon. Using a network analysis method, the correlation between seven major basins is measured, while an impact analysis method is employed to explore the associated economic, environmental, and social impacts. The study finds that the development of water diversion in Amazon has passed through four stages, from a preliminary period to a phase of rapid development. The length of water diversion channels and the quantity of water transferred have grown significantly over the past five decades. As of 2015, more than 75 billion m³ of water was transferred in Amazon through 12,000 km of channels, and these projects extend over half of the Amazon area. River Basin E is currently the most significant source of transferred water. Through inter-basin water diversions, Amazon gains the opportunity to increase Gross Domestic Product (GDP) by 5%. Nevertheless, construction costs exceed 70 billion US dollars, higher than in any other country. The average unit cost of transferred water has grown with time and scale but decreases from western to eastern Amazon. Annual total energy consumption for pumping exceeds 40 billion kilowatt-hours, and the associated greenhouse gas emissions are estimated at 35 million tons. Notably, ecological problems initiated by water diversion affect River Basin B and River Basin D, and more than 350 thousand people have been relocated from their homes because of water diversion.
To enhance water diversion sustainability, four categories of measures are proposed for decision-makers: developing strategies for water tradeoff projects, improving integrated water resource management, creating water-saving incentives and pricing approaches, and applying ex-post assessment.

Keywords: sustainability, water trade-off projects, environment, Amazon

Procedia PDF Downloads 127
1436 Efficacy of Pooled Sera in Comparison with Commercially Acquired Quality Control Sample for Internal Quality Control at the Nkwen District Hospital Laboratory

Authors: Diom Loreen Ndum, Omarine Njimanted

Abstract:

With increasing automation in clinical laboratories, the requirements for quality control materials have greatly increased in order to monitor daily performance. The constant use of commercial control material is not economically feasible in many developing countries because of non-availability or the high cost of the materials. The preparation and use of in-house quality control serum is therefore a very cost-effective measure with respect to laboratory needs. The objective of this study was to determine the efficacy of in-house prepared pooled sera with respect to a commercially acquired control sample for routine internal quality control at the Nkwen District Hospital Laboratory. This was an analytical study: serum was taken from leftover serum samples of 5 healthy adult blood donors at the blood bank of Nkwen District Hospital, screened negative for human immunodeficiency virus (HIV), hepatitis C virus (HCV), and hepatitis B surface antigen (HBsAg), and pooled in a sterile container. From the pooled sera, sixty aliquots of 150 µL each were prepared. Forty aliquots of 150 µL each of the commercially acquired sample were prepared after reconstitution, and all aliquots were stored in a deep freezer at −20 °C until required for analysis. The study ran from 9 June to 12 August 2022. Every day, alongside the commercial control sample, one aliquot of pooled sera was removed from the deep freezer, allowed to thaw, and analyzed for the following parameters: blood urea, serum creatinine, aspartate aminotransferase (AST), alanine aminotransferase (ALT), potassium, and sodium. After the first 20 values for each parameter of the pooled sera were obtained, the mean, standard deviation, and coefficient of variation were calculated, and a Levey-Jennings (L-J) chart was established. The mean and standard deviation of the commercially acquired control sample were provided by the manufacturer.
The following results were observed: the pooled sera had lower standard deviations for creatinine, urea, and AST than the commercially acquired control sample. There was a statistically significant difference (p<0.05) between the mean values of creatinine, urea, and AST for the in-house quality control compared with the commercial control. The coefficients of variation of the parameters for both the commercial and in-house control samples were less than 30%, which is acceptable. The L-J charts revealed shifts and trends (warning signs), so troubleshooting and corrective measures were taken. In conclusion, an in-house quality control sample prepared from pooled serum can be a good control sample for routine internal quality control.
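Establishing the L-J chart from the first 20 values, as described above, amounts to computing the mean, SD, and CV and then flagging points against control limits. The sketch below uses made-up urea values and the common Westgard 1-3s rule as an example rule; it is an illustration of the procedure, not the laboratory's actual data or rule set.

```python
# Minimal sketch of setting up a Levey-Jennings chart from the first
# 20 control values (data are invented for illustration).
import statistics

def lj_limits(values):
    mean = statistics.mean(values)
    sd = statistics.stdev(values)          # sample SD (n-1)
    cv = 100 * sd / mean                   # coefficient of variation, %
    return mean, sd, cv

def westgard_1_3s(value, mean, sd):
    """Westgard 1-3s rule: reject a run whose control value falls
    outside mean ± 3 SD."""
    return abs(value - mean) > 3 * sd

values = [5.0, 5.2, 4.9, 5.1, 5.0, 4.8, 5.3, 5.1, 4.9, 5.0,
          5.2, 5.0, 4.9, 5.1, 5.0, 5.2, 4.8, 5.0, 5.1, 4.9]  # e.g. urea, mmol/L
mean, sd, cv = lj_limits(values)
print(round(mean, 3), round(cv, 1))        # 5.025 2.7
print(westgard_1_3s(5.2, mean, sd))        # within ±3 SD -> False
```

Shifts and trends (e.g., six consecutive values on one side of the mean) would be additional rules applied to the same chart.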

Keywords: internal quality control, Levey-Jennings chart, pooled sera, shifts, trends, Westgard rules

Procedia PDF Downloads 70
1435 Ultrasonic Studies of Polyurea Elastomer Composites with Inorganic Nanoparticles

Authors: V. Samulionis, J. Banys, A. Sánchez-Ferrer

Abstract:

Inorganic nanoparticles are used in composites based on polymer materials because they exhibit good homogeneity and solubility in the composite material. Multifunctional materials based on composites of a polymer containing inorganic nanotubes are expected to have a great impact on industrial applications in the future. An emerging family of such composites are polyurea elastomers with inorganic MoS2 nanotubes or MoSI nanowires. Polyurea elastomers are a new kind of material with higher performance than polyurethanes. The improvement in mechanical, chemical, and thermal properties is due to the presence of hydrogen bonds between the urea motifs, which can be erased at high temperature, softening the elastomeric network. Such materials are a combination of amorphous polymers above the glass transition and crosslinkers that keep the chains together as a single macromolecule. Polyurea exhibits a phase-separated structure with rigid urea domains (hard domains) embedded in a matrix of flexible polymer chains (soft domains). The elastic properties of polyurea can be tuned over a broad range by varying the molecular weight of the components, the relative amount of hard and soft domains, and the concentration of nanoparticles. Ultrasonic methods, as non-destructive techniques, can be used for elastomer composite characterization. In this manner, we have studied the temperature dependences of the longitudinal ultrasonic velocity and ultrasonic attenuation of these new polyurea elastomers and their composites with inorganic nanoparticles. It was shown that in these polyurea elastomers a large ultrasonic attenuation peak and the corresponding velocity dispersion exist at 10 MHz below room temperature, and that this behaviour is related to the glass transition Tg of the soft segments in the polymer matrix.
The relaxation parameters and Tg depend on the segmental molecular weight of the polymer chains between crosslinking points, the nature of the crosslinkers in the network, and the content of MoS2 nanotubes or MoSI nanowires. An increase in ultrasonic velocity was observed in composites modified by nanoparticles, showing reinforcement of the elastomer. In semicrystalline polyurea elastomer matrices, above the glass transition, a first-order phase transition from the quasi-crystalline to the amorphous state was observed. In this case, sharp ultrasonic velocity and attenuation anomalies appear near the transition temperature Tc. The ultrasonic attenuation maximum related to the glass transition was reduced in quasi-crystalline polyureas, indicating less influence of the soft domains below Tc. The first-order phase transition in semicrystalline polyurea elastomer samples has a large temperature hysteresis (> 10 K). The addition of inorganic MoS2 nanotubes decreased the first-order phase transition temperature in semicrystalline composites.

Keywords: inorganic nanotubes, polyurea elastomer composites, ultrasonic velocity, ultrasonic attenuation

Procedia PDF Downloads 297
1434 The Effect of Relocating a Red Deer Stag on the Size of Its Home Range and Activity

Authors: Erika Csanyi, Gyula Sandor

Abstract:

In this study, we sought to determine how and to what extent the home range and daily activity of a red deer stag change when it is relocated from its habitual surroundings. The examination was conducted in two hunting areas in Hungary, about 50 km apart. The control area was in the north of Somogy County, while the sample area, with similar forest cover, tree stock, agricultural structure, altitude above sea level, climate, etc., was in the south of Somogy County. Three middle-aged red deer stags were captured with rocket nets, immobilized, and marked with GPS-Plus collars manufactured by Vectronic Aerospace GmbH. One of the captured stags was relocated. We monitored deer movements over 24-hour periods for 3 months. We analysed the behaviour of the relocated stag and of those that remained in their original habitat, as well as how their behaviour evolved over time, examining the characteristics of the marked stags' daily activities and the hourly distances they covered. Our aim was to establish the difference between the behaviour of stags remaining in their original habitat and that of one relocated to a more distant but similar habitat. In summary, our findings show that such enforced relocation to a different habitat (e.g., game relocation) significantly increases the animal's home range in the months following relocation. Home ranges were calculated from the full data set using the minimum convex polygon (MCP) method. Relocation did not increase the nocturnal or diurnal movement activity of the animal in question. The home range of the relocated stag proved to be significantly larger than that of the stags that were not relocated. The results are presented in tabular form and displayed on a map.
Based on the results, it can be established that relocation inherently carries the risk of the animal falling victim to poaching or vehicle collisions. Only in the third month following relocation did the home range of the relocated stag subside to the level of the stags that were not relocated. It is advisable to take these observations into consideration when relocating red deer for nature conservation or game management purposes.
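The MCP home-range estimate named above is simply the area of the convex hull of the GPS fixes. A self-contained sketch, using hypothetical planar coordinates (e.g., projected metres, not the study's actual collar data):

```python
# Minimum convex polygon (MCP) home range: convex hull of the fixes
# (Andrew's monotone chain), then the shoelace area formula.

def convex_hull(points):
    """Return hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def polygon_area(hull):
    """Shoelace formula for a simple polygon."""
    n = len(hull)
    s = sum(hull[i][0]*hull[(i+1) % n][1] - hull[(i+1) % n][0]*hull[i][1]
            for i in range(n))
    return abs(s) / 2

fixes = [(0, 0), (1000, 0), (1000, 1000), (0, 1000), (500, 500)]  # metres
print(polygon_area(convex_hull(fixes)) / 1e6)  # 1.0 (km²)
```

Real collar data in latitude/longitude would first be projected to a planar coordinate system before computing areas.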

Keywords: Cervus elaphus, home range, relocation, red deer stag

Procedia PDF Downloads 132
1433 Physical, Chemical and Environmental Properties of Natural and Construction/Demolition Recycled Aggregates

Authors: Débora C. Mendes, Matthias Eckert, Cláudia S. Moço, Hélio Martins, Jean-Pierre P. Gonçalves, Miguel Oliveira, José P. Da Silva

Abstract:

Uncontrolled disposal of construction and demolition waste (C&DW) in embankments on the periphery of cities causes both environmental and social problems, namely erosion, deforestation, water contamination, and human conflicts. One of the milestones of the EU Horizon 2020 Programme is the management of waste as a resource. To achieve this for C&DW, a detailed analysis of the properties of these materials should be done. In this work, we report the physical, chemical, and environmental properties of C&DW aggregates from 25 different origins. The results are compared with those of common natural aggregates used in construction. Assays were performed according to European standards. Additional analyses of heavy metals and organic compounds, such as polycyclic aromatic hydrocarbons (PAHs) and polychlorinated biphenyls (PCBs), were performed to evaluate their environmental impact. Finally, the properties of concrete prepared with C&DW aggregates are also reported. Physical analyses of the C&DW aggregates indicated lower-quality properties than natural aggregates, particularly for concrete preparation and unbound layers of road pavements. Chemical analyses showed that most samples (80%) meet the values required by European regulations for concrete and unbound layers of road pavements. Analyses of the heavy metals Cd, Cr, Cu, Pb, Ni, Mo, and Zn in the C&DW leachates showed levels below the limits established by the Council Decision of 19 December 2002. Identification and quantification of PCBs and PAHs indicated that few samples contain these compounds, and the measured levels are also below the limits. Other compounds identified in the C&DW leachates include phthalates and diphenylmethanol. In conclusion, the characterized C&DW aggregates show lower-quality properties than natural aggregates, but most samples proved environmentally safe.
Continuous monitoring of the presence of heavy metals and organic compounds should be carried out to certify C&DW aggregates as safe. C&DW aggregates provide a good economic and environmental alternative to natural aggregates.

Keywords: concrete preparation, construction and demolition waste, heavy metals, organic pollutants

Procedia PDF Downloads 343
1432 Modeling, Topology Optimization and Experimental Validation of Glass-Transition-Based 4D-Printed Polymeric Structures

Authors: Sara A. Pakvis, Giulia Scalet, Stefania Marconi, Ferdinando Auricchio, Matthijs Langelaar

Abstract:

In recent developments in the field of multi-material additive manufacturing, differences in material properties are exploited to create printed shape-memory structures, which are referred to as 4D-printed structures. New printing techniques allow for the deliberate introduction of prestresses in the specimen during manufacturing, and, in combination with the right design, this enables new functionalities. This research focuses on bi-polymer 4D-printed structures, where the transformation process is based on a heat-induced glass transition in one material lowering its Young’s modulus, combined with an initial prestress in the other material. Upon the decrease in stiffness, the prestress is released, which results in the realization of an essentially pre-programmed deformation. As the design of such functional multi-material structures is crucial but far from trivial, a systematic methodology to find the design of 4D-printed structures is developed, where a finite element model is combined with a density-based topology optimization method to describe the material layout. This modeling approach is verified by a convergence analysis and validated by comparing its numerical results to analytical and published data. Specific aspects that are addressed include the interplay between the definition of the prestress and the material interpolation function used in the density-based topology description, the inclusion of a temperature-dependent stiffness relationship to simulate the glass transition effect, and the importance of the consideration of geometric nonlinearity in the finite element modeling. The efficacy of topology optimization to design 4D-printed structures is explored by applying the methodology to a variety of design problems, both in 2D and 3D settings. Bi-layer designs composed of thermoplastic polymers are printed by means of the fused deposition modeling (FDM) technology. 
Acrylonitrile butadiene styrene (ABS) polymer undergoes the glass transition transformation, while thermoplastic polyurethane (TPU) is prestressed by the 3D-printing process itself. Tests inducing shape transformation in the printed samples through heating were performed to calibrate the prestress and validate the modeling approach by comparing the numerical results to the experimental findings. Using the experimentally obtained prestress values, more complex designs were generated through topology optimization, and samples were printed and tested to evaluate their performance. This study demonstrates that by combining topology optimization and 4D-printing concepts, stimuli-responsive structures with specific properties can be designed and realized.
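The two modeling ingredients named above, a density-based (SIMP-style) material interpolation and a temperature-dependent stiffness that drops across the glass transition, can be sketched together. The sigmoid stiffness drop and every numeric value below are assumptions for illustration, not the paper's calibrated model.

```python
# Sketch of density-based material interpolation combined with a
# temperature-dependent Young's modulus illustrating the glass
# transition effect. All parameter values are hypothetical.
import math

def youngs_modulus(T, E_glassy=2.0e9, E_rubbery=2.0e6, Tg=105.0, width=5.0):
    """Stiffness of the glassy polymer (e.g., ABS) vs temperature:
    a smooth drop of ~3 orders of magnitude across Tg (assumed form)."""
    frac = 1.0 / (1.0 + math.exp((T - Tg) / width))
    return E_rubbery + (E_glassy - E_rubbery) * frac

def simp_stiffness(rho, T, p=3, E_min=1.0):
    """Standard SIMP interpolation: E(rho) = E_min + rho^p (E0(T) - E_min),
    with rho in [0, 1] the design density and p the penalization power."""
    return E_min + rho**p * (youngs_modulus(T) - E_min)

# Below Tg the element is stiff; above Tg it softens, releasing the
# prestress stored in the other (TPU) material.
print(simp_stiffness(1.0, 25.0) > 100 * simp_stiffness(1.0, 150.0))  # True
```

In the actual optimization, this interpolated stiffness would enter the finite element stiffness matrix, with sensitivities taken with respect to rho.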

Keywords: 4D-printing, glass transition, shape memory polymer, topology optimization

Procedia PDF Downloads 200
1431 Dimensionality Reduction in Modal Analysis for Structural Health Monitoring

Authors: Elia Favarelli, Enrico Testi, Andrea Giorgetti

Abstract:

Autonomous structural health monitoring (SHM) of structures and bridges has become a topic of paramount importance for maintenance and safety reasons. This paper proposes a set of machine learning (ML) tools to perform automatic feature selection and anomaly detection in a bridge from vibrational data, and compares different feature extraction schemes to increase accuracy and reduce the amount of data collected. As a case study, the Z-24 bridge is considered because of its extensive database of accelerometric data in both standard and damaged conditions. The proposed framework starts from the first four fundamental frequencies extracted through operational modal analysis (OMA) and clustering, followed by density-based time-domain filtering (tracking). The extracted fundamental frequencies are then fed to a dimensionality reduction block implemented through two different approaches: feature selection (an intelligent multiplexer) that tries to estimate the most reliable frequencies based on the evaluation of some statistical features (i.e., mean value, variance, kurtosis), and feature extraction (an auto-associative neural network (ANN)) that combines the fundamental frequencies to extract new damage-sensitive features in a low-dimensional feature space. Finally, one-class classifier (OCC) algorithms perform anomaly detection, trained with standard-condition points and tested with both normal and anomalous ones. In particular, a new anomaly detection strategy is proposed, namely one-class classifier neural network two (OCCNN2), which exploits the classification capability of standard classifiers in an anomaly detection problem, finding the standard class (the boundary of the feature space in normal operating conditions) through a two-step approach: coarse and fine boundary estimation.
The coarse estimation uses classic OCC techniques, while the fine estimation is performed by a feedforward neural network (NN) trained on the boundaries estimated in the coarse step. The detection algorithms are then compared with known methods based on principal component analysis (PCA), kernel principal component analysis (KPCA), and the auto-associative neural network (ANN). In many cases, the proposed solution outperforms the standard OCC algorithms in terms of F1 score and accuracy. In particular, by evaluating the correct features, anomalies can be detected with an accuracy and F1 score greater than 96% with the proposed method.
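A greatly simplified stand-in for the coarse boundary-estimation step can be sketched as a per-feature mean ± k·SD box fitted on normal-condition frequencies, with detections scored by the F1 metric used in the paper. The data, the box rule, and k are illustrative assumptions; they are not OCCNN2 itself (which refines the boundary with a neural network).

```python
# Toy one-class detector: fit a box on normal-condition feature vectors
# (e.g., tracked fundamental frequencies), flag points outside it, and
# compute the F1 score. All values are invented for illustration.
import statistics

def fit_box(train, k=3.0):
    """Per-feature [mean - k*SD, mean + k*SD] bounds from normal data."""
    cols = list(zip(*train))
    return [(statistics.mean(c) - k * statistics.stdev(c),
             statistics.mean(c) + k * statistics.stdev(c)) for c in cols]

def is_anomaly(x, box):
    return any(not (lo <= v <= hi) for v, (lo, hi) in zip(x, box))

def f1(preds, labels):
    tp = sum(p and l for p, l in zip(preds, labels))
    fp = sum(p and not l for p, l in zip(preds, labels))
    fn = sum((not p) and l for p, l in zip(preds, labels))
    return 2 * tp / (2 * tp + fp + fn)

normal = [(3.9, 5.0), (4.0, 5.1), (4.1, 4.9), (4.0, 5.0), (3.95, 5.05)]
box = fit_box(normal)
test_points = [(4.0, 5.0), (3.5, 5.8)]   # second point: shifted frequencies
labels = [False, True]
preds = [is_anomaly(x, box) for x in test_points]
print(preds, f1(preds, labels))  # [False, True] 1.0
```

In the paper's pipeline, this coarse boundary would then be refined by a feedforward NN trained on the estimated boundary points.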

Keywords: anomaly detection, frequencies selection, modal analysis, neural network, sensor network, structural health monitoring, vibration measurement

Procedia PDF Downloads 120
1430 Regenerating Historic Buildings: Policy Gaps

Authors: Joseph Falzon, Margaret Nelson

Abstract:

Background: Policy makers at the European Union (EU) and national levels address the re-use of historic buildings, calling for sustainable practices and approaches. The implementation stages of policy are crucial so that EU and national strategic objectives for historic building sustainability are achieved. Governance remains one of the key objectives to ensure resource sustainability. Objective: The aim of the research was to critically examine policies for the regeneration and adaptive re-use of historic buildings at the EU and national levels, and to analyse gaps between EU and national legislation and policies, taking Malta as a case study. The impact of policies on the regeneration and re-use of historic buildings was also studied. Research Design: Six semi-structured interviews with stakeholders, including architects, investors, and community representatives, informed the research. All interviews were audio recorded and transcribed in English. Thematic analysis of the semi-structured interviews was conducted using Atlas.ti. All phases of the study were governed by research ethics. Findings: Findings were grouped into main themes: resources, experiences, and governance. Other key issues included the identification of gaps in policies, key lessons, and the quality of regeneration. The abandonment of heritage buildings was discussed; its main causes were attributed to governance-related issues, both from the policy-making perspective and in the attitudes of certain officials representing the authorities. The role of authorities, co-ordination between government entities, fairness in decision making, enforcement, and management drew strong criticism from stakeholders, along with the time lost to lengthy procedures within the authorities. Policies were viewed differently even within the same stakeholder groups: rather than policy itself, it is the interpretation of policy that presents certain gaps.
Interpretations depend highly on the arguments put forward by particular stakeholders. All stakeholders acknowledged the value of heritage in regeneration. Conclusion: Active stakeholder involvement is essential in policy framework development. Research-informed policies and the streamlining of policies are necessary. National authorities need to shift from a segmented approach to a holistic one.

Keywords: adaptive re-use, historic buildings, policy, sustainability

Procedia PDF Downloads 385
1429 First-Trimester Screening of Preeclampsia in Routine Care

Authors: Tamar Grdzelishvili, Zaza Sinauridze

Abstract:

Introduction: Preeclampsia is a complication of the second trimester of pregnancy characterized by high morbidity and multiorgan damage, for which many complex pathogenic mechanisms are now thought to be responsible (1). Preeclampsia is one of the leading causes of maternal mortality worldwide: about 100,000 women die of preeclampsia every year. It occurs in 3-14% of pregnant women (varying significantly with racial origin, ethnicity, and geographical region), in 75% of cases in a mild form and in 25% in a severe form. In severe preeclampsia-eclampsia, perinatal mortality increases 5-fold and stillbirth 9.6-fold. Considering that the only way to treat the disease is to end the pregnancy, timely diagnosis and prevention are essential. Identifying pregnant women at high risk for PE and giving prophylaxis would reduce the incidence of preterm PE. The first-trimester screening model developed by the Fetal Medicine Foundation (FMF), which uses Bayes' theorem to combine maternal characteristics and medical history with measurements of mean arterial pressure, uterine artery pulsatility index, and serum placental growth factor, has been proven effective, with screening performance superior to that of the traditional risk-factor-based approach for the prediction of PE (2). Methods: This was a retrospective single-centre screening study. The study population consisted of women from the Tbilisi maternity hospital "Pineo medical ecosystem" who met the following criteria: they spoke Georgian, English, or Russian and agreed to participate in the study after discussing informed consent and having their questions answered. Prior to the study, informed consent forms approved by the Institutional Review Board were obtained from the study subjects. Early assessment of preeclampsia was performed between 11 and 13 weeks of pregnancy.
The following were evaluated: anamnesis, Doppler ultrasound of the uterine arteries, mean arterial blood pressure, and a biochemical parameter, pregnancy-associated plasma protein A (PAPP-A). Individual risk assessment was performed with the Fast Screen 3.0 software (Thermo Fisher Scientific). Results: A total of 513 women were recruited, of whom 51 were diagnosed with preeclampsia (34.5% of the high-risk pregnant women, 6.5% of the low-risk; P<0.0001). Conclusions: First-trimester screening combining maternal factors with uterine artery Doppler, blood pressure, and pregnancy-associated plasma protein A is useful for predicting PE in a routine care setting. Larger patient studies are needed for final conclusions; the research is ongoing.
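The Bayes-theorem combination of a prior risk with marker measurements can be illustrated in a highly simplified odds-and-likelihood-ratio form. The real FMF model uses a competing-risks survival formulation, and every number below (prior, likelihood ratios) is a hypothetical placeholder, not a clinical value.

```python
# Simplified Bayes-style risk combination: prior odds multiplied by
# per-marker likelihood ratios, converted back to a probability.
# All numeric values are illustrative assumptions.

def posterior_risk(prior, likelihood_ratios):
    """Posterior probability from prior odds x product of marker LRs."""
    odds = prior / (1 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

prior = 0.02                  # hypothetical background PE risk
lrs = [3.0, 2.0, 1.5]         # hypothetical LRs for MAP, UtA-PI, PAPP-A
risk = posterior_risk(prior, lrs)
print(round(risk, 3))  # 0.155
```

With no informative markers (empty LR list), the posterior reduces to the prior, as expected.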

Keywords: first-trimester, preeclampsia, screening, pregnancy-associated plasma protein

Procedia PDF Downloads 73
1428 Biological Control of Fusarium Crown and Root Rot and Tomato (Solanum lycopersicum L.) Growth Promotion Using Endophytic Fungi from Withania somnifera L.

Authors: Nefzi Ahlem, Aydi Ben Abdallah Rania, Jabnoun-Khiareddine Hayfa, Ammar Nawaim, Mejda Daami-Remadi

Abstract:

Fusarium crown and root rot (FCRR), caused by Fusarium oxysporum f. sp. radicis-lycopersici (FORL), is a serious tomato (Solanum lycopersicum L.) disease in Tunisia. Its management is very difficult due to the long survival of the pathogen's resting structures and the lack of genetic resistance. In this work, we explored the wild Solanaceae species Withania somnifera, growing in the Tunisian Centre-East, as a potential source of biocontrol agents effective in FCRR suppression and tomato growth promotion. Seven fungal isolates were shown to be able to colonize tomato roots, crowns, and stems. Used as conidial suspensions or cell-free culture filtrates, all tested fungal treatments significantly enhanced tomato growth parameters by 21.5-90.3% over the FORL-free control and by 27.6-93.5% over the pathogen-inoculated control. All treatments significantly decreased the leaf and root damage index by 28.5-92.8% and the extent of vascular browning by 9.7-86.4% relative to the FORL-inoculated, untreated control. The highest disease suppression ability (a decrease of 86.4-92.8% in FCRR severity over the pathogen-inoculated control and of 81.3-88.8% over the hymexazol-treated control) was expressed by I6-based treatments. This endophytic fungus was morphologically characterized and identified by rDNA gene sequencing as Fusarium sp. I6 (MG835371). It was shown to reduce FORL radial growth by 58.5-83.2% using its conidial suspension or cell-free culture filtrate, and it exhibited chitinolytic, proteolytic, and amylase activities. The current study clearly demonstrates that Fusarium sp. I6 is a promising biocontrol candidate for suppressing FCRR severity and promoting tomato growth. Further investigations are required to elucidate its mechanisms of action in disease suppression and plant growth promotion.

Keywords: antifungal activity, associated fungi, Fusarium oxysporum f. sp. radicis-lycopersici, Withania somnifera, tomato growth

Procedia PDF Downloads 141
1427 A Blockchain-Based Framework for the Development of a Social Economy Platform

Authors: Hasna Elalaoui Elabdallaoui, Abdelaziz Elfazziki, Mohamed Sadgal

Abstract:

Outlines: The social economy is a solidarity-based approach to developing projects that reconciles economic activity with social equity, and crowdfunding is an alternative means of financing social projects. Several collaborative blockchain platforms exist. Blockchain eliminates the need for a central authority or an unaccountable middleman; it reduces the costs of a successful crowdfunding campaign, since no commission is paid to an intermediary, improves the transparency of record keeping, and avoids delegating authority to bodies that may be prone to corruption. Objectives: The objectives are to define a software infrastructure for the participatory financing of projects within a social and solidarity economy, allowing transparent, secure, and fair management, and to provide a financial mechanism that improves financial inclusion. Methodology: The proposed methodology is: a literature review of crowdfunding platforms, a literature review of financing mechanisms, requirements analysis and project definition, a business plan, the platform development process and implementation technology, and testing an MVP. Contributions: The solution proposes a new approach to crowdfunding based on the Islamic financing principle of Musharaka, a financial innovation that integrates ethics and the social dimension into contemporary banking practices. Conclusion: Crowdfunding platforms need to vet projects and admit only quality projects while still offering a wide range of options to funders. A framework based on blockchain technology and Islamic financing is therefore proposed to manage this arbitration between the quality and quantity of options. The proposed financing system, Musharaka, is a mode of financing that prohibits interest and uncertainty.
The implementation runs on the Ethereum platform: investors sign and initiate contribution transactions with their digital signature wallets, managed by cryptographic algorithms and smart contracts. Our proposal is illustrated by a crop irrigation project in the Marrakech region.
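The Musharaka rule that the smart contract would encode, sharing profit or loss in proportion to each partner's contribution, with no fixed interest, can be sketched in plain Python. This illustrates the rule only; it is not the Ethereum contract itself, and the partner names and amounts are invented.

```python
# Sketch of Musharaka profit/loss sharing: returns are distributed
# pro rata to contributions, never as fixed interest.

class MusharakaPool:
    def __init__(self):
        self.contributions = {}

    def contribute(self, partner, amount):
        """Record (or add to) a partner's capital contribution."""
        self.contributions[partner] = self.contributions.get(partner, 0) + amount

    def share(self, partner):
        """Partner's fraction of the total pool."""
        total = sum(self.contributions.values())
        return self.contributions[partner] / total

    def distribute(self, profit):
        """Split profit pro rata; works for losses (negative profit) too."""
        return {p: profit * self.share(p) for p in self.contributions}

pool = MusharakaPool()
pool.contribute("investor_a", 6000)
pool.contribute("investor_b", 4000)
print(pool.distribute(1000))  # {'investor_a': 600.0, 'investor_b': 400.0}
```

In Solidity, the same logic would live in the contract's payout function, with contributions recorded by payable transactions.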

Keywords: social economy, Musharaka, blockchain, smart contract, crowdfunding

Procedia PDF Downloads 71
1426 A Rare Case of Synchronous Colon Adenocarcinoma

Authors: Mohamed Shafi Bin Mahboob Ali

Abstract:

Introduction: A synchronous tumor is defined as the presence of more than one primary malignant lesion in the same patient at the index diagnosis. It is a rare occurrence, especially in the spectrum of colorectal cancer, where it accounts for less than 4%. The underlying pathology of a synchronous tumor is thought to be a genomic factor, namely microsatellite instability (MSI), with involvement of the BRAF, KRAS, and GSRM1 genes. There are no specific sites of occurrence for synchronous colorectal tumors, but many studies have shown about 43% predominance in the ascending colon, with rarity in the sigmoid colon. Case Report: We report the case of a woman in her mid-30s with no family history of colorectal cancer who was diagnosed with synchronous adenocarcinoma of the descending colon and rectosigmoid region. Her presentation was perplexing: she initially presented to the district hospital with simple, uncomplicated hemorrhoids and constipation. She was then referred to our center for further management after developing a football-sized right gluteal swelling with complete intestinal obstruction and bilateral lower-limb paralysis. We performed a CT scan and biopsy of the lesion, which found that the tumor engulfed the sacrococcygeal region, with more than one primary lesion in the colon as well as secondaries in the liver. The patient was operated on after a multidisciplinary meeting was held. Pelvic exenteration with tumor debulking and anterior resection was performed. Postoperatively, she was referred to the oncology team for chemotherapy. She made a tremendous recovery within eight months, with partial regain of her lower limb power. The patient remains under our follow-up with an improved quality of life post-intervention. Discussion: Synchronous colon cancer is rare, with an incidence of 2.4% to 12.4%.
It has a male predominance and is pathologically more advanced than a single colon lesion. Downstaging the disease by means of chemoradiotherapy has been shown to be effective in managing this tumor. It is commonly seen in the right colon, but in our case it was found in the left colon and the rectosigmoid. Conclusion: Managing a synchronous colon tumor can be challenging for surgeons, especially in deciding the extent of resection and the postoperative functional outcomes of the bowel; thus, individualized treatment strategies are needed to tackle this pathology.

Keywords: synchronous, colon, tumor, adenocarcinoma

Procedia PDF Downloads 102
1425 Co-pyrolysis of Sludge and Kaolin/Zeolite to Stabilize Heavy Metals

Authors: Qian Li, Zhaoping Zhong

Abstract:

Sewage sludge, a typical solid waste, is inevitably produced in enormous quantities in China. Worse still, the amount produced has been increasing with rapid economic development and urbanization. Compared with conventional treatment methods, pyrolysis is considered an economical and ecological technology because it can significantly reduce the sludge volume, completely kill pathogens, and produce valuable solid, gaseous, and liquid products. However, large-scale utilization of sludge biochar has been limited by the considerable risk posed by heavy metals in the sludge. Heavy metals enriched in pyrolytic biochar can be divided into exchangeable, reducible, oxidizable, and residual forms. The residual form is the most stable and cannot be taken up by organisms. Kaolin and zeolite are environmentally friendly inorganic minerals with high surface area and heat resistance, so they exhibit enormous potential for immobilizing heavy metals. To reduce the risk of heavy-metal leaching from pyrolysis biochar, this study pyrolyzed sewage sludge mixed with kaolin/zeolite in a small rotary kiln. The influences of the additives and of pyrolysis temperature on the leaching concentration and morphological transformation of heavy metals in the pyrolysis biochar were investigated. The potential mechanism of heavy-metal stabilization in the co-pyrolysis of sludge blended with kaolin/zeolite was explained by scanning electron microscopy, X-ray diffraction, and specific surface area and porosity analysis. The European Community Bureau of Reference (BCR) sequential extraction procedure was applied to analyze the forms of heavy metals in the sludge and pyrolysis biochar. All heavy metal concentrations were determined by flame atomic absorption spectrophotometry.
Compared with the proportions of heavy metals in the F4 fraction of pyrolytic carbon prepared without additives, those in carbon obtained by co-pyrolysis of sludge and kaolin/zeolite increased. Increasing the additive dosage improved the proportions of the stable fraction of various heavy metals in the biochar. Kaolin exhibited a better stabilizing effect than zeolite. Aluminosilicate additives with excellent adsorption performance could capture more of the heavy metals released during sludge pyrolysis. The heavy metal ions would then react with the oxygen ions of the additives to form silicates and aluminates, converting the heavy metals from unstable fractions (sulfates, chlorides, etc.) to stable fractions (silicates, aluminates, etc.). This study reveals that the efficiency of heavy-metal stabilization depends on the formation of stable heavy-metal-bearing mineral compounds in the pyrolysis biochar.
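The BCR comparison above amounts to expressing each extraction fraction as a share of the total metal, so that a rise in the F4 (residual) share indicates stabilization. A minimal sketch of that bookkeeping follows; the fraction values are invented for illustration, not the study's measurements.

```python
# BCR sequential-extraction bookkeeping: the four fractions
# (F1 exchangeable, F2 reducible, F3 oxidizable, F4 residual) are expressed
# as proportions of the total extracted metal (mg/kg inputs).

def bcr_proportions(f1, f2, f3, f4):
    """Return each fraction's share of the total extracted metal."""
    total = f1 + f2 + f3 + f4
    return {name: value / total
            for name, value in zip(("F1", "F2", "F3", "F4"),
                                   (f1, f2, f3, f4))}

# Hypothetical Zn fractions without and with a kaolin additive:
raw = bcr_proportions(30, 25, 20, 25)          # F4 share: 0.25
with_kaolin = bcr_proportions(15, 15, 15, 55)  # F4 share: 0.55
```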

Keywords: co-pyrolysis, heavy metals, immobilization mechanism, sewage sludge

Procedia PDF Downloads 62
1424 Therapeutic Efficacy of Clompanus pubescens Leaf Fractions via Downregulation of Neuronal Cholinesterases/Na⁺-K⁺-ATPase/IL-1β and Improvement of the Neurocognitive and Antioxidant Status of Streptozotocin-Induced Diabetic Rats

Authors: Amos Sunday Onikanni, Bashir Lawal, Babatunji Emmanuel Oyinloye, Gomaa Mostafa-Hedeab, Mohammed Alorabi, Simona Cavalu, Augustine O. Olusola, Chih-Hao Wang, Gaber El-Saber Batiha

Abstract:

The increasing global burden of diabetes mellitus has prompted the search for therapeutic alternatives that offer better activity and safety than conventional chemotherapy. Herein, we evaluated the neuroprotective and antioxidant properties of different fractions (ethyl acetate, n-butanol, and residual aqueous) of Clompanus pubescens leaves in streptozotocin (STZ)-induced diabetic rats. Our results revealed a significant elevation in the levels of blood glucose, pro-inflammatory cytokines, lipid peroxidation, and the neuronal activities of acetylcholinesterase, butyrylcholinesterase, nitric oxide, epinephrine, norepinephrine, and Na⁺/K⁺-ATPase in untreated diabetic rats. In addition, decreased levels of enzymatic and non-enzymatic antioxidants were observed. Treatment with the different fractions of C. pubescens leaves significantly reversed these biochemical alterations and improved the neurocognitive deficit in STZ-induced diabetic rats. The ethyl acetate fraction demonstrated higher activity than the other fractions and was characterized for its phytoconstituents, revealing the presence of gallic acid (713.00 ppm), catechin (0.91 ppm), ferulic acid (0.98 ppm), rutin (59.82 ppm), quercetin (3.22 ppm), and kaempferol (4.07 ppm). Our molecular docking analysis revealed that these compounds exhibited different binding affinities and potentials for targeting BChE/AChE/IL-1β/Na⁺-K⁺-ATPase. However, only kaempferol and ferulic acid exhibited drug-likeness, ADMET, and permeability properties suitable for use as neuronal drug-targeting agents. Hence, the ethyl acetate fraction of C. pubescens leaves could be considered a source of promising bioactive metabolites for the treatment and management of cognitive impairments related to type II diabetes mellitus.

Keywords: diabetes mellitus, neuroprotective, antioxidant, pro-inflammatory cytokines

Procedia PDF Downloads 109
1423 Application and Utility of the RALE Score for Assessment of Clinical Severity in COVID-19 Patients

Authors: Naridchaya Aberdour, Joanna Kao, Anne Miller, Timothy Shore, Richard Maher, Zhixin Liu

Abstract:

Background: COVID-19 has been, and continues to be, a strain on healthcare globally, with the number of patients requiring hospitalization exceeding the level of medical support available in many countries. As chest x-rays are the primary respiratory radiological investigation, the Radiographic Assessment of Lung Edema (RALE) score was used to quantify the extent of pulmonary infection on baseline imaging. The reproducibility of the RALE score and its associations with clinical outcome parameters were then evaluated to determine implications for patient management and prognosis. Methods: A retrospective study was performed including patients who tested positive for COVID-19 on nasopharyngeal swab within a single Local Health District in Sydney, Australia, with baseline x-ray imaging acquired between January and June 2020. Two independent radiologists reviewed the studies and calculated the RALE scores. Clinical outcome parameters were collected, and statistical analysis was performed to assess the reproducibility of the RALE score and possible associations with clinical outcomes. Results: A total of 78 patients met the inclusion criteria, with an age range of 4 to 91 years. RALE score concordance between the two independent radiologists was excellent (intraclass correlation coefficient = 0.93, 95% CI = 0.88-0.95, p<0.005). Binomial logistic regression identified a positive association with hospital admission (OR 1.87, 95% CI = 1.3-2.6, p<0.005), oxygen requirement (OR 1.48, 95% CI = 1.2-1.8, p<0.005), and invasive ventilation (OR 1.2, 95% CI = 1.0-1.3, p<0.005) for each 1-point increase in RALE score. Each additional year of age was negatively associated with recovery (OR 0.95, 95% CI = 0.92-1.0, p<0.01). RALE scores above three were positively associated with hospitalization (Youden index 0.61, sensitivity 0.73, specificity 0.89), and scores above six were positively associated with ICU admission (Youden index 0.67, sensitivity 0.91, specificity 0.78).
Conclusion: The RALE score can be used as a surrogate to quantify the extent of COVID-19 infection and has excellent inter-observer agreement. It could be used to prognosticate and to identify patients at high risk of deterioration. Threshold values may also be applied to predict the likelihood of hospital and ICU admission.
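The threshold analysis described above rests on the Youden index, J = sensitivity + specificity − 1, maximized over candidate cut-offs. A minimal sketch follows; the scores and outcomes are synthetic illustrations, not the study's data.

```python
# Pick the score cut-off maximizing the Youden index J = sens + spec - 1.
# A score strictly above the threshold counts as a positive prediction,
# matching the "scores above three / above six" phrasing in the abstract.

def youden_threshold(scores, labels):
    """Return (best_threshold, best_J) over all observed score values."""
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s > t and y == 1)
        fn = sum(1 for s, y in zip(scores, labels) if s <= t and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s <= t and y == 0)
        fp = sum(1 for s, y in zip(scores, labels) if s > t and y == 0)
        sens = tp / (tp + fn) if (tp + fn) else 0.0
        spec = tn / (tn + fp) if (tn + fp) else 0.0
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_t = j, t
    return best_t, best_j

# Synthetic RALE scores and hospitalization outcomes (perfectly separable):
scores = [1, 2, 3, 7, 8, 9]
admitted = [0, 0, 0, 1, 1, 1]
t, j = youden_threshold(scores, admitted)  # threshold 3, J = 1.0
```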

Keywords: chest radiography, coronavirus, COVID-19, RALE score

Procedia PDF Downloads 172
1422 Improved Functions for Runoff Coefficients and Smart Design of Ditches and Biofilters for Effective Flow Detention

Authors: Thomas Larm, Anna Wahlsten

Abstract:

An international literature study was carried out to compare commonly used methods for dimensioning transport systems and stormwater facilities for flow detention. The focus regarding the calculation of design flow and detention has been the widely used Rational method and its underlying parameters. The impact of design parameters such as return time, rain intensity, runoff coefficient, and climate factor has been studied. The parameters used in the calculations have been analyzed with respect to how they can be calculated and within what limits they can be used. Data used in different countries have been specified, e.g., recommended rainfall return times, estimated runoff times, and climate factors used for different cases and time periods. The literature study concluded that the runoff coefficient is the most uncertain parameter and the one that most affects the calculated flow and required detention volume. Proposals have been developed for new runoff coefficients, including a new method with equations for calculating runoff coefficients as functions of return time (years) and rain intensity (l/s/ha), respectively. Contrary to the recommendations of many design manuals, we suggest that the use of the Rational method need not be limited to a specific catchment size. The proposed relationships between return time or rain intensity and runoff coefficients need further investigation, including quantification of uncertainties. Examples of parameters that have not yet been considered, and that will be investigated further, are the influence on the runoff coefficients of different dimensioning rain durations and of the degree of water saturation of green areas.
The influence of climate effects and design rain on the dimensioning of stormwater facilities, specifically grassed ditches and biofilters (bioretention systems), has been studied with a focus on flow detention capacity. We have investigated how the calculated runoff coefficients, accounting for climate effects and increased return times, affect the inflow to and dimensioning of these facilities. We have developed a smart design of ditches and biofilters that achieves both high treatment and high flow detention effects, and compared these with the effects of dry and wet ponds. Previous studies of biofilters have generally focused on the treatment of pollutants; their effect on flow volume, and how their flow detention capability can be improved, has rarely been studied. For both the new type of stormwater ditch and the biofilter, performance must be simulated in a model under larger design rains and a future climate, as these conditions cannot be tested in the field. The stormwater model StormTac Web has been used on case studies. The results showed that the new smart design of ditches and biofilters had flow detention capacity similar to that of dry and wet ponds of the same facility area.
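For reference, the Rational method design flow the study builds on is Q = kc · C · i · A, with runoff coefficient C (dimensionless), rain intensity i (l/s/ha), catchment area A (ha), and a climate factor kc. A minimal sketch follows; the parameter values are illustrative, and the study's own fitted C(return time, intensity) equations are not reproduced in the abstract, so they are not shown here.

```python
def rational_method_flow(runoff_coeff, intensity_l_s_ha, area_ha,
                         climate_factor=1.0):
    """Design flow Q (l/s) by the Rational method: Q = kc * C * i * A.

    runoff_coeff: dimensionless C (0..1 for most surfaces)
    intensity_l_s_ha: design rain intensity i in l/s/ha
    area_ha: catchment area A in hectares
    climate_factor: kc, an uplift for future climate (1.0 = none)
    """
    return climate_factor * runoff_coeff * intensity_l_s_ha * area_ha

# Illustrative: a 2 ha catchment, C = 0.5, i = 100 l/s/ha, 25% climate uplift.
q = rational_method_flow(0.5, 100.0, 2.0, climate_factor=1.25)  # 125.0 l/s
```

The climate factor enters multiplicatively, which is why the abstract's concern about uncertain runoff coefficients dominates: any error in C scales the design flow, and hence the required detention volume, directly.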

Keywords: runoff coefficients, flow detention, smart design, biofilter, ditch

Procedia PDF Downloads 83
1421 Stochastic Modelling for Mixed Mode Fatigue Delamination Growth of Wind Turbine Composite Blades

Authors: Chi Zhang, Hua-Peng Chen

Abstract:

With increasingly strained resources in the world, renewable and clean energy has been considered as an alternative to traditional sources. One practical example of harnessing wind energy is the wind turbine, which has gained increasing attention in recent research. Like most offshore structures, the blades, which are the most critical components of a wind turbine, are subjected to millions of loading cycles during their service life. To operate safely in marine environments, the blades are typically made from fibre-reinforced composite materials to resist fatigue delamination and the harsh environment. The fatigue crack development of blades is uncertain because of the indeterminate mechanical properties of composites and the uncertainties of the offshore environment, such as wave loads, wind loads, and humidity. There are three main delamination failure modes for composite blades, and the most common failure type in practice involves mixed mode loading, typically a combination of opening (mode I) and shear (mode II). However, fatigue crack development under mixed mode loading cannot be predicted deterministically because of the various uncertainties in realistic practical situations. Therefore, selecting an effective stochastic model to evaluate the mixed mode behaviour of wind turbine blades is a critical issue. In previous studies, the gamma process has been considered an appropriate stochastic approach because it models deterioration as proceeding monotonically in one direction, consistent with the realistic fatigue damage process of wind turbine blades. Building on existing studies, various Paris law equations are discussed for simulating fatigue crack propagation. This paper develops a Paris model with stochastic deterioration modelling based on the gamma process for predicting fatigue crack performance over the design service life.
A numerical example of wind turbine composite materials is investigated to predict the mixed mode crack depth by the Paris law and the probability of fatigue failure by the gamma process. Probability-of-failure curves under different situations are obtained from the stochastic deterioration model for comparison. Compared with experimental results, the gamma process can take uncertainties into account for mixed mode crack propagation, and the stochastic deterioration process agrees well with the realistic crack process for composite blades. Finally, based on the results predicted by the gamma stochastic model, assessment strategies for composite blades are developed to reduce total lifecycle costs and increase resistance to fatigue crack growth.
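The two ingredients of the model above, Paris-law crack integration (da/dN = C(ΔK)^m) and a gamma-process exceedance probability, can be sketched as follows. This is a generic sketch, not the paper's calibrated model: the material constants, stress range, geometry factor, and gamma-process parameters below are illustrative placeholders.

```python
import math
import random

def paris_crack_growth(a0, cycles, dN, C, m, dsigma, Y=1.0):
    """Integrate the Paris law da/dN = C * (dK)^m in blocks of dN cycles,
    with stress intensity range dK = Y * dsigma * sqrt(pi * a)."""
    a = a0
    for _ in range(0, cycles, dN):
        dK = Y * dsigma * math.sqrt(math.pi * a)
        a += C * dK**m * dN
    return a

def gamma_process_failure_prob(shape_rate, scale, horizon, threshold,
                               n_paths=2000, seed=1):
    """Monte Carlo estimate of P(cumulative damage exceeds `threshold`
    within `horizon` unit time steps) for a stationary gamma process whose
    increments per step are Gamma(shape_rate, scale) distributed."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_paths):
        damage = 0.0
        for _ in range(horizon):
            damage += rng.gammavariate(shape_rate, scale)
            if damage >= threshold:  # monotone damage: first crossing = failure
                failures += 1
                break
    return failures / n_paths
```

Because gamma-process increments are non-negative, simulated damage paths only grow, which is the monotone, one-directional deterioration property the abstract cites as the reason for choosing this process over, e.g., a random-walk model.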

Keywords: reinforced fibre composite, wind turbine blades, fatigue delamination, mixed failure mode, stochastic process

Procedia PDF Downloads 407