Search results for: optimal input
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5051

731 Case Study of Mechanised Shea Butter Production in South-Western Nigeria Using the LCA Approach from Gate-to-Gate

Authors: Temitayo Abayomi Ewemoje, Oluwamayowa Oluwafemi Oluwaniyi

Abstract:

Agriculture and food processing are among the industrial sectors that use the largest amounts of energy; consequently, large quantities of gases from their fuel combustion technologies are released into the environment. The choice of input energy supply not only directly affects the environment but also poses a threat to human health. The study was therefore designed to assess each unit production process in order to identify hotspots, using the life cycle assessment (LCA) approach, in South-western Nigeria. Data obtained on site, such as machine power ratings, operation durations, and the material inputs and outputs of each shea butter unit process, were used to model the Life Cycle Impact Analysis in the GaBi6 (Holistic Balancing) software. Four scenarios were drawn up for the impact assessment: material sourcing from Kaiama (Scenarios 1 and 3) or Minna (Scenarios 2 and 4), combined with different heat supply sources (liquefied petroleum gas, LPG, in Scenarios 1 and 2; a 10.8 kW diesel heater in Scenarios 3 and 4). Shea butter production was modelled in GaBi6 for a functional unit of 1 kg of shea butter, and the Tool for the Reduction and Assessment of Chemical and other Environmental Impacts (TRACI) midpoint assessment was used to analyse the life cycle inventories of the four scenarios. Eight impact categories were observed in all four scenarios, of which three had the greatest environmental impacts in Scenarios 1-4 respectively: Global Warming Potential (GWP) (0.613, 0.751, 0.661, 0.799) kg CO2-Equiv., Acidification Potential (AP) (0.112, 0.132, 0.129, 0.149) kg H+ moles-Equiv., and Smog (0.044, 0.059, 0.049, 0.063) kg O3-Equiv. Transportation activities were also seen to contribute most to these impact categories, owing to the large volume of petrol combusted, which releases gases such as CO2, CH4, N2O, SO2, and NOx into the environment during transport of the purchased raw shea kernels.
The ratio of the transportation distances from Minna and from Kaiama to the production site was approximately 3.5. The unit processes with the greatest impacts across all categories, namely packaging, milling and churning in ascending order of magnitude, were identified as hotspots that may require attention. For the 1 kg shea butter functional unit, it was inferred that locating the production site at the shortest travelling distance from the raw material source and using LPG combustion for heating would reduce all the assessed environmental impact categories.
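The scenario comparison above can be sketched as a small check over the reported TRACI midpoint values (the numbers are the ones quoted in the abstract; the selection logic is an illustrative addition, not part of the study):

```python
# Reported midpoint impact values per 1 kg shea butter functional unit (TRACI).
# Scenarios 1/3 source from Kaiama, 2/4 from Minna; 1/2 use LPG heat, 3/4 diesel.
impacts = {
    "GWP (kg CO2-eq)":     [0.613, 0.751, 0.661, 0.799],
    "AP (kg H+ moles-eq)": [0.112, 0.132, 0.129, 0.149],
    "Smog (kg O3-eq)":     [0.044, 0.059, 0.049, 0.063],
}

def best_scenario(impacts):
    """Return the 1-based scenario index that is lowest in every category, if any."""
    n = len(next(iter(impacts.values())))
    for s in range(n):
        if all(vals[s] == min(vals) for vals in impacts.values()):
            return s + 1
    return None

print(best_scenario(impacts))  # -> 1 (Kaiama sourcing with LPG heating)
```

Scenario 1 dominates in all three categories, which is consistent with the abstract's conclusion that short transport distance plus LPG heating minimizes the assessed impacts.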

Keywords: GaBi6, Life cycle assessment, shea butter production, TRACI

Procedia PDF Downloads 310
730 Analysis on the Feasibility of Landsat 8 Imagery for Water Quality Parameters Assessment in an Oligotrophic Mediterranean Lake

Authors: V. Markogianni, D. Kalivas, G. Petropoulos, E. Dimitriou

Abstract:

Lake water quality monitoring in combination with the use of earth observation products constitutes a major component of many water quality monitoring programs. Landsat 8 images of Trichonis Lake (Greece) acquired on 30/10/2013 and 30/08/2014 were used in order to explore the ability of Landsat 8 to estimate water quality parameters, particularly CDOM absorption at specific wavelengths and chlorophyll-a and nutrient concentrations, in this oligotrophic freshwater body, which is characterized by negligible quantitative, temporal and spatial variability. Water samples were collected at 22 different stations in late August 2014, and the satellite image of the same date was used to statistically correlate the in-situ measurements with various combinations of Landsat 8 bands, in order to develop algorithms that best describe those relationships and accurately calculate the aforementioned water quality components. The optimal models were applied to the image of late October 2013, and the results were validated by comparison with the respective available in-situ data of 2013. Initial results indicated the limited ability of the Landsat 8 sensor to accurately estimate water quality components in an oligotrophic waterbody. As shown by the validation process, ammonium concentration proved to be the most accurately estimated component (R = 0.7), followed by chl-a concentration (R = 0.5) and CDOM absorption at 420 nm (R = 0.3). In-situ nitrate, nitrite, phosphate and total nitrogen concentrations of 2014 were below the detection limit of the instrument used, hence no statistical elaboration was conducted. On the other hand, multiple linear regression between reflectance measures and total phosphorus concentrations resulted in low and statistically insignificant correlations.
Our results concur with other studies in the international literature, indicating that estimations for eutrophic and mesotrophic lakes are more accurate than for oligotrophic ones, owing to the lack of suspended particles detectable by satellite sensors. Nevertheless, although the predictive models developed and applied to the oligotrophic Trichonis Lake are less accurate, they may still be useful indicators of its water quality deterioration.
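The core statistical step, regressing in-situ measurements on Landsat band combinations and scoring the fit by correlation, can be sketched as follows. The band ratio, coefficients and data here are hypothetical stand-ins, not the study's values; only the workflow (fit on one date, correlate predictions with observations) mirrors the abstract:

```python
import numpy as np

# Hypothetical example: regress in-situ chlorophyll-a on a Landsat 8 band ratio
# for 22 stations (the study's sample size), then score predictions by R.
rng = np.random.default_rng(0)
band_ratio = rng.uniform(0.5, 2.0, 22)                    # e.g. green/NIR reflectance
chla = 1.2 * band_ratio + 0.3 + rng.normal(0, 0.05, 22)   # synthetic in-situ values

X = np.column_stack([band_ratio, np.ones_like(band_ratio)])  # design matrix + intercept
coef, *_ = np.linalg.lstsq(X, chla, rcond=None)              # least-squares fit
pred = X @ coef
r = float(np.corrcoef(pred, chla)[0, 1])                     # validation-style correlation
```

In the study the same fitted model would then be applied to the second image date and correlated against that date's in-situ data.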

Keywords: landsat 8, oligotrophic lake, remote sensing, water quality

Procedia PDF Downloads 389
729 Building User Behavioral Models by Processing Web Logs and Clustering Mechanisms

Authors: Madhuka G. P. D. Udantha, Gihan V. Dias, Surangika Ranathunga

Abstract:

Today, websites contain very interesting applications, but there are only a few methodologies for analyzing user navigation through a website and determining whether the website is being put to correct use. Web logs are usually consulted only when a major attack or malfunction occurs, yet they contain a lot of interesting information about the users of a system. Analyzing web logs has become a challenge due to the huge log volume: finding interesting patterns is not easy given the size and distribution of the logs and the importance of minor details in each entry. Web logs thus hold very important data about users and the site that are not being put to good use. Retrieving interesting information from logs gives an idea of what users need, allows users to be grouped according to their various needs, and helps improve the site into an effective and efficient one. The model we built is able to detect attacks or malfunctioning of the system and perform anomaly detection. Logs become more complex as the volume of traffic and the size and complexity of the website grow. Unsupervised techniques are used in this fully automated solution; expert knowledge is used only in validation. In our approach, we first clean and purify the logs to bring them to a common platform with a standard format and structure. After the cleaning module, the web session builder is executed. It outputs two files: a Web Sessions file and an Indexed URLs file. The Indexed URLs file contains the list of URLs accessed and their indices, while the Web Sessions file lists the indices of each web session. Then the DBSCAN and EM algorithms are used iteratively and recursively to obtain the best clustering of the web sessions. Using homogeneity, completeness, V-measure, intra- and inter-cluster distance and the silhouette coefficient as parameters, these algorithms self-evaluate in order to choose better parameter values for subsequent runs. If a cluster is found to be too large, micro-clustering is used.
Using the Cluster Signature Module, the clusters are annotated with a unique signature called a fingerprint. In this module each cluster is fed to an Associative Rule Learning Module; if it outputs confidence and support values of 1 for an access sequence, that sequence is a potential signature for the cluster. The occurrences of the access sequence are then checked in the other clusters, and if the sequence is found to be unique to the cluster under consideration, the cluster is annotated with the signature. These signatures are used in anomaly detection, cyber-attack prevention, real-time dashboards that visualize users accessing web pages, prediction of user actions, and various other applications in finance, university websites, news and media websites, etc.
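The DBSCAN step with silhouette-based self-evaluation can be sketched as below. This is an illustrative reduction of the pipeline, not the paper's implementation: sessions are represented as fixed-length feature vectors (hypothetical), and only the eps parameter is searched:

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.metrics import silhouette_score

# Illustrative sketch: cluster session feature vectors with DBSCAN, trying
# several eps values and keeping the parameterization with the best silhouette.
rng = np.random.default_rng(1)
sessions = np.vstack([
    rng.normal(0, 0.1, (30, 4)),   # e.g. ordinary browsing sessions
    rng.normal(3, 0.1, (30, 4)),   # e.g. crawler-like sessions
])

best = None
for eps in (0.3, 0.5, 1.0):        # self-evaluation loop over parameter values
    labels = DBSCAN(eps=eps, min_samples=5).fit_predict(sessions)
    if len(set(labels) - {-1}) >= 2:          # need >= 2 real clusters to score
        score = silhouette_score(sessions, labels)
        if best is None or score > best[0]:
            best = (score, eps, labels)

score, eps, labels = best
```

The paper additionally alternates DBSCAN with EM and evaluates homogeneity, completeness and V-measure in the same self-tuning fashion.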

Keywords: anomaly detection, clustering, pattern recognition, web sessions

Procedia PDF Downloads 280
728 Development of Structural Deterioration Models for Flexible Pavement Using Traffic Speed Deflectometer Data

Authors: Sittampalam Manoharan, Gary Chai, Sanaul Chowdhury, Andrew Golding

Abstract:

The primary objective of this paper is to present a simplified approach to developing structural deterioration models for flexible pavements using traffic speed deflectometer data. Maintaining assets merely to meet functional performance is neither economical nor sustainable in the long term, and would end up requiring much larger investments from road agencies and imposing extra costs on road users. Performance models must provide both structural and functional predictive capability in order to assess maintenance needs and their time frame, so structural modelling plays a vital role in the prediction of pavement performance. Structural condition is important for predicting the remaining life and overall health of a road network and is also a major influence on the valuation of road pavement. The structural deterioration model is therefore a critical input into a pavement management system for accurately predicting pavement rehabilitation needs. The Traffic Speed Deflectometer (TSD) is a vehicle-mounted Doppler laser system capable of continuously measuring the structural bearing capacity of a pavement while moving at traffic speed. The device's high accuracy, high speed, and continuous deflection profiles are useful for network-level applications such as predicting road rehabilitation needs and remaining structural service life. The methodology adopted here utilizes time-series TSD maximum deflection (D0) data in conjunction with rutting, rutting progression, pavement age, subgrade strength and equivalent standard axle (ESA) data. Regression analyses were then undertaken to establish a correlation equation for structural deterioration as a function of rutting, pavement age, seal age and ESA. This study developed a simple structural deterioration model which will enable road agencies to incorporate available TSD structural data into pavement management systems when developing network-level pavement investment strategies.
Available funding can thereby be used effectively to minimize the whole-of-life cost of the road asset and to improve pavement performance. This study will contribute to narrowing the knowledge gap in the use of structural data in network-level investment analysis and provides a simple methodology for using structural data effectively in the investment decision-making processes of road agencies managing aging road assets.
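The regression step described above can be sketched as an ordinary least-squares fit of D0 against the explanatory variables. All coefficients, units and data here are hypothetical placeholders; only the model form (deflection as a linear function of rutting, age and ESA) follows the abstract:

```python
import numpy as np

# Hypothetical sketch: model TSD maximum deflection D0 as a linear function
# of rutting, pavement age and cumulative equivalent standard axles (ESA).
rng = np.random.default_rng(2)
n = 200
rutting = rng.uniform(2, 20, n)     # mm (assumed range)
age = rng.uniform(0, 40, n)         # years
esa = rng.uniform(0.1, 5.0, n)      # million ESA
d0 = 150 + 4.0 * rutting + 1.5 * age + 20.0 * esa + rng.normal(0, 5, n)  # microns

X = np.column_stack([np.ones(n), rutting, age, esa])     # design matrix
beta, *_ = np.linalg.lstsq(X, d0, rcond=None)            # [intercept, b_rut, b_age, b_esa]
```

In practice the fitted equation would then be applied network-wide to TSD survey data to flag segments whose predicted deterioration exceeds intervention thresholds.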

Keywords: adjusted structural number (SNP), maximum deflection (D0), equivalent standard axle (ESA), traffic speed deflectometer (TSD)

Procedia PDF Downloads 145
727 Hybrid Model: An Integration of Machine Learning with Traditional Scorecards

Authors: Golnush Masghati-Amoli, Paul Chin

Abstract:

In recent years, with rapid increases in data availability and computing power, Machine Learning (ML) techniques have been called on in a range of industries for their strong predictive capability. However, the use of Machine Learning in commercial banking has been limited by a particular challenge: numerous regulations require lenders to be able to explain their analytic models, not only to regulators but often to consumers. In other words, although Machine Learning techniques enable better prediction with a higher level of accuracy, they are adopted less frequently in commercial banking than in other industries, especially for scoring purposes. This is because Machine Learning techniques are often considered a black box and fail to explain why a certain risk score is given to a customer. In order to bridge this gap between the explainability and the performance of Machine Learning techniques, a Hybrid Model was developed at Dun and Bradstreet that blends Machine Learning algorithms with traditional approaches such as scorecards. The Hybrid Model maximizes the efficiency of traditional scorecards by merging their practical benefits, such as explainability and the ability to input domain knowledge, with the deep insights of Machine Learning techniques, which can uncover patterns scorecard approaches cannot. First, through the development of Machine Learning models, engineered features, latent variables and feature interactions that demonstrate high information value in the prediction of customer risk are identified. These features are then employed to introduce the observed non-linear relationships between the explanatory and dependent variables into traditional scorecards.
Moreover, instead of directly computing the Weight of Evidence (WoE) from good and bad data points, the Hybrid Model tries to match the score distribution generated by a Machine Learning algorithm, which yields an estimate of the WoE for each bin. This capability helps to build powerful scorecards in sparse cases that cannot be handled with traditional approaches. The proposed Hybrid Model was tested on different portfolios in which a significant gap is observed between the performance of traditional scorecards and Machine Learning models. The analysis shows that the Hybrid Model can improve the performance of traditional scorecards by introducing non-linear relationships between explanatory and target variables from Machine Learning models into traditional scorecards. It is also observed that in some scenarios the Hybrid Model can be almost as predictive as the Machine Learning techniques while remaining as transparent as traditional scorecards. It is therefore concluded that, with the Hybrid Model, Machine Learning algorithms can be used in the commercial banking industry without concern about the difficulty of explaining the models for regulatory purposes.
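For reference, the conventional WoE that the Hybrid Model replaces is computed per bin from good/bad counts as ln((good share)/(bad share)). The counts below are hypothetical; the abstract's point is that when such counts are sparse, matching an ML score distribution gives a usable WoE estimate instead:

```python
import math

# Conventional per-bin Weight of Evidence from good/bad counts
# (hypothetical counts; the Hybrid Model estimates WoE differently,
# by matching an ML model's score distribution).
bins = [("low", 400, 20), ("mid", 300, 60), ("high", 100, 120)]  # (name, goods, bads)
tot_good = sum(g for _, g, _ in bins)
tot_bad = sum(b for _, _, b in bins)

# WoE > 0: bin is good-heavy relative to the population; WoE < 0: bad-heavy.
woe = {name: math.log((g / tot_good) / (b / tot_bad)) for name, g, b in bins}
```

Note that the formula breaks down when a bin has zero goods or zero bads, which is exactly the sparse-data situation the distribution-matching approach is meant to handle.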

Keywords: machine learning algorithms, scorecard, commercial banking, consumer risk, feature engineering

Procedia PDF Downloads 129
726 Epidemiological Analysis of the Patients Supplied with Foot Orthoses in Ortho-Prosthetic Center of Kosovo

Authors: Ardiana Murtezani, Ilirijana Dallku, Teuta Osmani Vllasolli, Sabit Sllamniku

Abstract:

Background: The use of foot orthoses is indicated whenever there are alterations of the optimal biomechanical position of the foot. Orthotics are very effective and suitable for the majority of patients with pain due to overload, which can be related to biomechanical disorders. Aim: To assess the frequency of patients requiring foot orthoses, the types of orthoses, and the diseases leading to their use. Material and Methods: Our study included 128 patients with various foot pathologies, treated at the outpatient department of the Ortho-Prosthetic Center of Kosovo (OPCK) in Prishtina. A prospective-descriptive clinical method was used. The functional status of the patients was examined and the following parameters were noted: range of motion of the affected joints/lower extremities, manual muscle strength testing below the knee and in the foot of the affected extremity, perimeter measurements of the lower extremities, lower extremity length measurements, foot length, foot width and size. To complete the measurements the following instruments were used: plantogram, pedogram, meter and cork shoe lift appliances. Results: The majority of subjects in this study were male (60.2% vs. 39.8%), and the dominant age group was 0-9 years (47.7%, 61 subjects). The most frequent foot disorders were congenital diseases (60.1%), trauma cases (13.3%), consequences of rheumatologic disease (12.5%) and neurologic dysfunctions (11.7%), while infectious cases were the least frequent (1.6%). Congenital anomalies were the most frequent cases; within this group the majority suffered from pes planovalgus (37.5%), equinovarus (15.6%) and discrepancies between extremities (6.3%). There were also traumatic amputations (2.3%) and arthritis (0.8%). Among the neurologic diseases, subjects with cerebral palsy were represented with 3.1%, peroneal nerve palsy with 2.3% and hemiparesis with 1.6%.
Sequelae of infectious disease (osteomyelitis) were represented with 1.6%. Conclusion: Based on our results, we concluded that the use of foot orthoses for patients suffering from rheumatoid arthritis and nonspecific arthropathy was an effective treatment choice, leading to decreased pain and fewer deformities and improving quality of life.

Keywords: orthoses, epidemiological analysis, rheumatoid arthritis, rehabilitation

Procedia PDF Downloads 224
725 Hand Motion Tracking as a Human Computer Interaction for People with Cerebral Palsy

Authors: Ana Teixeira, Joao Orvalho

Abstract:

This paper describes experiments using Scratch games, carried out by students of the Master in Human Computer Interaction (HCI) at IPC Coimbra, to check the feasibility of employing the gestures of users with cerebral palsy as an alternative means of interacting with a computer. The main focus of this work is to study the usability of a web camera as a motion tracking device for achieving virtual human-computer interaction by individuals with CP. An approach to human-computer interaction is presented in which individuals with cerebral palsy react and interact with a Scratch game through the use of a webcam as an external interaction device. Motion tracking interaction is an emerging technology that is becoming more useful, effective and affordable. However, it raises new questions from the HCI viewpoint, for example, which environments are most suitable for interaction by users with disabilities. In our case, we put emphasis on the accessibility and usability aspects of such interaction devices to meet the special needs of people with disabilities, and specifically people with CP. Although our work has only just started, preliminary results show that, in general, computer vision interaction systems are very useful; in some cases, these systems are the only way some people can interact with a computer. The purpose of the experiments was to verify two hypotheses: 1) people with cerebral palsy can interact with a computer using their natural gestures; 2) Scratch games can be a research tool in experiments with disabled young people. A Scratch game with three levels was created to be played through the use of a webcam. This device permits the detection of certain key points of the user's body, which allows the head, arms and especially the hands to be taken as the most important aspects of recognition. Tests with five individuals of different ages and genders were carried out over three days, in periods of 30 minutes with each participant.
For a more extensive and reliable statistical analysis, the number of both participants and repetitions should be increased in further investigations. However, even at this stage of the research, it is possible to draw some conclusions. The first and most important is that simple Scratch games on the computer can be a research tool for investigating the computer interaction performed by young persons with CP using intentional gestures; measurements performed with the assistance of games are attractive for young disabled users. The second important conclusion is that they are able to play Scratch games using their gestures, so the proposed interaction method is promising for them as a human-computer interface. In the future, we plan to develop multimodal interfaces that combine various computer vision devices with other input devices, to improve the existing systems to better accommodate the special needs of individuals, and to perform experiments with a larger number of participants.
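The basic webcam motion-tracking primitive behind such a setup can be sketched with simple frame differencing. This is an illustrative reduction (no camera or computer-vision library; synthetic frames stand in for webcam input), showing the kind of gesture cue a Scratch game could consume:

```python
import numpy as np

# Minimal frame-differencing sketch: find where two grayscale frames differ
# and return the centroid of the changed region, e.g. a moving hand.
def motion_centroid(prev, curr, thresh=30):
    diff = np.abs(curr.astype(int) - prev.astype(int))  # per-pixel change
    ys, xs = np.nonzero(diff > thresh)
    if len(xs) == 0:
        return None                                     # no motion detected
    return (int(ys.mean()), int(xs.mean()))             # (row, col) centroid

prev = np.zeros((120, 160), dtype=np.uint8)             # simulated empty frame
curr = prev.copy()
curr[40:60, 70:90] = 255                                # "hand" enters this region
print(motion_centroid(prev, curr))                      # -> (49, 79)
```

A game loop would map the centroid (or its trajectory) to game events, e.g. moving a sprite toward the detected hand position.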

Keywords: motion tracking, cerebral palsy, rehabilitation, HCI

Procedia PDF Downloads 230
724 A Grid Synchronization Method Based On Adaptive Notch Filter for SPV System with Modified MPPT

Authors: Priyanka Chaudhary, M. Rizwan

Abstract:

This paper presents a grid synchronization technique based on an adaptive notch filter for an SPV (Solar Photovoltaic) system, along with MPPT (Maximum Power Point Tracking) techniques. An efficient grid synchronization technique offers proficient detection of the components of the grid signal, such as phase and frequency, and also acts as a barrier against harmonics and other disturbances in the grid signal. A reference phase signal synchronized with the grid voltage is provided by the grid synchronization technique to bring the system into compliance with grid codes and power quality standards. Hence, the grid synchronization unit plays an important role in grid-connected SPV systems. As the output of the PV array fluctuates with meteorological parameters such as irradiance, temperature and wind, MPPT control is required to track the maximum power point of the PV array in order to maintain a constant DC voltage at the VSC (Voltage Source Converter) input. In this work, a variable-step-size P&O (Perturb and Observe) MPPT technique with a DC/DC boost converter is used in the first stage of the system. This algorithm divides the dPpv/dVpv curve of the PV panel into three separate zones, i.e. zone 0, zone 1 and zone 2. A fine tracking step size is used in zone 0, while zones 1 and 2 require a large step size in order to obtain a high tracking speed. Further, an adaptive notch filter based control technique is proposed for the VSC in the PV generation system. The adaptive notch filter (ANF) approach is used to synchronize the interfaced PV system with the grid, maintaining the amplitude, phase and frequency parameters and improving power quality. This technique offers compensation of harmonic currents and reactive power with both linear and nonlinear loads. To maintain a constant DC link voltage, a PI controller is also implemented and presented in this paper. The complete system has been designed, developed and simulated using the SimPowerSystems and Simulink toolboxes of MATLAB.
The performance of the three-phase grid-connected solar photovoltaic system has been analysed on the basis of various parameters: PV output power, PV voltage, PV current, DC link voltage, PCC (Point of Common Coupling) voltage, grid voltage, grid current, voltage source converter current, power supplied by the voltage source converter, etc. The results obtained from the proposed system are found to be satisfactory.
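The variable-step P&O idea, with a fine step near the MPP and coarser steps farther away, can be sketched on a toy P-V curve. The zone thresholds on |dP/dV| and the curve itself are assumptions for illustration, not the paper's values:

```python
# Sketch of variable-step Perturb & Observe MPPT: select a small voltage step
# when |dP/dV| is small (zone 0, near the MPP) and larger steps in zones 1/2.
def pv_power(v):
    return max(0.0, -0.5 * (v - 17.0) ** 2 + 60.0)   # toy P-V curve, MPP at 17 V

def perturb_observe(v=10.0, iters=60):
    p_prev, step, direction = pv_power(v), 0.5, 1.0
    for _ in range(iters):
        v += direction * step                         # perturb the operating voltage
        p = pv_power(v)
        dp_dv = (p - p_prev) / (direction * step)     # observed slope of P-V curve
        # zone selection (assumed thresholds): zone 0 fine, zones 1/2 coarse
        step = 0.05 if abs(dp_dv) < 1.0 else (0.5 if abs(dp_dv) < 5.0 else 1.0)
        if p < p_prev:
            direction = -direction                    # power fell: reverse direction
        p_prev = p
    return v

v_mpp = perturb_observe()
```

The variable step gives fast convergence from far away while keeping the steady-state oscillation around the MPP small, which is the stated motivation for the zoned algorithm.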

Keywords: solar photovoltaic systems, MPPT, voltage source converter, grid synchronization technique

Procedia PDF Downloads 587
723 Automatic Differentiation of Ultrasonic Images of Cystic and Solid Breast Lesions

Authors: Dmitry V. Pasynkov, Ivan A. Egoshin, Alexey A. Kolchev, Ivan V. Kliouchkin

Abstract:

In most cases, typical cysts are easily recognized at ultrasonography: the specificity of this method for typical cysts reaches 98%, and it is usually considered the gold standard for typical cyst diagnosis. However, all of the following features are necessary to conclude a typical cyst: a clear margin, the absence of internal echoes, and dorsal acoustic enhancement. At the same time, not every breast cyst is typical. This is especially characteristic of protein-containing cysts, which may have significant internal echoes. On the other hand, some solid lesions (predominantly malignant) may have a cystic appearance and may be falsely accepted as cysts. We therefore tried to develop an automatic method for differentiating cystic and solid breast lesions. Materials and methods. The input data were digital ultrasonography images with 256 gradations of gray (Medison SA8000SE, Siemens X150, Esaote MyLab C). Identification of the lesion on these images was performed in two steps. In the first, the region of interest (the contour of the lesion) was searched for and selected. Selection of this region is carried out using a sigmoid filter whose threshold is calculated according to the empirical distribution function of image brightness and, if necessary, corrected according to the average brightness of the image points with the highest brightness gradient. In the second step, the selected region was assigned to one of the lesion groups by the statistical characteristics of its brightness distribution. The following characteristics were used: entropy, coefficients of linear and polynomial regression, quantiles of different orders, average brightness gradient, etc. To determine the decisive criterion for belonging to one of the lesion groups (cystic or solid), a training set of these brightness distribution characteristics was collected separately for benign and malignant lesions.
To test our approach we used a set of 217 ultrasonic images of 107 cystic lesions (including 53 atypical ones, difficult to differentiate by the naked eye) and 110 solid lesions. All lesions were cytologically and/or histologically confirmed. Visual identification was performed by a trained specialist in breast ultrasonography. Results. Our system correctly distinguished all 107 (100%) typical cysts, 107 of 110 (97.3%) solid lesions and 50 of 53 (94.3%) atypical cysts. By contrast, with the naked eye it was possible to correctly identify all 107 (100%) typical cysts, but only 96 of 110 (87.3%) solid lesions and 32 of 53 (60.4%) atypical cysts. Conclusion. The automatic approach significantly surpasses visual assessment by a trained specialist. The difference is especially large for atypical cysts and hypoechoic solid lesions with a clear margin. These data may have clinical significance.
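The two steps, sigmoid-filter region selection with a brightness-distribution threshold, followed by statistical characterization of the region, can be sketched as below. The sigmoid steepness, quantile and test image are assumptions; the paper also corrects the threshold via brightness gradients and uses many more features (regression coefficients, quantiles, etc.):

```python
import numpy as np

# Step 1 (sketch): sigmoid filter with a threshold taken from the empirical
# brightness distribution; pixels mapped above 0.5 form the candidate region.
def sigmoid_select(img, beta=0.1, quantile=0.8):
    t = np.quantile(img, quantile)                      # threshold from empirical CDF
    s = 1.0 / (1.0 + np.exp(-beta * (img.astype(float) - t)))
    return s > 0.5                                      # boolean lesion-candidate mask

# Step 2 (sketch): one of the brightness-distribution statistics, entropy.
def region_entropy(img, mask, bins=32):
    hist, _ = np.histogram(img[mask], bins=bins, range=(0, 256))
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

img = np.zeros((64, 64), dtype=np.uint8)
img[20:40, 20:40] = 200                                 # bright "lesion" on dark field
mask = sigmoid_select(img)
```

A classifier trained on such features (entropy, quantiles, gradients) would then separate cystic from solid lesions, cysts typically showing a narrower internal brightness distribution.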

Keywords: breast cyst, breast solid lesion, differentiation, ultrasonography

Procedia PDF Downloads 263
722 A Study on the Relationship between Building Façade and Solar Energy Utilization Potential in Urban Residential Areas in West China

Authors: T. Wen, Y. Liu, J. Wang, W. Zheng, T. Shao

Abstract:

With the increasing density of urban populations, the solar energy potential of building facades in high-density residential areas becomes a question that needs to be addressed. This paper studies how the solar energy utilization potential of building facades at different locations within a residential area changes with building layout and orientation in Xining, a typical city in west China that possesses a large solar radiation resource. The solar energy potential of three typical residential-area building layouts, namely parallel determinant, gable misalignment and transverse misalignment, is discussed in detail. First, through data collection and statistics on new residential areas in Xining, the most representative building parameters are extracted, including building layout, building height, number of storeys and building shape. Secondly, based on the extracted parameters, a general model is established and analyzed with Rhinoceros 6.0 and its plug-in Grasshopper. Finally, the results of the various simulations and data analyses are presented in a visualized way. The results show great differences in the solar energy potential of building facades at different locations within residential areas under the three typical layouts. Generally speaking, the solar energy potential of the west peripheral location is the largest, followed by the east peripheral location, while the middle location has the smallest. For the same deflection angle, a west deflection yields greater solar energy potential than an east deflection. In addition, the optimal range of building azimuth under these three typical layouts is obtained: within this range, the solar energy potential of the residential area remains high, while beyond it the potential drops sharply.
Finally, it is found that the solar energy potential reaches its maximum not due south, but at a deflection of 5° or 15° south by west. The results of this study can provide a decision-analysis basis for residential design in Xining to improve solar energy utilization potential, and a reference for the solar energy utilization design of urban residential buildings in other similar areas.
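The geometric core of comparing facade orientations can be sketched with a cosine-of-incidence model. This is purely illustrative (a flat sun path, no shading, no diffuse radiation), not the Grasshopper simulation, which accounts for inter-building shading and measured radiation data:

```python
import math

# Geometric sketch: direct gain on a vertical facade is proportional to
# cos(sun_azimuth - facade_azimuth) while the sun is in front of the facade.
def facade_gain(facade_az_deg, sun_azimuths_deg):
    gain = 0.0
    for sun_az in sun_azimuths_deg:
        c = math.cos(math.radians(sun_az - facade_az_deg))
        gain += max(0.0, c)          # facade receives nothing from behind
    return gain

# Sun swinging from east (90°) through south (180°) to west (270°) in 15° steps.
sun_path = range(90, 271, 15)
south = facade_gain(180, sun_path)   # south-facing facade
east = facade_gain(90, sun_path)     # east-facing facade
```

Under this symmetric sun path south and any equal east/west deflection tie; the 5°-15° south-by-west optimum reported above emerges only once asymmetries such as shading and afternoon radiation conditions are included, which is what the full simulation captures.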

Keywords: building facade, solar energy potential, solar radiation, urban residential area, visualization, Xining city

Procedia PDF Downloads 172
721 Miniature Fast Steering Mirrors for Space Optical Communication on NanoSats and CubeSats

Authors: Sylvain Chardon, Timotéo Payre, Hugo Grardel, Yann Quentel, Mathieu Thomachot, Gérald Aigouy, Frank Claeyssen

Abstract:

With the increasing digitalization of society, access to data has become vital and strategic for individuals and nations. In this context, the number of satellite constellation projects is growing drastically worldwide and constitutes a next-generation challenge for the New Space industry. So far, existing satellite constellations have used radio frequencies (RF) for satellite-to-ground communications, inter-satellite communications, and feeder link communication. However, RF has several limitations, such as limited bandwidth and a low protection level. To address these limitations, space optical communication will be the new trend, enabling both very high-speed and secured, encrypted communication. Fast Steering Mirrors (FSM) are key components used in optical communication as well as space imagery, serving a large range of functions such as Point Ahead Mechanisms (PAM), raster scanning, Beam Steering Mirrors (BSM), Fine Pointing Mechanisms (FPM) and Line of Sight (LOS) stabilization. The main challenge of space FSM development for optical communication is to propose both a technology and a supply chain suited to the high-quantity New Space approach, which serves secured connectivity for high-speed internet, Earth observation and monitoring, and mobility applications. CTEC proposes a mini-FSM technology offering a stroke of +/-6 mrad and a resonant frequency of 1700 Hz, with a mass of 50 g. This FSM mechanism is a good candidate for giant constellations and all applications on board NanoSats and CubeSats, featuring a very high level of miniaturization and optimized for the cost efficiency of New Space high-quantity production. The use of piezo actuators offers a high resonance frequency for optimal control, with almost zero power consumption in step-and-stay pointing, and very high reliability figures (> 0.995) demonstrated over years of recurrent manufacturing for optronics applications at CTEC.

Keywords: fast steering mirror, feeder link, line of sight stabilization, optical communication, pointing ahead mechanism, raster scan

Procedia PDF Downloads 72
720 Playwriting in a German Language Class: How Creativity in a Language Lesson Supports Learning and the Acquisition of Political Agency

Authors: Ioannis Souris

Abstract:

In this paper, we would like to present how we taught German through playwriting and analyze the usefulness of this method for teaching languages and cultivating a sense of political agency in students and teachers alike. Last academic year, we worked at the German Saturday School in Greenwich, London. This school offers Saturday German lessons to children whose parents are German, living in London. The lessons are two hours long, and the children’s level of German varies according to how often or how much German is spoken at home or how often the families visit Germany (as well as other factors which will be discussed in more detail in the paper). The directors of the school provide teachers with learning material and course books, but they strongly encourage individual input on lesson structure and methods of teaching German. The class we taught consisted of six eight-to-nine-year-olds. Midway into the academic year, we ran out of teaching material, and we, therefore, decided to write a play. In the paper, we would like to explore the process we followed in creating or writing this play and how this encouraged the children to collaborate and exercise their skills in writing, storytelling, speaking, and opinion-sharing. We want to examine the impact this project had on the children who wrote and performed the play, the wider community of the Saturday school, and the development of our language teaching practice. We found, for instance, that some students, who were quiet or shy, became very open and outspoken in the process of writing and performing the play. They took the initiative and led the process, putting us, their teachers, in the role of simple observers or facilitators. When we showed the play in front of the school, the other children and teachers, as audience members, also became part of the process as they commented on the plot, language, and characters and gave feedback on further development. 
In the paper, we will discuss how this teaching project fits into recent developments in the research of creativity and the teaching of languages and how engagement with creative approaches to teaching has the potential to question and subvert traditional notions of ‘lesson’, ‘teacher’, and ‘student’. From the moment a questioning of norms takes place, we inadvertently raise questions about politics, agency, and resistance. We will conclude the paper with a definition of what we mean by ‘political agency’ within the context of our teaching project and education, in general, and why inspiring creativity and imagination within teaching can be considered a political act. Finally, our aim in this paper will be to propose the possibility of analyzing teaching languages through creativity and political agency theories.

Keywords: innovation in language teaching and learning, language acquisition and learning, language curriculum development, language education

Procedia PDF Downloads 78
719 Optimization of a High-Growth Investment Portfolio for the South African Market Using Predictive Analytics

Authors: Mia Françoise

Abstract:

This report aims to develop a strategy that helps short-term investors benefit from the current economic climate in South Africa by utilizing technical analysis techniques and predictive analytics. As part of this research, value investing and technical analysis principles are combined to maximize returns for South African investors while managing volatility. As an emerging market, South Africa offers many opportunities for high growth in sectors where developed markets cannot grow at the same rate. Investing in South African companies with significant growth potential can be extremely rewarding. Although the risk involved is more significant in countries with less developed markets and infrastructure, there is also more room for growth in these countries. According to recent research, the offshore market is expected to outperform the local market over the long term; however, short-term investments in the local market will likely be more profitable, as the Johannesburg Stock Exchange is predicted to outperform the S&P500 over the short term. Instabilities in the economy contribute to increased market volatility, which can benefit investors if appropriately utilized. Price prediction and portfolio optimization comprise the two primary components of this methodology. As part of this process, statistics and other predictive modeling techniques will be used to predict the future performance of stocks listed on the Johannesburg Stock Exchange. Following predictive data analysis, Modern Portfolio Theory, based on Markowitz's mean-variance framework, will be applied to optimize the allocation of assets within an investment portfolio. By combining different assets within an investment portfolio, this optimization method produces a portfolio with an optimal trade-off between expected risk and expected return.
By combining price prediction and portfolio optimization, this methodology aims to provide short-term investors with a stock portfolio that offers the best risk-to-return profile among stocks listed on the JSE.
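As an illustration of the mean-variance step described in the abstract, the sketch below computes the closed-form minimum-variance weight for a two-asset portfolio. The return, volatility, and correlation figures are hypothetical placeholders, not data from the study.

```python
# Minimal two-asset mean-variance sketch (hypothetical annualised figures,
# not taken from the study).
mu = [0.14, 0.08]       # expected returns of asset A and asset B
sigma = [0.25, 0.12]    # volatilities (standard deviations)
rho = 0.3               # assumed correlation between the two assets

cov = rho * sigma[0] * sigma[1]   # covariance between A and B

def portfolio(w):
    """Expected return and variance for weight w in asset A, (1 - w) in B."""
    r = w * mu[0] + (1 - w) * mu[1]
    var = (w * sigma[0]) ** 2 + ((1 - w) * sigma[1]) ** 2 + 2 * w * (1 - w) * cov
    return r, var

# Closed-form minimum-variance weight for the two-asset case.
w_min = (sigma[1] ** 2 - cov) / (sigma[0] ** 2 + sigma[1] ** 2 - 2 * cov)
```

With more than two assets, the same idea generalises to a quadratic programme over the covariance matrix, which is what a Markowitz-style optimiser solves.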

Keywords: financial stocks, optimized asset allocation, prediction modelling, South Africa

Procedia PDF Downloads 88
718 Crossing of the Intestinal Barrier Thanks to Targeted Biologics: Nanofitins

Authors: Solene Masloh, Anne Chevrel, Maxime Culot, Leonardo Scapozza, Magali Zeisser-Labouebe

Abstract:

The limited stability of clinically proven therapeutic antibodies restricts their administration to the parenteral route. However, oral administration remains the best alternative, as it is the most convenient and least invasive one. Obtaining a targeted treatment based on biologics that can be orally administered would therefore be an ideal situation to improve patient adherence and compliance. Nevertheless, the delivery of macromolecules through the intestine remains challenging because of their sensitivity to the harsh conditions of the gastrointestinal tract and their low permeability across the intestinal mucosa. To address this challenge, this project aims to demonstrate that targeting receptor-mediated endocytosis followed by transcytosis could maximize the intestinal uptake and transport of large molecules, such as Nanofitins. These 7 kDa affinity proteins, with binding properties similar to antibodies, have already demonstrated retained stability in the digestive tract and local efficiency. However, their size does not allow passive diffusion through the intestinal barrier. Nanofitins were generated with a controlled affinity for membrane receptors involved in transcytosis, the mechanism used naturally in humans for the transport of large molecules. Candidate proteins were generated by ribosome display and selected based on affinity for the targeted receptor and other characteristics. Their uptake and transport ex vivo across viable porcine intestines were investigated using an Ussing chamber system. In this paper, we will report the results achieved while addressing the different challenges linked to this study. To validate the ex vivo model, we first confirmed that the receptors targeted in humans are also present in the porcine intestine. Then, after identifying an optimal method for detecting Nanofitins, transport experiments were performed on porcine intestines, with tissue viability monitored throughout the experiment.
The results, showing that the physiological process of transcytosis is capable of being triggered by the binding of Nanofitins on their target, will be reported here. In conclusion, the results show that Nanofitins can be transported across the intestinal barrier by triggering the receptor-mediated transcytosis and that the ex vivo model is an interesting technique to assess biologics absorption through the intestine.
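The abstract does not give its analysis formula; a standard readout of Ussing-chamber transport experiments is the apparent permeability coefficient, sketched below under that assumption (the example values are illustrative, not from the study).

```python
def apparent_permeability(dq_dt, area_cm2, c0):
    """Apparent permeability Papp = (dQ/dt) / (A * C0), in cm/s.

    dq_dt    : steady-state appearance rate in the receiver chamber (amount/s)
    area_cm2 : exposed tissue area of the Ussing chamber (cm^2)
    c0       : initial donor-side concentration (amount/cm^3)
    """
    return dq_dt / (area_cm2 * c0)

# Illustrative values only: 1e-6 units/s across 1 cm^2 from a 1e-3 units/cm^3 donor.
papp = apparent_permeability(1e-6, 1.0, 1e-3)
```

Comparing Papp with and without the targeting Nanofitin is one way to quantify how much receptor-mediated transcytosis boosts transport.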

Keywords: ex-vivo, Nanofitins, oral administration, transcytosis

Procedia PDF Downloads 174
717 CFD Simulation for Urban Environment for Evaluation of a Wind Energy Potential of a Building or a New Urban Planning

Authors: David Serero, Loic Couton, Jean-Denis Parisse, Robert Leroy

Abstract:

This paper presents a method for analyzing airflow at the periphery of several typologies of architectural volumes. To understand the influence of the complex urban environment on airflows in the city, we compared three sites at different architectural scales. The research establishes a method to identify the optimal location for the installation of wind turbines on the edges of a building and to improve the energy extracted through the precise localization of an accelerating wing called an “aerofoil”. The objective is to define principles for the installation of wind turbines and for the natural ventilation design of buildings. Instead of relying on theoretical wind analysis alone, we combined numerical aeraulic simulations in STAR-CCM+ software with wind data recorded over long periods of time (greater than one year). While computational fluid dynamics (CFD) simulations of airflow around buildings are now common, we calibrated a virtual wind tunnel against wind data from in situ anemometers to establish a localized cartography of urban winds. We can then develop a complete volumetric model of the behavior of the wind over a roof area, or an entire urban block. With this method, we can characterize: the different types of wind in urban areas and the minimum and maximum wind spectrum; the type of harvesting device and its fixing to the roof of a building; the altimetry of the device in relation to the levels of the roofs; and the potential nuisances around it. This study is carried out by collecting a geolocated data flow and connecting this information with the technical specifications of wind turbines, their energy performance, and their cut-in speed. Thanks to this method, we can define the characteristics of wind turbines that maximize their performance in urban sites and in a turbulent airflow regime. We also study the installation of a wind accelerator associated with buildings.
The integrated aerofoils are designed to control the speed of the air, orient it onto the wind turbine, accelerate it, and, thanks to their profile, conceal the device on the roof of the building.
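As background to the wind-potential analysis above, the available power in the wind scales with the cube of wind speed, which is why even a modest acceleration from an aerofoil matters. The relationship below is the standard wind power density formula, not a result from the paper.

```python
def wind_power_density(v_ms, rho=1.225):
    """Available wind power per unit swept area, P/A = 0.5 * rho * v^3 (W/m^2).
    rho defaults to sea-level air density in kg/m^3."""
    return 0.5 * rho * v_ms ** 3

BETZ_LIMIT = 16 / 27   # maximum fraction of that power any turbine can extract

# A 20% speed-up from an aerofoil raises available power by 1.2^3, about 73%.
gain = wind_power_density(12.0) / wind_power_density(10.0)
```

This cubic dependence is also why precise localization of the turbine in accelerated flow zones pays off far more than a marginally larger rotor.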

Keywords: wind energy harvesting, wind turbine selection, urban wind potential analysis, CFD simulation for architectural design

Procedia PDF Downloads 143
716 Assessing the Quality of Maternity Care in Sub-Saharan Africa Using the Donabedian Quality of Care Framework: A Systematic Scoping Review

Authors: Bernice Boafoaa Gyapong, Anne Jones, Sam Bassett, Janet Anderson

Abstract:

Background: Maternal mortality and morbidity are global concerns, especially in sub-Saharan Africa (SSA). Most maternal mortalities occur at the time of birth. Quality intrapartum care is essential for improving maternal and newborn health outcomes. This scoping review aimed to assess and describe the quality of care during childbirth in SSA to provide an overview of the regional trend in the quality of intrapartum care and the challenges to quality care provision, and to identify research gaps. Methods: A scoping review based on Arksey and O’Malley’s scoping review framework was conducted. Medline, CINAHL, PsycINFO, and maternal-infant databases were searched to identify the relevant studies for this review. A narrative summary was presented using themes based on the Donabedian structure, process, and outcome quality of care model. Results: A total of five hundred and forty-seven (547) publications were identified. Fifty-six (56) studies conducted in twenty (20) countries were included in the review. Thirty-four (34) were quantitative, sixteen (16) were qualitative, and six (6) were mixed methods. Most of the studies related to the process component of quality of care. The provision of emergency obstetric care services, infrastructure, and the availability of essential staff and equipment for perinatal care were inadequate in many facilities, particularly rural and peripheral health facilities. Many women experienced disrespectful care during childbirth. Routine care during labour and delivery was observed to be sub-optimal, yet some women reported high satisfaction with care. The use of health facilities for delivery was lower in health centres compared to hospitals. Conclusion: There are variations in the quality of maternity care provided in SSA. Intrapartum care quality is generally deficient in SSA, particularly in peripheral health facilities, health centres, and community clinics.
Many of the quality-of-care issues identified are related to the structure component. Stakeholders must develop interventions that comprehensively address these interrelated issues to improve maternal healthcare quality, especially in primary healthcare facilities.

Keywords: quality of care, maternity health, Sub-Saharan Africa, intrapartum

Procedia PDF Downloads 59
715 Performance and Specific Emissions of an SI Engine Using Anhydrous Ethanol–Gasoline Blends in the City of Bogota

Authors: Alexander García Mariaca, Rodrigo Morillo Castaño, Juan Rolón Ríos

Abstract:

The government of Colombia has promoted the use of biofuels over the last 20 years through laws and resolutions that regulate their use, with the objective of improving atmospheric air quality and promoting the Colombian agricultural industry. However, despite the use of blends of biofuels with fossil fuels, the air quality in large cities has not improved. This deterioration is mainly caused by mobile sources powered by spark-ignition internal combustion engines (SI-ICE) operating with a blend, by volume, of 90% gasoline and 10% ethanol, called E10, which in the case of Bogota represents 84% of the fleet. Another problem is that Colombia has big cities located above 2200 masl, and there are no accurate studies on the impact that the E10 blend could have on the emissions and performance of SI-ICE. This study aims to establish the optimal blend of gasoline and ethanol with which an SI engine operates most efficiently in urban centres located at 2600 masl. The tests were conducted on a four-stroke, single-cylinder, naturally aspirated, carburetted SI engine using blends of gasoline and anhydrous ethanol in different ratios: E10, E15, E20, E40, E60, E85, and E100. These tests were conducted in the city of Bogota, which is located at 2600 masl, with the engine operating at 3600 rpm and at 25, 50, 75, and 100% load. The results show that performance variables such as engine brake torque, brake power, and brake thermal efficiency decrease, while brake-specific fuel consumption increases, as the percentage of ethanol in the blend rises. On the other hand, the specific emissions of CO2 and NOx increase, while the specific emissions of CO and HC decrease, compared with those produced by gasoline.
From the tests, it is concluded that the SI-ICE worked most efficiently with the E40 blend, which yielded an increase in brake power of 8.81% and a reduction in brake-specific fuel consumption of 2.5%, coupled with reductions in the specific emissions of CO2, HC, and CO of 9.72, 52.88, and 76.66%, respectively, compared with the results obtained with the E10 blend. This behaviour occurs because the E40 blend provides the appropriate amount of oxygen for the combustion process, which leads to better utilization of the available energy, thus generating a power output comparable to the E10 blend while producing lower CO and HC emissions than the other test blends. Nevertheless, NOx emissions increased by 106.25%.
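The explanation above attributes the E40 result to the fuel-bound oxygen that ethanol brings to the blend. A rough sketch of that quantity is below; the densities and the ~34.7% oxygen mass fraction of ethanol are typical handbook values, and the calculation is ours, not the authors'.

```python
def blend_oxygen_mass_fraction(ethanol_vol_frac, rho_ethanol=0.789, rho_gasoline=0.745):
    """Mass fraction of fuel-bound oxygen in an ethanol-gasoline blend.

    Ethanol (C2H5OH, 46.07 g/mol) is ~34.7% oxygen by mass; gasoline is
    assumed to carry no oxygen. Densities are in g/mL at ~20 degC
    (handbook values, not measured in the study).
    """
    m_eth = ethanol_vol_frac * rho_ethanol
    m_gas = (1.0 - ethanol_vol_frac) * rho_gasoline
    return (m_eth * 0.347) / (m_eth + m_gas)

# E10 vs E40: fuel-bound oxygen roughly quadruples.
o_e10 = blend_oxygen_mass_fraction(0.10)
o_e40 = blend_oxygen_mass_fraction(0.40)
```

The extra oxygen leans the mixture toward more complete combustion, consistent with the lower CO and HC but higher NOx reported for E40.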

Keywords: emissions, ethanol, gasoline, engine, performance

Procedia PDF Downloads 320
714 Ethanol Precipitation and Characterization of L-Asparaginase from Aspergillus oryzae

Authors: L. L. Tundisi, A. Pessoa Jr., E. B. Tambourgi, E. Silveira, P. G. Mazzola

Abstract:

L-asparaginase (L-ASNase) is the gold-standard treatment for acute lymphoblastic leukemia, which mainly affects pediatric patients; treatment increases survival from 20% to 90%. The characterization of other L-asparaginases, apart from the most widely used enzymes from Escherichia coli and Erwinia chrysanthemi, has been reported, but the choice of the most appropriate one is still under debate. This choice should be based on pharmacokinetics, immune hypersensitivity, doses, prices, and pharmacodynamics. The main factors influencing the antileukemic activity of ASNase are enzymatic activity, Km, glutaminase activity, clearance of the enzyme, and the development of resistance. However, most commercialized enzymes present an intrinsic glutaminase activity, which is responsible for some side effects. In this study, glutaminase-free asparaginase produced from Aspergillus oryzae was precipitated at different percentages of ethanol (0–80%), until an optimum ethanol concentration of 60% (w/w) was found. Subsequently, precipitation of crude L-ASNase was performed in a single step, using 60% (w/w) ethanol, under constant agitation and temperature. It presented an activity of 135.45 U/mg, and after gel filtration chromatography on a Sephadex G- column, the enzymatic activity was 322.02 U/mg. The apparent molecular mass of the purified L-ASNase fraction was estimated by 10% SDS-PAGE. Proteins were stained with Coomassie Brilliant Blue R-250. The molar mass range was from 10 kDa to 250 kDa. L-ASNase from Aspergillus oryzae was characterized with a view to possible therapeutic use. Four different buffers (phosphate-citrate buffer, pH 2.6 to 5.8; phosphate buffer, pH 5.8 to 7.4; Tris-HCl, pH 7.4 to 9.0; and carbonate buffer, pH 9.8 to 10.6) were used to determine the optimum pH for L-ASNase activity. The optimum temperature for enzyme activity was measured at optimal pH conditions (Tris-HCl and phosphate buffer, pH 7.4) at temperatures ranging from 5 to 55°C.
All activities were calculated by quantifying the free ammonia using the Nessler reagent. The kinetic parameters, e.g., the Michaelis-Menten constant (Km), maximum velocity (Vmax), and Hill coefficient (n), were determined by incubating the enzyme in different concentrations of the substrate under optimum conditions of pH and temperature and fitting the data to Hill's equation. This glutaminase-free asparaginase showed a low Km (3.39 mM and 3.81 mM) and an enzymatic activity of 135.45 U/mg after precipitation with ethanol; after gel filtration chromatography, this rose to 322.02 U/mg. Optimum activity was found between pH 5.8 and 9.0, with the best results in phosphate buffer at pH 7.4 and Tris-HCl at pH 7.4, and the enzyme was active from 5°C to 55°C. These results indicate that L-ASNase from A. oryzae has potential for human use.
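The kinetic fit described above can be sketched in pure Python: the Hill rate law evaluated at the Km and Vmax reported after ethanol precipitation (3.39 mM, 135.45 U/mg), with a simple grid search standing in for the actual nonlinear regression the authors used. The substrate concentrations are invented for the example.

```python
def hill_rate(s, vmax, km, n=1.0):
    """Hill equation: v = Vmax * S^n / (Km^n + S^n); n = 1 reduces to
    Michaelis-Menten kinetics."""
    return vmax * s ** n / (km ** n + s ** n)

# Synthetic velocities generated at the reported constants.
VMAX, KM = 135.45, 3.39   # U/mg, mM
data = [(s, hill_rate(s, VMAX, KM)) for s in (0.5, 1, 2, 4, 8, 16, 32)]

def fit_km(points, vmax, lo=0.1, hi=20.0, steps=2000):
    """Recover Km by grid search over squared error, holding Vmax fixed
    (a stand-in for proper nonlinear least-squares fitting)."""
    best_km, best_err = lo, float("inf")
    for i in range(steps + 1):
        k = lo + (hi - lo) * i / steps
        err = sum((v - hill_rate(s, vmax, k)) ** 2 for s, v in points)
        if err < best_err:
            best_km, best_err = k, err
    return best_km
```

At S = Km the rate is exactly Vmax/2 for any Hill coefficient, which is the usual sanity check on such a fit.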

Keywords: biopharmaceuticals, bioprocessing, bioproducts, biotechnology, enzyme activity, ethanol precipitation

Procedia PDF Downloads 285
713 Simulation of Technological, Energy and GHG Comparison between a Conventional Diesel Bus and E-bus: Feasibility to Promote E-bus Change in High Lands Cities

Authors: Riofrio Jonathan, Fernandez Guillermo

Abstract:

Renewable energy represented around 80% of the power-generation matrix in Ecuador during 2020, so current public policy is focused on taking advantage of this high share of renewable sources to carry out several electrification projects. These projects are part of the portfolio sent to the United Nations Framework Convention on Climate Change (UNFCCC) as a commitment to reduce greenhouse gas (GHG) emissions under the established nationally determined contribution (NDC). In this sense, the Ecuadorian Organic Energy Efficiency Law (LOEE), published in 2019, promotes e-mobility as one of its main milestones. In fact, it states that new vehicles for urban and interurban use must be e-buses from 2025 onward. As a result, and for a successful implementation of this technological change in the national context, it is important to carry out surveys focused on technical and geographical factors so as to maintain the quality of service in both the electricity and transport sectors. Therefore, this research presents a technological and energy comparison between a conventional diesel bus and its equivalent e-bus. Both vehicles fulfill all the technical requirements to operate in the study-case city, Ambato, in the province of Tungurahua, Ecuador. In addition, the analysis includes the development of a model for the energy estimation of both technologies as applied in a high-altitude city such as Ambato. The altimetry of the most important bus routes in the city varies from 2557 to 3200 m.a.s.l. at the lowest and highest points, respectively. These operating conditions lend a degree of novelty to this paper. The technical specifications of the diesel buses follow the common features of buses registered in Ambato, while the specifications for the e-buses come from the most common units introduced in Latin America, because there is not yet enough evidence from similar cities.
The results will be useful input data for decision-makers, since the electricity demand forecast, energy savings, costs, and greenhouse gas emissions are computed. Indeed, GHG accounting is important because it feeds the transparency framework that is part of the Paris Agreement. Finally, the presented results correspond to stage I of the project “Analysis and Prospective of Electromobility in Ecuador and Energy Mix towards 2030”, supported by Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ).
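The abstract does not publish its energy model; the sketch below is a generic longitudinal-dynamics estimate of per-segment e-bus traction energy, included to show why route altimetry (2557-3200 m.a.s.l.) matters: grade enters the gravity term, and altitude thins the air in the drag term. All parameter values are illustrative assumptions, not figures from the study.

```python
import math

def segment_energy_kwh(dist_m, dh_m, v_ms, mass_kg=18000.0,
                       crr=0.008, cda_m2=6.0, eta=0.9, regen=0.6,
                       altitude_m=2800.0):
    """Traction energy for one route segment at constant speed.

    Rolling resistance + grade + aerodynamic drag; positive energy is
    divided by drivetrain efficiency, negative energy is scaled by the
    regenerative-braking recovery factor. All parameters are illustrative.
    """
    rho = 1.225 * math.exp(-altitude_m / 8500.0)  # air density falls with altitude
    grade = dh_m / dist_m
    force = (9.81 * mass_kg * (crr + grade)       # rolling + gravity
             + 0.5 * rho * cda_m2 * v_ms ** 2)    # aerodynamic drag
    energy_j = force * dist_m
    energy_j = energy_j / eta if energy_j > 0 else energy_j * regen
    return energy_j / 3.6e6                       # J -> kWh

# 1 km climbing 50 m at 36 km/h, vs the same segment flat and descending.
climb = segment_energy_kwh(1000.0, 50.0, 10.0)
flat = segment_energy_kwh(1000.0, 0.0, 10.0)
descent = segment_energy_kwh(1000.0, -50.0, 10.0)
```

Summing such segments over a route's elevation profile gives the demand forecast; descents return energy through regeneration, which is one advantage of the e-bus in a city like Ambato.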

Keywords: high altitude cities, energy planning, NDC, e-buses, e-mobility

Procedia PDF Downloads 145
712 Using MALDI-TOF MS to Detect Environmental Microplastics (Polyethylene, Polyethylene Terephthalate, and Polystyrene) within a Simulated Tissue Sample

Authors: Kara J. Coffman-Rea, Karen E. Samonds

Abstract:

Microplastic pollution is an urgent global threat to our planet and human health. Microplastic particles have been detected in our food, water, and atmosphere, and found in human stool, placenta, and lung tissue. However, most spectrometric microplastic detection methods require chemical digestion, which can alter or destroy microplastic particles and makes it impossible to acquire information about their in-situ distribution. MALDI-TOF MS (matrix-assisted laser desorption/ionization time-of-flight mass spectrometry) is an analytical method using a soft ionization technique that can be used for polymer analysis. This method provides a valuable opportunity to acquire information regarding the in-situ distribution of microplastics while minimizing the destructive element of chemical digestion. In addition, MALDI-TOF MS allows for expanded analysis of the microplastics, including detection of specific additives that may be present within them. MALDI-TOF MS is particularly sensitive to sample preparation and has not yet been used to analyze environmental microplastics within their specific location (e.g., biological tissues, sediment, water). In this study, microplastics were created using polyethylene gloves, polystyrene micro-foam, and polyethylene terephthalate cable sleeving. Plastics were frozen using liquid nitrogen and ground to obtain small fragments. An artificial tissue was created using a cellulose sponge as scaffolding, coated with a MaxGel Extracellular Matrix to simulate human lung tissue. Optimal preparation techniques (e.g., matrix, cationization reagent, solvent, mixing ratio, laser intensity) were first established for each specific polymer type. The artificial tissue sample was subsequently spiked with microplastics, and specific polymers were detected using MALDI-TOF MS.
This study presents a novel method for the detection of environmental polyethylene, polyethylene terephthalate, and polystyrene microplastics within a complex sample. Results of this study provide an effective method that can be used in future microplastics research and can aid in determining the potential threats to environmental and human health that they pose.

Keywords: environmental plastic pollution, MALDI-TOF MS, microplastics, polymer identification

Procedia PDF Downloads 245
711 Multimedia Design in Tactical Play Learning and Acquisition for Elite Gaelic Football Practitioners

Authors: Michael McMahon

Abstract:

The use of media (video, animation, graphics) has long helped athletes, coaches, and sports scientists analyse and improve performance in technical skills and team tactics. Sports educators are increasingly open to the use of technology to support coach and learner development. However, an overreliance is a concern. This paper is part of a larger Ph.D. study looking into these new challenges for sports educators, most notably how to exploit the deep-learning potential of digital media among expert learners, how to instruct sports educators to create effective media content that fosters deep learning, and how to make the process manageable and cost-effective. Central to the study is Richard Mayer's Cognitive Theory of Multimedia Learning. Mayer's theory proposes twelve principles that shape the design and organization of multimedia presentations to improve learning and reduce cognitive load. For example, the prior-knowledge principle highlights different learning outcomes for novice and non-novice learners, respectively. Little research, however, is available to support this principle in modified domains (e.g., sports tactics and strategy). As a foundation for further research, this paper compares and contrasts a range of contemporary multimedia sports coaching content and assesses how it performs as a learning tool for strategic and tactical play acquisition among elite sports practitioners. The stress tests applied are guided by Mayer's twelve multimedia learning principles. The focus is on elite athletes and whether current digital coaching media content fosters improved sports learning among this cohort. The sport of Gaelic football was selected because it has high strategic and tactical play content, a wide range of practitioner skill levels (novice to elite), and a significant volume of multimedia coaching content available for analysis.
It is hoped that the resulting data will inform future instructional content design and delivery for sports practitioners and help promote design practices optimal for different levels of expertise.

Keywords: multimedia learning, e-learning, design for learning, ICT

Procedia PDF Downloads 98
710 Understanding Knowledge, Skills and Competency Needs in Digital Health for Current and Future Health Workforce

Authors: Sisira Edirippulige

Abstract:

Background: Digital health education and training (DHET) is imperative for preparing current and future clinicians to work competently in digitally enabled environments. Despite the rapid integration of digital health into modern health services, systematic education and training opportunities for health workers are still lacking. Objectives: This study aimed to investigate healthcare professionals' perspectives and expectations regarding the knowledge, skills, and competency needs in digital health for the current and future healthcare workforce. Methods: A qualitative study design with semi-structured individual interviews was employed. A purposive sampling method was adopted to collect relevant information from health workers. Inductive thematic analysis was used to analyse the data. Interviews were audio-recorded and transcribed verbatim. The Consolidated Criteria for Reporting Qualitative Research (COREQ) were followed in reporting this study. Results: Two themes emerged from the data: (1) what to teach in DHET and (2) how to teach it. Overall, healthcare professionals agreed that DHET is important for preparing current and future clinicians to work competently in digitally enabled environments. Knowledge relating to what digital health is, types of digital health, the use of technology, and human factors in digital health was considered important to teach in DHET. Skills relating to digital health consultations, clinical information system management, and remote monitoring were also considered important. Blended learning combining e-learning and classroom-based teaching, simulation sessions, and clinical rotations was suggested by healthcare professionals as the optimal approach to deliver the above-mentioned content.
Conclusions: This study is the first of its kind to investigate health professionals' perspectives and expectations relating to the knowledge, skills, and competency needs in digital health for the current and future healthcare workforce. Healthcare workers are keen to acquire relevant knowledge, skills, and competencies related to digital health. Different modes of education delivery are of interest, to fit in with the busy schedules of health workers.

Keywords: digital health, telehealth, telemedicine, education, curriculum

Procedia PDF Downloads 140
709 An Empirical Investigation of Factors Influencing Construction Project Selection Processes within the Nigeria Public Sector

Authors: Emmanuel U. Unuafe, Oyegoke T. Bukoye, Sandhya Sastry, Yanqing Duan

Abstract:

Globally, there is increasing interest in project management due to a shortage in infrastructure service supply capability. Hence, it is of utmost importance that organisations understand that choosing a particular project over another is an opportunity cost, tying up the organisation's resources. The need to bring direction, structure, and oversight to the process of project selection has led to the development of tools and techniques by researchers and practitioners. However, despite the development of various frameworks to assist in the appraisal and selection of government projects, failures are still being recorded. In developing countries, where frameworks are rarely used, the problems are compounded. To improve the situation, this study investigates the current practice of construction project selection within the Nigerian public sector in order to inform theories of decision making from the perspective of developing nations and project management practice. Unlike other research on construction projects in Nigeria, this research concentrates on the factors influencing the selection process within the Nigerian public sector, which has received limited study. The authors report the findings of semi-structured interviews with top management in the Nigerian public sector and draw conclusions in terms of extant decision-making theory and current practice. Preliminary results from the data analysis show that groups make project selection decisions, and this forces sub-optimal decisions due to pressure on time, clashes of interest, the lack of a standardised framework for selecting projects, a lack of accountability, and poor leadership. Compounding this, decision-makers are usually drawn from different fields, religious beliefs, and ethnic groups, and speak different languages.
The choice of a project by an individual is therefore greatly influenced by experience and political precedent rather than by realistic investigation, as well as by the individual's understanding of the desired outcome of the project, in other words, their ideology and their level of fairness.

Keywords: factors influencing project selection, public sector construction project selection, projects portfolio selection, strategic decision-making

Procedia PDF Downloads 325
708 Development and Application of an Intelligent Masonry Modulation in BIM Tools: Literature Review

Authors: Sara A. Ben Lashihar

Abstract:

Heritage building information modelling (HBIM) of historical masonry buildings has expanded lately to meet urgent needs for conservation and structural analysis. Masonry structures are unique features of ancient building architecture worldwide with special cultural, spiritual, and historical significance. However, there is a research gap regarding the reliability of the HBIM modeling process for these structures. HBIM modeling of masonry structures faces significant challenges due to the inherent complexity and uniqueness of their structural systems. Most of these processes are based on tracing point clouds and rarely draw on documents, archival records, or direct observation. The results of these techniques are highly abstracted models whose accuracy does not exceed LOD 200. Masonry assemblages, especially curved elements such as arches, vaults, and domes, are generally modeled with standard BIM components or in-place models, and the brick textures are input graphically. Hence, future investigation is necessary to establish a methodology for automatically generating parametric masonry components. Such components would be developed algorithmically according to mathematical and geometric accuracy and the validity of the survey data. The main aim of this paper is to provide a comprehensive review of the state of the art of existing research on HBIM modeling of masonry structural elements and the latest approaches to achieving parametric models that have both visual fidelity and high geometric accuracy. The paper reviewed more than 800 articles, proceedings papers, and book chapters matching the keywords "HBIM and Masonry" from 2017 to 2021. The studies were downloaded from well-known, trusted bibliographic databases such as Web of Science, Scopus, Dimensions, and Lens. As a starting point, a scientometric analysis was carried out using VOSviewer software.
This software extracts the main keywords in these studies to retrieve the relevant works and calculates the strength of the relationships between these keywords. Subsequently, an in-depth qualitative review examined the studies with the highest frequency of occurrence and the strongest links with the topic, according to the VOSviewer results. The qualitative review focused on the latest approaches and the future directions proposed in this research. The findings of this paper can serve as a valuable reference for researchers and BIM specialists seeking to make more accurate and reliable HBIM models of historic masonry buildings.

Keywords: HBIM, masonry, structure, modeling, automatic, approach, parametric

Procedia PDF Downloads 162
707 Acoustic Radiation Pressure Detaches Myoblast from Culture Substrate by Assistance of Serum-Free Medium

Authors: Yuta Kurashina, Chikahiro Imashiro, Kiyoshi Ohnuma, Kenjiro Takemura

Abstract:

Research objectives and goals: To realize clinical applications of regenerative medicine, mass cell culture is required. In conventional cell culture, trypsinization is employed for cell detachment; however, trypsinization decreases proliferation because it injures the cell membrane. In order to detach cells by an enzyme-free method, this study therefore proposes a novel cell detachment method capable of detaching adherent cells using acoustic radiation pressure applied to the dish, assisted by serum-free medium with ITS liquid medium supplement. Methods used: To generate acoustic radiation pressure, a piezoelectric ceramic plate was glued onto a glass plate to form an ultrasonic transducer. The glass plate and a chamber wall compose a chamber in which a culture dish is placed in glycerol. The glycerol transmits the acoustic radiation pressure to the cells adhered to the culture dish. To excite a resonance vibration of the transducer, an AC signal swept over 29-31 kHz at 150, 300, or 450 V was input to the transducer for 5 min. As a pretreatment to reduce cell adhesivity, serum-free medium with ITS liquid medium supplement was spread on the culture dish before exposure to acoustic radiation pressure. To evaluate the proposed cell detachment method, C2C12 myoblast cells (8.0 × 10⁴ cells) were cultured on a ø35 culture dish for 48 hr, and the medium was then replaced with the serum-free medium with ITS liquid medium supplement for 24 hr. We replaced the medium with phosphate-buffered saline and incubated the cells for 10 min. After that, cells were exposed to the acoustic radiation pressure for 5 min. We also collected cells by trypsinization as a control. Cells collected by the proposed method and by trypsinization were respectively reseeded in ø60 culture dishes and cultured for 24 hr. Then, the number of proliferated cells was counted.
Results achieved: Phase-contrast microscope imaging showed shrinkage of lamellipodia before exposure to acoustic radiation pressure, and no cells remained on the culture dish after the exposure. This result suggests that serum-free medium with ITS supplement inhibits cell adhesivity and that acoustic radiation pressure detaches cells from the dish. Moreover, the number of proliferated cells 24 hr after collection by the proposed method at 150 and 300 V was the same as or greater than that after trypsinization; i.e., cell proliferation was 15% higher with the proposed acoustic radiation pressure method than with the traditional cell collection method of trypsinization. These results proved that cells can be collected with an appropriate exposure to acoustic radiation pressure. Conclusions: This study proposed a cell detachment method using acoustic radiation pressure assisted by serum-free medium. The proposed method provides enzyme-free cell detachment, so it may be used in future clinical applications instead of trypsinization.
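The 29-31 kHz swept drive signal can be sketched numerically as a linear chirp. This is an illustrative waveform only; the amplifier, the 150-450 V drive amplitudes, and the transducer dynamics described in the abstract are not modelled here.

```python
import numpy as np

def linear_chirp(f0, f1, duration, fs):
    """Sine sweep from f0 to f1 Hz over `duration` s, sampled at fs Hz.

    Instantaneous frequency f(t) = f0 + (f1 - f0) * t / duration,
    integrated analytically into the phase argument.
    """
    t = np.arange(int(duration * fs)) / fs
    phase = 2 * np.pi * (f0 * t + (f1 - f0) * t**2 / (2 * duration))
    return t, np.sin(phase)
```

Repeating such a 29-to-31 kHz sweep over the 5 min exposure would repeatedly pass through the transducer's resonance, which is the presumed purpose of sweeping rather than driving at a fixed frequency.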

Keywords: acoustic radiation pressure, cell detachment, enzyme free, ultrasonic transducer

Procedia PDF Downloads 252
706 Diagnostic Contribution of the MMSE-2:EV in the Detection and Monitoring of the Cognitive Impairment: Case Studies

Authors: Cornelia-Eugenia Munteanu

Abstract:

The goal of this paper is to present the diagnostic contribution that the screening instrument Mini-Mental State Examination-2: Expanded Version (MMSE-2:EV) makes in detecting cognitive impairment and in monitoring the progress of degenerative disorders. The diagnostic significance is underlined by the interpretation of MMSE-2:EV scores obtained from administering the test to patients with mild and major neurocognitive disorders. The original MMSE is one of the most widely used screening tools for detecting cognitive impairment, in clinical settings as well as in neurocognitive research. Practitioners and researchers are now turning their attention to the MMSE-2. To enhance its clinical utility, the new instrument was enriched and reorganized into three versions (MMSE-2:BV, MMSE-2:SV, and MMSE-2:EV), each with two forms: blue and red. The MMSE-2 has been adapted and used successfully in Romania since 2013. The cases were selected from current practice in order to cover a broad and significant range of neurocognitive pathology: mild cognitive impairment, Alzheimer's disease, vascular dementia, mixed dementia, Parkinson's disease, and conversion of mild cognitive impairment into Alzheimer's disease. The MMSE-2:EV version was used: it was administered one month after the initial assessment, three months after the first reevaluation, and then every six months, alternating the blue and red forms. Raw scores, adjusted for age and educational level, were converted into T scores, from which z scores were calculated using the normative mean and standard deviation. The differences in raw scores between evaluations were analyzed for statistical significance in order to establish the progression of the disease over time. The results indicated that the psycho-diagnostic approach to evaluating cognitive impairment with the MMSE-2:EV is safe and that the application interval is optimal.
Alternating the two forms prevents learning effects. Diagnostic accuracy and efficient therapeutic conduct derive from the use of national test norms. In clinical settings with a large flow of patients, the MMSE-2:EV is a safe and fast psycho-diagnostic solution. Clinicians can reach objective decisions, and for patients the test does not demand much time or energy, is not burdensome, and does not require frequent travel.
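The score conversion described above can be made concrete. T scores are norm-referenced with mean 50 and SD 10, so the corresponding z score follows directly; this is a generic psychometric identity, not a formula taken from the MMSE-2 manual, and the norm values in any example call are purely hypothetical rather than Romanian national norms.

```python
def raw_to_t(raw, norm_mean, norm_sd):
    """Map a raw score to a T score for a given age/education norm group.

    norm_mean and norm_sd must come from national norm tables.
    """
    return 50.0 + 10.0 * (raw - norm_mean) / norm_sd

def t_to_z(t_score):
    """Convert a T score (normative mean 50, SD 10) to a z score."""
    return (t_score - 50.0) / 10.0
```

A patient scoring exactly at the norm-group mean thus gets T = 50 and z = 0, and each 10-point drop in T corresponds to one standard deviation below the norm.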

Keywords: MMSE-2, dementia, cognitive impairment, neuropsychology

Procedia PDF Downloads 509
705 The Effect of Different Concentrations of Extracting Solvent on the Polyphenolic Content and Antioxidant Activity of Gynura procumbens Leaves

Authors: Kam Wen Hang, Tan Kee Teng, Huang Poh Ching, Chia Kai Xiang, H. V. Annegowda, H. S. Naveen Kumar

Abstract:

Gynura procumbens (G. procumbens), known as 'sambung nyawa' in Malaysia, is a well-known medicinal plant whose leaves are used in folk medicine to control blood glucose and cholesterol levels as well as to treat cancer. These medicinal properties are believed to be related to the polyphenolic content of the G. procumbens extract; optimization of the extraction process is therefore vital to obtain the highest possible antioxidant activity. The current study investigated the effect of different concentrations of the extracting solvent (ethanol) on the polyphenolic content and antioxidant activities of G. procumbens leaf extract. Ethanol concentrations of 30-70% were used, with temperature and time kept constant at 50°C and 30 minutes, respectively, in ultrasound-assisted extraction. The polyphenolic content of these extracts was quantified by the Folin-Ciocalteu colorimetric method, and results were expressed as milligrams of gallic acid equivalent (mg GAE)/g. The phosphomolybdenum method and the 1,1-diphenyl-2-picrylhydrazyl (DPPH) radical scavenging assay were used to investigate the antioxidant properties of the extracts, and the results were expressed as milligrams of ascorbic acid equivalent (mg AAE)/g and as the effective concentration (EC50), respectively. Among the three ethanol concentrations studied (30%, 50%, and 70%), the 50% ethanolic extract showed a total phenolic content of 31.565 ± 0.344 mg GAE/g and a total antioxidant activity of 78.839 ± 0.199 mg AAE/g, while the 30% ethanolic extract showed 29.214 ± 0.645 mg GAE/g and 70.701 ± 1.394 mg AAE/g, respectively. In the DPPH radical scavenging assay, the 50% ethanolic extract exhibited a slightly lower EC50 (314.3 ± 4.0 μg/ml) than the 30% ethanolic extract (340.4 ± 5.3 μg/ml).
Of all the tested extracts, the 70% ethanolic extract exhibited the significantly (p < 0.05) highest total phenolic content (38.000 ± 1.009 mg GAE/g) and total antioxidant capacity (95.874 ± 2.422 mg AAE/g), and demonstrated the lowest EC50 in the DPPH assay (244.2 ± 5.9 μg/ml). Excellent correlations were found between total phenolic content and both total antioxidant capacity and DPPH radical scavenging activity (R² = 0.949 and R² = 0.978, respectively). It was concluded that 70% ethanol is the optimal-polarity solvent for obtaining a G. procumbens leaf extract with maximum polyphenolic content and antioxidant properties.
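Correlations such as the R² = 0.949 and 0.978 reported above can be computed from paired assay results. The helper below uses the squared Pearson coefficient; the vectors in any example call are illustrative placeholders, not the study's data.

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination (squared Pearson r) between two assays.

    x and y are equal-length sequences of paired measurements, e.g. total
    phenolic content and total antioxidant capacity per extract.
    """
    r = np.corrcoef(x, y)[0, 1]
    return float(r * r)
```

Because the coefficient is squared, it captures the strength of a linear relationship regardless of direction, which is appropriate here since a lower EC50 means stronger scavenging activity.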

Keywords: antioxidant activity, DPPH assay, Gynura procumbens, phenolic compounds

Procedia PDF Downloads 407
704 Algorithms Inspired from Human Behavior Applied to Optimization of a Complex Process

Authors: S. Curteanu, F. Leon, M. Gavrilescu, S. A. Floria

Abstract:

Optimization algorithms inspired by human behavior were applied in this approach, in association with neural network models. The algorithms belong to two classes: human behaviors of learning and cooperation, and human competitive behavior. For the first class, the main strategies are random learning, individual learning, and social learning, and the selected algorithms are simplified human learning optimization (SHLO), social learning optimization (SLO), and teaching-learning-based optimization (TLBO). For the second class, the concept of learning is associated with competitiveness, and the selected algorithms are sports-inspired algorithms (the Football Game Algorithm, FGA, and Volleyball Premier League, VPL) and the Imperialist Competitive Algorithm (ICA). A real process, the synthesis of polyacrylamide-based multicomponent hydrogels, in which some parameters are difficult to obtain experimentally, is considered as a case study. Reaction yield and swelling degree are predicted as functions of the reaction conditions (acrylamide concentration, initiator concentration, crosslinking agent concentration, temperature, reaction time, and amount of inclusion polymer, which could be starch, poly(vinyl alcohol), or gelatin). The experimental data set contains 175 points. Artificial neural networks were obtained in optimal form with the biologically inspired algorithms, the optimization being performed at two levels: structural and parametric. Feedforward neural networks with one or two hidden layers and no more than 25 neurons per intermediate layer were obtained, with correlation coefficients above 0.90 in the validation phase. The best results were obtained with the TLBO algorithm, the correlation coefficient being 0.94 for an MLP(6:9:20:2), a feedforward neural network with two hidden layers of 9 and 20 intermediate neurons, respectively. The good results obtained prove the efficiency of the optimization algorithms.
Beyond the good results themselves, what is important in this approach is the simulation methodology, combining neural networks with biologically inspired optimization algorithms, which provides satisfactory results. In addition, the methodology developed here is general and flexible, so it can easily be adapted to other processes in association with different types of models.
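Of the algorithms compared, TLBO can be sketched compactly: a teacher phase pulls learners toward the current best solution and away from the population mean, and a learner phase lets individuals improve through pairwise interaction. The sketch below is a generic minimal TLBO for a continuous objective, not the authors' implementation (which also optimized network structure); function and parameter names are ours.

```python
import numpy as np

def tlbo(f, bounds, pop_size=20, iters=60, seed=0):
    """Minimize f over box bounds (lo, hi) with teaching-learning-based
    optimization. Returns the best point and its objective value."""
    rng = np.random.default_rng(seed)
    lo = np.asarray(bounds[0], dtype=float)
    hi = np.asarray(bounds[1], dtype=float)
    dim = lo.size
    X = rng.uniform(lo, hi, (pop_size, dim))       # learners (candidate solutions)
    F = np.array([f(x) for x in X])                # their objective values
    for _ in range(iters):
        # Teacher phase: move toward the best learner, away from the mean
        teacher = X[F.argmin()]
        mean = X.mean(axis=0)
        for i in range(pop_size):
            Tf = rng.integers(1, 3)                # teaching factor, 1 or 2
            cand = np.clip(X[i] + rng.random(dim) * (teacher - Tf * mean), lo, hi)
            fc = f(cand)
            if fc < F[i]:                          # greedy acceptance
                X[i], F[i] = cand, fc
        # Learner phase: learn from a randomly chosen peer
        for i in range(pop_size):
            j = int(rng.integers(pop_size))
            if j == i:
                j = (j + 1) % pop_size
            step = X[i] - X[j] if F[i] < F[j] else X[j] - X[i]
            cand = np.clip(X[i] + rng.random(dim) * step, lo, hi)
            fc = f(cand)
            if fc < F[i]:
                X[i], F[i] = cand, fc
    best = int(F.argmin())
    return X[best], float(F[best])
```

A notable design property, relevant to the comparison in the abstract, is that TLBO has no algorithm-specific tuning parameters beyond population size and iteration count.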

Keywords: artificial neural networks, human behaviors of learning and cooperation, human competitive behavior, optimization algorithms

Procedia PDF Downloads 104
703 Feasibility Study for Implementation of Geothermal Energy Technology as a Means of Thermal Energy Supply for Medium Size Community Building

Authors: Sreto Boljevic

Abstract:

Heating systems based on geothermal energy sources are becoming increasingly popular for commercial/community buildings as their management looks for more efficient and environmentally friendly ways to run the heating system. At present, the thermal energy supply of most European commercial/community buildings is provided mainly by energy extracted from natural gas. In order to reduce greenhouse gas emissions and achieve the climate change targets set by the EU, restructuring of the thermal energy supply is essential. Heating and cooling currently account for approximately 50% of the EU's primary energy supply. Due to its physical characteristics, thermal energy cannot be distributed or exchanged over long distances, in contrast to the electricity and gas energy carriers. Compared to the electricity and gas sectors, heating remains largely a black box, with large unknowns for researchers and policymakers. In the literature, a number of documents address policies for promoting renewable energy technology to facilitate heating of residential/community/commercial buildings and assess the balance between heat supply and heat savings. Ground source heat pump (GSHP) technology has been an extremely attractive alternative to the traditional electric and fossil fuel space heating equipment used to supply thermal energy to residential/community/commercial buildings. The main purpose of this paper is to create an algorithm, using an analytical approach, that enables a feasibility study of implementing GSHP technology in a community building with an existing fossil-fueled heating system. The main results obtained by the algorithm will enable building management and GSHP system designers to define the optimal size of the system with regard to the technical, environmental, and economic impacts of its implementation, including the payback period. In addition, the algorithm is designed so that it can be utilized for feasibility studies of many different types of buildings.
The algorithm was tested on a building constructed in 1930 and used as a church, located in Cork city. Heating of the building is currently provided by a 105 kW gas boiler.
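The economic core of such a feasibility algorithm can be sketched as a simple payback calculation, comparing the annual gas bill of the existing boiler with the electricity bill of a GSHP at a given coefficient of performance (COP). This is a generic sketch of the technique, not the paper's algorithm; all default values and any figures in example calls are illustrative assumptions, not data from the Cork case study.

```python
def gshp_simple_payback(annual_heat_kwh, gas_price_per_kwh, elec_price_per_kwh,
                        boiler_eff=0.85, cop=4.0, capital_cost=60000.0):
    """Simple payback period in years for replacing a gas boiler with a GSHP.

    annual_heat_kwh is the building's useful heat demand; fuel input is
    demand / efficiency for the boiler and demand / COP for the heat pump.
    """
    gas_cost = annual_heat_kwh / boiler_eff * gas_price_per_kwh   # current fuel bill
    elec_cost = annual_heat_kwh / cop * elec_price_per_kwh        # GSHP electricity bill
    annual_saving = gas_cost - elec_cost
    if annual_saving <= 0:
        return float("inf")   # the GSHP never pays back at these prices
    return capital_cost / annual_saving
```

A full feasibility algorithm would add borehole sizing from ground thermal conductivity, part-load COP variation, and avoided CO2 emissions; the payback ratio above is only the final economic step.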

Keywords: GSHP, greenhouse gas emission, low-enthalpy, renewable energy

Procedia PDF Downloads 211
702 Evaluation of Different Waste Management Planning Strategies in an Industrial City

Authors: Leila H. Khiabani, Mohammadreza Vafaee, Farshad Hashemzadeh

Abstract:

Industrial waste management regulates the different stages of production, storage, transfer, recycling, and disposal of waste. There are several common practices for industrial waste management; however, due to various local health, economic, social, environmental, and aesthetic considerations, the most suitable principles and measures often vary for each specific industrial zone. In addition, waste management strategies are heavily affected by local administrative, legal, and financial regulations. In this study, a hybrid qualitative and quantitative research methodology was designed for waste management planning in an industrial city. First, following a qualitative research methodology, the waste management strategies most relevant to the specific industrial city were identified through interviews with environmental planning and waste management experts; forty experts participated in this study. Alborz industrial city in Iran, which hosts more than one thousand industrial units on nine hundred acres, was chosen as the sample industrial city. The findings from the expert interviews in the first phase were then used to design a quantitative questionnaire for the second phase of the study. The aim of the questionnaire was to quantify the relative impact of different waste management strategies in the sample industrial city. Eight waste management strategies and three implementation policies were included in the questionnaire. The experts were asked to rank the relative effectiveness of each strategy for the environmental planning of the sample industrial city. They were also asked to rank the relative effectiveness of each planning policy for each of the waste management strategies. Finally, the weighted average of all responses was calculated to identify the most effective waste management strategy and planning policies for the sample industrial city.
The results suggest that, among the eight waste management strategies, industrial composting is the most effective (31%) according to the collective evaluation of the local experts. Additionally, the results suggest that the most effective policy (58%) in the city's environmental planning is to reduce waste generation by prolonging the effective life of industrial products through higher-quality and recyclable materials. These findings can provide useful expert guidance for prioritizing different waste management strategies in the city's overall environmental planning roadmap. The findings may also be applicable to similar industrial cities, and a similar methodology can be utilized in the environmental planning of other industrial cities.
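The final aggregation step described above can be sketched as a weighted average over the expert responses. The score matrix and weights in any example call are hypothetical placeholders, not the survey data; the study's exact weighting scheme is not specified in the abstract, so equal expert weighting is assumed by default.

```python
import numpy as np

def rank_strategies(scores, weights=None):
    """Weighted-average effectiveness of each strategy across experts.

    scores:  (n_experts, n_strategies) array of effectiveness ratings
    weights: optional per-expert weights (defaults to equal weighting)
    Returns the index of the top strategy and each strategy's share of
    the total weighted score (comparable to the 31% figure in the text).
    """
    scores = np.asarray(scores, dtype=float)
    if weights is None:
        weights = np.ones(scores.shape[0])
    weights = np.asarray(weights, dtype=float)
    mean = weights @ scores / weights.sum()   # weighted mean per strategy
    share = mean / mean.sum()                 # normalized to fractions
    return int(share.argmax()), share
```

Normalizing the weighted means to shares makes the strategies directly comparable as percentages, which is how the 31% and 58% results are reported.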

Keywords: environmental planning, industrial city, quantitative research, waste management

Procedia PDF Downloads 127