Search results for: finite difference simulation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 10916

1646 Justice and the Juvenile: Changing Trends and Developments

Authors: Shikhar Shrivastava, Varun Khare

Abstract:

Background: We are confronted by a society that is becoming more complex, more mobile, and more dysfunctional. Teen pregnancy, suicide, elopement, and the abuse of dangerous drugs have become commonplace. In addition, children do not settle their disputes as they once did; guns and knives are quotidian. It has therefore become exigent to have a "Juvenile Code" that provides specific substantive and procedural rules for juveniles in the justice system. Until the twentieth century, however, there was little difference between how the justice system treated adults and children: age was considered only in terms of appropriate punishment, and juveniles were eligible for the same punishments as adults. Findings: The increased prevalence of, and legislative support for, specialized courts and Juvenile Justice Boards, including juvenile drug, mental health and truancy court programs, as well as the weaving of diversion programs and evidence-based approaches into the fabric of juvenile justice, are just a few examples of recent advances. In India, various measures were taken to prosecute young offenders who committed violent crimes as adults. It was argued, however, that equating juveniles with adult criminals was neither scientifically correct nor normatively defensible, and would defeat the very purpose of the justice system. Methodology and Conclusion: This paper presents the results of analytical and descriptive research that examined changing trends in juvenile justice legislation. It covers the investigative and inspective practices of the police, the various administrative agencies that have roles in implementing the legislation, the courts, and the detention centers. We discuss how the juvenile justice system has become a dumping ground for many of a youth's problems, and trace the changing notions of justice from retributive to restorative and rehabilitative. A comparative study of the juvenile legislation of India and that of the U.S. is also presented.
Specific social institutions and forces that explain juvenile delinquency are identified, and various influences on juvenile delinquency are noted, such as families, schools, peer groups and communities. The text concludes by addressing socialization, deterrence, imprisonment, alternatives, restitution and prevention.

Keywords: juvenile, justice system, retributive, rehabilitative, delinquency

Procedia PDF Downloads 457
1645 Confidence Intervals for Process Capability Indices for Autocorrelated Data

Authors: Jane A. Luke

Abstract:

Persistent pressure passed on to manufacturers from escalating consumer expectations and ever-growing global competitiveness has produced a rapidly increasing interest in the development of various manufacturing strategy models, and both academic and industrial circles are taking a keen interest in the field of manufacturing strategy. Many manufacturing strategies are currently centered on the traditional concepts of focused manufacturing capabilities such as quality, cost, dependability and innovation. Earlier work on process capability indices (PCIs) assumed that the process under study is in statistical control and that independent observations are generated over time. In practice, however, it is very common to come across processes which, due to their inherent nature, generate autocorrelated observations. The degree of autocorrelation affects the behavior of patterns on control charts: even small levels of autocorrelation between successive observations can have considerable effects on the statistical properties of conventional control charts, which exhibit nonrandom patterns and an apparent lack of control when observations are autocorrelated. Many authors have considered the effect of autocorrelation on the performance of statistical process control charts. This paper examines the effect of autocorrelation on confidence intervals for different PCIs. Stationary Gaussian processes are reviewed, and the effect of autocorrelation on PCIs is described in detail. Confidence intervals for Cp and Cpk are constructed and computed for both independent and autocorrelated data, and approximate lower confidence limits for Cpk are computed assuming an AR(1) model for the data. Simulation studies and industrial examples demonstrate the results.
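The interval construction described in this abstract can be sketched numerically. The following is a minimal illustration, not the paper's actual procedure: it computes Cp and Cpk from simulated AR(1) data and applies Bissell's approximate lower confidence limit for Cpk, once with the nominal sample size and once with an effective sample size adjusted for AR(1) autocorrelation. The specification limits, the AR(1) coefficient, and the effective-sample-size adjustment are illustrative assumptions.

```python
import math
import random

def cp_cpk(x, lsl, usl):
    # Sample-based process capability indices
    n = len(x)
    mean = sum(x) / n
    s = math.sqrt(sum((v - mean) ** 2 for v in x) / (n - 1))
    cp = (usl - lsl) / (6 * s)
    cpk = min(usl - mean, mean - lsl) / (3 * s)
    return cp, cpk

def cpk_lower_limit(cpk, n, z=1.645):
    # Bissell's approximate one-sided 95% lower confidence limit for Cpk
    return cpk * (1 - z * math.sqrt(1 / (9 * n * cpk ** 2) + 1 / (2 * (n - 1))))

def effective_n(n, phi):
    # Illustrative effective sample size for AR(1) data: positive
    # autocorrelation means the nominal n overstates the information
    return n * (1 - phi) / (1 + phi)

# Simulate an AR(1) process x_t = phi * x_{t-1} + e_t around a target of 10
random.seed(1)
phi, n = 0.5, 200
x, prev = [], 0.0
for _ in range(n):
    prev = phi * prev + random.gauss(0, 1)
    x.append(10 + prev)

cp, cpk = cp_cpk(x, lsl=6, usl=14)
naive = cpk_lower_limit(cpk, n)                       # ignores autocorrelation
adjusted = cpk_lower_limit(cpk, effective_n(n, phi))  # wider (lower) limit
print(round(cp, 3), round(cpk, 3), adjusted < naive)
```

The adjusted limit is always the more conservative of the two for positively autocorrelated data, which is the qualitative point the abstract makes.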

Keywords: autocorrelation, AR(1) model, Bissell’s approximation, confidence intervals, statistical process control, specification limits, stationary Gaussian processes

Procedia PDF Downloads 388
1644 Assessment of the Impacts of Climate Change on Climatic Zones over the Korean Peninsula for Natural Disaster Management Information

Authors: Sejin Jung, Dongho Kang, Byungsik Kim

Abstract:

Assessing the impact of climate change requires the use of a multi-model ensemble (MME) to quantify uncertainties between scenarios and produce downscaled outputs for simulating climate under the influence of different factors, including topography. This study downscales climate change scenarios from 13 global climate models (GCMs) to assess the impacts of future climate change. Unlike South Korea, North Korea lacks studies using climate change scenarios of the Coupled Model Intercomparison Project (CMIP5), and only recently did the country begin projecting extreme precipitation episodes. One of the main purposes of this study is to predict changes in the average climatic conditions of North Korea in the future. Comparing the downscaled climate change scenarios with observation data for a reference period indicates high applicability of the MME. Furthermore, the study classifies climatic zones by applying the Köppen-Geiger climate classification system to the MME, which is validated for future precipitation and temperature. The results suggest that the continental climate (D) covering the inland area in the reference climate is expected to shift to a temperate climate (C). The coefficient of variation (CV) of the temperature ensemble is particularly low for the southern coast of the Korean peninsula; accordingly, a shift in the climatic zone of the coast is predicted with high confidence. This research was supported by a grant (MOIS-DP-2015-05) of the Disaster Prediction and Mitigation Technology Development Program funded by the Ministry of the Interior and Safety (MOIS, Korea).
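The coefficient of variation used above to gauge ensemble agreement is simply the ratio of the ensemble's standard deviation to its mean: a low CV means the GCMs agree closely on the projected value. A minimal sketch with made-up projections (the temperature values are hypothetical, not the study's data):

```python
import statistics

def coefficient_of_variation(values):
    # CV = standard deviation / mean; a low CV across ensemble members
    # means the models agree on the projected value
    return statistics.stdev(values) / statistics.fmean(values)

# Hypothetical end-of-century mean temperatures (deg C) projected by an
# ensemble of GCMs for two grid cells
south_coast = [14.8, 15.0, 15.1, 14.9, 15.0]   # models agree closely
inland = [11.0, 13.5, 9.8, 12.9, 11.6]          # models disagree more

print(coefficient_of_variation(south_coast) < coefficient_of_variation(inland))
```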

Keywords: MME, North Korea, Köppen-Geiger, climatic zones, coefficient of variation, CV

Procedia PDF Downloads 111
1643 Maturity Classification of Oil Palm Fresh Fruit Bunches Using Thermal Imaging Technique

Authors: Shahrzad Zolfagharnassab, Abdul Rashid Mohamed Shariff, Reza Ehsani, Hawa Ze Jaffar, Ishak Aris

Abstract:

Ripeness estimation of oil palm fresh fruit is an important process that affects the profitability and salability of oil palm fruits, and the maturity or ripeness of the fruits influences the quality of the oil. The conventional procedure involves physical grading of fresh fruit bunch (FFB) maturity by counting the number of loose fruits per bunch. This physical classification of oil palm FFB is costly and time-consuming, and the results are subject to human error. Hence, many researchers have tried to develop methods for ascertaining the maturity of oil palm fruits, and thereby, indirectly, the oil content of individual palm fruits, without the need for exhausting oil extraction and analysis. This research investigates the potential of infrared (thermal) images as a predictor for classifying oil palm FFB ripeness. A total of 270 oil palm fresh fruit bunches of the most common cultivar, Nigrescens, were collected according to three maturity categories: under-ripe, ripe and over-ripe. Each sample was scanned with FLIR E60 and FLIR T440 thermal imaging cameras. The average temperature of each bunch was calculated by image processing in the FLIR Tools and FLIR ThermaCAM Researcher Pro 2.10 software environments. The results show that temperature decreased from immature to over-mature oil palm FFBs. An overall analysis-of-variance (ANOVA) test showed that this predictor gave a significant difference between the under-ripe, ripe and over-ripe maturity categories, indicating that temperature can be a good indicator for classifying oil palm FFB. Classification analysis was performed using the temperature of the FFB as a predictor with Linear Discriminant Analysis (LDA), Mahalanobis Discriminant Analysis (MDA), Artificial Neural Network (ANN) and K-Nearest Neighbor (KNN) methods. The highest overall classification accuracy, 88.2%, was obtained with the Artificial Neural Network. 
This research shows that thermal imaging combined with a neural network can be used for oil palm maturity classification.
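As a rough sketch of how a single-feature temperature classifier of this kind works, the following implements a plain K-Nearest Neighbor rule on simulated bunch temperatures. The temperature ranges, spread and sample counts are illustrative assumptions, not the study's measurements; the abstract only reports that temperature decreases with maturity.

```python
import random
from collections import Counter

def knn_predict(train, query_temp, k=3):
    # train: list of (mean_bunch_temperature, maturity_label) pairs;
    # vote among the k samples closest in temperature to the query
    nearest = sorted(train, key=lambda p: abs(p[0] - query_temp))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Hypothetical mean FFB surface temperatures (deg C): the abstract reports
# temperature decreasing from under-ripe toward over-ripe bunches
random.seed(0)
train = [(random.gauss(mu, 0.3), label)
         for mu, label, n in [(33.0, "under-ripe", 30),
                              (31.5, "ripe", 30),
                              (30.0, "over-ripe", 30)]
         for _ in range(n)]

print(knn_predict(train, 32.9))
```

The paper's best result used an ANN rather than KNN; this sketch only illustrates the shared idea of classifying maturity from the single temperature feature.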

Keywords: artificial neural network, maturity classification, oil palm FFB, thermal imaging

Procedia PDF Downloads 361
1642 Genetics, Law and Society: Regulating New Genetic Technologies

Authors: Aisling De Paor

Abstract:

Scientific and technological developments are driving genetics and genetic technologies into the public sphere. Scientists are making genetic discoveries as to the makeup of the human body and the causes and effects of disease, diversity and disability amongst individuals. Technological innovation in the field of genetics is also advancing, with the development of genetic testing and other emerging genetic technologies, including gene editing (which offers the potential for genetic modification). In addition to the benefits for medicine, health care and humanity, these genetic advances raise a range of ethical, legal and societal concerns. From an ethical perspective, such advances may, for example, change the concept of humans and what it means to be human; science may take over in conceptualising human beings, which may push the boundaries of existing human rights. New genetic technologies, particularly gene editing techniques, create the potential to stigmatise disability by highlighting disability or genetic difference as something that should be eliminated or anticipated. From a disability perspective, the use (and misuse) of genetic technologies raises concerns about discrimination and violations of the dignity and integrity of the individual. With an acknowledgement of the likely future orientation of genetic science, and in consideration of the intersection of genetics and disability, this paper highlights the main concerns raised as genetic science and technology advance (particularly with gene editing developments), and the consequences for disability and human rights. Through traditional doctrinal legal methodologies, it investigates the use (and potential misuse) of gene editing as creating the potential for a unique form of discrimination and stigmatization to develop, as well as a potential gateway to a new, subtle form of eugenics. 
This article highlights the need to maintain caution as to the use, application and the consequences of genetic technologies. With a focus on the law and policy position in Europe, it examines the need to control and regulate these new technologies, particularly gene editing. In addition to considering the need for regulation, this paper highlights non-normative approaches to address this area, including awareness raising and education, public discussion and engagement with key stakeholders in the field and the development of a multifaceted genetics advisory network.

Keywords: disability, gene-editing, genetics, law, regulation

Procedia PDF Downloads 360
1641 Dust Particle Removal from Air in a Self-Priming Submerged Venturi Scrubber

Authors: Manisha Bal, Remya Chinnamma Jose, B.C. Meikap

Abstract:

Dust particles suspended in air are a major source of air pollution. A self-priming submerged venturi scrubber, proven very effective in handling nuclear power plant accidents, is an efficient device for removing dust particles from the air and thus aids in pollution control. Venturi scrubbers are compact, have a simple mode of operation and no moving parts, are easy to install and maintain compared with other pollution control devices, and can handle high temperatures as well as corrosive and flammable gases and dust particles. In the present paper, fly ash, recognized as a major air pollutant emitted mostly by thermal power plants, is considered as the dust particle. Exposure to it through skin contact, inhalation and ingestion can lead to health risks and, in severe cases, even to lung cancer. The main focus of this study is the removal of fly ash particles from polluted air using a self-priming venturi scrubber under submerged conditions with water as the scrubbing liquid. The venturi scrubber, comprising three sections (converging section, throat and diverging section), is submerged inside a water tank. The liquid enters the throat due to the pressure difference composed of the hydrostatic pressure of the liquid and the static pressure of the gas. The high-velocity dust-laden gas atomizes the liquid droplets at the throat, and this interaction leads to the absorption of fly ash into the water and thus its removal from the air. A detailed investigation of the scrubbing of fly ash has been carried out in this study. Experiments were conducted at different throat gas velocities, water levels and fly ash inlet concentrations to study the fly ash removal efficiency. From the experimental results, the highest fly ash removal efficiency of 99.78% is achieved at a throat gas velocity of 58 m/s and a water level of 0.77 m, with a fly ash inlet concentration of 0.3 x 10⁻³ kg/Nm³ under the submerged condition. 
The effects of throat gas velocity, water level and fly ash inlet concentration on the removal efficiency have also been evaluated. Furthermore, the experimental removal efficiencies are validated against a developed empirical model.
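The removal efficiency reported above is the fraction of the inlet dust load captured by the scrubbing liquid. A small worked example; the outlet concentration here is back-calculated from the reported 99.78% figure rather than a measured value:

```python
def removal_efficiency(c_in, c_out):
    # Percentage of the incoming fly ash captured by the scrubbing liquid
    return 100 * (c_in - c_out) / c_in

# Hypothetical outlet concentration for the reported best case:
# inlet 0.3e-3 kg/Nm3 and ~99.78% efficiency imply a tiny outlet load
c_in = 0.3e-3
c_out = c_in * (1 - 0.9978)
print(round(removal_efficiency(c_in, c_out), 2))  # 99.78
```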

Keywords: dust particles, fly ash, pollution control, self-priming venturi scrubber

Procedia PDF Downloads 164
1640 The Effect of Implant Design on the Height of Inter-Implant Bone Crest: A 10-Year Retrospective Study of the Astra Tech Implant and Brånemark Implant

Authors: Daeung Jung

Abstract:

Background: For patients with missing teeth, restoration with multiple implants is widely used and often unavoidable. To increase its survival rate, it is important to understand the influence of different implant designs on inter-implant crestal bone resorption. Several implant systems are designed to minimize loss of crestal bone; the Astra Tech and Brånemark implants are two of them. Aim/Hypothesis: The aim of this 10-year study was to compare the height of the inter-implant bone crest in two implant systems: the Astra Tech and the Brånemark implant system. Material and Methods: This retrospective study included 40 consecutively treated patients: 23 patients with 30 sites for the Astra Tech system and 17 patients with 20 sites for the Brånemark system. The implant restorations consisted of splinted crowns in partially edentulous patients. Radiographs were taken immediately after the first surgery, at impression making, at prosthesis placement, and annually after loading. The lateral distance from implant to bone crest and the inter-implant distance were measured, and crestal bone height was measured from the implant shoulder to the first bone contact. Calibrations were performed in ImageJ using the known thread pitch distance for vertical measurements and the known diameter of the abutment or fixture for horizontal measurements. Results: After 10 years, patients treated with the Astra Tech implant system demonstrated less inter-implant crestal bone resorption when the implants were 3 mm or less apart. For implants more than 3 mm apart, however, there was no statistically significant difference in crestal bone loss between the two systems. Conclusion and clinical implications: For partially edentulous patients planning to have more than two implants, the inter-implant distance is one of the most important factors to be considered. 
If sufficient inter-implant distance cannot be ensured, implants with a smaller microgap at the fixture-abutment junction, a less traumatic second-stage surgical approach, and an adequate surface topography would be appropriate options to minimize inter-implant crestal bone resorption.

Keywords: implant design, crestal bone loss, inter-implant distance, 10-year retrospective study

Procedia PDF Downloads 166
1639 Performance Evaluation of Routing Protocol in Cognitive Radio with Multi Technological Environment

Authors: M. Yosra, A. Mohamed, T. Sami

Abstract:

Over the past few years, mobile communication technologies have seen significant evolution, which has promoted the implementation of many systems in a multi-technological setting. From one system to another, the Quality of Service (QoS) provided to mobile consumers gets better, and the growing number of normalized standards extends the services available to each consumer; moreover, most of the available radio frequencies have already been allocated to technologies such as 3G, WiFi, WiMAX, and LTE. A study by the Federal Communications Commission (FCC) found that certain frequency bands are only partially occupied in particular locations and at particular times. The idea of Cognitive Radio (CR) is therefore to share the spectrum between a primary user (PU) and a secondary user (SU), the main objective of this spectrum management being to maximize the exploitation of the radio spectrum. In general, CR can greatly improve the quality of service (QoS) and the reliability of the link. The problem addressed here is proposing a technique to improve the reliability of the wireless link by using CR with some routing protocols, since users have reported that links were unreliable and incompatible with QoS requirements. In our case, we choose the QoS parameter "bandwidth" to perform a supervised classification. In this paper, we propose a comparative study between several routing protocols, taking into account the variation of different technologies over the existing spectral bandwidth, namely 3G, WiFi, WiMAX, and LTE. From the simulation results, we observe that LTE has significantly higher available bandwidth compared with the other technologies. The performance of the OLSR protocol is better than that of the other routing protocols (DSR, AODV and DSDV) in LTE technology, owing to higher packet delivery, fewer packet drops and better throughput. The simulations of the routing protocols were carried out using the NS3 simulator.

Keywords: cognitive radio, multi technology, network simulator (NS3), routing protocol

Procedia PDF Downloads 63
1638 First Rank Symptoms in Mania: An Indistinct Diagnostic Strand

Authors: Afshan Channa, Sameeha Aleem, Harim Mohsin

Abstract:

First rank symptoms (FRS) are considered pathognomonic for schizophrenia. However, FRS are not a distinctive feature of schizophrenia; they have also been noticed in affective disorder, albeit not included in the diagnostic criteria. The presence of FRS in mania leads to misdiagnosis of psychotic illness, further complicating management and delaying appropriate treatment. FRS in mania are associated with poor clinical and functional outcomes, and their presence in the first episode of bipolar disorder may be a predictor of poor short-term outcome and a decompensating course of illness. FRS in mania have been studied in the West; cultural divergence, however, makes it pertinent to study the frequency of FRS in affective disorder independently in Pakistan. Objective: To determine the frequency of first rank symptoms in manic patients under treatment at the psychiatric services of a tertiary care hospital. Method: This cross-sectional study was done at the psychiatric services of Aga Khan University Hospital, Karachi, Pakistan. One hundred and twenty manic patients were recruited from November 2014 to May 2015. Patients who were unable to comprehend Urdu or had a comorbid psychiatric or organic disorder were excluded. FRS were assessed by administration of the validated Urdu version of the Present State Examination (PSE). Result: The mean age of the patients was 37.62 ± 12.51 years, and the mean number of previous manic episodes was 2.17 ± 2.23. FRS were present in 11.2% of males and 30.6% of females; this association of first rank symptoms with gender was found to be significant, with a p-value of 0.008. Overall, 19.2% exhibited FRS in the course of their illness: 43.5% had thought broadcasting, made feelings, impulses, actions and somatic passivity; 39.1% had thought insertion; 30.4% had auditory perceptual distortion; and 17.4% had thought withdrawal. None displayed delusional perception. 
Conclusion: The study confirms the presence of FRS in mania in both males and females, irrespective of the duration of the current manic illness or the number of previous manic episodes. A substantial difference was established between the genders, and being married had no protective effect on the presence of FRS.
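The reported gender association can be illustrated with a standard two-proportion z-test. The counts below are hypothetical values chosen only to match the reported rates (11.2% of males, 30.6% of females); the paper's actual group sizes are not given here:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    # Pooled two-proportion z-test for H0: p1 == p2
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the normal CDF (via the error function)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: ~11.3% of men vs ~31.0% of women with FRS
z, p = two_proportion_z(x1=7, n1=62, x2=18, n2=58)
print(round(z, 2), round(p, 3))
```

With counts in this range the two-sided p-value lands near the paper's reported 0.008, which is the qualitative point: the gender difference is unlikely under the null.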

Keywords: first rank symptoms, Mania, psychosis, present state examination

Procedia PDF Downloads 379
1637 O-LEACH: The Problem of Orphan Nodes in the LEACH Routing Protocol for Wireless Sensor Networks

Authors: Wassim Jerbi, Abderrahmen Guermazi, Hafedh Trabelsi

Abstract:

The optimum use of coverage in wireless sensor networks (WSNs) is very important. The LEACH protocol (Low-Energy Adaptive Clustering Hierarchy) presents a hierarchical clustering algorithm for wireless sensor networks. LEACH allows the formation of distributed clusters: in each cluster, LEACH randomly selects some sensor nodes, called cluster heads (CHs), via a probabilistic calculation, and each non-CH node is supposed to join a cluster and become a cluster member. Nevertheless, the CHs can be concentrated in a specific part of the network, so that several sensor nodes cannot reach any CH. To solve this problem, we created O-LEACH, an orphan-node protocol whose role is to reduce the number of sensor nodes that do not belong to any cluster. A cluster member, called a gateway, receives messages from neighboring orphan nodes and informs its CH of the neighboring nodes that do not belong to any group. The gateway (denoted CH') then attaches the orphan nodes to the cluster and collects their data. O-LEACH enables a new method of cluster formation, leading to a long network lifetime and minimal energy consumption; orphan nodes possess enough energy and seek to be covered by the network. The principal novel contribution of the proposed work is the O-LEACH protocol, which provides coverage of the whole network with a minimum number of orphan nodes and a very high connectivity rate. As a result, the WSN application receives data from the entire network, including orphan nodes. The proper functioning of the application therefore requires intelligent management of the resources present within each sensor node. The simulation results show that O-LEACH performs better than LEACH in terms of coverage, connectivity rate, energy and scalability.
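The probabilistic CH election that O-LEACH builds on can be sketched with the classic LEACH threshold T(n) = p / (1 - p * (r mod 1/p)): a node that has not recently served as CH elects itself when its random draw falls below T(n). Rounds and regions where too few nodes elect themselves are exactly what produces orphan nodes. This is a minimal sketch of plain LEACH election with illustrative parameters, not the full O-LEACH gateway mechanism:

```python
import random

def leach_threshold(p, r, was_ch_recently):
    # Classic LEACH threshold T(n) for desired CH fraction p in round r;
    # nodes that served as CH within the last 1/p rounds are excluded
    if was_ch_recently:
        return 0.0
    return p / (1 - p * (r % round(1 / p)))

def elect_cluster_heads(node_ids, p, r, recent_chs):
    # Each node draws a uniform random number and self-elects below T(n)
    random.seed(r)  # deterministic per round, for the sketch only
    return [n for n in node_ids
            if random.random() < leach_threshold(p, r, n in recent_chs)]

nodes = list(range(100))
chs = elect_cluster_heads(nodes, p=0.05, r=3, recent_chs=set())
print(len(chs))  # roughly p * len(nodes) heads on average
```

Note that the threshold rises toward 1 as the round index approaches 1/p, guaranteeing every eligible node eventually serves as CH within each cycle.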

Keywords: WSNs, routing, LEACH, O-LEACH, orphan nodes, sub-cluster, gateway, CH'

Procedia PDF Downloads 371
1636 Optimum Dimensions of Hydraulic Structures Foundation and Protections Using Coupled Genetic Algorithm with Artificial Neural Network Model

Authors: Dheyaa W. Abbood, Rafa H. AL-Suhaili, May S. Saleh

Abstract:

A model using artificial neural networks and the genetic algorithm technique is developed for obtaining the optimum dimensions of the foundation length and protections of small hydraulic structures. The procedure involves optimizing an objective function comprising a weighted summation of the state variables. The decision variables considered in the optimization are the upstream and downstream cutoff lengths and their angles of inclination, the foundation length, and the length of the downstream soil protection. These were obtained for a given maximum difference in head, depth of the impervious layer and degree of anisotropy. The optimization was carried out subject to constraints that ensure a structure safe against the uplift pressure force and a protection length at the downstream side of the structure sufficient to overcome an excessive exit gradient. The Geo-Studio software was used to analyze 1200 different cases; for each case, the length of protection and volume of structure required to satisfy the safety factors mentioned previously were estimated. An ANN model was developed and verified using these cases' input-output sets as its database. A MATLAB code was written to perform genetic algorithm optimization coupled with this ANN model using a formulated optimization model. A sensitivity analysis was done for selecting the crossover probability, the mutation probability and level, the population size, the position of the crossover, and the weights of the terms of the objective function. Results indicate that the factor that most affects the optimum solution is the population size required: the minimum value that gives a stable global optimum solution is 30,000, while the other variables have little effect on the optimum solution.
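The coupling described above evaluates a trained ANN surrogate inside the genetic algorithm loop in place of the expensive seepage analysis. The following is a minimal real-coded GA sketch, not the paper's MATLAB implementation: a toy quadratic stands in for the trained ANN, and the operator settings (tournament selection, one-point crossover, Gaussian mutation) are illustrative assumptions.

```python
import random

def genetic_algorithm(fitness, bounds, pop_size=30, generations=60,
                      crossover_p=0.8, mutation_p=0.1):
    # Minimal real-coded GA (minimization): tournament selection,
    # one-point crossover, Gaussian mutation clipped to the bounds.
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            # Two parents, each the best of a random 3-way tournament
            a, b = (min(random.sample(pop, 3), key=fitness) for _ in range(2))
            child = list(a)
            if random.random() < crossover_p and dim > 1:
                cut = random.randrange(1, dim)
                child = a[:cut] + b[cut:]
            child = [min(max(g + random.gauss(0, 0.1), lo), hi)
                     if random.random() < mutation_p else g
                     for g, (lo, hi) in zip(child, bounds)]
            nxt.append(child)
        pop = nxt
    return min(pop, key=fitness)

random.seed(42)
# Toy stand-in for the ANN surrogate: penalize distance from a known optimum
target = [2.0, -1.0]
best = genetic_algorithm(lambda x: sum((g - t) ** 2 for g, t in zip(x, target)),
                         bounds=[(-5, 5), (-5, 5)])
print([round(g, 1) for g in best])
```

In the paper's setup, `fitness` would call the trained ANN on the candidate cutoff lengths, angles, and protection length rather than this quadratic.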

Keywords: inclined cutoff, optimization, genetic algorithm, artificial neural networks, geo-studio, uplift pressure, exit gradient, factor of safety

Procedia PDF Downloads 324
1635 The Impact of Urbanisation on Sediment Concentration of Ginzo River in Katsina City, Katsina State, Nigeria

Authors: Ahmed A. Lugard, Mohammed A. Aliyu

Abstract:

This paper studied the influence of urban development and its accompanying land surface transformation on the sediment concentration of the Ginzo, a natural river flowing across the city of Katsina. A twin river known as the Tille, which is less urbanized, was used for comparison in order to ascertain the impact of the urban area on sediment concentration. A USP 61 point-integrating cableway sampler, described by Gregory and Walling (1973), was used to collect suspended sediment samples in the wet season months of June, July, August and September. The results show that only the sample collected at the peripheral site of the city, which is mostly farmland, resembles the results at the four sites of the Tille, the reference stream in the study; these were found to be within ±10% of one another, while at the other three, highly urbanized sites of the Ginzo the disparity ranges from 35-45% below what is obtained at the four sites of the Tille. In the overall assessment, a t-test applied to the two sets of data shows a significant difference between the sediment concentration of the urbanized River Ginzo and that of the less urbanized River Tille. The study further found that the lower sediment concentration in the urbanized River Ginzo is attributable to the paving of surfaces, tarred roads, concrete channelization of segments of the river including the river bed, and reserved open grassland areas, all within the catchment. The study therefore concludes that urbanization affects not only the hydrology of an urbanized river basin but also the sediment concentration, a significant aspect of its geomorphology. This would certainly affect the flood plain of the basin, which at certain points may be suitable land for cultivation. 
It is recommended that further studies on the impact of urbanization on river basins should focus on all elements of geomorphology, as they have on hydrology; this would make the work more complete, as the two disciplines are inseparable. The authorities concerned should also adopt more proper environmental and land use management policies to arrest the menace of land degradation and related episodic events.
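The t-distribution comparison mentioned above can be sketched as a two-sample test. The sediment concentrations below are hypothetical values, not the study's measurements, and Welch's unequal-variance form is used for illustration:

```python
import math

def welch_t(sample1, sample2):
    # Welch's t statistic for two independent samples, not assuming
    # equal variances between the groups
    n1, n2 = len(sample1), len(sample2)
    m1, m2 = sum(sample1) / n1, sum(sample2) / n2
    v1 = sum((x - m1) ** 2 for x in sample1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in sample2) / (n2 - 1)
    return (m1 - m2) / math.sqrt(v1 / n1 + v2 / n2)

# Hypothetical suspended-sediment concentrations (mg/L) at the four
# sampling sites of each river; the paper's raw values are not given here
tille = [410, 395, 430, 405]   # less urbanized reference river
ginzo = [250, 270, 240, 380]   # urbanized; the peripheral site runs higher

t = welch_t(tille, ginzo)
print(round(t, 2))  # |t| well above ~2.4 suggests a real difference
```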

Keywords: environment, infiltration, river, urbanization

Procedia PDF Downloads 318
1634 Multi-Criteria Optimal Management Strategy for in-situ Bioremediation of LNAPL Contaminated Aquifer Using Particle Swarm Optimization

Authors: Deepak Kumar, Jahangeer, Brijesh Kumar Yadav, Shashi Mathur

Abstract:

In-situ remediation is a technique which can remediate either surface water or groundwater at the site of contamination. In the present study, a simulation-optimization approach has been used to develop a management strategy for remediating LNAPL (Light Non-Aqueous Phase Liquid) contaminated aquifers. Benzene, toluene, ethylbenzene and xylenes, collectively known as BTEX, are the main components of the LNAPL contaminant. In the in-situ bioremediation process, a set of injection and extraction wells is installed: injection wells supply oxygen and other nutrients, which enable indigenous soil bacteria to convert BTEX into carbon dioxide and water, while extraction wells check the movement of the plume downstream. In this study, the optimal design of the system has been done using the PSO (Particle Swarm Optimization) algorithm. A comprehensive management strategy for pumping of injection and extraction wells has been developed to attain maximum allowable concentrations of 5 ppm and 4.5 ppm. The management strategy comprises determination of the pumping rates, the total pumping volume and the total running cost incurred for each potential injection and extraction well. The results indicate a high pumping rate for injection wells during the initial management period, since this facilitates the availability of oxygen and other nutrients necessary for biodegradation; it is low during the third year on account of sufficient oxygen availability, because the contaminant is assumed to have biodegraded by the end of the third year, when the concentration drops to a permissible level.
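A minimal sketch of a particle swarm optimizer of the kind used for the well design; the cost function here is a toy quadratic standing in for the coupled flow-and-transport remediation model, and the swarm parameters and pumping bounds are illustrative assumptions:

```python
import random

def pso(cost, bounds, n_particles=25, iters=80, w=0.7, c1=1.5, c2=1.5):
    # Minimal particle swarm (minimization): each particle is pulled
    # toward its personal best and the swarm's global best position.
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=cost)
    for _ in range(iters):
        for i, p in enumerate(pos):
            for d, (lo, hi) in enumerate(bounds):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - p[d])
                             + c2 * random.random() * (gbest[d] - p[d]))
                p[d] = min(max(p[d] + vel[i][d], lo), hi)
            if cost(p) < cost(pbest[i]):
                pbest[i] = p[:]
        gbest = min(pbest, key=cost)
    return gbest

random.seed(7)
# Toy stand-in for the remediation model: total cost is quadratic around
# hypothetical optimal injection/extraction pumping rates (m3/day)
optimum = [120.0, 80.0]
best = pso(lambda q: sum((qi - oi) ** 2 for qi, oi in zip(q, optimum)),
           bounds=[(0, 300), (0, 300)])
print([round(q) for q in best])
```

In the paper's setting, `cost` would run the contaminant transport simulation and return pumping cost subject to the BTEX concentration constraint, rather than this quadratic.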

Keywords: groundwater, in-situ bioremediation, light non-aqueous phase liquid, BTEX, particle swarm optimization

Procedia PDF Downloads 445
1633 In vitro α-Amylase and α-Glucosidase Inhibitory Activities of Bitter Melon (Momordica charantia) at Different Stages of Maturity

Authors: P. S. Percin, O. Inanli, S. Karakaya

Abstract:

Bitter melon (Momordica charantia) is a medicinal vegetable traditionally used as a remedy for diabetes. It contains several classes of primary and secondary metabolites. In traditional Turkish medicine, bitter melon is used for wound healing and the treatment of peptic ulcers; nowadays it is used for the treatment of diabetes and ulcerative colitis in many countries. The main constituents of bitter melon responsible for its anti-diabetic effects are triterpenes, proteins, steroids, alkaloids and phenolic compounds. In this study, the total phenolic, total carotenoid and β-carotene contents of mature and immature bitter melons were determined, and the in vitro α-amylase and α-glucosidase inhibitory activities of mature and immature bitter melons were studied. The total phenolic contents of immature and mature bitter melon were 74 and 123 mg CE/g bitter melon, respectively. Although the total phenolic content of the mature bitter melon was higher than that of the immature one, the difference was not statistically significant (p > 0.05). Carotenoids, a diverse group of more than 600 naturally occurring red, orange and yellow pigments, play important roles in many physiological processes in both plants and humans; the total carotenoid content of the mature bitter melon was 4.36-fold higher than that of the immature bitter melon. The compounds responsible for the hypoglycaemic effect of bitter melon are steroidal saponins known as charantin, insulin-like peptides and alkaloids. α-Amylase is one of the main enzymes in humans responsible for the breakdown of starch into simpler sugars; inhibitors of this enzyme can therefore delay carbohydrate digestion and reduce the rate of glucose absorption. The immature bitter melon extract showed α-amylase and α-glucosidase inhibitory activities in vitro, with the α-amylase inhibitory activity higher than the α-glucosidase inhibitory activity when IC50 values were compared. 
In conclusion, the present results provide evidence that aqueous extract of bitter melon may have an inhibitory effect on carbohydrate breakdown enzymes.

Keywords: bitter melon, in vitro antidiabetic activity, total carotenoids, total phenols

Procedia PDF Downloads 241
1632 Comparison between the Roller-Foam and Neuromuscular Facilitation Stretching on Flexibility of Hamstrings Muscles

Authors: Paolo Ragazzi, Olivier Peillon, Paul Fauris, Mathias Simon, Raul Navarro, Juan Carlos Martin, Oriol Casasayas, Laura Pacheco, Albert Perez-Bellmunt

Abstract:

Introduction: The use of stretching techniques in the sports world is frequent and widespread owing to their many effects. One of the main benefits is the gain in flexibility and range of motion and the facilitation of sporting performance. Recently, the use of the Roller-Foam (RF) has spread in sports practice at both elite and recreational levels, as its benefits appear similar to those observed with stretching. The objective of the following study is to compare the results of the Roller-Foam with proprioceptive neuromuscular facilitation (PNF) stretching (one of the stretching techniques with the most supporting evidence) on the hamstring muscles. Study design: The study is a single-blind, randomized controlled trial with 40 healthy volunteers. Intervention: The subjects were distributed randomly into one of the following groups; PNF stretching intervention group: 4 repetitions of PNF stretching (5 seconds of contraction, 5 seconds of relaxation, 20-second stretch); Roller-Foam intervention group: 2 minutes of Roller-Foam applied to the hamstring muscles. Main outcome measures: hamstring muscle flexibility was assessed at the beginning, during (after 30 seconds of intervention) and at the end of the session using the Modified Sit and Reach test (MSR). Results: Baseline data in both groups were comparable. The PNF group obtained an increase in flexibility of 3.1 cm at 30 seconds (first series) and of 5.1 cm at 2 minutes (the last of all series). The RF group obtained a 0.6 cm difference at 30 seconds and 2.4 cm after 2 minutes of application of the roller foam. The results were statistically significant within groups but not between groups. Conclusions: Despite the fact that the use of the roller foam is spreading in the sports and rehabilitation fields, the results of the present study suggest that the gain in hamstring flexibility is greater if PNF-type stretches are used instead of RF.
These results may be due to the roller foam acting more on the fascial tissue, while stretching acts more on the myotendinous unit. Future studies with larger samples and more diverse stretching types are needed.

Keywords: hamstring muscle, stretching, neuromuscular facilitation stretching, roller foam

Procedia PDF Downloads 187
1631 The Effect of Photovoltaic Integrated Shading Devices on the Energy Performance of Apartment Buildings in a Mediterranean Climate

Authors: Jenan Abu Qadourah

Abstract:

With the depletion of traditional fossil resources and the growing human population, it is now more important than ever to reduce our energy usage and harmful emissions. In the Mediterranean region, the intense solar radiation contributes to summertime overheating, which raises energy costs and building carbon footprints, while also making the region well suited to the installation of solar energy systems. In urban settings, where multi-story structures predominate and roof space is limited, photovoltaic integrated shading devices (PVSD) are a clean solution for building designers. However, incorporating photovoltaic (PV) systems into a building's envelope is a complex procedure that, if not executed correctly, might result in the PV system failing. As a result, potential PVSD design solutions must be assessed on their overall energy performance from the project's early design stage. Therefore, this paper aims to investigate and compare the possible impact of various PVSDs on the energy performance of new apartments in the Mediterranean region, with a focus on Amman, Jordan. To achieve the research aim, computer simulations were performed to assess and compare the energy performance of different PVSD configurations. Furthermore, an energy index was developed by taking into account all energy aspects, including the building's primary energy demand and the PVSD systems' net energy production. According to the findings, the PVSD system can meet 12% to 43% of the apartment building's electricity needs. By highlighting the potential interest in PVSD systems, this study aids building designers in producing more energy-efficient buildings and encourages building owners to install PV systems on the façades of their buildings.

Keywords: photovoltaic integrated shading device, solar energy, architecture, energy performance, simulation, overall energy index, Jordan

Procedia PDF Downloads 84
1630 Change in Self-Reported Personality in Students of Acting

Authors: Nemanja Kidzin, Danka Puric

Abstract:

Recently, the field of personality change has received an increasing amount of attention. Previously under-researched variables, such as the intention to change or taking on new social roles (in a working environment, education, family, etc.), have been shown to be relevant for personality change. Following this line of research, our study aimed to determine whether the process of acting can bring about personality changes in students of acting and, if yes, in which way. We hypothesized that there would be a significant difference between the self-reported personality traits of acting students at the beginning and the end of preparing for a role. Additionally, as potential moderator variables, we measured the reported personality traits of the roles the students were acting, as well as empathy, disintegration, and years of formal education. The sample (N = 47) was composed of students of acting from the Faculty of Dramatic Arts (first- to fourth-year) and the Faculty of Modern Arts (first-year students only). Participants' mean age was 20.2 (SD = 1.47), and 64% were female. The procedure included two waves of testing (T1 at the beginning and T2 at the end of the semester), and students’ acting exercises and character immersion comprised the pseudo-experimental procedure. Students’ personality traits (HEXACO-60, self-report version), empathy (Questionnaire of Cognitive and Affective Empathy, QCAE), and disintegration (DELTA9, 10-item version) were measured at both T1 and T2, while the personality of the role (HEXACO-60 observer version) was measured at T2. Responses to all instruments were given on a 5-point Likert scale. A series of repeated-measures t-tests showed significant differences in emotionality (t(46) = 2.56, p = 0.014) and conscientiousness (t(46) = -2.39, p = 0.021) between T1 and T2. Moreover, an index of absolute personality change was significantly different from 0 for all traits (range .53 to .34, t(46) = 4.20, p < .001 for the lowest index).
The average test-retest correlation for HEXACO traits was 0.57, which is lower than values reported in similar research. As for moderator variables, neither the personality of the role nor empathy or disintegration explained the change in students’ personality traits. The magnitude of personality change was highest in fourth-year students, with no significant differences between the remaining three years of studying. Overall, our results seem to indicate some personality changes in students of acting. However, these changes cannot be unequivocally related to the process of preparing for a role. Further, methodologically stricter research is needed to unravel the role of acting in personality change.
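The repeated-measures comparisons described above rest on the paired t statistic, which can be sketched in a few lines of Python. The trait scores below are mock illustrative values, not the study's data:

```python
import math

def paired_t(x1, x2):
    """Repeated-measures (paired) t statistic with df = n - 1."""
    d = [b - a for a, b in zip(x1, x2)]      # T2 - T1 differences
    n = len(d)
    mean = sum(d) / n
    var = sum((di - mean) ** 2 for di in d) / (n - 1)
    return mean / math.sqrt(var / n)

t1 = [3.2, 2.8, 3.5, 3.0, 2.9, 3.6, 3.1, 2.7]  # mock trait scores at T1
t2 = [3.5, 3.0, 3.6, 3.4, 3.1, 3.8, 3.3, 2.9]  # mock trait scores at T2
print("t =", paired_t(t1, t2))
```

The statistic is then compared against the t distribution with n - 1 degrees of freedom to obtain the p-value.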

Keywords: theater, personality change, acting, HEXACO

Procedia PDF Downloads 175
1629 Exploring 1,2,4-Triazine-3(2H)-One Derivatives as Anticancer Agents for Breast Cancer: A QSAR, Molecular Docking, ADMET, and Molecular Dynamics

Authors: Said Belaaouad

Abstract:

This study aimed to explore the quantitative structure-activity relationship (QSAR) of 1,2,4-Triazine-3(2H)-one derivatives as potential anticancer agents against breast cancer. The electronic descriptors were obtained using the Density Functional Theory (DFT) method, and a multiple linear regression technique was employed to construct the QSAR model. The model exhibited favorable statistical parameters, including R2=0.849, R2adj=0.656, MSE=0.056, R2test=0.710, and Q2cv=0.542, indicating its reliability. Among the descriptors analyzed, absolute electronegativity (χ), total energy (TE), number of hydrogen bond donors (NHD), water solubility (LogS), and shape coefficient (I) were identified as influential factors. Furthermore, leveraging the validated QSAR model, new derivatives of 1,2,4-Triazine-3(2H)-one were designed, and their activity and pharmacokinetic properties were estimated. Subsequently, molecular docking and molecular dynamics (MD) simulations were employed to assess the binding affinity of the designed molecules. The Tubulin colchicine binding site, which plays a crucial role in cancer treatment, was chosen as the target protein. Through the simulation trajectory spanning 100 ns, the binding affinity was calculated using the MMPBSA script. As a result, fourteen novel Tubulin-colchicine inhibitors with promising pharmacokinetic characteristics were identified. Overall, this study provides valuable insights into the QSAR of 1,2,4-Triazine-3(2H)-one derivatives as potential anticancer agents, along with the design of new compounds and their assessment through molecular docking and dynamics simulations targeting the Tubulin-colchicine binding site.
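As an illustration of the regression step only, a QSAR-style multiple linear regression can be sketched in plain Python. The descriptor matrix and activities below are invented for the example (they are not the study's DFT descriptors), and R² is computed as in the statistics quoted above:

```python
# Hypothetical QSAR fit: activity modelled from two mock descriptors,
# e.g. electronegativity (chi) and H-bond donor count (NHD).

def fit_mlr(X, y):
    """Least-squares coefficients [b0, b1, ...] via the normal equations."""
    rows = [[1.0] + list(r) for r in X]      # prepend intercept column
    k = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    c = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    for i in range(k):                       # Gaussian elimination w/ pivoting
        p = max(range(i, k), key=lambda m: abs(A[m][i]))
        A[i], A[p], c[i], c[p] = A[p], A[i], c[p], c[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            A[r] = [a - f * b for a, b in zip(A[r], A[i])]
            c[r] -= f * c[i]
    beta = [0.0] * k
    for i in reversed(range(k)):             # back substitution
        beta[i] = (c[i] - sum(A[i][j] * beta[j] for j in range(i + 1, k))) / A[i][i]
    return beta

def r_squared(X, y, beta):
    pred = [beta[0] + sum(b * x for b, x in zip(beta[1:], r)) for r in X]
    ybar = sum(y) / len(y)
    ss_res = sum((yi - pi) ** 2 for yi, pi in zip(y, pred))
    return 1.0 - ss_res / sum((yi - ybar) ** 2 for yi in y)

X = [[2.1, 1], [2.4, 2], [2.8, 1], [3.0, 3], [3.3, 2], [3.6, 4]]
y = [3.42, 4.18, 4.26, 5.30, 5.26, 6.42]     # exactly 0.5 + 1.2*chi + 0.4*NHD
beta = fit_mlr(X, y)
print("coefficients:", beta, "R2:", r_squared(X, y, beta))
```

With noise-free mock data the fit is exact; on real descriptor data R² would be below 1, as in the model reported above.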

Keywords: QSAR, molecular docking, ADMET, 1,2,4-triazin-3(2H)-ones, breast cancer, anticancer, molecular dynamic simulations, MMPBSA calculation

Procedia PDF Downloads 97
1628 Active Vibration Reduction for a Flexible Structure Bonded with Sensor/Actuator Pairs on Efficient Locations Using a Developed Methodology

Authors: Ali H. Daraji, Jack M. Hale, Ye Jianqiao

Abstract:

With the extensive use of high specific strength structures to optimise loading capacity and material cost in aerospace and most engineering applications, much effort has been expended to develop intelligent structures for active vibration reduction and structural health monitoring. These structures are highly flexible, have inherently low internal damping, and are associated with large vibrations and long decay times. Modifying such structures by adding lightweight piezoelectric sensors and actuators at efficient locations, integrated with an optimal control scheme, is considered an effective solution for structural vibration monitoring and control. The size and location of sensors and actuators are important research topics, given their effects on the level of vibration detection and reduction and the amount of energy provided by a controller. Several methodologies have been presented to determine the optimal location of a limited number of sensors and actuators for small-scale structures. However, these studies have tackled the problem directly, measuring the fitness function based on eigenvalues and eigenvectors obtained with numerous combinations of sensor/actuator pair locations and converging on an optimal set using heuristic optimisation techniques such as genetic algorithms. This is computationally expensive for small- and large-scale structures when a number of sensor/actuator (s/a) pairs must be optimised to suppress multiple vibration modes. This paper proposes an efficient method to determine optimal locations for a limited number of sensor/actuator pairs for active vibration reduction of a flexible structure based on the finite element method and Hamilton’s principle.
The current work takes the simplified approach of modelling a structure with sensors at all locations, subjecting it to an external force to excite the various modes of interest, and noting the locations of sensors giving the largest average percentage sensor effectiveness, measured by dividing each sensor's output voltage by the maximum for each mode. The methodology was implemented for a cantilever plate under external force excitation to find the optimal distribution of six sensor/actuator pairs to suppress the first six modes of vibration. The resulting optimal sensor locations show good agreement with published optimal locations, but with much reduced computational effort and higher effectiveness. Furthermore, it is shown that collocated sensor/actuator pairs placed at these locations give very effective active vibration reduction using an optimal linear quadratic control scheme.
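The effectiveness measure described above can be sketched as follows. The per-mode sensor voltages are mock values, not output from the paper's finite element model:

```python
# Mock per-mode sensor output voltages at four candidate locations.
# Effectiveness per mode = sensor voltage / max voltage for that mode,
# expressed as a percentage and averaged over the modes of interest.
voltages = {
    1: [0.9, 0.4, 0.1, 0.7],
    2: [0.2, 0.8, 0.6, 0.3],
    3: [0.5, 0.1, 0.9, 0.6],
}
n_loc = len(next(iter(voltages.values())))
effectiveness = [0.0] * n_loc
for mode, v in voltages.items():
    vmax = max(v)
    for i, vi in enumerate(v):
        effectiveness[i] += 100.0 * vi / vmax / len(voltages)
ranked = sorted(range(n_loc), key=lambda i: -effectiveness[i])
print("average % effectiveness:", effectiveness)
print("locations ranked best-first:", ranked)
```

A sensor/actuator pair would then be placed at each of the top-ranked locations, one per mode of interest.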

Keywords: optimisation, plate, sensor effectiveness, vibration control

Procedia PDF Downloads 232
1627 Comparison of Susceptibility to Measles in Preterm Infants versus Term Infants

Authors: Joseph L. Mathew, Shourjendra N. Banerjee, R. K. Ratho, Sourabh Dutta, Vanita Suri

Abstract:

Background: In India and many other developing countries, a single dose of measles vaccine is administered to infants at 9 months of age. This is based on the assumption that maternal transplacentally transferred antibodies will protect infants until that age. However, our previous data showed that most infants lose maternal anti-measles antibodies before 6 months of age, making them susceptible to measles before vaccination at 9 months. Objective: This prospective study was designed to compare susceptibility in pre-term versus term infants at different time points. Material and Methods: Following Institutional Ethics Committee approval and a formal informed consent process, venous blood was drawn from a cohort of 45 consecutive term infants and 45 consecutive pre-term infants (both groups delivered by the vaginal route) at birth, 3 months, 6 months and 9 months (prior to measles vaccination). Serum was separated, and anti-measles IgG antibody levels were measured by quantitative ELISA kits (with sensitivity and specificity > 95%). Susceptibility to measles was defined as an antibody titre < 200 mIU/ml. The mean antibody levels were compared between the two groups at the four time points. Results: The mean gestation of term babies was 38.5±1.2 weeks, and of pre-term babies 34.7±2.8 weeks. The respective mean birth weights were 2655±215g and 1985±175g. A reliable maternal vaccination record was available for only 7 of the 90 mothers. Mean anti-measles IgG antibody (±SD) in term babies was 3165±533 IU/ml at birth, 1074±272 IU/ml at 3 months, 314±153 IU/ml at 6 months, and 68±21 IU/ml at 9 months. The corresponding levels in pre-term babies were 2875±612 IU/ml, 948±377 IU/ml, 265±98 IU/ml, and 72±33 IU/ml (p > 0.05 for all inter-group comparisons). The proportion of susceptible term infants at birth, 3 months, 6 months and 9 months was 0%, 16%, 67% and 96%.
The corresponding proportions in the pre-term infants were 0%, 29%, 82%, and 100% (p > 0.05 for all inter-group comparisons). Conclusion: The majority of infants are susceptible to measles before 9 months of age, suggesting the need to bring measles vaccination forward, but there was no statistically significant difference between the proportions of susceptible term and pre-term infants at any of the four time points. A larger study is required to confirm these findings and to compare sero-protection if vaccination is administered between 6 and 9 months.

Keywords: measles, preterm, susceptibility, term infant

Procedia PDF Downloads 273
1626 Customer Segmentation Revisited: The Case of the E-Tailing Industry in Emerging Market

Authors: Sanjeev Prasher, T. Sai Vijay, Chandan Parsad, Abhishek Banerjee, Sahakari Nikhil Krishna, Subham Chatterjee

Abstract:

With the rapid rise in internet retailing, the industry is set for a major implosion. With little differentiation among competitors, companies find it difficult to segment and target the right shoppers. The objective of the study is to segment Indian online shoppers on the basis of two factors: website characteristics and shopping values. Together, these cover the extrinsic and intrinsic factors that affect shoppers as they visit web retailers. Data were collected by questionnaire from 319 Indian online shoppers, and factor analysis was used to confirm the factors influencing shoppers in their selection of web portals. Thereafter, cluster analysis was applied, and different segments of shoppers were identified. The relationship between income groups and online shoppers’ segments was tracked using correspondence analysis. Significant findings from the study include that web entertainment and informativeness together contribute more than fifty percent of the total influence on web shoppers. Contrary to the general perception that shoppers seek utilitarian leverages, the present study highlights a preference for fun, excitement, and entertainment while browsing a website. Four segments, namely Information Seekers, Utility Seekers, Value Seekers and Core Shoppers, were identified and profiled. Value Seekers emerged as the most dominant segment, with two-fifths of the respondents drawn to hedonic as well as utilitarian shopping values. With overlap among the segments, utilitarian shopping value gained prominence, covering more than fifty-eight percent of the total respondents. Moreover, a strong relationship was established between income levels and the segments of Indian online shoppers. As income levels increase, web shoppers shift motives from utility seeking to information seeking, core shopping and finally value seeking. Companies can strategically use this information for target marketing and align their web portals accordingly.
This study can further be used to develop models revolving around satisfaction, trust and customer loyalty.

Keywords: online shopping, shopping values, effectiveness of information content, web informativeness, web entertainment, information seekers, utility seekers, value seekers, core shoppers

Procedia PDF Downloads 195
1625 Two-Level Graph Causality to Detect and Predict Random Cyber-Attacks

Authors: Van Trieu, Shouhuai Xu, Yusheng Feng

Abstract:

Tracking attack trajectories can be difficult when there is limited information about the nature of the attack. It is even more difficult when attack information is collected by Intrusion Detection Systems (IDSs), which have limitations in identifying malicious and anomalous traffic. Moreover, IDSs only point out suspicious events; they do not show how the events relate to each other or which event possibly caused another event to happen. Because of this, it is important to investigate new methods capable of tracking attack trajectories quickly, with less attack information and less dependency on IDSs, in order to prioritize actions during incident response. This paper proposes a two-level graph causality framework for tracking attack trajectories in internet networks by leveraging observable malicious behaviors to detect which attack events most probably cause other events to occur in the system. Technically, given a time series of malicious events, the framework extracts events with useful features, such as attack time and port number, and applies conditional independence tests to detect relationships between attack events. Using academic datasets collected by IDSs, experimental results show that the framework can quickly detect causal pairs that offer meaningful insights into the nature of the internet network, given only reasonable restrictions on network size and structure. Without the framework’s guidance, these insights could not be discovered by existing tools such as IDSs, and uncovering them manually would cost expert human analysts significant time, if it were possible at all. The computational results from the proposed two-level graph network model reveal clear patterns and trends. In fact, more than 85% of causal pairs have an average time difference between the causal and effect events, in both computed and observed data, within 5 minutes. This result can be used as a preventive measure against future attacks.
Although the forecast horizon may be short, from 0.24 seconds to 5 minutes, it is long enough to be used to design a prevention protocol to block those attacks.
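The 5-minute lag statistic quoted above can be illustrated with a short sketch. The timestamped cause/effect pairs below are invented for the example, not taken from the paper's datasets:

```python
# Illustrative check of the "within 5 minutes" statistic: each tuple is a
# (cause_time, effect_time) pair in seconds for a detected causal pair.
pairs = [(0, 40), (100, 130), (500, 900), (1000, 1010), (2000, 2600), (3000, 3120)]
lags = [effect - cause for cause, effect in pairs]
mean_lag = sum(lags) / len(lags)
within_5min = sum(1 for lag in lags if lag <= 300) / len(lags)
print("mean lag (s):", mean_lag, "fraction within 5 min:", within_5min)
```

In a deployed setting, the lag distribution of detected causal pairs bounds how much warning time a prevention protocol can count on.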

Keywords: causality, multilevel graph, cyber-attacks, prediction

Procedia PDF Downloads 156
1624 Structural Health Monitoring Using Fibre Bragg Grating Sensors in Slab and Beams

Authors: Pierre van Tonder, Dinesh Muthoo, Kim Twiname

Abstract:

Many existing and newly built structures are constructed on the basis of the engineer's design and the workmanship of the construction company. However, for larger structures where more people are exposed to the building, structural integrity is of great importance for the safety of its occupants (Raghu, 2013). But how can the structural integrity of a building be monitored efficiently and effectively? This is where the fourth industrial revolution steps in: with minimal human interaction, data can be collected, analysed, and stored, and can also give an indication of any inconsistencies found in the data collected. This is where the Fibre Bragg Grating (FBG) monitoring system is introduced. This paper illustrates how data can be collected and converted to develop stress-strain behaviour and to produce bending moment diagrams for the utilisation and prediction of the structure's integrity. Embedded fibre optic sensors, fibre Bragg grating sensors in particular, were used in this study. The procedure entailed making use of the wavelength-shift demodulation technique and the phase-mask inscription process. The fibre optic sensors considered in this report were photosensitive and embedded in the slab and beams for data collection and analysis. Two sets of fibre cables were inserted, one purposely to collect temperature recordings and the other to collect strain and temperature. The data was collected over a period of time, analysed, and used to produce bending moment diagrams to make predictions of the structure's integrity. The data indicated that the fibre Bragg grating sensing system is useful and can be used for structural health monitoring in any environment.
From the experimental data for the slab and beams, the moments were found to be 64.33 kN.m, 64.35 kN.m and 45.20 kN.m (from the experimental bending moment diagram), while the idealised Ultimate Limit State values were 133 kN.m and 226.2 kN.m. The difference in values gave room for an early warning system; in other words, a reserve capacity of approximately 50% to failure.

Keywords: fibre bragg grating, structural health monitoring, fibre optic sensors, beams

Procedia PDF Downloads 139
1623 Levels of Students’ Understandings of Electric Field Due to a Continuous Charged Distribution: A Case Study of a Uniformly Charged Insulating Rod

Authors: Thanida Sujarittham, Narumon Emarat, Jintawat Tanamatayarat, Kwan Arayathanitkul, Suchai Nopparatjamjomras

Abstract:

The electric field is an important fundamental concept in electrostatics. In high school, Thai students have generally already learned the definition of the electric field, the electric field due to a point charge, and the superposition of electric fields due to multiple point charges. This is the prerequisite basic knowledge students hold before entering university. At the first-year university level, this basic knowledge is quickly revised before students are introduced to a more complicated topic: the electric field due to continuous charge distributions. We initially found that our freshman students, who were from the Faculty of Science and enrolled in the introductory physics course (SCPY 158), often seriously struggled with the basic physics concepts of superposition of electric fields and the inverse square law, and with the mathematics relevant to this topic. This in turn affected students' understanding of advanced topics within the course, such as Gauss's law, electric potential difference, and capacitance. Therefore, it is very important to determine students' understanding of the electric field due to continuous charge distributions. An open-ended question asking students to sketch the net electric field vectors from a uniformly charged insulating rod was administered to 260 freshman science students as pre- and post-tests. All of their responses were analyzed and classified into five levels of understanding. To probe each level in depth, 30 students were interviewed about their individual responses. The pre-test found that about 90% of students had incorrect understandings. Even after completing the lectures, only 26.5% could provide correct responses. Up to 50% had confusions and irrelevant ideas. The result implies that teaching methods in Thai high schools may be problematic.
In addition, the students' alternative conceptions identified here could be used as a guideline for developing the instructional method currently used in the course, especially for teaching electrostatics.

Keywords: alternative conceptions, electric field of continuous charged distributions, inverse square law, levels of student understandings, superposition principle

Procedia PDF Downloads 296
1622 Integration of Hybrid PV-Wind in Three Phase Grid System Using Fuzzy MPPT without Battery Storage for Remote Area

Authors: Thohaku Abdul Hadi, Hadyan Perdana Putra, Nugroho Wicaksono, Adhika Prajna Nandiwardhana, Onang Surya Nugroho, Heri Suryoatmojo, Soedibjo

Abstract:

Access to electricity is now a basic requirement of mankind. Unfortunately, there are still many places around the world that have no access to electricity, such as small islands, where there could potentially be a factory, a plantation, a residential area, or resorts. Many of these places have substantial potential for energy generation from photovoltaic (PV) systems and wind turbines (WT), which can be used to generate electricity independently. Solar energy and wind power are renewable energy sources found abundantly in nature, and their technologies are developing rapidly to help meet the demand for electricity. PV and wind power output depends on the solar irradiation and wind speed of the geographical area. This paper presents a control methodology for a hybrid small-scale PV/wind energy system that uses a fuzzy logic controller (FLC) for maximum power point tracking (MPPT) under different solar irradiation and wind speed conditions. The paper discusses simulation and analysis of the generation process of the hybrid resources at the MPP and the power conditioning unit (PCU) of the photovoltaic (PV) and wind turbine (WT) systems connected to the three-phase low-voltage electricity grid (380V) without battery storage. The capacities of the sources used are a 2.2 kWp PV array and a 2.5 kW PMSG (Permanent Magnet Synchronous Generator) wind turbine. The modeling of the hybrid PV/wind system, as well as the integrated power electronics components in the grid-connected system, is simulated using MATLAB/Simulink.
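As a generic illustration of hill-climbing MPPT only, and not the paper's fuzzy controller, the sketch below perturbs the operating voltage with a step size crudely scaled by a membership-like function of the power-voltage slope, on a mock P-V curve:

```python
# Generic MPPT sketch: large voltage steps when the P-V slope is steep
# (far from the MPP), small steps when it is flat (near the MPP). The
# curve, gains, and step sizes are invented for illustration.
def pv_power(v):
    return max(0.0, 900.0 - (v - 30.0) ** 2)   # mock P-V curve, MPP at 30 V

v, v_prev = 21.0, 20.0
p_prev = pv_power(v_prev)
for _ in range(200):
    p = pv_power(v)
    slope = (p - p_prev) / (v - v_prev) if v != v_prev else 0.0
    mag = min(abs(slope) / 20.0, 1.0)          # crude "small...large" membership
    step = 0.05 + 0.9 * mag                    # step size in volts
    v_prev, p_prev = v, p
    v = v + step if slope > 0 else v - step
print("operating voltage after tracking:", v)
```

A full fuzzy MPPT, as in the paper, would replace the single membership value with a rule base over dP/dV and its change, but the hill-climbing idea is the same.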

Keywords: fuzzy MPPT, grid connected inverter, photovoltaic (PV), PMSG wind turbine

Procedia PDF Downloads 355
1621 Study and Simulation of a Dynamic System Using Digital Twin

Authors: J.P. Henriques, E. R. Neto, G. Almeida, G. Ribeiro, J.V. Coutinho, A.B. Lugli

Abstract:

Industry 4.0, or the Fourth Industrial Revolution, is transforming the relationship between people and machines. In this scenario, technologies such as Cloud Computing, the Internet of Things, Augmented Reality, Artificial Intelligence, and Additive Manufacturing, among others, are making industries and devices increasingly intelligent. One of the most powerful technologies of this new revolution is the Digital Twin, which allows the virtualization of a real system or process. In this context, the present paper addresses the linear and nonlinear dynamic study of a didactic level plant using a Digital Twin. In the first part of the work, the level plant is identified at a fixed operating point by using the classical least-squares method. The linearized model is embedded in a Digital Twin using Automation Studio® from Famic Technologies. To validate the use of the Digital Twin in the linearized study of the plant, the dynamic response of the real system is compared to that of the Digital Twin. Furthermore, in order to develop the nonlinear model on a Digital Twin, the didactic level plant is identified using the Hammerstein method. Different steps are applied to the plant, and from the Hammerstein algorithm, the nonlinear model is obtained for all operating ranges of the plant. As in the linear approach, the nonlinear model is embedded in the Digital Twin, and the dynamic response is compared to the real system at different operating points. Finally, from the practical results obtained, one can conclude that using a Digital Twin to study dynamic systems is extremely useful in the industrial environment, given that it is possible to develop and tune controllers using the virtual model of the real system.
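The linear identification step can be illustrated with a minimal least-squares fit of a first-order discrete model. The plant parameters below are invented for the example, not those of the didactic level plant:

```python
# Least-squares identification of y[k+1] = a*y[k] + b*u[k] from step-
# response data, analogous to identifying a level plant at a fixed
# operating point before embedding the model in a digital twin.
u = [1.0] * 20                                  # step input
y = [0.0]
for k in range(19):                             # simulate the "real" plant
    y.append(0.9 * y[k] + 0.2 * u[k])

# Normal equations for [a, b] minimising the sum of squared residuals.
Syy = sum(y[k] * y[k] for k in range(19))
Syu = sum(y[k] * u[k] for k in range(19))
Suu = sum(u[k] * u[k] for k in range(19))
Sy1y = sum(y[k + 1] * y[k] for k in range(19))
Sy1u = sum(y[k + 1] * u[k] for k in range(19))
det = Syy * Suu - Syu * Syu
a_hat = (Sy1y * Suu - Sy1u * Syu) / det
b_hat = (Syy * Sy1u - Syu * Sy1y) / det
print("identified a, b:", a_hat, b_hat)        # noise-free, so near 0.9, 0.2
```

A Hammerstein identification, as used for the nonlinear part of the work, would additionally fit a static nonlinearity on the input before this linear block.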

Keywords: industry 4.0, digital twin, system identification, linear and nonlinear models

Procedia PDF Downloads 148
1620 Culture of Human Mesenchymal Stem Cells in Xeno-Free Serum-Free Culture Conditions on Laminin-521

Authors: Halima Albalushi, Mohadese Boroojerdi, Murtadha Alkhabori

Abstract:

Introduction: Maintenance of stem cell properties during culture necessitates recreating the natural cell niche. Studies have reported promising maintenance of mesenchymal stem cell (MSC) properties when using extracellular matrices such as CELLstart™, the recommended coating material for stem cells cultured in serum-free and xeno-free conditions. Laminin-521 is a crucial adhesion protein found in the natural stem cell niche, and it plays an important role in facilitating the maintenance of self-renewal, pluripotency, standard morphology, and karyotype of human pluripotent stem cells (PSCs). The aim of this study is to investigate the effects of Laminin-521 on the characteristics of human umbilical cord-derived mesenchymal stem cells (UC-MSC) as a step toward clinical application. Methods: Human MSC were isolated from the umbilical cord via the explant method. UC-MSC were cultured in serum-free and xeno-free conditions in the presence of Laminin-521 for six passages. Cultured cells were evaluated by morphology and expansion index at each passage. Phenotypic characterization of UC-MSCs cultured on Laminin-521 was performed by assessment of cell surface markers. Results: UC-MSCs formed small colonies and expanded as a homogeneous monolayer when cultured on Laminin-521, reaching confluence after 4 days in culture. No statistically significant difference was detected in any passage when comparing the expansion index of UC-MSCs cultured on LN-521 and CELLstart™. Phenotypic characterization of UC-MSCs cultured on LN-521 using flow cytometry revealed positive expression of CD73, CD90, and CD105 and negative expression of CD34, CD45, CD19, CD14 and HLA-DR. Conclusion: Laminin-521 is comparable to CELLstart™ in supporting UC-MSC expansion and maintaining their characteristics during culture in xeno-free and serum-free conditions.

Keywords: mesenchymal stem cells, culture, laminin-521, xeno-free serum-free

Procedia PDF Downloads 74
1619 Computation of Radiotherapy Treatment Plans Based on CT to ED Conversion Curves

Authors: B. Petrović, L. Rutonjski, M. Baucal, M. Teodorović, O. Čudić, B. Basarić

Abstract:

Radiotherapy treatment planning computers use CT data of the patient. For the computation of a treatment plan, treatment planning system must have an information on electron densities of tissues scanned by CT. This information is given by the conversion curve CT (CT number) to ED (electron density), or simply calibration curve. Every treatment planning system (TPS) has built in default CT to ED conversion curves, for the CTs of different manufacturers. However, it is always recommended to verify the CT to ED conversion curve before actual clinical use. Objective of this study was to check how the default curve already provided matches the curve actually measured on a specific CT, and how much it influences the calculation of a treatment planning computer. The examined CT scanners were from the same manufacturer, but four different scanners from three generations. The measurements of all calibration curves were done with the dedicated phantom CIRS 062M Electron Density Phantom. The phantom was scanned, and according to real HU values read at the CT console computer, CT to ED conversion curves were generated for different materials, for same tube voltage 140 kV. Another phantom, CIRS Thorax 002 LFC which represents an average human torso in proportion, density and two-dimensional structure, was used for verification. The treatment planning was done on CT slices of scanned CIRS LFC 002 phantom, for selected cases. Interest points were set in the lungs, and in the spinal cord, and doses recorded in TPS. The overall calculated treatment times for four scanners and default scanner did not differ more than 0.8%. Overall interest point dose in bone differed max 0.6% while for single fields was maximum 2.7% (lateral field). Overall interest point dose in lungs differed max 1.1% while for single fields was maximum 2.6% (lateral field). 
It is known that the user should verify the CT to ED conversion curve, but developing countries often face a lack of QA equipment and therefore use the default data provided. We have concluded that the CT to ED curves obtained differ at certain points of the curve, generally in the region of higher densities. This influences the treatment planning result; the effect is not large, but it definitely does make a difference in the calculated dose.
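A CT to ED calibration curve of this kind is typically applied by piecewise-linear interpolation between the measured sample points. The sketch below illustrates such a lookup; the HU/ED sample points are illustrative placeholders for the shape of a typical curve, not the values measured on the CIRS 062M phantom.

```python
import numpy as np

# Illustrative CT-number (HU) to relative electron density (ED) sample
# points, roughly ordered from air to dense bone. These are NOT the
# measured CIRS 062M values, only placeholders for the curve's shape.
hu_samples = [-1000.0, -800.0, -500.0, 0.0, 300.0, 1200.0]
ed_samples = [0.001, 0.19, 0.50, 1.00, 1.16, 1.70]

def hu_to_ed(hu):
    """Piecewise-linear lookup of relative electron density for a HU value.

    np.interp clamps to the endpoint ED values outside the sampled HU range.
    """
    return float(np.interp(hu, hu_samples, ed_samples))

print(hu_to_ed(0.0))    # water-equivalent tissue maps to ED = 1.0
print(hu_to_ed(150.0))  # soft tissue, between 1.00 and 1.16
```

Differences between a default and a measured curve show up exactly in such interpolated values, most visibly in the high-density (bone) region discussed above.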

Keywords: computation of treatment plan, conversion curve, radiotherapy, electron density

Procedia PDF Downloads 486
1618 Efficacy of Gamma Radiation on the Productivity of Bactrocera oleae Gmelin (Diptera: Tephritidae)

Authors: Mehrdad Ahmadi, Mohamad Babaie, Shiva Osouli, Bahareh Salehi, Nadia Kalantaraian

Abstract:

The olive fruit fly, Bactrocera oleae Gmelin (Diptera: Tephritidae), is one of the most serious pests of olive orchards in the olive-growing provinces of Iran. The female lays eggs in the green olive fruit, and the larvae hatch inside the fruit, where they feed upon the fruit tissue. One of the main ecologically friendly and species-specific systems of pest control is the sterile insect technique (SIT), which is based on the release of large numbers of sterilized insects. The objective of our work was to develop SIT against B. oleae by using gamma radiation in laboratory and field trials in Iran. Oviposition by females mated with irradiated males is one of the main parameters that determine the success of SIT. To determine the sterilizing dose, pupae were exposed to 0 to 160 Gy of gamma radiation. The main factor in SIT is the productivity of the females that are mated by irradiated males. The adults that emerged from irradiated pupae were mated with untreated adults of the same age by confining them inside transparent cages. The fecundity of irradiated males mated with non-irradiated females decreased with increasing radiation dose. It was observed that the number of eggs and also the percentage of egg hatching were significantly (P < 0.05) affected in IM x NF crosses compared with NM x NF crosses in the F1 generation at all doses. Also, the statistical analysis showed a significant difference (P < 0.05) in the mean number of eggs laid between irradiated and non-irradiated females crossed with irradiated males, which suggests that the males were susceptible to gamma radiation. The egg hatching percentage declined markedly with increasing radiation dose of the treated males in the mating trials, which demonstrated that the egg hatch rate was dose dependent. Our results indicated that gamma radiation affects the longevity of irradiated B. oleae larvae (established from irradiated pupae) and significantly increased their larval duration.
These results show that gamma radiation and SIT can be used successfully against the olive fruit fly.
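The dose-dependent decline in egg hatch described above is often summarized with a simple monotone dose-response model. The sketch below uses a hypothetical exponential-decay model with made-up parameters (hatch0, k) purely to illustrate the shape of such a relationship over the 0 to 160 Gy range tested; it is not fitted to the study's data.

```python
import math

def hatch_rate(dose_gy, hatch0=0.90, k=0.025):
    """Hypothetical expected egg-hatch fraction for eggs sired by males
    irradiated at dose_gy; hatch0 and k are illustrative parameters only."""
    return hatch0 * math.exp(-k * dose_gy)

# Hatch declines monotonically across the tested dose range.
rates = [hatch_rate(d) for d in (0, 40, 80, 120, 160)]
print([round(r, 3) for r in rates])
```

A model of this form can be fitted to observed hatch counts to estimate the dose at which residual fertility becomes acceptably low for an SIT release program.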

Keywords: fertility, olive fruit fly, radiation, sterile insect technique

Procedia PDF Downloads 196
1617 Saccharification and Bioethanol Production from Banana Pseudostem

Authors: Elias L. Souza, Noeli Sellin, Cintia Marangoni, Ozair Souza

Abstract:

Among the different forms of reuse and recovery of agro-residual waste is the production of biofuels. The production of second-generation ethanol has been evaluated and proposed as one of the technically viable alternatives for this purpose. This research work employed the banana pseudostem as biomass. Two different chemical pre-treatment methods (acid hydrolysis with H2SO4 2% w/w and alkaline hydrolysis with NaOH 3% w/w) of dry and milled biomass (70 g/L of dry matter, ms) were assessed, and the corresponding reducing sugar (AR) yields after enzymatic saccharification (YAR) were determined. The effect on YAR of increasing the dry matter (ms) from 70 to 100 g/L, for dry and milled as well as fresh biomass, was analyzed. Changes in cellulose crystallinity and in biomass surface morphology due to the different chemical pre-treatments were analyzed by X-ray diffraction and scanning electron microscopy. The acid pre-treatment resulted in higher YAR values, whether related to the cellulose content under saccharification (RAR = 79.48%) or to the biomass concentration employed (YAR/ms = 32.8%). In a comparison between the alkaline and acid pre-treatments, the latter led to an increase in the cellulose content of the reaction mixture from 52.8 to 59.8%, a reduction of the cellulose crystallinity index from 51.19 to 33.34%, and increases in RAR (43.1%) and YAR/ms (39.5%). The increase of dry matter (ms) from 70 to 100 g/L in the acid pre-treatment resulted in a decrease of the average yields in RAR (43.1%) and YAR/ms (18.2%). Using the fresh pseudostem with the broth removed, whether at 70 g/L or 100 g/L of dry matter (ms), as with the alkaline pre-treatment, led to lower average values of RAR (67.2% and 42.2%) and of YAR/ms (28.4% and 17.8%), respectively.
The acid pre-treated and saccharified biomass broth was detoxified with different activated carbon contents (1, 2 and 4% w/v), concentrated up to AR = 100 g/L, and fermented by Saccharomyces cerevisiae. The yield (YP/AR) and productivity (QP) in ethanol were determined and compared to the values obtained from the fermentation of non-concentrated/non-detoxified broth (AR = 18 g/L) and concentrated/non-detoxified broth (AR = 100 g/L). The highest average value of YP/AR (0.46 g/g) was obtained from the fermentation of the non-concentrated broth. This value did not present a significant difference (p < 0.05) compared to the YP/AR of the broth concentrated and detoxified with activated carbon at 1% w/v (YP/AR = 0.41 g/g). However, a higher ethanol productivity (QP = 1.44 g/L.h) was achieved through broth detoxification. This value was 75% higher than the average QP determined using the concentrated and non-detoxified broth (QP = 0.82 g/L.h), and 22% higher than the QP found for the non-concentrated broth (QP = 1.18 g/L.h).
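The yield and productivity figures above follow the standard definitions: YP/AR is grams of ethanol produced per gram of reducing sugar consumed, and QP is grams of ethanol per litre of broth per hour. The sketch below applies them to the reported detoxified-broth numbers; the fermentation time is back-calculated for illustration and is not stated in the abstract.

```python
def ethanol_yield(ethanol_g_per_l, sugar_consumed_g_per_l):
    """Y_P/AR: grams of ethanol produced per gram of reducing sugar consumed."""
    return ethanol_g_per_l / sugar_consumed_g_per_l

def productivity(ethanol_g_per_l, time_h):
    """Q_P: grams of ethanol per litre of broth per hour of fermentation."""
    return ethanol_g_per_l / time_h

# Reported figures for the detoxified, concentrated broth: AR = 100 g/L
# consumed at Y_P/AR = 0.41 g/g gives about 41 g/L ethanol.
ethanol = 0.41 * 100.0
# At Q_P = 1.44 g/L.h, 41 g/L corresponds to roughly 28.5 h of fermentation
# (back-calculated here; the actual fermentation time is not reported).
time_h = ethanol / 1.44
print(round(time_h, 1))
```

This back-calculation also makes the 75% productivity gain concrete: at QP = 0.82 g/L.h the same 41 g/L of ethanol would take about 50 h instead.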

Keywords: biofuels, biomass, saccharification, bioethanol

Procedia PDF Downloads 343