Search results for: Design of Experiments (DOE)
8988 Daylight Performance of a Single Unit in Distinct Arrangements
Authors: Rifat Tabassoom
Abstract:
Recently, multistoried housing projects have been accelerating in Dhaka, the capital of Bangladesh, to house its massive population. Insufficient background research has led to a design trend in which a single unit is designed and then repeated throughout the building. Although the units have identical designs, they do not perform evenly with respect to daylight, which in turn alters the household activities within them. This paper aims to understand whether a single unit can be an optimum solution regarding daylight for a selected housing project.
Keywords: daylight, orientation, performance, simulations
Procedia PDF Downloads 123
8987 Examining the Design of a Scaled Audio Tactile Model for Enhancing Interpretation of Visually Impaired Visitors in Heritage Sites
Authors: A. Kavita Murugkar, B. Anurag Kashyap
Abstract:
With the Rights of Persons with Disabilities Act (RPWD Act) 2016, the Indian government has made it mandatory for all establishments, including heritage sites, to be accessible to people with disabilities. However, recent access audit surveys conducted under the Accessible India Campaign by the Ministry of Culture indicate that very few accessibility measures are provided in heritage sites for people with disabilities. Though there are some measures for the mobility impaired, the surveys brought out that there are almost no provisions for people with vision impairment (PwVI) in heritage sites, depriving them of reasonable physical and intellectual access that facilitates an enjoyable experience and an enriching interpretation of the site. There is a growing need to develop multisensory interpretative tools that can help PwVI perceive heritage sites in the absence of vision. The purpose of this research was to examine the usability of an audio-tactile model as a haptic and sound-based strategy for augmenting the perception and experience of PwVI in a heritage site. The first phase of the project was a multi-stage phenomenological experimental study with visually impaired users to investigate the design parameters for developing an audio-tactile model for PwVI. The findings from this phase included user preferences related to the physical design of the model, such as size, scale, materials and details, and the information it should carry, such as braille, audio output and tactile text. In the second phase, a working prototype of an audio-tactile model was designed and developed for a heritage site based on the findings of the first phase; a nationally listed heritage site from the author's city was selected for the model. Finally, the model was tested by visually impaired users for refinement and validation. The prototype developed empowers people with vision impairment to navigate independently in heritage sites. Such a model, if installed in every heritage site, can serve as a technological guide for the person with vision impairment, giving information on the architecture, details, planning and scale of the buildings, the entrances, the location of important features, lifts and staircases, and the available accessible facilities. The model was constructed using 3D modeling and digital printing technology. Though designed for the Indian context, this assistive technology for the blind can be explored for wider applications across the globe. Such an accessible solution can change the otherwise "incomplete" perception of the disabled visitor, in this case a visually impaired visitor, and augment the quality of their experience in heritage sites.
Keywords: accessibility, architectural perception, audio tactile model, inclusive heritage, multi-sensory perception, visual impairment, visitor experience
Procedia PDF Downloads 106
8986 Evaluation of the Mechanical Behavior of a Retaining Wall Structure on a Weathered Soil through Probabilistic Methods
Authors: P. V. S. Mascarenhas, B. C. P. Albuquerque, D. J. F. Campos, L. L. Almeida, V. R. Domingues, L. C. S. M. Ozelim
Abstract:
Retaining slope structures are increasingly considered in geotechnical engineering projects due to the extensive growth of urban areas. These kinds of constructions may develop instabilities over time and may require reinforcement or even rebuilding of the structure. In this context, statistical analysis is an important tool for decision making regarding retaining structures. This study addresses the failure probability of the construction of a retaining wall over the debris of an old, collapsed one. The new structure will be approximately 350 m long and will be located on the margins of Lake Paranoá in Brasília, the capital of Brazil. The building process must also account for the use of the ruins as a caisson. A series of in situ and laboratory experiments defined the local soil strength parameters, and a Standard Penetration Test (SPT) defined the in situ soil stratigraphy. The parameters obtained were also verified against soil data from a collection of master's and doctoral works from the University of Brasília on soils similar to the local one. Initial studies show that a concrete wall is the proper solution for this case, taking into account the technical, economic and deterministic analyses. On the other hand, in order to better analyze the statistical significance of the factors of safety obtained, a Monte Carlo analysis was performed for the concrete wall and two other initial solutions. A comparison between the statistical and risk results generated for the different solutions indicated that a gabion solution would better fit the financial and technical feasibility of the project.
Keywords: economical analysis, probability of failure, retaining walls, statistical analysis
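A minimal sketch of the kind of Monte Carlo reliability analysis described above, assuming hypothetical lognormal distributions for the soil strength parameters and a simplified sliding factor-of-safety expression; the actual wall geometry and soil statistics are not given in the abstract, so all numbers here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)
n_trials = 100_000

# Hypothetical soil strength parameters (not the values from the study):
# cohesion c (kPa) and friction angle phi (degrees), sampled from lognormal
# distributions defined by an assumed mean and coefficient of variation.
def lognormal(mean, cov, size):
    sigma = np.sqrt(np.log(1.0 + cov**2))
    mu = np.log(mean) - 0.5 * sigma**2
    return rng.lognormal(mu, sigma, size)

c = lognormal(mean=25.0, cov=0.30, size=n_trials)      # kPa
phi = lognormal(mean=28.0, cov=0.15, size=n_trials)    # degrees

# Simplified sliding factor of safety for a gravity retaining wall:
# resisting force = base friction + adhesion, driving force = Rankine active thrust.
W = 350.0          # wall weight per metre (kN/m), assumed
H = 8.0            # retained height (m), assumed
B = 3.0            # base width (m), assumed
gamma = 18.0       # soil unit weight (kN/m3), assumed
phi_rad = np.radians(phi)
Ka = np.tan(np.radians(45.0) - phi_rad / 2.0) ** 2     # Rankine active coefficient
driving = 0.5 * Ka * gamma * H**2                      # active thrust (kN/m)
resisting = W * np.tan(phi_rad) + c * B                # base friction + adhesion
fs = resisting / driving

p_failure = np.mean(fs < 1.0)
print(f"Mean FS = {fs.mean():.2f}, probability of failure = {p_failure:.4f}")
```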
Procedia PDF Downloads 406
8985 Control of a Quadcopter Using Genetic Algorithm Methods
Authors: Mostafa Mjahed
Abstract:
This paper concerns the control of a nonlinear system using two different methods: a reference model and a genetic algorithm. The quadcopter is a nonlinear, unstable system belonging to the family of aerial robots. It consists of four rotors placed at the ends of a cross, with the control circuit at the center of the cross. Its motion is governed by six degrees of freedom: three rotations around the axes (roll, pitch and yaw) and three spatial translations. The control of such a system is complex because of the nonlinearity of its dynamic representation and the number of parameters involved. Numerous studies have been devoted to modelling and stabilizing such systems. The classical PID and LQ correction methods are widely used; while they have the advantage of simplicity because they are linear, they have the drawback of requiring a linear model for synthesis, and the resulting control laws become complex because they must remain valid over the whole flight envelope of the quadcopter. Note that, although classical design methods are widely used to control aeronautical systems, artificial intelligence methods such as genetic algorithms have received little attention. In this paper, we compare two PID design methods. First, the parameters of the PID are calculated according to a reference model; in a second phase, these parameters are obtained using genetic algorithms. By reference model, we mean that the corrected system behaves according to a reference system imposed by specifications such as settling time and zero overshoot. Inspired by Darwin's theory of natural evolution and the survival of the fittest, John Holland developed this evolutionary algorithm. A genetic algorithm (GA) possesses three basic operators: selection, crossover and mutation. Iterations start with an initial population, and each member of this population is evaluated through a fitness function. Our purpose is to correct the behavior of the quadcopter around three axes (roll, pitch and yaw) with three PD controllers; for the altitude, we adopt a PID controller.
Keywords: quadcopter, genetic algorithm, PID, fitness, model, control, nonlinear system
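A minimal sketch of the GA loop described above (selection, crossover, mutation driven by a fitness function) applied to tuning PD gains for a toy single-axis attitude model; the dynamics, gain ranges and fitness definition here are illustrative assumptions, not the authors' quadcopter model:

```python
import numpy as np

rng = np.random.default_rng(0)

def step_response_cost(kp, kd, t_end=5.0, dt=0.01):
    """Simulate a toy 1-DOF attitude axis (double integrator) under PD control
    and return a cost built from tracking error plus an overshoot penalty."""
    theta, omega, cost, peak = 0.0, 0.0, 0.0, 0.0
    target = 1.0
    for _ in range(int(t_end / dt)):
        u = kp * (target - theta) - kd * omega     # PD control law
        omega += u * dt                            # simplified dynamics: theta'' = u
        theta += omega * dt
        peak = max(peak, theta)
        cost += abs(target - theta) * dt           # integral of absolute error
    overshoot = max(0.0, peak - target)
    return cost + 10.0 * overshoot                 # penalise overshoot (zero-overshoot spec)

def ga_tune(pop_size=30, generations=40, mutation_rate=0.2):
    low, high = np.array([0.1, 0.1]), np.array([20.0, 10.0])          # [kp, kd] bounds (assumed)
    pop = rng.uniform(low, high, size=(pop_size, 2))
    for _ in range(generations):
        fitness = np.array([step_response_cost(kp, kd) for kp, kd in pop])
        parents = pop[np.argsort(fitness)[: pop_size // 2]]           # selection: keep best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            child = np.where(rng.random(2) < 0.5, a, b)               # uniform crossover
            if rng.random() < mutation_rate:
                child = child + rng.normal(0.0, 0.5, size=2)          # Gaussian mutation
            children.append(np.clip(child, low, high))
        pop = np.vstack([parents, children])
    return pop[np.argmin([step_response_cost(kp, kd) for kp, kd in pop])]

kp, kd = ga_tune()
print(f"GA-tuned gains: Kp = {kp:.2f}, Kd = {kd:.2f}")
```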
Procedia PDF Downloads 431
8984 Development of Site-Specific Colonic Drug Delivery System (Nanoparticles) of Chitosan Coated with pH Sensitive Polymer for the Management of Colonic Inflammation
Authors: Pooja Mongia Raj, Rakesh Raj, Alpana Ram
Abstract:
Background: The use of multiparticulate drug delivery systems in preference to single-unit dosage forms for colon targeting dates back to 1985, when Hardy and co-workers showed that multiparticulate systems enabled the drug to reach the colon quickly and to be retained in the ascending colon for a relatively long period of time. Methods: A site-specific colonic drug delivery system (nanoparticles) of 5-ASA was prepared and coated with a pH-sensitive polymer. Chitosan nanoparticles (CTNP) bearing 5-aminosalicylic acid (5-ASA) were prepared by the ionotropic gelation method. The nanoparticulate dosage form, consisting of a hydrophobic core enteric-coated with the pH-dependent polymer Eudragit S-100 by the solvent evaporation method, was designed for effective delivery of the drug to the colon for the treatment of ulcerative colitis. Results: The mean diameters of the CTNP and ECTNP formulations were 159 and 661 nm, respectively, and the optimum polydispersity index values were 0.249 [count rate (kcps) 251.2] and 0.170 [count rate (kcps) 173.9] for the two formulations, respectively. Conclusion: CTNP and Eudragit-coated chitosan nanoparticles (ECTNP) were characterized for shape and surface morphology by scanning electron microscopy (SEM) and appeared spherical in shape. The in vitro drug release, investigated using a USP dissolution test apparatus in different simulated GIT fluids, showed promising release. In vivo experiments are in progress.
Keywords: colon targeting, nanoparticles, polymer, 5-aminosalicylic acid, eudragit
Procedia PDF Downloads 495
8983 Biofuel Production via Thermal Cracking of Castor Methyl Ester
Authors: Roghaieh Parvizsedghy, Seyed Mojtaba Sadrameli
Abstract:
Diminishing oil reserves, deteriorating health standards because of greenhouse gas emissions, and the associated environmental impacts have driven the emergence of biofuel production. Vegetable oils have proved to be valuable feedstocks for this growing industry, as they are renewable and potentially inexhaustible sources. Thermal cracking of vegetable oils (triglycerides) leads to the production of biofuels that are similar to fossil fuels in composition, but their combustion and physical properties have limitations. Acrolein (a very poisonous gas) and water are produced during the cracking of triglycerides because of the glycerin in their molecular structure. Transesterification of vegetable oil is a method to remove glycerol from the triglyceride structure and produce methyl ester. In this study, castor methyl ester was used for thermal cracking in order to assess the efficiency of this method for producing bio-gasoline and bio-diesel. Several experiments were designed by means of the central composite method. Statistical analysis showed that two reaction parameters, cracking temperature and feed flowrate, affect product yields significantly. At the optimized conditions for maximum bio-gasoline production (480 °C and 29 g/h), 88.6% bio-oil was achieved, which was distilled and separated into bio-gasoline (28%) and bio-diesel (48.2%). The bio-gasoline exhibited a high octane number and heat of combustion; its distillation curve and Reid vapor pressure fell within the criteria for standard gasoline (class AA) according to ASTM D4814, and the bio-diesel was compatible with standard diesel according to ASTM D975. Water production was negligible, and no evidence of acrolein production was detected. Therefore, thermal cracking of castor methyl ester could be used as a method to produce valuable biofuels.
Keywords: bio-diesel, bio-gasoline, castor methyl ester, thermal cracking, transesterification
Procedia PDF Downloads 240
8982 The Future of Insurance: P2P Innovation versus Traditional Business Model
Authors: Ivan Sosa Gomez
Abstract:
Digitalization has impacted the entire insurance value chain, and the growing movement towards P2P platforms and the collaborative economy is also beginning to have a significant impact. P2P insurance is defined as an innovation enabling policyholders to pool their capital, self-organize, and self-manage their own insurance. In this context, new InsurTech start-ups are emerging as peer-to-peer (P2P) providers, based on a model that differs from traditional insurance. As a result, although P2P platforms do not change the fundamental basis of insurance, they do enable potentially more efficient business models to be established in terms of ensuring the coverage of risk. It is therefore relevant to determine whether P2P innovation can have substantial effects on the future of the insurance sector. To this end, it is considered necessary to develop P2P innovation from a business perspective, as well as to build a comparison between a traditional model and a P2P model from an actuarial perspective. Objectives: The objectives are (1) to represent P2P innovation in the business model compared to the traditional insurance model and (2) to establish a comparison between a traditional model and a P2P model from an actuarial perspective. Methodology: The research design is defined as action research, in the sense of understanding and solving the problems of a collectivity linked to an environment, applying theory and best practices. The study is carried out through the participatory variant, which involves the collaboration of the participants, given that in this design participants are considered experts; prolonged immersion in the field is used as the main instrument for data collection. Finally, an actuarial model for premium calculation is developed that allows projections of future scenarios and conclusions to be drawn between the two models. Main Contributions: From an actuarial and business perspective, we aim to contribute by comparing the two models with respect to the coverage of risk in order to determine whether P2P innovation can have substantial effects on the future of the insurance sector.
Keywords: InsurTech, innovation, business model, P2P, insurance
Procedia PDF Downloads 92
8981 The Design of a Computer Simulator to Emulate Pathology Laboratories: A Model for Optimising Clinical Workflows
Authors: M. Patterson, R. Bond, K. Cowan, M. Mulvenna, C. Reid, F. McMahon, P. McGowan, H. Cormican
Abstract:
This paper outlines the design of a simulator to allow for the optimisation of clinical workflows through a pathology laboratory and to improve the laboratory's efficiency in the processing, testing, and analysis of specimens. Pathologists often have difficulty pinpointing and anticipating issues in the clinical workflow until tests are running late or in error; it can be difficult to pinpoint the cause and even more difficult to predict issues which may arise. For example, they often have no indication of how many samples are going to be delivered to the laboratory that day or at a given hour. If scenarios could be modelled using past information and known variables, it would be possible for pathology laboratories to initiate resource preparations, e.g. the printing of specimen labels or the activation of a sufficient number of technicians. This would expedite the clinical workload and clinical processes and improve the overall efficiency of the laboratory. The simulator design visualises the workflow of the laboratory, i.e. the clinical tests being ordered, the specimens arriving, current tests being performed, results being validated and reports being issued. The simulator depicts the movement of specimens through this process, as well as the number of specimens at each stage. This movement is visualised using an animated flow diagram that is updated in real time. A traffic-light colour-coding system will be used to indicate the level of flow through each stage (green for normal flow, orange for slow flow, and red for critical flow), allowing pathologists to clearly see where there are issues and bottlenecks in the process. Graphs would also be used to indicate the status of specimens at each stage of the process; for example, a graph could show the percentage of specimen tests that are on time, potentially late, running late and in error. Clicking on potentially late samples will display more detailed information about those samples, the tests that still need to be performed on them and their urgency level, allowing any issues to be resolved quickly. In the case of potentially late samples, this could help to ensure that critically needed results are delivered on time. The simulator will be created as a single-page web application. Various web technologies will be used to create the flow diagram showing the workflow of the laboratory, and JavaScript will be used to program the logic, animate the movement of samples through each of the stages and generate the status graphs in real time. This live information will be extracted from an Oracle database. As well as being used in a real laboratory situation, the simulator could also be used for training purposes: 'bots' would be used to control the flow of specimens through each step of the process. Like existing software agent technology, these bots would be configurable in order to simulate different situations which may arise in a laboratory, such as an emerging epidemic. The bots could then be turned on and off to allow trainees to complete the tasks required at each step of the process, for example validating test results.
Keywords: laboratory process, optimization, pathology, computer simulation, workflow
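A minimal sketch of the traffic-light status logic described above; the thresholds, specimen fields and example data are illustrative assumptions rather than the simulator's actual rules (the authors implement the logic in JavaScript against an Oracle database):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Specimen:
    specimen_id: str
    stage: str              # e.g. "received", "testing", "validation", "reporting"
    due: datetime           # time by which the result is expected
    in_error: bool = False

def stage_status(specimens, now, slow_threshold=0.8):
    """Return a traffic-light colour per stage from the share of on-time specimens."""
    stages = {}
    for s in specimens:
        stages.setdefault(s.stage, []).append(s)
    colours = {}
    for stage, items in stages.items():
        on_time = sum(1 for s in items if not s.in_error and s.due > now)
        share = on_time / len(items)
        if any(s.in_error for s in items) or share < 0.5:
            colours[stage] = "red"       # critical flow: errors or many late specimens
        elif share < slow_threshold:
            colours[stage] = "orange"    # slow flow
        else:
            colours[stage] = "green"     # normal flow
    return colours

now = datetime(2015, 6, 1, 10, 0)
demo = [
    Specimen("S1", "testing", now + timedelta(hours=2)),
    Specimen("S2", "testing", now - timedelta(minutes=30)),      # already late
    Specimen("S3", "validation", now + timedelta(hours=1)),
]
print(stage_status(demo, now))   # {'testing': 'orange', 'validation': 'green'}
```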
Procedia PDF Downloads 286
8980 Deuterium Effect on the Growth of the Fungus Aspergillus Fumigatus and Candida Albicans
Authors: Farzad Doostishoar, Abdolreza Hasanzadeh, Seyed Amin Ayatolahi Mousavi
Abstract:
Introduction and Goals: Deuterium behaves differently from its lighter isotope, hydrogen, in chemical reactions and biochemical processes. For heavier atoms, the difference in behavior between heavier and lighter isotopes is not significant, but for very light atoms it is. Given that water makes up most of the body weight of all living creatures, the natural deuterium level can be significant. In this article, we study the effect of reduced deuterium on fungal cells; if cell growth is found to depend on the deuterium concentration of the environment, this can also be tested in in vivo models. Methods: First, the deuterium concentration of the distilled water was measured; this analysis was performed by the Arak heavy water company. The deuterium was then diluted to 1/2, 1/4, 1/8 and 1/16 of this concentration by adding deuterium-free water used to prepare the media. In three samples, the deuterium concentration was increased by adding D2O to reach concentrations 10, 50 and 100 times higher. Sabouraud medium was used for Candida albicans growth, and Sabouraud medium containing chloramphenicol for Aspergillus fumigatus growth. After inoculation, the media for each species were kept in a shaker incubator for 10 days at 25 °C. The plates were examined morphologically, and some microscopic characteristics were studied, on different days and at different times. The experiments and cultures were repeated three times. Results: Statistical analysis by paired-sample t-test showed that Aspergillus fumigatus growth decreased significantly at a concentration of 72 ppm (half the deuterium concentration of the negative control); with reduced deuterium concentration, growth decreased significantly relative to the negative control. The results also showed that Candida albicans was sensitive to deuterium reduction at all concentrations.
Keywords: deuterium, cancer cell, growth, candida albicans
Procedia PDF Downloads 401
8979 Synthesis, Characterization of Organic and Inorganic Zn-Al Layered Double Hydroxides and Application for the Uptake of Methyl Orange from Aqueous Solution
Authors: Fatima Zahra Mahjoubi, Abderrahim Khalidi, Mohammed Abdennouri, Noureddine Barka
Abstract:
Zn-Al layered double hydroxides containing carbonate, nitrate and dodecylsulfate as the interlamellar anions were prepared through a coprecipitation method. The resulting compounds were characterized using XRD, ICP, FTIR, TGA/DTA, TEM/EDX and pHPZC analysis. The XRD patterns revealed that carbonate and nitrate could be intercalated into the interlayer structure with basal spacings of 22.74 and 26.56 Å, respectively. Bilayer intercalation of dodecylsulfate molecules was achieved in the Zn-Al LDH with a basal spacing of 37.86 Å. TEM observation indicated that the materials synthesized via coprecipitation consist of nanoscale LDH particles: the average particle size of Zn-AlCO3 is 150 to 200 nm, irregular circular to hexagonal particles 30 to 40 nm in diameter were observed in the Zn-AlNO3 morphology, and TEM images of Zn-AlDs display nanostructured sheet-like particles with a size distribution between 5 and 10 nm. The sorption characteristics and mechanisms of methyl orange (MO) dye on the organic LDH were investigated and subsequently compared with those on the inorganic Zn-Al layered double hydroxides. Adsorption experiments for MO were carried out as functions of solution pH, contact time and initial dye concentration. The adsorption behavior of the inorganic LDHs was clearly influenced by the initial pH, whereas the adsorption capacity of the organic LDH was hardly influenced by the initial pH and the removal percentage of MO was practically constant over a range of pH values. As the MO concentration increased, the adsorption-capacity curves of the LDHs became L-type. The adsorption behavior of Zn-AlDs was attributed to dissolution of the dye in the hydrophobic interlayer region (i.e., adsolubilization). The results suggest that Zn-AlDs could be applied as a potential adsorbent for MO removal over a wide range of pH.
Keywords: adsorption, dodecylsulfate, kinetics, layered double hydroxides, methyl orange removal
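For context, basal spacings such as those quoted above are obtained from the low-angle reflection positions in the XRD patterns via Bragg's law; a short worked example with a hypothetical diffraction angle (the actual 2θ values are not given in the abstract), assuming a Cu Kα source:

```python
import math

wavelength = 1.5406  # Cu K-alpha wavelength in angstroms (assumed source)

def basal_spacing(two_theta_deg):
    """Bragg's law: n*lambda = 2*d*sin(theta), with n = 1."""
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength / (2.0 * math.sin(theta))

# Hypothetical 2-theta value chosen to land close to the reported 22.74 A spacing
print(f"d = {basal_spacing(3.88):.2f} A")
```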
Procedia PDF Downloads 293
8978 Social-Cognitive Aspects of Interpretation: Didactic Approaches in Language Processing and English as a Second Language Difficulties in Dyslexia
Authors: Schnell Zsuzsanna
Abstract:
Background: The interpretation of written texts, that is, language processing in the visual domain, and atypical reading abilities, also known as dyslexia, constitute an ever-growing phenomenon in today's societies and educational communities. The much-researched condition affects cognitive abilities and, despite normal intelligence, typically manifests as difficulties in differentiating sounds and orthography and in the holistic processing of written words. The factors of susceptibility are varied: social, cognitive-psychological, and linguistic factors interact with each other. Methods: The research explains the psycholinguistics of dyslexia on the basis of several empirical experiments and demonstrates how the domain-general abilities of inhibition, retrieval from the mental lexicon, priming, phonological processing, and visual modality transfer affect successful language processing and interpretation. Interpretation of visual stimuli is hindered, and the problem seems to be embedded in a sociocultural, psycholinguistic, and cognitive background. This makes the picture even more complex, suggesting that understanding and resolving the issues of dyslexia has to be interdisciplinary, aided by several disciplines in the humanities and social sciences, and should be researched from an empirical approach whose practical, educational corollaries can be analyzed on an applied basis. Aim and applicability: The lecture sheds light on the applied, cognitive aspects of interpretation, the social-cognitive traits of language processing, and the mental underpinnings of cognitive interpretation strategies in different languages (namely, Hungarian and English), offering solutions with a few applied techniques for success in foreign language learning that can be useful advice for the developers of testing methodologies and measures across ESL teaching and testing platforms.
Keywords: dyslexia, social cognition, transparency, modalities
Procedia PDF Downloads 84
8977 Mechanisms and Regulation of the Bi-directional Motility of Mitotic Kinesin Nano-motors
Authors: Larisa Gheber
Abstract:
Mitosis is an essential process by which duplicated genetic information is transmitted from mother to daughter cells. Incorrect chromosome segregation during mitosis can lead to genetic diseases, chromosome instability and cancer. This process is mediated by a dynamic microtubule-based intracellular structure, the mitotic spindle. One of the major factors that govern mitotic spindle dynamics is the family of kinesin-5 biological nanomotors, which were believed to move unidirectionally on the microtubule filaments, using ATP hydrolysis, thus performing essential functions in mitotic spindle dynamics. Surprisingly, several reports from our and other laboratories have demonstrated that some kinesin-5 motors are bi-directional: they move in the minus-end direction on the microtubules as single molecules and can switch directionality under a number of conditions. These findings broke a twenty-five-year-old dogma regarding kinesin directionality (1, 2). The mechanism of this bi-directional motility and its physiological significance remain unclear. To address this unresolved problem, we apply an interdisciplinary approach combining live-cell imaging, biophysical single-molecule experiments, and structural experiments to examine the activity of these motors and their mutated variants in vivo and in vitro. Our data show that factors such as protein phosphorylation (3, 4), motor clustering on the microtubules (5, 6) and structural elements (7, 8) regulate the bi-directional motility of kinesin motors. We also show, using cryo-EM, that bi-directional kinesin motors exhibit non-canonical microtubule binding, which is essential to their special motile properties and intracellular functions. We will discuss the implications of these findings for the mechanism of bi-directional motility and its physiological roles in mitosis.
Keywords: mitosis, cancer, kinesin, microtubules, biochemistry, biophysics
Procedia PDF Downloads 81
8976 Energy-Efficient Storage of Methane Using Biosurfactant in the Form of Clathrate Hydrate
Authors: Abdolreza Farhadian, Anh Phan, Zahra Taheri Rizi, Elaheh Sadeh
Abstract:
The utilization of solidified gas technology based on hydrates exhibits considerable promise for carbon capture, storage, and natural gas transportation applications. The pivotal factor impeding the industrial implementation of hydrates is the need for efficient and non-foaming promoters. In this study, a biosurfactant with sulfonate, amide, and carboxyl groups (BS) was synthesized as a methane hydrate formation promoter, replicating the chemical characteristics of amino acids and sodium dodecyl sulfate (SDS). The synthesis of BS follows a simple, three-step process that is amenable to industrial-scale production. The first two steps of the process are solvent-free, which helps reduce potential environmental impacts and makes scaling up more feasible. Additionally, the final step utilizes a water-isopropanol mixture, an easily accessible and cost-effective solvent system for large-scale production. High-pressure autoclave experiments demonstrated a significant enhancement in methane hydrate formation kinetics at low BS concentrations: 50 ppm of BS yielded a maximum water-to-hydrate conversion of 66.9%, equivalent to a storage capacity of 119.9 v/v in distilled water, and when the BS concentration was increased to 500 ppm, the conversion degree and storage capacity reached 97% and 162.6 v/v, respectively. Molecular dynamics simulation revealed that BS molecules act as collectors for methane molecules, augmenting the hydrate growth rate and increasing the number of hydrate cavities. Additionally, BS demonstrated a biodegradability exceeding 60% within 28 days.
Keywords: solidified methane, gas storage, gas hydrates, green surfactant, gas hydrate promoter, computational simulation, sustainability
Procedia PDF Downloads 5
8975 Effect of Pole Weight on Nordic Walking
Authors: Takeshi Sato, Mizuki Nakajima, Macky Kato, Shoji Igawa
Abstract:
The purpose of this study was to investigate the effect of varying pole weights on energy expenditure and on upper-limb and lower-limb muscle activity, recorded as electromyograms (EMG), during Nordic walking (NW). Four healthy men [age = 22.5 (±1.0) years, body mass = 61.4 (±3.6) kg, height = 170.3 (±4.3) cm] and three healthy women [age = 22.7 (±2.9) years, body mass = 53.0 (±1.7) kg, height = 156.7 (±4.5) cm] participated in the experiments after giving informed consent. The seven healthy subjects were tested on a treadmill while walking (W), walking with Nordic poles (NW) and walking with 1 kg weighted Nordic poles (NW+1). Walking speed was 6 km per hour in all trials. Eight EMG activities were recorded by bipolar surface methods from the biceps brachii, triceps brachii, trapezius, deltoideus, tibialis anterior, medial gastrocnemius, rectus femoris and biceps femoris muscles, and heart rate (HR), oxygen uptake (VO2), and rating of perceived exertion (RPE) were measured. The level of significance was set at α = 0.05, with p < 0.05 regarded as statistically significant. Our results confirmed that the use of NW poles increased HR at a given upper-arm muscle activity but decreased lower-limb EMGs in comparison with W. Moreover, step length increased through greater hip joint extension during NW than during W. EMG also revealed higher activation of the upper limb for almost all NW and NW+1 tests compared to W (p < 0.05). Therefore, both NW and NW+1 appear to have benefits as safe, feasible, and readily accessible physical training for a wide range of ages, supporting the quality of daily life. However, the 1 kg poles had no significant effect on leg muscle activity, affecting only upper-arm muscle activity during Nordic pole walking.
Keywords: Nordic walking, electromyogram, heart rate, RPE
Procedia PDF Downloads 239
8974 Evaluation of Stable Isotope in Life History and Mating Behaviour of Mediterranean Fruit Fly Ceratitis capitata (Diptera: Tephritidae) in Laboratory Conditions
Authors: Hasan AL-Khshemawee, Manjree Agarwal, Xin Du, Yonglin Ren
Abstract:
The possible use of stable isotopes to study medfly mating and life history was investigated in these experiments. 13C6 glucose was incorporated in the diet of the Mediterranean fruit fly Ceratitis capitata (Diptera: Tephritidae). Treatments included labelled and unlabelled versions of either the larval medium or the adult sugar water. Measurements were taken from egg hatching until the adults died. After mating, the adults were analysed for the 13C6 glucose ratio using liquid chromatography-mass spectrometry (LC-MS) at two time points: immediately after mating and three days after mating. The results showed that stable isotopes can be used successfully to label the medfly under laboratory conditions, and there were significant differences between the labelled and unlabelled treatments in egg hatching, larval development, pupal emergence, adult survival and mating behaviour. Labelling during larval development and combined labelling of larvae and adults resulted in detectable values. Labelled glucose at the larval stage did not affect mating behaviour; however, labelled glucose at the adult stage did affect mating behaviour. We conclude that it is possible to label adults of the Mediterranean fruit fly C. capitata and to detect the label after mating. This method offers a good tool for studying mating behaviour in the medfly and other insects and could provide useful tools in genetic studies, the sterile insect technique (SIT) and agricultural pest management. We also recommend using this technique in the field.
Keywords: stable isotope, sterile insect technique (SIT), medfly, mating behaviour
Procedia PDF Downloads 256
8973 Thermolysin Entrapment in a Gold Nanoparticles/Polymer Composite: Construction of an Efficient Biosensor for Ochratoxin a Detection
Authors: Fatma Dridi, Mouna Marrakchi, Mohammed Gargouri, Alvaro Garcia Cruz, Sergei V. Dzyadevych, Francis Vocanson, Joëlle Saulnier, Nicole Jaffrezic-Renault, Florence Lagarde
Abstract:
An original method has been successfully developed for the immobilization of thermolysin onto gold interdigitated electrodes for the detection of ochratoxin A (OTA) in olive oil samples. A mix of polyvinyl alcohol (PVA), polyethylenimine (PEI) and gold nanoparticles (AuNPs) was used, and the sensor chips were cross-linked in a saturated glutaraldehyde (GA) vapor atmosphere in order to render the two polymers water-stable. The performance of the AuNPs/(PVA/PEI)-modified electrode was compared to a traditional enzyme immobilization method using bovine serum albumin (BSA). Atomic force microscopy (AFM) experiments were employed to provide a useful insight into the structure and morphology of the immobilized thermolysin composite membranes; the enzyme immobilization method influences the topography and texture of the deposited layer. Biosensor optimization and analytical characteristics were studied. Under optimal conditions, the AuNPs/(PVA/PEI)-modified electrode showed the greater increase in sensitivity: an enhancement factor of 700 could be achieved, with a detection limit of 1 nM. The newly designed OTA biosensors showed long-term stability and good reproducibility. The relevance of the method was evaluated using commercial doped olive oil samples. No pretreatment of the sample was needed for testing, and no matrix effect was observed. Recovery values were close to 100%, demonstrating the suitability of the proposed method for OTA screening in olive oil.
Keywords: thermolysin, ochratoxin A, polyvinyl alcohol, polyethylenimine, gold nanoparticles, olive oil
Procedia PDF Downloads 590
8972 Impact of Heat Moisture Treatment on the Yield of Resistant Starch and Evaluation of Functional Properties of Modified Mung Bean (Vigna radiate) Starch
Authors: Sreejani Barua, P. P. Srivastav
Abstract:
Formulating new functional food products for diabetic and obese people is a continuing challenge for the food industry. Starch is a naturally occurring, ecological, inexpensive and abundantly available polysaccharide in plant material. In the present scenario, there is great interest in modifying the functional properties of starch without destroying its granular structure, using different modification techniques. Resistant starch (RS) contributes almost zero calories and can control blood glucose levels to help prevent diabetes. The current study focused on modification of mung bean starch, a good legume carbohydrate source, for the production of functional food. Heat moisture treatment (HMT) of mung bean starch was conducted at moisture contents of 10-30%, temperatures of 80-120 °C and times of 8-24 h. The resistant starch content after modification was significantly increased relative to the native starch, which contained 7.6% RS. The design combinations for HMT were generated through a central composite rotatable design (CCRD), and the effects of the HMT process variables on the yield of resistant starch were studied through response surface methodology (RSM). The highest resistant starch content, up to 34.39%, was found when the native starch was treated at 30% moisture content and 120 °C for 24 h. The functional properties of both native and modified mung bean starches showed a reduction in the swelling power and swelling volume of the HMT starches, whereas the solubility of the HMT starches was higher than that of the untreated native starch; changes were also observed in structural (scanning electron microscopy), X-ray diffraction (XRD), blue value and thermal (differential scanning calorimetry) properties. Therefore, replacing native mung bean starch with heat-moisture-treated mung bean starch leads to the development of new products with higher resistant starch levels and improved functional properties.
Keywords: mung bean starch, heat moisture treatment, functional properties, resistant starch
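A minimal sketch of how a three-factor central composite rotatable design like the one described above can be laid out, mapping coded levels onto the stated ranges (moisture 10-30%, temperature 80-120 °C, time 8-24 h); the number of centre replicates and the exact runs used by the authors are not given in the abstract, so this layout is illustrative only:

```python
import itertools

# Factor ranges from the abstract: moisture (%), temperature (deg C), time (h)
ranges = {"moisture": (10, 30), "temperature": (80, 120), "time": (8, 24)}
names = list(ranges)
k = len(names)
alpha = (2 ** k) ** 0.25          # rotatability: alpha = (2^k)^(1/4) ~ 1.682 for k = 3

# Coded design: 2^k factorial points, 2k axial (star) points, centre points
factorial = list(itertools.product([-1, 1], repeat=k))
axial = [tuple(a if i == j else 0 for j in range(k)) for i in range(k) for a in (-alpha, alpha)]
centre = [(0, 0, 0)] * 6          # six centre replicates assumed

def decode(coded):
    """Map a coded point (-alpha..+alpha) to natural units."""
    out = {}
    for name, x in zip(names, coded):
        lo, hi = ranges[name]
        mid, half = (lo + hi) / 2, (hi - lo) / 2
        out[name] = round(mid + x * half, 2)
    return out

design = [decode(p) for p in factorial + axial + centre]
print(f"{len(design)} runs")      # 8 factorial + 6 axial + 6 centre = 20 runs
for run in design[:4]:
    print(run)
```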
Procedia PDF Downloads 202
8971 Improved Water Productivity by Deficit Irrigation: Implications for Water Saving in Orange, Olive and Vineyard Orchards in Arid Conditions of Tunisia
Authors: K. Nagaz, F. El Mokh, M. Masmoudi, N. Ben Mechlia, M. O. Baba Sy, G. Ghiglieri
Abstract:
Field experiments on deficit irrigation (DI) were performed in Médenine, Tunisia, on drip-irrigated olive, orange and grapevine orchards during 2013 and 2014. Four irrigation treatments were compared: full irrigation (FI), irrigated at 100% of ETc for the whole season; two deficit irrigation strategies, DI75 and DI50, which received, respectively, 25 and 50% less water than FI; and traditional farming management (FM), with water input much less than actually needed. The traditional farming (FM) applied 11, 18, 30 and 33% less water than the FI treatment, respectively, in the orange, grapevine and table and oil olive orchards, indicating that the farmers' practices represent a form of unintended deficit irrigation. Yield was reduced when deficit irrigation was applied, and there were significant differences between the DI75, DI50 and FM treatments. Significant differences were not observed between the DI50 and FM treatments, even though a numerically smaller yield was observed in the former (DI50) compared with the latter (FM). The irrigation water productivity (IWP) was significantly affected by the irrigation treatments: the smallest IWP was recorded under the FI treatment, while the largest IWP was obtained under the deficit irrigation treatment DI50. The DI50 and FM treatments reduced the economic return compared to the full treatment (FI), while the DI75 treatment resulted in a better economic return with respect to DI50 and FM. Full irrigation (FI) can be recommended for olive, orange and grapevine irrigation under the arid climate of Tunisia; nevertheless, DI75 can be applied as a strategy under water-scarcity conditions in commercial olive, orange and grapevine orchards, allowing water savings of up to 25% but with some reduction in yield and net return. The results would be helpful in adopting deficit irrigation in ways that enhance net financial returns.
Keywords: water productivity, deficit irrigation, drip irrigation, orchards
Procedia PDF Downloads 223
8970 A 0-1 Goal Programming Approach to Optimize the Layout of Hospital Units: A Case Study in an Emergency Department in Seoul
Authors: Farhood Rismanchian, Seong Hyeon Park, Young Hoon Lee
Abstract:
This paper proposes a method to optimize the layout of an emergency department (ED) based on real executions of care processes, considering several planning objectives simultaneously. Recently, demand for healthcare services has increased dramatically, and with it both the need for new healthcare buildings and the need to redesign and renovate existing ones. The value of implementing a standard set of engineering facilities planning and design techniques has already been proven in both the manufacturing and service industries, with many significant gains in functional efficiency. However, the high complexity of care processes remains a major challenge to applying these methods in healthcare environments. Process mining techniques were applied in this study to tackle this complexity and to enhance care process analysis, and process-related information such as clinical pathways was extracted from the information system of an ED. A 0-1 goal programming approach is then proposed to find a single layout that simultaneously satisfies several goals. The proposed model was solved with the optimization software CPLEX 12. The solution reached using the proposed method shows a 42.2% improvement in the walking distance of normal patients and a 47.6% improvement in the walking distance of critical patients, at minimum relocation cost. It has been observed that many patients must unnecessarily walk long distances during their visit to the emergency department because of an inefficient design; a carefully designed layout can significantly decrease patient walking distance and related complications.
Keywords: healthcare operation management, goal programming, facility layout problem, process mining, clinical processes
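A minimal sketch of a 0-1 goal-programming assignment model in the spirit of the approach described above, written with the open-source PuLP modeller rather than CPLEX; the units, rooms, distances, flow weights and goal targets below are invented for illustration and are not the case-study data:

```python
from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, lpSum

units = ["triage", "imaging", "lab", "resus"]          # ED units (illustrative)
locations = ["A", "B", "C", "D"]                       # candidate rooms
dist = {"A": 10, "B": 25, "C": 40, "D": 60}            # walking distance from entrance (m)
flow_normal = {"triage": 50, "imaging": 20, "lab": 15, "resus": 5}     # normal patients/day
flow_critical = {"triage": 5, "imaging": 8, "lab": 3, "resus": 10}     # critical patients/day

goal_normal, goal_critical = 2500, 600                 # target total walking distances (m)

prob = LpProblem("ed_layout", LpMinimize)
x = LpVariable.dicts("assign", (units, locations), cat=LpBinary)
d_norm = LpVariable("dev_normal", lowBound=0)          # overshoot of the normal-patient goal
d_crit = LpVariable("dev_critical", lowBound=0)        # overshoot of the critical-patient goal

# Each unit in exactly one location, each location holds exactly one unit
for u in units:
    prob += lpSum(x[u][l] for l in locations) == 1
for l in locations:
    prob += lpSum(x[u][l] for u in units) == 1

walk_normal = lpSum(flow_normal[u] * dist[l] * x[u][l] for u in units for l in locations)
walk_critical = lpSum(flow_critical[u] * dist[l] * x[u][l] for u in units for l in locations)

# Goal constraints: deviation variables absorb any excess over the targets
prob += walk_normal - d_norm <= goal_normal
prob += walk_critical - d_crit <= goal_critical

# Objective: weighted sum of goal deviations, critical patients weighted more heavily
prob += 1.0 * d_norm + 3.0 * d_crit
prob.solve()

for u in units:
    for l in locations:
        if x[u][l].value() == 1:
            print(f"{u} -> room {l}")
```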
Procedia PDF Downloads 295
8969 Effect of Volute Tongue Shape and Position on Performance of Turbo Machinery Compressor
Authors: Anuj Srivastava, Kuldeep Kumar
Abstract:
This paper presents a numerical study of volute tongue design, which affects the centrifugal compressor operating range and pressure recovery. Increased efficiency has traditionally been the main focus of compressor design; however, an increased operating range has become important in an age of ever-increasing productivity and energy costs in the turbomachinery industry. Efficiency and overall operating range are the two most important parameters studied to evaluate the aerodynamic performance of a centrifugal compressor, and the volute is one of the components that has a significant effect on both. The choice of volute tongue geometry plays a major role in compressor performance and also affects the performance map. The author evaluates the trade-offs of using a pull-back tongue geometry on centrifugal compressor performance. In the present paper, three different tongue positions and shapes are discussed, and these designs are compared in terms of pressure recovery coefficient, pressure loss coefficient, and stable operating range. The detailed flow structures for the various volute geometries and pull-back angles near the tongue are studied extensively to explore the fluid behavior. The viscous Navier-Stokes equations are used to simulate the flow inside the volute, and the numerical calculations are compared with one-dimensional thermodynamic calculations. The author concludes that the increase in compression ratio is accompanied by a more uniform pressure distribution for the modified tongue shape and location: a more uniform circumferential static pressure builds a more uniform flow in the impeller and diffuser. The blockage at the tongue of the volute caused circumferentially nonuniform pressure along the volute, and this nonuniformity may lead the impeller and diffuser to operate unstably. However, it is not the volute that directly controls the stall.
Keywords: centrifugal compressor volute, tongue geometry, pull-back, compressor performance, flow instability
Procedia PDF Downloads 163
8968 Digital Twins: Towards an Overarching Framework for the Built Environment
Authors: Astrid Bagireanu, Julio Bros-Williamson, Mila Duncheva, John Currie
Abstract:
Digital Twins (DTs) have entered the built environment from more established industries like aviation and manufacturing, although there has never been a common goal for utilising DTs at scale. Defined as the cyber-physical integration of data between an asset and its virtual counterpart, the DT has been identified in the literature mainly from an operational standpoint, in addition to monitoring the performance of a built asset. However, this has never been translated into how DTs should be implemented in a project and what responsibilities each project stakeholder holds in the realisation of a DT. What is needed is an approach to translate these requirements into actionable DT dimensions. This paper presents a foundation for an overarching framework specific to the built environment. For the purposes of this research, the widely used UK Royal Institute of British Architects (RIBA) Plan of Work 2020 is used as a basis for itemising project stages. The RIBA Plan of Work consists of eight stages designed to inform the definition, briefing, design, coordination, construction, handover, and use of a built asset. Similar project stages are utilised in other countries; therefore, the recommendations from the interviews presented in this paper are applicable internationally. At the same time, there is not a single mainstream software resource that leverages DT abilities. This ambiguity meets an unparalleled ambition from governments and industries worldwide to achieve a national grid of interconnected DTs. For the construction industry to access these benefits, a defined starting point is needed. This research aims to provide a comprehensive understanding of the potential applications and ramifications of DTs in the context of the built environment. This paper is an integral part of a larger research project aimed at developing a conceptual framework for the Architecture, Engineering, and Construction (AEC) sector following a conventional project timeline; it therefore plays a pivotal role in providing practical insights and a tangible foundation for developing a stage-by-stage approach to assimilating the potential of DTs within the built environment. First, the research focuses on a review of relevant literature, while acknowledging the inherent constraint of the limited sources available. Second, a qualitative study compiling the views of 14 DT experts is presented, concluding with an inductive analysis of the interview findings, ultimately highlighting the barriers and strengths of DTs in the context of framework development. As parallel developments aim to progress net-zero-centred design and improve project efficiencies across the built environment, the limited resources available to support DTs should be leveraged to propel the industry into its digitalisation era, in which AEC stakeholders have a fundamental role in understanding this from the earliest stages of a project.
Keywords: digital twins, decision-making, design, net-zero, built environment
Procedia PDF Downloads 122
8967 Determination of Optimum Conditions for the Leaching of Oxidized Copper Ores with Ammonium Nitrate
Authors: Javier Paul Montalvo Andia, Adriana Larrea Valdivia, Adolfo Pillihuaman Zambrano
Abstract:
The most common lixiviant for the leaching of copper minerals is H₂SO₄; however, the current situation requires more environmentally friendly reagents and, in certain situations, reagents with lower consumption, because the presence of undesirable gangue such as muscovite or kaolinite can make the process unfeasible. The present work studied the leaching of an oxidized copper mineral in an aqueous solution of ammonium nitrate in order to obtain the optimum leaching conditions for the copper contained in a malachite mineral from Peru. The copper ore studied comes from a deposit in southern Peru and was characterized by X-ray diffraction, inductively coupled plasma optical emission spectrometry (ICP-OES) and atomic absorption spectrophotometry (AAS). The experiments were carried out in a 600 mL batch reactor in which parameters such as temperature, pH, ammonium nitrate concentration, particle size and stirring speed were controlled according to the experimental plan. The sample solutions were analyzed for copper by atomic absorption spectrophotometry (AAS). A simulation in the HSC Chemistry 6.0 program showed that the predominance of copper compounds in a Cu-H₂O aqueous system is altered by the presence of ammonium complexes, the thermodynamically most stable compound being Cu(NH3)₄²⁺, which predominates in the pH range 8.5 to 10 at a temperature of 25 °C. The optimum conditions for copper leaching from the malachite mineral were a stirring speed of 600 rpm, an ammonium nitrate concentration of 4 M, a particle diameter of 53 µm and a temperature of 62 °C. These results showed that copper leaching increases with increasing ammonium solution concentration, increasing stirring rate, increasing temperature and decreasing particle diameter. Finally, copper recovery under the optimum conditions was above 80%.
Keywords: ammonium nitrate, malachite, copper oxide, leaching
Procedia PDF Downloads 189
8966 Effectiveness of Breathing Training Program on Quality of Life and Depression Among Hemodialysis Patients: Quasi‐Experimental Study
Authors: Hayfa Almutary, Noof Eid Al Shammari
Abstract:
Aim: The management of depression in patients undergoing hemodialysis remains challenging. The aim of this study was to evaluate the effectiveness of a breathing training program on quality of life and depression among patients on hemodialysis. Design: A one-group pretest-posttest quasi-experimental design was used. Methods: Data were collected from hemodialysis units at three dialysis centers. Initial baseline data were collected, and a breathing training program comprising three types of breathing exercises was implemented. The impact of the intervention on outcomes was measured using both the Kidney Disease Quality of Life Short Version and the Beck Depression Inventory-Second Edition with the same participants. The participants were asked to perform the breathing training program three times a day for 30 days. Results: The mean age of the patients was 52.1 (SD: 15.0), and nearly two-thirds of them were male (63.4%). Participants who had been undergoing hemodialysis for 1-4 years constituted the largest part of the sample (46.3%), and 17.1% of participants had visited a psychiatric clinic 1-3 times. The results show that the breathing training program improved overall quality of life and reduced symptoms and problems. In addition, a significant decrease in the overall depression score was observed after implementing the intervention. Conclusions: The breathing training program is a non-pharmacological intervention with demonstrated effectiveness in hemodialysis. This study showed that using breathing exercises reduced depression levels and improved quality of life. Integrating this intervention in dialysis units to manage psychological issues would offer a simple, safe, easy, and inexpensive option. Future research should compare the effectiveness of various breathing exercises in hemodialysis patients using longitudinal studies. Impact: As a safety precaution, nurses should initially use non-pharmacological interventions, such as a breathing training program, to treat depression in those undergoing hemodialysis.
Keywords: breathing training program, depression, exercise, quality of life, hemodialysis
Procedia PDF Downloads 86
8965 Review on the Role of Sustainability Techniques in Development of Green Building
Authors: Ubaid Ur Rahman, Waqar Younas, Sooraj Kumar Chhabira
Abstract:
Environmentally sustainable building construction has experienced significant growth during the past 10 years at the international level. This paper presents a conceptual framework that adopts sustainability techniques in construction to develop environmentally friendly buildings, called green buildings. Waste generated during the different construction phases causes environmental problems: for example, the deposition of waste on the ground surface creates major problems such as bad odour, gives rise to various diseases, and produces toxic agents that make soil infertile. Recycled material from old buildings can be used in the construction of new ones. Sustainable construction is economical and saves energy sources, and it is the major responsibility of the designer and the project manager: the designer has to fulfil the client's demands while keeping the design environmentally friendly, and the project manager has to deliver and execute the construction according to the sustainable design. Steel is the most appropriate sustainable construction material; it is more durable and easily recyclable, occupies less area, and has greater tensile and compressive strength than concrete, making it a better option for sustainable construction than other building materials. New technologies such as green roofs make the environment more pleasant and reduce construction cost, minimizing economic, social and environmental issues. This paper presents an overview of research related to the materials used in green buildings, and based on this research, recommendations are made that can be followed in the construction industry. A detailed analysis of construction materials is provided. It is shown that, by making suitable adjustments to project management practices, a green building improves the cost efficiency of the project, makes it environmentally friendly, and also meets the demands of future generations.
Keywords: sustainable construction, green building, recycled waste material, environment
Procedia PDF Downloads 245
8964 Building Information Management Advantages, Adaptation, and Challenges of Implementation in Kabul Metropolitan Area
Authors: Mohammad Rahim Rahimi, Yuji Hoshino
Abstract:
In recent years, Building Information Management (BIM) has received widespread consideration in the Architecture, Engineering and Construction (AEC) industry. BIM brings innovation to the AEC industry and has the ability to improve construction quality while reducing project time and budget. BIM supports both models and processes in the AEC industry; these processes include, but are not limited to, the project life cycle, estimating, delivery and the overall management of the project. This research examined the advantages, adaptation and implementation challenges of BIM in the Kabul region. The Capital Region Independence Development Authority (CRIDA) is responsible for implementing development projects in the Kabul region. The study considered the advantages of and reasons for BIM adoption in Afghanistan on the basis of an online survey and project data. In addition, five projects were studied; the reason for their selection was their repeated design revisions and changes. Most of the projects had problems in the design and implementation stages, so a canal project was discussed in detail; its main problems were repeated changes and revisions due to a lack of information, planning, and management. Two projects based on BIM utilization in Japan, the Shinsuizenji Station and Oita River dam projects, were also discussed; these have been, or are being, implemented according to BIM requirements. The investigation focused on BIM usage and the project implementation process, and the CRIDA projects were then compared with the BIM utilization in Japan, focusing on the use of the model and the way problems were solved on the basis of BIM. In conclusion, BIM has the capacity to prevent repeated design changes and revisions; achieving this requires a focus on data management and sharing, BIM training and the use of new technology.
Keywords: construction information management, implementation and adaptation of BIM, project management, developing countries
Procedia PDF Downloads 129
8963 An Ultrasonic Approach to Investigate the Effect of Aeration on Rheological Properties of Soft Biological Materials with Bubbles Embedded
Authors: Hussein M. Elmehdi
Abstract:
In this paper, we present the results of our recent experiments examining the effect of air bubbles, which were introduced into bio-samples during preparation, on the rheological properties of soft biological materials. To achieve this effectively, we prepared three samples, each in a different way. Our soft biological systems comprised three types of flour dough made from different flour varieties with variable protein concentrations: bread flour, wheat flour and all-purpose flour. The samples were investigated using ultrasonic waves operated at low frequency in transmission mode. During mixing, the main ingredient of the samples (the flour) was transformed into a cohesive dough comprising a continuous dough matrix and air bubbles. The rheological properties of such materials determine the quality of the end cereal product. Two ultrasonic parameters, the longitudinal velocity and the attenuation coefficient, were found to be very sensitive to properties such as the size of the occluded bubbles, and hence have great potential to provide quantitative evaluation of the properties of such materials. The results showed that the magnitudes of the ultrasonic velocity and attenuation coefficient peaked at optimum mixing times, the latter of which is taken as an indication of the end of the mixing process. There was agreement between the results obtained by conventional rheology and the ultrasound measurements, showing the potential of ultrasound as an on-line quality control technique for dough-based products. The results of this work are explained with respect to the molecular changes occurring in the dough system as mixing proceeds; particular emphasis is placed on the presence of free water and bound water.
Keywords: ultrasound, soft biological materials, velocity, attenuation
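For reference, a minimal sketch of how the two ultrasonic parameters are typically extracted from a transmission measurement: velocity from the sample thickness and transit time, and attenuation from the amplitude ratio between a reference signal and the signal transmitted through the sample; the numbers below are illustrative, not the study's data:

```python
import math

def longitudinal_velocity(thickness_m, transit_time_s):
    """Longitudinal velocity from the sample thickness and measured transit time."""
    return thickness_m / transit_time_s

def attenuation_coefficient(amp_reference, amp_sample, thickness_m):
    """Attenuation in dB per metre from the amplitude drop across the sample."""
    return (20.0 / thickness_m) * math.log10(amp_reference / amp_sample)

# Illustrative values for a dough sample a few millimetres thick
v = longitudinal_velocity(3.0e-3, 6.0e-6)                  # 500 m/s
alpha = attenuation_coefficient(1.0, 0.2, 3.0e-3)          # ~4660 dB/m
print(f"v = {v:.0f} m/s, alpha = {alpha:.0f} dB/m")
```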
Procedia PDF Downloads 2778962 Fake News Detection Based on Fusion of Domain Knowledge and Expert Knowledge
Authors: Yulan Wu
Abstract:
The spread of fake news on social media has caused significant harm to the public and to society, with its threats spanning various domains, including politics, economics, health, and more. News on social media often covers multiple domains, and existing models studied by researchers and relevant organizations often perform well on datasets from a single domain. However, when these methods are applied to social platforms where news spans multiple domains, their performance deteriorates significantly. Existing research has attempted to improve detection performance on multi-domain datasets by adding single-domain labels to the data. However, these methods overlook the fact that a news article typically belongs to multiple domains, leading to the loss of the domain knowledge contained within the news text. Research addressing this issue has found that news items in different domains often use different vocabularies to describe their content. In this paper, we propose a fake news detection framework that combines domain knowledge and expert knowledge. First, an unsupervised domain discovery module generates a low-dimensional vector for each news article, representing a domain embedding, which retains the multi-domain knowledge of the news content. Then, a feature extraction module uses the domain embeddings discovered through unsupervised domain discovery to guide multiple experts in extracting news knowledge for the overall feature representation. Finally, a classifier determines whether the news is fake or not. Experiments show that this approach can improve multi-domain fake news detection performance while reducing the cost of manually labeling domain labels.Keywords: fake news, deep learning, natural language processing, multiple domains
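The architecture outlined above (an unsupervised domain embedding gating several feature-extraction experts before a shared classifier) can be sketched in a few lines of PyTorch. This is a minimal illustration under assumed dimensions, module names, and a random-tensor forward pass; it is not the authors' implementation.

```python
import torch
import torch.nn as nn

class DomainGatedMoEDetector(nn.Module):
    """Mixture of experts over text features, gated by a domain embedding."""
    def __init__(self, text_dim=768, domain_dim=16, n_experts=5, hidden=256):
        super().__init__()
        # One small MLP "expert" per latent view of the news features.
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(text_dim, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
            for _ in range(n_experts)
        ])
        # The domain embedding from the unsupervised discovery step weights the experts.
        self.gate = nn.Sequential(nn.Linear(domain_dim, n_experts), nn.Softmax(dim=-1))
        self.classifier = nn.Linear(hidden, 2)  # real vs. fake

    def forward(self, text_feat, domain_emb):
        weights = self.gate(domain_emb)                       # (batch, n_experts)
        outs = torch.stack([e(text_feat) for e in self.experts], dim=1)
        fused = (weights.unsqueeze(-1) * outs).sum(dim=1)     # weighted expert fusion
        return self.classifier(fused)

# Forward pass with random tensors standing in for a text encoder's output
# and the unsupervised domain embeddings.
model = DomainGatedMoEDetector()
logits = model(torch.randn(4, 768), torch.randn(4, 16))
print(logits.shape)  # torch.Size([4, 2])
```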
Procedia PDF Downloads 738961 Design of a Standard Weather Data Acquisition Device for the Federal University of Technology, Akure Nigeria
Authors: Isaac Kayode Ogunlade
Abstract:
Data acquisition (DAQ) is the process by which physical phenomena from the real world are transformed into electrical signals that are measured and converted into a digital format for processing, analysis, and storage by a computer. The DAQ device is designed around a PIC18F4550 microcontroller communicating with a Personal Computer (PC) through USB (Universal Serial Bus). The research applied knowledge of data acquisition systems and embedded systems to develop a weather data acquisition device that uses an LM35 sensor to measure weather parameters, together with an artificial intelligence approach (Artificial Neural Network - ANN) and a statistical approach (Autoregressive Integrated Moving Average - ARIMA) to predict precipitation (rainfall). The device was placed beside a standard device in the Department of Meteorology, Federal University of Technology, Akure (FUTA) to evaluate its performance. Both devices (standard and designed) were exposed to the same atmospheric conditions for 180 days to collect data (temperature, relative humidity, and pressure). The acquired data were used to train models in the MATLAB R2012b environment using ANN and ARIMA to predict precipitation (rainfall). Root Mean Square Error (RMSE), Mean Absolute Error (MAE), the coefficient of determination (R2), and Mean Percentage Error (MPE) were used as standardized evaluation metrics to assess the performance of the models in predicting precipitation. The results show that the developed device has an efficiency of 96% and is compatible with Personal Computers (PCs) and laptops. The simulation results for the acquired data show that the ANN model's precipitation (rainfall) prediction for two months (May and June 2017) had a disparity error of 1.59%, while that of ARIMA was 2.63%. The device will be useful in research, practical laboratories, and industrial environments.Keywords: data acquisition system, design device, weather development, predict precipitation and (FUTA) standard device
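For reference, the four evaluation metrics named above can be computed from paired observed and predicted rainfall values as in the short Python sketch below; the sample numbers are placeholders, not data recorded by the device.

```python
import numpy as np

def evaluation_metrics(observed, predicted):
    """RMSE, MAE, R2 and MPE for comparing predicted rainfall with observations."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    err = observed - predicted
    rmse = np.sqrt(np.mean(err ** 2))
    mae = np.mean(np.abs(err))
    r2 = 1.0 - np.sum(err ** 2) / np.sum((observed - observed.mean()) ** 2)
    mpe = np.mean(err / observed) * 100.0  # observed values must be non-zero
    return {"RMSE": rmse, "MAE": mae, "R2": r2, "MPE": mpe}

# Placeholder monthly rainfall totals in millimetres.
print(evaluation_metrics([120.0, 95.0, 150.0], [118.0, 99.0, 146.0]))
```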
Procedia PDF Downloads 928960 Molecular Design and Synthesis of Heterocycles Based Anticancer Agents
Authors: Amna J. Ghith, Khaled Abu Zid, Khairia Youssef, Nasser Saad
Abstract:
Backgrounds: Multikinase and vascular endothelial growth factor (VEGF) receptor inhibitors interrupt the pathway by which angiogenesis becomes established and propagated, resulting in the inadequate nourishment of metastatic disease. VEGFR-2 has been the principal target of anti-angiogenic therapies. We disclose new thienopyrimidines as inhibitors of VEGFR-2, designed by a molecular modeling approach, with increased synergistic activity and decreased side effects. Purpose: 2-substituted thienopyrimidines are designed and synthesized with anticipated anticancer activity based on an in silico molecular docking study that supports the initial pharmacophoric hypothesis, showing the same binding mode of interaction at the ATP-binding site of VEGFR-2 (PDB 2QU5) with a high docking score. Methods: A series of compounds was designed using Discovery Studio 4.1/CDOCKER with the rationale of mimicking the pharmacophoric features present in reported active compounds that target VEGFR-2. An in silico ADMET study was also performed to validate the bioavailability of the newly designed compounds. Results: The compounds to be synthesized showed interaction energies comparable to, or within the range of, the benzimidazole inhibitor ligand when docked with VEGFR-2. The ADMET study showed comparable results; most of the compounds showed absorption within the 95-99 zone, varying according to the different substituents attached to the thienopyrimidine ring system. Conclusions: A series of 2-substituted thienopyrimidines is to be synthesized with anticipated anticancer activity, in accordance with the structural requirements indicated by the docking study for the design of VEGFR-2 inhibitors, which can act as powerful anticancer agents.Keywords: docking, discovery studio 4.1/CDOCKER, heterocycles based anticancer agents, 2-substituted thienopyrimidines
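To make the screening logic concrete, the hypothetical pandas snippet below ranks designed compounds by docking interaction energy against an assumed reference-ligand score and keeps only those whose predicted absorption falls in the 95-99 zone; all compound names, scores, and thresholds are invented for illustration and do not come from the study.

```python
import pandas as pd

# Hypothetical post-processing of docking and ADMET results exported from the
# modelling software; every value here is a placeholder.
candidates = pd.DataFrame({
    "compound": ["TP-01", "TP-02", "TP-03"],
    "interaction_energy": [-42.1, -35.6, -44.8],  # kcal/mol, more negative is better
    "admet_absorption": [97.0, 93.5, 98.2],
})
reference_ligand_energy = -40.0  # assumed score of the benzimidazole reference ligand

shortlist = candidates[
    (candidates["interaction_energy"] <= reference_ligand_energy)
    & (candidates["admet_absorption"].between(95, 99))
].sort_values("interaction_energy")
print(shortlist)
```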
Procedia PDF Downloads 2468959 A Novel Epitope Prediction for Vaccine Designing against Ebola Viral Envelope Proteins
Authors: Manju Kanu, Subrata Sinha, Surabhi Johari
Abstract:
The viral proteins of Ebola virus belong to one of the best-studied groups of viruses; however, no effective prevention against EBOV has been developed. Epitope-based vaccines provide a new strategy for the prophylactic and therapeutic application of pathogen-specific immunity. A critical requirement of this strategy is the identification and selection of T-cell epitopes that act as vaccine targets. This study describes current methodologies for the selection process, with Ebola virus as a model system; a great challenge in the field of Ebola virus research is to design a universal vaccine. A combination of publicly available bioinformatics algorithms and computational tools was used to screen and select antigen sequences as potential T-cell epitopes for supertype Human Leukocyte Antigen (HLA) alleles. The MUSCLE and MOTIF tools were used to find the most conserved peptide sequences of the viral proteins. Immunoinformatics tools were used to predict immunogenic peptides of the viral proteins in Zaire strains of Ebola virus. Putative epitopes for the viral proteins (VP) were predicted from their conserved peptide sequences. Three tools, NetCTL 1.2, BIMAS, and Syfpeithi, were used to predict the Class I putative epitopes, while three tools, ProPred, IEDB-SMM-align, and NetMHCII 2.2, were used to predict the Class II putative epitopes. B-cell epitopes were predicted with BCPREDS 1.0. Immunogenic peptides were identified and selected manually from the putative epitopes predicted by the online tools for each MHC class individually. Finally, the sequences of the predicted peptides for both MHC classes were examined for common regions, which were selected as common immunogenic peptides. The immunogenic peptides found for the viral proteins of Ebola virus were the epitopes FLESGAVKY and SSLAKHGEY. These predicted peptides could be promising candidates to be used as targets for vaccine design.Keywords: epitope, B cell, immunogenicity, Ebola
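The final step described above, scanning the Class I and Class II predictions for shared regions, can be illustrated with a small Python sketch that intersects the 9-mers contained in each peptide set; the example peptide lists are placeholders built around the reported epitopes, not the actual tool outputs.

```python
def kmers(peptides, k=9):
    """All k-mers contained in a collection of predicted peptides."""
    return {p[i:i + k] for p in peptides for i in range(len(p) - k + 1)}

def common_epitopes(class_i_peptides, class_ii_peptides, k=9):
    """k-mers shared between the MHC Class I and Class II predictions."""
    return kmers(class_i_peptides, k) & kmers(class_ii_peptides, k)

# Placeholder peptide lists built around the reported epitopes.
class_i = ["FLESGAVKY", "SSLAKHGEY"]
class_ii = ["KRFLESGAVKYLQL", "TSSLAKHGEYAPFA"]
print(common_epitopes(class_i, class_ii))  # {'FLESGAVKY', 'SSLAKHGEY'}
```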
Procedia PDF Downloads 314