Search results for: extraction tool
5821 In Vitro Antioxidant and Cytotoxic Activities Against Human Oral Cancer and Human Laryngeal Cancer of Limonia acidissima L. Bark Extracts
Authors: Kriyapa Lairungruang, Arunporn Itharat
Abstract:
Limonia acidissima L. (LA) (common name: wood apple; Thai name: ma-khwit) is a medicinal plant which has long been used in Thai traditional medicine. Its bark is used for the treatment of diarrhea and abscesses, for wound healing, and for inflammation, and it is also used against oral cancer. Thus, this research aimed to investigate the antioxidant and cytotoxic activities of LA bark extracts produced by various extraction methods. Different extraction procedures were used to extract LA bark for biological activity testing: boiling in water, maceration with 95% ethanol, maceration with 50% ethanol, and water boiling of the residues from the 95% and 50% ethanolic macerations. All extracts were tested for antioxidant activity using the DPPH radical scavenging assay, and for cytotoxic activity against human laryngeal epidermoid carcinoma (HEp-2) cells and human oral epidermoid carcinoma (KB) cells using the sulforhodamine B (SRB) assay. The results showed that the 95% ethanolic extract of LA bark had the highest antioxidant activity, with an EC50 value of 29.76±1.88 µg/ml. For cytotoxic activity, the 50% ethanolic extract showed the best cytotoxic activity against HEp-2 and KB cells, with IC50 values of 9.55±1.68 and 18.90±0.86 µg/ml, respectively. This study demonstrated that the 95% ethanolic extract of LA bark showed moderate antioxidant activity and the 50% ethanolic extract provided potent cytotoxic activity against HEp-2 and KB cells. These results confirm the traditional use of LA for the treatment of oral cancer and laryngeal cancer, and also support its ongoing use.
Keywords: antioxidant activity, cytotoxic activity, laryngeal epidermoid carcinoma, Limonia acidissima L., oral epidermoid carcinoma
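As a worked illustration of where an EC50 figure such as the one above comes from, the following minimal Python sketch fits a dose-response curve to DPPH readings. The concentrations and inhibition percentages are invented placeholders, not data from this study.

```python
# Hedged sketch: estimating an EC50 value from DPPH assay data.
# The concentrations and inhibition values below are illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, ec50, slope):
    """Two-parameter Hill curve on a 0-100 % inhibition scale."""
    return 100.0 * conc**slope / (ec50**slope + conc**slope)

# hypothetical extract concentrations (µg/ml) and % DPPH inhibition
conc = np.array([5, 10, 20, 40, 80, 160], dtype=float)
inhibition = np.array([12, 22, 38, 58, 75, 88], dtype=float)

params, _ = curve_fit(hill, conc, inhibition, p0=[30.0, 1.0])
ec50, slope = params
print(f"estimated EC50 = {ec50:.2f} ug/ml (Hill slope {slope:.2f})")
```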
Procedia PDF Downloads 478
5820 Tumor Size and Lymph Node Metastasis Detection in Colon Cancer Patients Using MR Images
Authors: Mohammadreza Hedyehzadeh, Mahdi Yousefi
Abstract:
Colon cancer is one of the most common cancers, and its prevalence is predicted to increase due to poor eating habits. Nowadays, because people are busier, the consumption of fast food is increasing, and therefore the diagnosis and treatment of this disease are of particular importance. To determine the best treatment approach for each specific colon cancer patient, the oncologist needs to know the stage of the tumor. The most common method to determine the tumor stage is the TNM staging system. In this system, M indicates the presence of metastasis, N indicates the extent of spread to the lymph nodes, and T indicates the size of the tumor. It is clear that in order to determine all three of these parameters, an imaging method must be used, and the gold-standard imaging protocols for this purpose are CT and PET/CT. In CT imaging, due to the use of X-rays, the cancer risk and the absorbed dose of the patient are high, while for the PET/CT method, access to the device is limited due to its high cost. Therefore, in this study, we aimed to estimate the tumor size and the extent of its spread to the lymph nodes using MR images. More than 1300 MR images were collected from the TCIA portal, and in the first step (pre-processing), histogram equalization was applied to improve image quality and the images were resized to a uniform size. Two expert radiologists, each with more than 21 years of experience with colon cancer cases, segmented the images and extracted the tumor regions. The next step is feature extraction from the segmented images, followed by classification of the data into three classes: T0N0, T3N1, and T3N2. In this article, the VGG-16 convolutional neural network has been used to perform both of the above-mentioned tasks, i.e., feature extraction and classification. This network has 13 convolution layers for feature extraction and three fully connected layers with the softmax activation function for classification. To validate the proposed method, a 10-fold cross-validation scheme was used in which the data were randomly divided into three parts: training (70% of the data), validation (10% of the data), and the rest for testing. This was repeated 10 times; each time, the accuracy, sensitivity, and specificity of the model were calculated, and the average of the ten repetitions is reported as the result. The accuracy, specificity, and sensitivity of the proposed method on the test dataset were 89.09%, 95.8%, and 96.4%, respectively. Compared to previous studies, the use of a safe imaging technique (MRI) and the avoidance of predefined hand-crafted imaging features to determine the stage of colon cancer patients are among the study's advantages.
Keywords: colon cancer, VGG-16, magnetic resonance imaging, tumor size, lymph node metastasis
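A minimal PyTorch sketch of the pipeline described above: a VGG-16 backbone (13 convolutional layers plus 3 fully connected layers with a softmax-style output) adapted to the three stage classes and evaluated on one random 70/10/20 split. Data loaders, image preparation, and hyperparameters are assumptions, not the authors' settings.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3  # T0N0, T3N1, T3N2

def build_model() -> nn.Module:
    # VGG-16: 13 convolutional layers + 3 fully connected layers
    model = models.vgg16(weights=None)
    # replace the final FC layer so the softmax output covers the 3 stage classes
    model.classifier[6] = nn.Linear(4096, NUM_CLASSES)
    return model

def train_and_test(model, train_loader, test_loader, epochs=20, lr=1e-4):
    """Train on one random split and return test accuracy."""
    optimiser = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()          # applies softmax internally
    model.train()
    for _ in range(epochs):
        for images, labels in train_loader:
            optimiser.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimiser.step()
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for images, labels in test_loader:
            predictions = model(images).argmax(dim=1)
            correct += (predictions == labels).sum().item()
            total += labels.numel()
    return correct / total

# the reported metrics would be averages of accuracy/sensitivity/specificity
# over ten such random splits (the 10% validation split is used for model selection)
```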
Procedia PDF Downloads 59
5819 Color Image Compression/Encryption/Contour Extraction using 3L-DWT and SSPCE Method
Authors: Ali A. Ukasha, Majdi F. Elbireki, Mohammad F. Abdullah
Abstract:
Data security is needed in data transmission, storage, and communication. This paper is divided into two parts. The work deals with color images, which are decomposed into red, green, and blue channels. The blue and green channels are compressed using a 3-level discrete wavelet transform. The Arnold transform is used to change the locations of the red channel pixels as an image scrambling process. Then all channels are encrypted separately using a key image of the same size as the original, generated using private keys and modulo operations. XOR and modulo operations are performed between the encrypted channel images in order to change the pixel values. Contours can be extracted from the recovered color images with an acceptable level of distortion using the single-step parallel contour extraction (SSPCE) method. Experiments have demonstrated that the proposed algorithm can fully encrypt 2D color images and completely reconstruct them without any distortion. It is also shown that the analyzed algorithm offers very high security against attacks such as salt-and-pepper noise and JPEG compression, which proves that color images can be protected at a higher security level. The presented method has an easy hardware implementation and is suitable for multimedia protection in real-time applications such as wireless networks and mobile phone services.
Keywords: SSPCE method, image compression and salt-and-pepper attacks, bitplanes decomposition, Arnold transform, color image, wavelet transform, lossless image encryption
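A hedged sketch of two building blocks named in the abstract, Arnold-map scrambling of one channel and XOR encryption with a same-sized key image. The key generation shown here (a seeded pseudo-random image) is an assumption; the paper derives its key image from private keys and modulo operations.

```python
import numpy as np

def arnold_scramble(channel: np.ndarray, iterations: int = 1) -> np.ndarray:
    """Apply the Arnold cat map to a square channel to shuffle pixel positions."""
    n = channel.shape[0]
    assert channel.shape[0] == channel.shape[1], "Arnold map needs a square image"
    out = channel.copy()
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out

def xor_encrypt(channel: np.ndarray, seed: int = 1234) -> np.ndarray:
    """Encrypt a channel by XOR with a same-sized pseudo-random key image."""
    rng = np.random.default_rng(seed)
    key = rng.integers(0, 256, size=channel.shape, dtype=np.uint8)
    return np.bitwise_xor(channel, key)          # XOR is its own inverse

# illustrative use on a random 8-bit "red channel"
red = np.random.default_rng(0).integers(0, 256, size=(256, 256), dtype=np.uint8)
cipher = xor_encrypt(arnold_scramble(red, iterations=3))
```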
Procedia PDF Downloads 518
5818 Physico-Mechanical Behavior of Indian Oil Shales
Authors: K. S. Rao, Ankesh Kumar
Abstract:
The search for alternative energy sources to petroleum has increased these days because of the growing need and the depletion of petroleum reserves. Therefore, the importance of oil shales as an economically viable substitute has increased many fold in the last 20 years. Technologies such as hydro-fracturing have opened the field of oil extraction from these unconventional rocks. Oil shale is a compact laminated rock of sedimentary origin containing organic matter known as kerogen, which yields oil when distilled. Oil shales are formed from the contemporaneous deposition of fine-grained mineral debris and organic degradation products derived from the breakdown of biota. Conditions required for the formation of oil shales include abundant organic productivity, early development of anaerobic conditions, and a lack of destructive organisms. These rocks have not gone through high-temperature and high-pressure conditions in nature. The most common approach for oil extraction is drastically breaking the bonds of the organics, which involves a retorting process. The two approaches for retorting are surface retorting and in-situ processing. The most environmentally friendly approach for extraction is in-situ processing. The three steps involved in this process are fracturing, injection to achieve communication, and fluid migration at the underground location. Upon heating (retorting) oil shale at temperatures in the range of 300 to 400°C, the kerogen decomposes into oil, gas, and residual carbon in a process referred to as pyrolysis. Therefore, it is very important to understand the physico-mechanical behavior of such rocks in order to improve the technology for in-situ extraction. It is clear from past research and physical observations that these rocks behave as anisotropic rocks, so it is very important to understand their mechanical behavior under high pressure at different orientation angles for the economical use of these resources. Knowing the engineering behavior under the above conditions will allow us to simulate deep-ground retorting conditions numerically and experimentally. Many researchers have investigated the effect of organic content on the engineering behavior of oil shale, but the coupled effect of the organic and inorganic matrix is yet to be analyzed. The favourable characteristics of Assam coal for conversion to liquid fuels have been known for a long time. Studies have indicated that these coals and carbonaceous shales constitute the principal source rocks that have generated the hydrocarbons produced from the region. Rock cores of representative samples are collected by on-site drilling, as coring in the laboratory is very difficult due to the rock's highly anisotropic nature. Different tests are performed to understand the petrology of these samples; furthermore, chemical analyses are done to precisely quantify the organic content in these rocks. The mechanical properties of these rocks are investigated at different anisotropy angles. The results obtained from the petrology and chemical analyses are then correlated with the mechanical properties. These properties and correlations will further help in increasing the producibility of these rocks. It is well established that the organic content is negatively correlated with tensile strength, compressive strength, and modulus of elasticity.
Keywords: oil shale, producibility, hydro-fracturing, kerogen, petrology, mechanical behavior
Procedia PDF Downloads 347
5817 Application of Life Cycle Assessment “LCA” Approach for a Sustainable Building Design under Specific Climate Conditions
Authors: Djeffal Asma, Zemmouri Noureddine
Abstract:
In order for building designers to be able to balance environmental concerns with other performance requirements, they need clear and concise information. For certain decisions during the design process, qualitative guidance, such as design checklists or guideline information, may not be sufficient for evaluating the environmental trade-offs between different building materials, products, and designs. In this case, quantitative information, such as that generated through a life cycle assessment, provides the most value. LCA provides a systematic approach to evaluating the environmental impacts of a product or system over its entire life. In the case of buildings, the life cycle includes the extraction of raw materials, manufacturing, transporting and installing building components or products, and operating and maintaining the building. By integrating LCA into the building design process, designers can evaluate the life cycle impacts of building design, materials, components, and systems and choose the combinations that reduce the building's life cycle environmental impact. This article attempts to give an overview of the integration of the LCA methodology in the context of building design and focuses on the use of this methodology for environmental considerations concerning process design and optimization. A multiple case study was conducted in order to assess the benefits of LCA as a decision-making aid during the first stages of building design under the specific climate conditions of the North East region of Algeria. It is clear that the LCA methodology can help to assess and reduce the impact of a building's design and components on the environment, even if the implementation process is rather long and complicated and lacks a global approach including human factors. It is also demonstrated that using LCA for multi-objective optimization of the building process will certainly facilitate improvements in design and decision making for both new design and retrofit projects.
Keywords: life cycle assessment, buildings, sustainability, elementary schools, environmental impacts
Procedia PDF Downloads 546
5816 Investigation of Type and Concentration Effects of Solvent on Chemical Properties of Saffron Edible Extract
Authors: Sharareh Mohseni
Abstract:
Purpose: The objective of this study was to find a suitable solvent to produce saffron edible extract with improved chemical properties. Design/methodology/approach: Dried and pulverized stigmas of C. sativus L. (10 g) were extracted with 300 ml of solvents, including distilled water (DW), ethanol/DW, methanol/DW, propylene glycol/DW, heptane/DW, and hexane/DW, for 3 days at 25°C and then centrifuged at 3000 rpm. The extracts were then evaporated using a rotary evaporator at 40°C. The fiber- and solvent-free extracts were analyzed by UV spectrophotometer to determine the saffron quality parameters crocin, picrocrocin, and safranal. Findings: The distilled water/ethanol mixture as the extraction solvent caused larger amounts of the plant constituents to diffuse into the extract compared to the other treatments and the control. Polar solvents, including distilled water, ethanol, and propylene glycol (but not methanol), were more effective in extracting crocin, picrocrocin, and safranal than non-polar solvents. Social implications: Due to its enhanced color and flavor, saffron extract is economical compared to natural saffron. Saffron extract saves preparation time and reduces the amount of saffron required to impart the same flavor compared to dry saffron. The liquid extract is easier to use and standardize in food preparations than dry stigmas and can be dosed precisely compared to natural saffron. Originality/value: No research had been done on the production of saffron edible extract using the solvents studied in this survey. The novelty of this research is high, and the results can be used industrially.
Keywords: Crocus sativus L., saffron extract, solvent extraction, distilled water
Procedia PDF Downloads 448
5815 Effects of Different Mechanical Treatments on the Physical and Chemical Properties of Turmeric
Authors: Serpa A. M., Gómez Hoyos C., Velásquez-Cock J. A., Ruiz L. F., Vélez Acosta L. M., Gañan P., Zuluaga R.
Abstract:
Turmeric (Curcuma longa L.) is an Indian rhizome known for its biological properties, which derive from active compounds such as curcuminoids. Curcumin, the main polyphenol in turmeric, represents only around 3.5% of the dehydrated rhizome, and extraction yields between 41 and 90% have been reported. Therefore, for every 1000 tons of turmeric powder used for the extraction of curcumin, around 970 tons of residues are generated. The present study evaluates the effect of different mechanical treatments (Waring blender, grinder, and high-pressure homogenization) on the physical and chemical properties of turmeric, as an alternative for the transformation of the entire rhizome. Suspensions of turmeric (10, 20 and 30%) were processed in a Waring blender for 3 min at 12000 rpm, while the samples treated in the grinder were processed at two different gaps (-1 and -1.5). Finally, the high-pressure homogenization process was carried out at 500 bar. According to the results, the luminosity of the samples increases with the severity of the mechanical treatment, due to the stabilization of the color associated with the inactivation of oxidative enzymes. Additionally, according to the microstructure of the samples, the grinder process (gap -1.5) and high-pressure homogenization allowed the largest size reduction, reaching sizes of up to 3 µm (measured by optical microscopy). These processes disrupt the cells and break their fragments into small suspended particles. The infrared spectra obtained from the samples using an attenuated total reflectance accessory indicate changes in the 800-1200 cm⁻¹ region, related mainly to changes in the starch structure. Finally, thermogravimetric analysis shows the presence of starch, curcumin, and some minerals in the suspensions.
Keywords: characterization, mechanical treatments, suspensions, turmeric rhizome
Procedia PDF Downloads 163
5814 Railway Transport as a Potential Source of Polychlorinated Biphenyls in Soil
Authors: Nataša Stojić, Mira Pucarević, Nebojša Ralević, Vojislava Bursić, Gordan Stojić
Abstract:
Surface soil (0-10 cm) samples from 52 sampling sites along the railway tracks in the territory of Srem (the western part of the Autonomous Province of Vojvodina, itself part of Serbia) were collected and analyzed for 7 polychlorinated biphenyls (PCBs) in order to see how the distance from the railroad, on the one hand, and from the landfill, on the other, affects the concentration of PCBs (CPCB) in the soil. Samples were taken at distances of 0.03 to 4.19 km from the railway and 0.43 to 3.35 km from the landfills. Soxhlet extraction (USEPA 3540S) was used for the soil extraction. The extracts were purified on a silica-gel column (USEPA 3630C). The analysis of the extracts was performed by gas chromatography with tandem mass spectrometry. PCBs were not detected at only two locations. The mean total concentration of PCBs for all other sampling locations was 0.0043 ppm dry weight (dw), with a range of 0.0005 to 0.0227 ppm dw. The factors that affect the concentration of PCBs were isolated from the relevant part of the data using statistical methods (PCA). The data were also analyzed using Pearson's chi-squared test, which showed that the hypothesis of independence between CPCB and the distance from the railway can be rejected. The hypothesis of independence between CPCB and the percentage of humus in the soil can also be rejected, whereas for CPCB and the distance from the landfill the hypothesis of independence cannot be rejected. Based on these results, it can be said that railway transport is a potential source of PCBs. The next step in this research is to establish the positions of the transformers located near the sampling sites as another important factor that affects the concentration of PCBs in the soil.
Keywords: GC/MS, landfill, PCB, railway, soil
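A minimal sketch of the independence test mentioned above: PCB concentration classes cross-tabulated against distance-from-railway classes and checked with Pearson's chi-squared test. The contingency counts are invented for illustration and are not the study's data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# rows: distance from railway (<1 km, 1-2 km, >2 km)
# cols: total PCB concentration class (low, medium, high)
observed = np.array([
    [3, 7, 10],
    [6, 8,  4],
    [9, 4,  1],
])

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
if p_value < 0.05:
    print("reject independence: concentration depends on distance from the railway")
```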
Procedia PDF Downloads 335
5813 Robust Diagnosability of PEMFC Based on Bond Graph LFT
Authors: Ould Bouamama, M. Bressel, D. Hissel, M. Hilairet
Abstract:
The fuel cell (FC) is one of the best alternatives to fossil energy. Recently, the fuel cell research community has shown considerable interest in diagnosis, with a view to ensuring safety, security, and availability when faults occur in the process. The difficulty of model-based FC diagnosis is that the model is complex, because several kinds of energy are coupled, and the numerical values of the parameters are not always known or are uncertain. The present paper deals with the use of one tool, the Linear Fractional Transformation bond graph, not only for uncertain modelling but also for monitorability analysis (the ability to detect and isolate faults) and the formal generation of fault indicators that are robust with respect to parameter uncertainties. The developed theory, applied to a nonlinear FC system, has proved its efficiency.
Keywords: bond graph, fuel cell, fault detection and isolation (FDI), robust diagnosis, structural analysis
Procedia PDF Downloads 366
5812 Beyond the Flipped Classroom: A Tool to Promote Autonomy, Cooperation, Differentiation and the Pleasure of Learning
Authors: Gabriel Michel
Abstract:
The aim of our research is to find solutions for adapting university teaching to today's students and companies. To achieve this, we have tried to change the posture and behavior of those involved in the learning situation by promoting other skills. There is a gap between the expectations and functioning of students and university teaching. At the same time, the business world needs employees who are obviously competent and proficient in technology, but who are also imaginative, flexible, able to communicate, learn on their own and work in groups. These skills are rarely developed as a goal at university. The flipped classroom has been one solution, thanks to digital tools such as Moodle, but the model behind it is still centered on teachers and classic learning scenarios: it makes course materials available without really involving students and encouraging them to cooperate. It is against this backdrop that we have conducted action research to explore the possibility of changing the way we learn (rather than teach) by changing the posture of both the classic student and the teacher. We hypothesized that a tool we developed would encourage autonomy, the possibility of progressing at one's own pace, collaboration, and learning using all available resources (other students, course materials, those on the web and the teacher/facilitator). Experimentation with this tool was carried out with around thirty German and French first-year students at the Université de Lorraine in Metz (France). The projected changes in the groups' learning situations were as follows: - use the flipped classroom approach, but with a few traditional presentations by the teacher (materials having been put on a server) and lots of collective case solving, - engage students in their learning by inviting them to set themselves a primary objective from the outset, e.g. “Assimilating 90% of the course”, and secondary objectives (like a to-do list) such as “create a new case study for Tuesday”, - encourage students to take control of their learning (knowing at all times where they stand and how far they still have to go), - develop cooperation: the tool should encourage group work, the search for common solutions and the exchange of the best solutions with other groups. Those who have advanced much faster than the others, or who already have expertise in a subject, can become tutors for the others. A student can also present a case study he or she has developed, for example, or share materials found on the web or produced by the group, as well as evaluating the productions of others, - etc. A questionnaire and analysis of assessment results showed that the test group made considerable progress compared with a similar control group. These results confirmed our hypotheses. Obviously, this tool is only effective if the organization of teaching is adapted and if teachers are willing to change the way they work.
Keywords: pedagogy, cooperation, university, learning environment
Procedia PDF Downloads 22
5811 Semantic Indexing Improvement for Textual Documents: Contribution of Classification by Fuzzy Association Rules
Authors: Mohsen Maraoui
Abstract:
With the aim of improving natural language processing applications such as information retrieval, machine translation, and lexical disambiguation, we focus on a statistical approach to semantic indexing of multilingual text documents based on a conceptual network formalism. We propose to use this formalism as an indexing language to represent the descriptive concepts and their weighting. These concepts represent the content of the document. Our contribution is based on two steps. In the first step, we propose the extraction of index terms using the multilingual lexical resource EuroWordNet (EWN). In the second step, we pass from the representation of index terms to the representation of index concepts through the conceptual network formalism. This network is generated using the EWN resource and passes through a classification step based on an association rules model (in an attempt to discover the non-taxonomic or contextual relations between the concepts of a document). These relations are latent relations buried in the text and carried by the semantic context of the co-occurrence of concepts in the document. Our proposed indexing approach can be applied to text documents in various languages because it is based on a linguistic method adapted to the language through a multilingual thesaurus. We then apply the same statistical process regardless of the language in order to extract the significant concepts and their associated weights. We show that the proposed indexing approach provides encouraging results.
Keywords: concept extraction, conceptual network formalism, fuzzy association rules, multilingual thesaurus, semantic indexing
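A minimal sketch of the classification idea: mining association rules between concepts that co-occur in documents. Crisp co-occurrence is used here as a simplified stand-in for the fuzzy association rules of the paper, and the documents and concepts are invented examples.

```python
from itertools import combinations
from collections import Counter

docs = [
    {"bank", "loan", "interest"},
    {"bank", "river", "water"},
    {"loan", "interest", "credit"},
    {"bank", "loan", "credit"},
]

def association_rules(documents, min_support=0.4, min_confidence=0.6):
    """Return (lhs, rhs, support, confidence) rules from concept co-occurrence."""
    n = len(documents)
    item_count = Counter(c for d in documents for c in d)
    pair_count = Counter(p for d in documents for p in combinations(sorted(d), 2))
    rules = []
    for (a, b), cnt in pair_count.items():
        support = cnt / n
        if support < min_support:
            continue
        for lhs, rhs in ((a, b), (b, a)):
            confidence = cnt / item_count[lhs]
            if confidence >= min_confidence:
                rules.append((lhs, rhs, support, confidence))
    return rules

for lhs, rhs, sup, conf in association_rules(docs):
    print(f"{lhs} -> {rhs}  support={sup:.2f} confidence={conf:.2f}")
```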
Procedia PDF Downloads 141
5810 Application of Aquatic Plants for the Remediation of Organochlorine Pesticides from Keenjhar Lake
Authors: Soomal Hamza, Uzma Imran
Abstract:
Organochlorine pesticides bioaccumulate in the fat of fish, birds, and animals, through which they enter the human food chain. Due to their persistence and stability in the environment, many health impacts are associated with them, most of which are carcinogenic in nature. In this study, the levels of organochlorine pesticides in Keenjhar Lake have been determined and remediated using a rhizoremediation technique. Fourteen OC pesticides, namely Aldrin, Dieldrin, Heptachlor, Heptachlor epoxide, Endrin, Endosulfan I and II, DDT, DDE, DDD, and alpha-, beta-, gamma- and delta-BHC, and two plants, namely water hyacinth and Salvinia molesta, were used in the system in a pot experiment that ran for 11 days. A consortium was inoculated into both plants to increase their efficiency. Water samples were processed using liquid-liquid extraction. Sediment and root samples were processed using the Soxhlet method, followed by clean-up and gas chromatography. Delta-BHC was predominant in all samples, with mean concentrations (ppb) and standard deviations of 0.02 ± 0.14, 0.52 ± 0.68, and 0.61 ± 0.06 in the water, sediment, and root samples, respectively. The highest levels were of Endosulfan II in the water, sediment, and root samples. Water hyacinth proved to be a better bioaccumulator than Salvinia molesta. The pattern of compound reduction rates by the end of the experiment was Delta-BHC > DDD > Alpha-BHC > DDT > Heptachlor > H. Epoxide > Dieldrin > Aldrin > Endrin > DDE > Endosulfan I > Endosulfan II. No significant difference was observed between the pots with and without the addition of the consortium. Phytoremediation is a promising technique, but more studies are required to assess the bioremediation potential of different aquatic plants and the plant-endophyte relationship.
Keywords: aquatic plant, bioremediation, gas chromatography, liquid-liquid extraction
Procedia PDF Downloads 149
5809 Towards a Competitive South African Tooling Industry
Authors: Mncedisi Trinity Dewa, Andre Francois Van Der Merwe, Stephen Matope
Abstract:
Tool, Die and Mould-making (TDM) firms have been known to play a pivotal role in the growth and development of the manufacturing sectors in most economies. Their output contributes significantly to the quality, cost and delivery speed of final manufactured parts. Unfortunately, the South African tool, die and mould-making manufacturers have not been competing on the local or global market in a significant way. This reality has hampered the productivity and growth of the sector, thus attracting intervention. The paper explores the shortcomings South African toolmakers have to overcome to restore their competitive position globally. Results from a global benchmarking survey of the tooling sector are used to establish a roadmap of what South African toolmakers can do to become a productive, world-class force on the global market.
Keywords: competitive performance objectives, toolmakers, world-class manufacturing, lead times
Procedia PDF Downloads 519
5808 The Study of Spray Drying Process for Skimmed Coconut Milk
Authors: Jaruwan Duangchuen, Siwalak Pathaveerat
Abstract:
Coconut (Cocos nucifera) belongs to the family Arecaceae. Coconut juice and meat are consumed as food and dessert in several regions of the world. Coconut juice is low in protein, and arginine is its main amino acid. Coconut meat is the endosperm of the coconut and has nutritional value; it is composed of carbohydrate, protein, and fat. The objective of this study is the utilization of by-products from the virgin coconut oil extraction process by turning the skimmed coconut milk into a powder. The skimmed coconut milk, separated from coconut milk in the virgin coconut oil extraction process, consists of approximately 6.4% protein, 7.2% carbohydrate, 0.27% dietary fiber, 6.27% sugar, 3.6% fat, and 86.93% moisture. This skimmed coconut milk can be made into a powder as a value-added product by spray drying. The factors affecting the yield and properties of the dried skimmed coconut milk in the spray-drying process are the inlet air temperature, the outlet air temperature, and the maltodextrin concentration. Maltodextrin contents of 15% and 20%, outlet air temperatures of 80 ºC, 85 ºC and 90 ºC, and inlet air temperatures of 190 ºC, 200 ºC and 210 ºC were tested in the skimmed coconut milk spray-drying process. The spray dryer air flow rate was kept constant at 0.2698 m³/s. For the powder samples, moisture content (2.22-3.23%), bulk density (0.4-0.67 g/mL), solubility in water, wettability (4.04-19.25 min), color, and particle size were analyzed. The maximum yield (18.00%) of spray-dried coconut milk powder was obtained at an inlet temperature of 210 °C, an outlet temperature of 80 °C, and 20% maltodextrin, with a drying time of 27.27 seconds. Amino acid analysis by the HPLC method (UV detector) showed that the most abundant amino acids were glutamine (16.28%), arginine (10.32%), and glycine (9.59%).
Keywords: skimmed coconut milk, spray drying, virgin coconut oil process (VCO), maltodextrin
Procedia PDF Downloads 332
5807 Synthesis and Tribological Properties of the Al-Cr-N/MoS₂ Self-Lubricating Coatings by Hybrid Magnetron Sputtering
Authors: Tie-Gang Wang, De-Qiang Meng, Yan-Mei Liu
Abstract:
Ternary AlCrN coatings are widely used to prolong cutting tool life because of their high hardness and excellent abrasion resistance. However, the friction between the workpiece and the cutter surface increases remarkably when machining difficult-to-cut materials (such as superalloys, titanium, etc.). As a result, a lot of cutting heat is generated and cutting tool life is shortened. In this work, an appropriate amount of the solid lubricant MoS₂ was added to the AlCrN coating to reduce the friction between the tool and the workpiece. A series of Al-Cr-N/MoS₂ self-lubricating coatings with different MoS₂ contents were prepared by a compound system of high-power impulse magnetron sputtering (HiPIMS) and pulsed direct-current magnetron sputtering (pulsed DC). The MoS₂ content in the coatings was varied by adjusting the sputtering power of the MoS₂ target. The composition, structure, and mechanical properties of the Al-Cr-N/MoS₂ coatings were systematically evaluated by energy-dispersive spectrometry, scanning electron microscopy, X-ray photoelectron spectroscopy, X-ray diffractometry, nano-indentation testing, scratch testing, and ball-on-disk tribometry. The results indicated that the lubricant content played an important role in the coating properties. When the sputtering power of the MoS₂ target was 0.1 kW, the coating possessed the highest hardness (14.1 GPa), the highest critical load (44.8 N), and the lowest wear rate (4.4×10⁻³ μm²/N).
Keywords: self-lubricating coating, Al-Cr-N/MoS₂ coating, wear rate, friction coefficient
Procedia PDF Downloads 132
5806 Robotic Logging Technology: The Future of Oil Well Logging
Authors: Nitin Lahkar, Rishiraj Goswami
Abstract:
“Oil well logging”, the practice of making a detailed record (a well log) of the geologic formations penetrated by a borehole, is an important practice in the oil and gas industry. Although a lot of research has been undertaken in this field, some basic limitations still exist. One of the main arenas where a plethora of problems arises is logistically challenged areas. Accessibility and availability of efficient manpower, resources, and technology are restricted, often costly, and very time-consuming in these areas. So, in this regard, the main challenge is to decrease the Non-Productive Time (NPT) involved in the conventional logging process. The search for a solution to this problem has given rise to a revolutionary concept called “Robotic Logging Technology”. Robotic logging technology promises successful logging in all kinds of wells and trajectories. It consists of a wireless logging tool controlled from the surface. This eliminates the need for a logging truck to be summoned, which in turn saves precious rig time. The robotic logging tool is designed such that it can move inside the well by different proposed mechanisms and models, listed in the full paper as TYPE A, TYPE B and TYPE C. These types are classified on the basis of their operational technology, movement, and the conditions/wells in which the tool is to be used. Thus, depending on the subsurface conditions, the available energy sources, and convenience, the type of robotic model will be selected. Advantages over conventional logging techniques: reduction in non-productive time, lower energy requirements, very fast action compared to all other forms of logging, and the ability to perform well in all kinds of well trajectories (vertical/horizontal/inclined).
Keywords: robotic logging technology, innovation, geology, geophysics
Procedia PDF Downloads 306
5805 Cardiovascular Modeling Software Tools in Medicine
Authors: J. Fernandez, R. Fernandez de Canete, J. Perea-Paizal, J. C. Ramos-Diaz
Abstract:
The high prevalence of cardiovascular diseases has provoked a rising interest in the development of mathematical models for evaluating cardiovascular function under both physiological and pathological conditions. In this paper, a physical model of the cardiovascular system with intrinsic regulation is presented and implemented using the object-oriented Modelica simulation software tools. For this task, a multi-compartmental system previously validated with physiological data has been built, based on the interconnection of cardiovascular elements such as resistances, capacitances, and pumps, among others, following an electrohydraulic analogy. The results obtained under both physiological and pathological scenarios provide an easy interpretative key for analyzing the hemodynamic behavior of the patient. The described approach represents a valuable tool in the teaching of physiology for graduate medical and nursing students, among others.
Keywords: cardiovascular system, MODELICA simulation software, physical modelling, teaching tool
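The paper implements its multi-compartmental model in Modelica; the following minimal Python sketch shows the same electrohydraulic analogy on a single two-element Windkessel compartment (a compliance in parallel with a peripheral resistance, driven by a pulsatile inflow). Parameter values are illustrative only.

```python
import numpy as np
from scipy.integrate import solve_ivp

R = 1.0     # peripheral resistance, mmHg·s/ml (assumed)
C = 1.5     # arterial compliance, ml/mmHg (assumed)
HR = 75     # heart rate, beats per minute

def inflow(t):
    """Half-sine ejection flow during systole, zero in diastole (ml/s)."""
    period = 60.0 / HR
    phase = t % period
    systole = 0.35 * period
    return 400.0 * np.sin(np.pi * phase / systole) if phase < systole else 0.0

def dpdt(t, p):
    # C dP/dt = Q_in - P/R : charge balance on the compliance "capacitor"
    return [(inflow(t) - p[0] / R) / C]

sol = solve_ivp(dpdt, (0, 10), [80.0], max_step=0.001)
print(f"pressure range after settling: {sol.y[0][-2000:].min():.1f}"
      f" - {sol.y[0][-2000:].max():.1f} mmHg")
```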
Procedia PDF Downloads 300
5804 Effect of Solvents in the Extraction and Stability of Anthocyanin from the Petals of Caesalpinia pulcherrima for Natural Dye-Sensitized Solar Cell
Authors: N. Prabavathy, R. Balasundaraprabhu, S. Shalini, Dhayalan Velauthapillai, S. Prasanna, N. Muthukumarasamy
Abstract:
The dye-sensitized solar cell (DSSC) has become a significant research area due to its fundamental and scientific importance in the area of energy conversion. Synthetic dyes used as sensitizers in DSSCs are efficient and durable, but they are costly, toxic, and have a tendency to degrade. Natural sensitizers contain plant pigments such as anthocyanins, carotenoids, flavonoids, and chlorophyll, which promote light absorption as well as the injection of charges into the conduction band of TiO₂ through the sensitizer. However, the efficiency of natural dyes is not up to the mark, mainly due to the instability of pigments such as anthocyanin. The stability issues in vitro are mainly due to the effect of solvents on the extraction of anthocyanins and their respective pH. Taking this factor into consideration, in the present work the anthocyanins were extracted from the flower of Caesalpinia pulcherrima (C. pulcherrima) with various solvents, and their respective stability and pH values are discussed. The use of citric acid as the solvent to extract anthocyanin showed better stability than the other solvents. It also helps to enhance the sensitization of the anthocyanins onto titanium dioxide (TiO₂) nanorods. The IPCE spectra show higher photovoltaic performance for dye-sensitized TiO₂ nanorods when citric acid is used as the solvent. The natural DSSC using citric acid as the solvent shows a higher efficiency compared to the other solvents. Hence, citric acid proves to be a safe solvent for natural DSSCs, boosting the photovoltaic performance and maintaining the stability of the anthocyanins.
Keywords: Caesalpinia pulcherrima, citric acid, dye sensitized solar cells, TiO₂ nanorods
Procedia PDF Downloads 290
5803 Evaluation of an Integrated Supersonic System for Inertial Extraction of CO₂ in Post-Combustion Streams of Fossil Fuel Operating Power Plants
Authors: Zarina Chokparova, Ighor Uzhinsky
Abstract:
Carbon dioxide emissions resulting from the burning of fossil fuels on a large scale, as in the oil industry or power plants, lead to plenty of severe implications, including global temperature rise, air pollution, and other adverse impacts on the environment. Besides some precarious and costly ways of alleviating the detriment of CO₂ emissions at industrial scales (such as liquefaction of CO₂ and its deep-water treatment, or the application of adsorbents and membranes, which require careful consideration of drawback effects and their mitigation), one physically and commercially available technology for its capture and disposal is a supersonic system for inertial extraction of CO₂ from post-combustion streams. Since flue gas with a carbon dioxide concentration of 10-15 volume percent is emitted from the combustion system, the waste stream represents a rather dilute condition at low pressure. The supersonic system makes the flue gas mixture stream expand through a converging-diverging nozzle; the flow velocity increases to the supersonic range, resulting in a rapid drop of temperature and pressure. Thus, the conversion of potential energy into kinetic energy causes desublimation of CO₂. The solidified carbon dioxide can be sent to a separate vessel for further disposal. The major advantages of the current solution are its economic efficiency, its physical stability, and the compactness of the system, as well as the fact that no chemical media need to be added. However, several challenges have yet to be addressed to optimize the system: increasing the size of the separated CO₂ particles (their effective diameters are on the micrometer scale), reducing the concomitant gas separated together with the carbon dioxide, and ensuring the purity of the CO₂ downstream flow. Moreover, the determination of the thermodynamic conditions of the vapor-solid mixture, including the specification of a valid and accurate equation of state, remains an essential goal. Due to the high speeds and temperatures reached during the process, the influence of the released heat should be considered, and an applicable solution model for the compressible flow needs to be determined. In this report, a brief overview of the current technology status is presented, and a program for further evaluation of this approach is proposed.
Keywords: CO₂ sequestration, converging diverging nozzle, fossil fuel power plant emissions, inertial CO₂ extraction, supersonic post-combustion carbon dioxide capture
Procedia PDF Downloads 141
5802 Numerical Analysis of NOₓ Emission in Staged Combustion for the Optimization of Once-Through-Steam-Generators
Authors: Adrien Chatel, Ehsan Askari Mahvelati, Laurent Fitschy
Abstract:
Once-Through Steam Generators (OTSGs) are commonly used in the oil-sand industry in the heavy fuel oil extraction process. They are composed of three main parts: the burner and the radiant and convective sections. Natural gas is burned in staged diffusive flames stabilized by the burner. The heat generated by the combustion is transferred to the water flowing through the piping system in the radiant and convective sections. The steam produced within the pipes is then directed to the ground to reduce the oil viscosity and allow its pumping. With the rapid development of the oil-sand industry, the number of OTSGs in operation has increased, as have the associated emissions of environmental pollutants, especially the nitrogen oxides (NOₓ). To limit environmental degradation, various international environmental agencies have established regulations on pollutant discharge and pushed to reduce NOₓ release. To meet these constraints, OTSG constructors have to rely on more and more advanced tools to study and predict NOₓ emission. With the increase of computational resources, Computational Fluid Dynamics (CFD) has emerged as a flexible tool to analyze the combustion and pollutant formation process. Moreover, to optimize the burner operating conditions regarding NOₓ emission, field characterization and measurements are usually performed. However, these kinds of experimental campaigns are particularly time-consuming and sometimes even impossible for industrial plants with strict operation schedule constraints. Therefore, the application of CFD seems more adequate to provide guidelines on the NOₓ emission and reduction problem. In the present work, two different software packages are employed to simulate the combustion process in an OTSG, namely the commercial software ANSYS Fluent and the open-source software OpenFOAM. The RANS (Reynolds-Averaged Navier-Stokes) equations, combined with the Eddy Dissipation Concept to model the combustion and closed by the k-epsilon model, are solved. A mesh sensitivity analysis is performed to assess the independence of the solution from the mesh. In the first part, the results given by the two software packages are compared and confronted with experimental data as a means to assess the numerical modelling. Flame temperatures and chemical composition are used as reference fields to perform this validation. Results show a fair agreement between experimental and numerical data. In the last part, OpenFOAM is employed to simulate several operating conditions, and an Emission Characteristic Map of the combustion system is generated. The sources of high NOₓ production inside the OTSG are pointed out and correlated with the physics of the flow. CFD is, therefore, a useful tool for providing insight into the NOₓ emission phenomena in OTSGs. Sources of high NOₓ production can be identified, and operating conditions can be adjusted accordingly. With the help of RANS simulations, an Emission Characteristics Map can be produced and then used as a guide for a field tune-up.
Keywords: combustion, computational fluid dynamics, nitrous oxides emission, once-through-steam-generators
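A hedged sketch of how an Emission Characteristics Map could be assembled from a set of RANS runs: each simulated operating point yields one NOₓ value, and the scattered results are interpolated onto a regular grid. The operating variables and NOₓ numbers are placeholders, not results from the study.

```python
import numpy as np
from scipy.interpolate import griddata

# (air staging ratio, relative burner load) -> simulated NOx in ppm (hypothetical)
points = np.array([
    [0.1, 0.6], [0.1, 1.0], [0.2, 0.8],
    [0.3, 0.6], [0.3, 1.0], [0.2, 0.6],
])
nox = np.array([42.0, 61.0, 48.0, 35.0, 52.0, 39.0])

# regular grid covering the operating envelope
s_grid, l_grid = np.meshgrid(np.linspace(0.1, 0.3, 21),
                             np.linspace(0.6, 1.0, 21))
nox_map = griddata(points, nox, (s_grid, l_grid), method="linear")

# the lowest-NOx region of the map suggests where to tune the burner
idx = np.unravel_index(np.nanargmin(nox_map), nox_map.shape)
print(f"lowest interpolated NOx at staging={s_grid[idx]:.2f}, load={l_grid[idx]:.2f}")
```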
Procedia PDF Downloads 113
5801 Application Quality Function Deployment (QFD) Tool in Design of Aero Pumps Based on System Engineering
Authors: Z. Soleymani, M. Amirzadeh
Abstract:
Quality Function Deployment (QFD) was developed in 1960 in Japan and introduced in 1983 in America and Europe. The paper presents a real application of this technique, in which the method of applying QFD to the design and production of aero fuel pumps is considered. When designing a product and applying the systems engineering process, the first step is the identification of customer needs and their translation into engineering parameters. Since each design change after the production process leads to extra human costs and also an increase in product quality risk, QFD can bring benefits in sales by meeting customer expectations. Once the needs are well identified, the use of the QFD tool can lead to better communication and less deviation in the design and production phases; finally, it leads to products with well-defined technical attributes.
Keywords: customer voice, engineering parameters, gear pump, QFD
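A minimal sketch of the core QFD calculation implied above: customer-need importances are propagated through a relationship matrix (9/3/1 scale) to rank engineering parameters. The needs, parameters, and ratings are invented for illustration, not taken from the aero fuel pump study.

```python
import numpy as np

needs = ["stable flow rate", "low weight", "long service life"]
importance = np.array([5, 3, 4])            # customer importance ratings 1-5

parameters = ["gear tolerance", "housing material", "bearing type"]
# rows = needs, cols = engineering parameters; 9 strong, 3 medium, 1 weak, 0 none
relationship = np.array([
    [9, 1, 3],
    [0, 9, 1],
    [3, 3, 9],
])

scores = importance @ relationship          # weighted technical importance
for name, score in sorted(zip(parameters, scores), key=lambda p: -p[1]):
    print(f"{name}: {score}")
```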
Procedia PDF Downloads 249
5800 An Automatic Speech Recognition Tool for the Filipino Language Using the HTK System
Authors: John Lorenzo Bautista, Yoon-Joong Kim
Abstract:
This paper presents the development of a Filipino speech recognition tool using the HTK System. The system was trained on a subset of the Filipino Speech Corpus developed by the DSP Laboratory of the University of the Philippines-Diliman. The speech corpus was used both in training and in testing the system by estimating the parameters of phonetic HMM-based (Hidden Markov Model) acoustic models. Experiments with different mixture weights were incorporated in the study. The phoneme-level, word-based recognition with a 5-state HMM resulted in an average accuracy rate of 80.13% for a single-Gaussian mixture model, 81.13% after implementing phoneme alignment, and 87.19% for the model with an increased number of Gaussian mixtures. The highest accuracy rate of 88.70% was obtained from a 5-state model with 6 Gaussian mixtures.
Keywords: Filipino language, Hidden Markov Model, HTK system, speech recognition
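The study itself uses the HTK toolkit; as a hedged Python analogue, the sketch below fits a 5-state HMM with 6 Gaussian mixtures per state to MFCC-like feature vectors using hmmlearn. The random features stand in for real MFCC frames.

```python
import numpy as np
from hmmlearn.hmm import GMMHMM

rng = np.random.default_rng(0)
# placeholder "MFCC" frames for two utterances of one phone (39-dim vectors)
utt1 = rng.normal(size=(120, 39))
utt2 = rng.normal(size=(95, 39))
X = np.vstack([utt1, utt2])
lengths = [len(utt1), len(utt2)]

# 5 emitting states, 6 Gaussian mixtures per state, diagonal covariances
model = GMMHMM(n_components=5, n_mix=6, covariance_type="diag", n_iter=20)
model.fit(X, lengths)                      # Baum-Welch re-estimation
print("log-likelihood of utterance 1:", model.score(utt1))
```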
Procedia PDF Downloads 480
5799 The Studies of the Sorption Capabilities of the Porous Microspheres with Lignin
Authors: M. Goliszek, M. Sobiesiak, O. Sevastyanova, B. Podkoscielna
Abstract:
Lignin is one of the three main constituents of biomass, together with cellulose and hemicellulose. It is a complex biopolymer which contains a large number of functional groups in its structure, including aliphatic and aromatic hydroxyl groups, carboxylic groups, and methoxy groups, which is why it shows potential for sorption processes. Lignin is a highly cross-linked polymer with a three-dimensional structure which can provide a large surface area and pore volume. It can also provide better dispersion, diffusion, and mass transfer behavior in the field of the removal of, e.g., heavy metal ions or aromatic pollutants. In this work, an emulsion-suspension copolymerization method was used to synthesize porous microspheres of divinylbenzene (DVB), styrene (St), and lignin. Microspheres without the addition of lignin were also prepared for comparison. Before the copolymerization, the lignin was modified with methacryloyl chloride to improve its reactivity with the other monomers. The physico-chemical properties of the obtained microspheres were also studied, e.g., pore structure (adsorption-desorption measurements), thermal properties (DSC), tendency to swell, and the actual shapes. Due to their well-developed porous structure and the presence of functional groups, our materials may have great potential in sorption processes. To estimate the sorption capabilities of the microspheres towards phenol and its chlorinated derivatives, the off-line SPE (solid-phase extraction) method is going to be applied. This method has various advantages: it is low-cost, easy to use, and enables rapid measurements for a large number of chemicals. The efficiency of the materials in removing phenols from aqueous solution and in desorption processes will be evaluated.
Keywords: microspheres, lignin, sorption, solid-phase extraction
Procedia PDF Downloads 183
5798 A Questionnaire-Based Survey: Therapists Response towards Upper Limb Disorder Learning Tool
Authors: Noor Ayuni Che Zakaria, Takashi Komeda, Cheng Yee Low, Kaoru Inoue, Fazah Akhtar Hanapiah
Abstract:
Previous studies have shown that there are arguments regarding the reliability and validity of the Ashworth and Modified Ashworth Scales in evaluating patients diagnosed with upper limb disorders. These evaluations depend on the raters' experience. This prompted us to develop an upper limb disorder part-task trainer that is able to simulate consistent upper limb disorder signs, such as spasticity and rigidity, based on the Modified Ashworth Scale, in order to reduce the variability occurring between raters and within raters themselves. By providing consistent signs, novice therapists would be able to increase their training frequency and their exposure to various levels of signs. A total of 22 physiotherapists and occupational therapists participated in the study. The majority of the therapists agreed that, with current therapy education, they still face problems with inter-rater and intra-rater variability (strongly agree 54%; n = 12/22, agree 27%; n = 6/22) in evaluating patients' conditions. The therapists strongly agreed (72%; n = 16/22) that therapy trainees need to increase their frequency of training and therefore believe that our initiative to develop an upper limb disorder training tool will help improve the clinical education field (strongly agree and agree 63%; n = 14/22).
Keywords: upper limb disorder, clinical education tool, inter/intra-raters variability, spasticity, modified Ashworth scale
Procedia PDF Downloads 310
5797 The Implementation of a Nurse-Driven Palliative Care Trigger Tool
Authors: Sawyer Spurry
Abstract:
Problem: Palliative care providers at an academic medical center in Maryland stated that medical intensive care unit (MICU) patients are often referred late in their hospital stay. The MICU has performed well below the hospital quality performance metric, which specifies that 80% of patients who expire with expected outcomes should have received a palliative care consult within 48 hours of admission. Purpose: The purpose of this quality improvement (QI) project is to increase palliative care utilization in the MICU through the implementation of a Nurse-Driven Palliative Trigger Tool to prompt the need for a specialty palliative care consult. Methods: MICU nursing staff and providers received education concerning the implications of underused palliative care services and the literature supporting the use of nurse-driven palliative care tools as a means of increasing utilization of palliative care. A MICU population-specific set of palliative trigger criteria (the Palliative Care Trigger Tool) was formulated by the QI implementation team, the palliative care team, and the patient care services department. Nursing staff were asked to assess patients daily for the presence of palliative triggers using the Palliative Care Trigger Tool and to present findings during bedside rounds. MICU providers were asked to consult palliative medicine, given the presence of palliative triggers, following interdisciplinary rounds. Rates of palliative consult, given the presence of triggers, were collected via electronic medical record e-data pull, de-identified, and recorded in the data collection tool. Preliminary Results: Over 140 MICU registered nurses were educated on the palliative trigger initiative, along with 8 nurse practitioners, 4 intensivists, 2 pulmonary critical care fellows, and 2 palliative medicine physicians. Over 200 patients were admitted to the MICU and screened for palliative triggers during the 15-week implementation period. Primary outcomes showed an increase in palliative care consult rates for patients presenting with triggers, a decreased mean time from admission to palliative consult, and increased recognition of unmet palliative care needs by MICU nurses and providers. Conclusions: The anticipated findings of this QI project suggest a positive correlation between utilizing palliative care trigger criteria and decreased time to palliative care consult. The direct outcomes of effective palliative care include decreased length of stay, healthcare costs, and moral distress, as well as improved symptom management and quality of life (QOL).
Keywords: palliative care, nursing, quality improvement, trigger tool
Procedia PDF Downloads 194
5796 Device for Reversible Hydrogen Isotope Storage with Aluminum Oxide Ceramic Case
Authors: Igor P. Maximkin, Arkady A. Yukhimchuk, Victor V. Baluev, Igor L. Malkov, Rafael K. Musyaev, Damir T. Sitdikov, Alexey V. Buchirin, Vasily V. Tikhonov
Abstract:
Minimization of tritium diffusion leakage when developing devices that handle tritium-containing media is a key problem whose solution will allow an essential enhancement of radiation safety and the minimization of diffusion losses of expensive tritium. One of the ways to solve this problem is to use high-strength, non-porous Al₂O₃ ceramic as the structural material of the bed body. This alumina ceramic offers high strength characteristics, but its main advantages are low hydrogen permeability (compared with the commonly used structural materials) and high dielectric properties. The latter enables direct induction heating of the hydride-forming metal without essential heating of the pressure and containment vessel. The use of alumina ceramic and induction heating allows: an essential reduction of the tritium extraction time; a reduction of tritium diffusion leakage by several orders of magnitude; and more complete extraction of tritium from the metal hydrides, owing to their higher heating, up to melting, in the event of final disposal of the device. The paper presents computational and experimental results for a tritium bed designed to absorb 6 liters of tritium. Titanium was used as the hydrogen isotope sorbent. Results of hydrogen release kinetics from the hydride-forming metal, as well as strength and cyclic service life tests, are reported. Recommendations are also provided for the practical use of the given bed type.
Keywords: aluminum oxide ceramic, hydrogen pressure, hydrogen isotope storage, titanium hydride
Procedia PDF Downloads 407
5795 Code-Switching in a Flipped Classroom for Foreign Students
Authors: E. Tutova, Y. Ebzeeva, L. Gishkaeva, Y.Smirnova, N. Dubinina
Abstract:
We have been working with students from different countries and have found it crucial to switch languages to explain something. Whether it is Russian or Chinese, explaining in a different language plays an important role for students' cognitive abilities. In this work we explore how code-switching may impact students' perception of information. Code-switching is a tool defined by linguists as a switch from one language to another for convenience, for the explanation of terms unavailable in the initial language, or sometimes for prestige. In our case, we consider code-switching in its function of convenience. As a rule, students who come to study Russian in a language environment lack many skills in speaking the language. This makes it harder to explain to them the rules of another language, which is English. That is why switching between English, Russian, and Mandarin is crucial for their better understanding. In this work we explore code-switching as a tool which can help a teacher in a flipped classroom.
Keywords: bilingualism, psychological linguistics, code-switching, social linguistics
Procedia PDF Downloads 81
5794 Influence of the Cooking Technique on the Iodine Content of Frozen Hake
Authors: F. Deng, R. Sanchez, A. Beltran, S. Maestre
Abstract:
The high nutritional value associated with seafood is related to the presence of essential trace elements. Moreover, seafood is considered an important source of energy, proteins, and long-chain polyunsaturated fatty acids. Generally, seafood is consumed cooked; consequently, its nutritional value can be degraded. Seafood such as fish, shellfish, and seaweed can be considered one of the main sources of iodine. Deficient or excessive consumption of iodine can cause dysfunction and pathologies related to the thyroid gland. The main objective of this work is to evaluate the stability of iodine in hake (Merluccius) subjected to different culinary techniques. The culinary processes considered were: boiling, steaming, microwave cooking, baking, cooking en papillote (a twisted cover with the shape of a sweet wrapper), and coating with a batter of flour and deep-frying. The determination of iodine was carried out by Inductively Coupled Plasma Mass Spectrometry (ICP-MS). Regarding sample handling strategies, liquid-liquid extraction has been demonstrated to be a powerful pre-concentration and clean-up approach for trace metal analysis by ICP techniques. Extraction with tetramethylammonium hydroxide (the TMAH reagent) was used as the sample preparation method in this work. Based on the results, it can be concluded that the stability of iodine was degraded by the cooking processes. The greatest degradation was observed for the boiling and microwave cooking processes: the iodine content of the hake decreased by up to 60% and 52%, respectively. However, if the boiling liquid is preserved, the loss generated during cooking is reduced. Only when the fish was cooked by the en papillote process was the iodine content preserved.
Keywords: cooking process, ICP-MS, iodine, hake
Procedia PDF Downloads 142
5793 Design of a Tool for Generating Test Cases from BPMN
Authors: Prat Yotyawilai, Taratip Suwannasart
Abstract:
Business Process Model and Notation (BPMN) is increasingly important in business process modeling and in creating functional models, and it is an OMG standard that has become popular in various organizations and in education. Research related to model-based software testing is prominent. Although most studies use UML models in software testing, few use the BPMN model to create test cases. Therefore, this research proposes the design of a tool for generating test cases from BPMN. The model is analyzed, and the details of its various components are extracted before a flow graph is created. Both the component details and the flow graph are then used to generate test cases.
Keywords: software testing, test case, BPMN, flow graph
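A minimal sketch of the general idea: derive a flow graph from BPMN-like elements (tasks, gateways, sequence flows) and enumerate start-to-end paths as candidate test cases. The toy process is invented; the paper's tool parses real BPMN models and extracts richer component details.

```python
from collections import defaultdict

# sequence flows of a toy process: start -> task A -> XOR gateway -> (B | C) -> end
flows = [
    ("start", "task_A"),
    ("task_A", "xor_gateway"),
    ("xor_gateway", "task_B"),
    ("xor_gateway", "task_C"),
    ("task_B", "end"),
    ("task_C", "end"),
]

graph = defaultdict(list)
for src, dst in flows:
    graph[src].append(dst)

def enumerate_paths(node, path):
    """Depth-first enumeration of all start-to-end paths (one path = one test case)."""
    path = path + [node]
    if node == "end":
        yield path
        return
    for nxt in graph[node]:
        if nxt not in path:               # guard against loops in this simple sketch
            yield from enumerate_paths(nxt, path)

for i, test_case in enumerate(enumerate_paths("start", []), 1):
    print(f"test case {i}: {' -> '.join(test_case)}")
```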
Procedia PDF Downloads 555
5792 AS-Geo: Arbitrary-Sized Image Geolocalization with Learnable Geometric Enhancement Resizer
Authors: Huayuan Lu, Chunfang Yang, Ma Zhu, Baojun Qi, Yaqiong Qiao, Jiangqian Xu
Abstract:
Image geolocalization has great application prospects in fields such as autonomous driving and virtual/augmented reality. In practical application scenarios, the size of the image to be localized is not fixed, and it is impractical to train different networks for all possible sizes. When the image size does not match the input size of the descriptor extraction model, existing image geolocalization methods usually scale or crop the image in some common way. This results in the loss of information important to the geolocalization task, thus affecting the performance of the method. For example, excessive down-sampling can lead to blurred building contours, and inappropriate cropping can lead to the loss of key semantic elements, resulting in incorrect geolocalization results. To address this problem, this paper designs a learnable image resizer and proposes an arbitrary-sized image geolocalization method. (1) The designed learnable image resizer employs the self-attention mechanism to enhance the geometric features of the resized image. Firstly, it applies bilinear interpolation to the input image and its feature maps to obtain the initial resized image and the resized feature maps. Then, SKNet (selective kernel network) is used to approximate the best receptive field, thus keeping the geometric shapes consistent with the original image, and SENet (squeeze-and-excitation network) is used to automatically select the feature maps with strong contour information, enhancing the geometric features. Finally, the enhanced geometric features are fused with the initial resized image to obtain the final resized image. (2) The proposed image geolocalization method embeds the above image resizer as a front layer of the descriptor extraction network. It not only enables the network to be compatible with arbitrary-sized input images but also enhances the geometric features that are crucial to the image geolocalization task. Moreover, a triplet attention mechanism is added after the first convolutional layer of the backbone network to optimize the utilization of the geometric elements extracted by that layer. Finally, the local features extracted by the backbone network are aggregated to form image descriptors for image geolocalization. The proposed method was evaluated on several mainstream datasets, such as Pittsburgh30K, Tokyo24/7, and Places365. The results show that the proposed method has excellent size compatibility and compares favorably with recent mainstream geolocalization methods.
Keywords: image geolocalization, self-attention mechanism, image resizer, geometric feature
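A hedged PyTorch sketch of a learnable resizer in the spirit described above: bilinear resizing of the image and of shallow feature maps, a squeeze-and-excitation block to emphasise contour-rich feature maps, and fusion with the bilinearly resized image. Channel counts and layer choices are assumptions, not the paper's exact architecture (which also uses selective-kernel convolutions and self-attention).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))          # squeeze: global average pool
        return x * w[:, :, None, None]           # excite: per-channel reweighting

class LearnableResizer(nn.Module):
    def __init__(self, out_size=(224, 224), channels: int = 16):
        super().__init__()
        self.out_size = out_size
        self.stem = nn.Conv2d(3, channels, kernel_size=3, padding=1)
        self.se = SEBlock(channels)
        self.proj = nn.Conv2d(channels, 3, kernel_size=3, padding=1)

    def forward(self, img):
        base = F.interpolate(img, size=self.out_size, mode="bilinear",
                             align_corners=False)
        feats = F.interpolate(self.stem(img), size=self.out_size,
                              mode="bilinear", align_corners=False)
        enhanced = self.proj(self.se(feats))
        return base + enhanced                   # fuse enhanced features with the base

resizer = LearnableResizer()
arbitrary_image = torch.randn(1, 3, 375, 600)    # any input size
print(resizer(arbitrary_image).shape)            # torch.Size([1, 3, 224, 224])
```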
Procedia PDF Downloads 214