Search results for: neural machine translation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4522


562 Mapping Structurally Significant Areas of G-CSF during Thermal Degradation with NMR

Authors: Mark-Adam Kellerman

Abstract:

Proteins are capable of exploring vast mutational spaces. This makes it difficult for protein engineers to devise rational methods to improve stability and function via mutagenesis. Deciding which residues to mutate requires knowledge of the characteristics they elicit. We probed the characteristics of residues in granulocyte-colony stimulating factor (G-CSF) using a thermal melt (from 295 K to 323 K) to denature it in a 700 MHz Bruker spectrometer. These characteristics included dynamics, micro-environmental changes experienced/induced during denaturing, and structure-function relationships. 15N-1H HSQC experiments were performed at 2 K increments along this thermal melt. We observed that dynamic residues that also undergo a large change in their microenvironment were predominantly in unstructured regions. Moreover, we were able to identify four residues (G4, A6, T133 and Q134) that we class as high-priority targets for mutagenesis, given that they all appear in the top 10% of measures for both environmental changes and dynamics (∑Δ and ∆PI). We were also able to probe these NMR observables and combine them with molecular dynamics (MD) to elucidate what appears to be an opening motion of G-CSF's binding site III. V48 appears to be pivotal to this opening motion, which also seemingly distorts the loop region between helices A and B. This observation is in agreement with previous findings that the conformation of this loop region becomes altered in an aggregation-prone state of G-CSF. Hence, we present here an approach to profile the characteristics of residues in order to highlight their potential as rational mutagenesis targets and their roles in important conformational changes. These findings present not only an opportunity to effectively make biobetters, but also open up the possibility to further understand epistasis and to machine-learn residue behaviours.

Keywords: protein engineering, rational mutagenesis, NMR, molecular dynamics

Procedia PDF Downloads 238
561 Time's Arrow and Entropy: Violations to the Second Law of Thermodynamics Disrupt Time Perception

Authors: Jason Clarke, Michaela Porubanova, Angela Mazzoli, Gulsah Kut

Abstract:

What accounts for our perception that time inexorably passes in one direction, from the past to the future, the so-called arrow of time, given that the laws of physics permit motion in one temporal direction to also happen in the reverse temporal direction? Modern physics says that the reason for time’s unidirectional physical arrow is the relationship between time and entropy, the degree of disorder in the universe, which is evolving from low entropy (high order; thermal disequilibrium) toward high entropy (high disorder; thermal equilibrium), the second law of thermodynamics. Accordingly, our perception of the direction of time, from past to future, is believed to emanate as a result of the natural evolution of entropy from low to high, with low entropy defining our notion of ‘before’ and high entropy defining our notion of ‘after’. Here we explored this proposed relationship between entropy and the perception of time’s arrow. We predicted that if the brain has some mechanism for detecting entropy, whose output feeds into processes involved in constructing our perception of the direction of time, presentation of violations to the expectation that low entropy defines ‘before’ and high entropy defines ‘after’ would alert this mechanism, leading to measurable behavioral effects, namely a disruption in duration perception. To test this hypothesis, participants were shown briefly-presented (1000 ms or 500 ms) computer-generated visual dynamic events: novel 3D shapes that were seen either to evolve from whole figures into parts (low to high entropy condition) or were seen in the reverse direction: parts that coalesced into whole figures (high to low entropy condition). On each trial, participants were instructed to reproduce the duration of their visual experience of the stimulus by pressing and releasing the space bar. 
To ensure that attention was being deployed to the stimuli, a secondary task was to report the direction of the visual event (forward or reverse motion). Participants completed 60 trials. As predicted, we found that duration reproduction was significantly longer for the high to low entropy condition compared to the low to high entropy condition (p=.03). This preliminary data suggests the presence of a neural mechanism that detects entropy, which is used by other processes to construct our perception of the direction of time or time’s arrow.

Keywords: time perception, entropy, temporal illusions, duration perception

Procedia PDF Downloads 152
560 Multiaxial Fatigue Analysis of a High Performance Nickel-Based Superalloy

Authors: P. Selva, B. Lorraina, J. Alexis, A. Seror, A. Longuet, C. Mary, F. Denard

Abstract:

Over the past four decades, the fatigue behavior of nickel-based alloys has been widely studied. However, in recent years, significant advances in the fabrication process leading to grain size reduction have been made in order to improve fatigue properties of aircraft turbine discs. Indeed, a change in particle size affects the initiation mode of fatigue cracks as well as the fatigue life of the material. The present study aims to investigate the fatigue behavior of a newly developed nickel-based superalloy under biaxial-planar loading. Low Cycle Fatigue (LCF) tests are performed at different stress ratios so as to study the influence of the multiaxial stress state on the fatigue life of the material. Full-field displacement and strain measurements as well as crack initiation detection are obtained using Digital Image Correlation (DIC) techniques. The aim of this presentation is first to provide an in-depth description of both the experimental set-up and protocol: the multiaxial testing machine, the specific design of the cruciform specimen and performances of the DIC code are introduced. Second, results for sixteen specimens related to different load ratios are presented. Crack detection, strain amplitude and number of cycles to crack initiation vs. triaxial stress ratio for each loading case are given. Third, from fractographic investigations by scanning electron microscopy it is found that the mechanism of fatigue crack initiation does not depend on the triaxial stress ratio and that most fatigue cracks initiate from subsurface carbides.

Keywords: cruciform specimen, multiaxial fatigue, nickel-based superalloy

Procedia PDF Downloads 277
559 Design and Manufacture of a Hybrid Gearbox Reducer System

Authors: Ahmed Mozamel, Kemal Yildizli

Abstract:

Due to mechanical energy losses, and in a competitive effort to minimize these losses and increase machine efficiency, the need for contactless gearing systems has arisen. In this work, one stage of a mechanical planetary gear transmission system integrated with one stage of a magnetic planetary gear system is designed as a two-stage hybrid gearbox system. The internal energy of the permanent magnets, in the form of the magnetic field, is used to create meshing between contactless magnetic rotors in order to provide self-protection of the system against overloading and to decrease the mechanical loss of the transmission system by eliminating friction losses. Classical methods, such as the analytical and tabular methods and the theory of elasticity, are used to calculate the planetary gear design parameters. The finite element method (ANSYS Maxwell) is used to predict the behavior of the magnetic gearing system; the concentric magnetic gearing system has been modeled and analyzed using the 2D finite element method. In addition, the design and manufacturing processes of prototype components of the gearbox system (a planetary gear, a concentric magnetic gear, the shafts and the bearing selection) are investigated. The output force, output moment, output power and efficiency of the hybrid gearbox system are experimentally evaluated. The viability of applying a magnetic force to transmit mechanical power through a non-contact gearing system is presented. The experimental test results show that the system is capable of operating continuously within a speed range of 400 rpm to 3000 rpm, with a reduction ratio of 2:1 and a maximum efficiency of 91%.

Keywords: hybrid gearbox, mechanical gearboxes, magnetic gears, magnetic torque

Procedia PDF Downloads 137
558 Towards Law Data Labelling Using Topic Modelling

Authors: Daniel Pinheiro Da Silva Junior, Aline Paes, Daniel De Oliveira, Christiano Lacerda Ghuerren, Marcio Duran

Abstract:

The Courts of Accounts are institutions responsible for overseeing and pointing out irregularities in Public Administration expenses. They face a high demand for processes to be analyzed, whose decisions must be grounded on severity laws. Despite the large number of processes, several cases report similar subjects; thus, previous decisions on already analyzed processes can serve as precedents for current processes that refer to similar topics. Identifying similar topics is an open, yet essential, task for identifying similarities between processes. Since the actual number of topics is considerably large, identifying them with a purely manual approach is tedious and error-prone. This paper presents a tool based on Machine Learning and Natural Language Processing to assist in building a labeled dataset. The tool relies on Topic Modelling with Latent Dirichlet Allocation to find the topics underlying a document, followed by the Jensen-Shannon distance metric to generate a probability of similarity between document pairs. Furthermore, in a case study with a corpus of decisions of the Rio de Janeiro State Court of Accounts, it was noted that data pre-processing plays an essential role in modeling relevant topics. Also, the combination of topic modeling and a distance metric calculated over documents represented in the generated topic space proved useful in helping to construct a labeled base of similar and non-similar document pairs.
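The LDA-plus-Jensen-Shannon pipeline the abstract describes can be sketched as follows. This is a minimal illustration using scikit-learn and SciPy, not the authors' implementation; the three-sentence corpus is a hypothetical stand-in for the Court of Accounts decisions.

```python
# Sketch: topic-based document similarity via LDA topic mixtures and the
# Jensen-Shannon distance, under the assumptions stated in the lead-in.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from scipy.spatial.distance import jensenshannon

docs = [
    "audit of public works contract irregularities",   # hypothetical texts
    "irregularities found in public works audit",
    "pension fund actuarial report review",
]

# Bag-of-words counts, then fit LDA to obtain per-document topic mixtures.
counts = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(counts)  # each row is a topic distribution (sums to 1)

def similarity(p, q):
    # Jensen-Shannon distance with base 2 lies in [0, 1];
    # invert it so 1 means identical topic mixtures.
    return 1.0 - jensenshannon(p, q, base=2)

print(similarity(theta[0], theta[1]))  # documents on similar subjects
print(similarity(theta[0], theta[2]))  # documents on different subjects
```

Document pairs whose similarity exceeds a chosen threshold would then be proposed to the annotator as candidate "similar" pairs for the labeled base.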

Keywords: courts of accounts, data labelling, document similarity, topic modeling

Procedia PDF Downloads 159
557 Artificial Cells Capable of Communication by Using Polymer Hydrogel

Authors: Qi Liu, Jiqin Yao, Xiaohu Zhou, Bo Zheng

Abstract:

The first artificial cell was produced by Thomas Chang in the 1950s while he was trying to make a mimic of red blood cells. Since then, many different types of artificial cells have been constructed using one of two approaches: a so-called bottom-up approach, which aims to create a cell from scratch, and a top-down approach, in which genes are sequentially knocked out of organisms until only the minimal genome required for sustaining life remains. In this project, the bottom-up approach was used to build a new cell-free expression system that mimics an artificial cell capable of protein expression and of communicating with other cells. Artificial cells constructed with the bottom-up approach are usually lipid vesicles, polymersomes, hydrogels or aqueous droplets containing the nucleic acids and transcription-translation machinery. However, lipid-vesicle-based artificial cells capable of communication present several issues for cell communication research: (1) the lipid vesicles normally lose important functions such as protein expression within a few hours; (2) the lipid membrane allows the permeation of only small molecules and limits the types of molecules that can be sensed and released to the surrounding environment for chemical communication; (3) the lipid vesicles are prone to rupture due to imbalance of the osmotic pressure. To address these issues, hydrogel-based artificial cells were constructed in this work. To construct the artificial cell, polyacrylamide hydrogel was functionalized with an Acrylate PEG Succinimidyl Carboxymethyl Ester (ACLT-PEG2000-SCM) moiety on the polymer backbone. Proteinaceous factors can then be immobilized on the polymer backbone by the reaction between the primary amines of proteins and the N-hydroxysuccinimide esters (NHS esters) of ACLT-PEG2000-SCM; the plasmid template and ribosomes were encapsulated inside the hydrogel particles.
Because the artificial cell could continuously express protein with a supply of nutrients and energy, artificial cell-artificial cell and artificial cell-natural cell communication could be achieved by combining the artificial cell vector with designed plasmids. The plasmids were designed with reference to the quorum sensing (QS) system of bacteria, which relies largely on cognate acyl-homoserine lactone (AHL)/transcription factor pairs. In a communication pair, the "sender" is the artificial or natural cell that produces the AHL signal molecule by synthesizing the corresponding signal synthase, which catalyzes the conversion of S-adenosyl-L-methionine (SAM) into AHL, while the "receiver" is the artificial or natural cell that senses the quorum sensing signaling molecule from the "sender" and in turn expresses the gene of interest. In the experiment, GFP was first immobilized inside the hydrogel particle to prove that the functionalized hydrogel particles could be used for protein binding. After that, successful artificial cell-artificial cell and artificial cell-natural cell communication was demonstrated by recording the increase in fluorescence signal. The hydrogel-based artificial cell designed in this work can help in studying the complex communication systems of bacteria, and it can be further developed for therapeutic applications.

Keywords: artificial cell, cell-free system, gene circuit, synthetic biology

Procedia PDF Downloads 130
556 Computer Aided Shoulder Prosthesis Design and Manufacturing

Authors: Didem Venus Yildiz, Murat Hocaoglu, Murat Dursun, Taner Akkan

Abstract:

The shoulder joint is a more complex structure than the hip or knee joints. In addition to this overall complexity, two factors contribute to the insufficient outcomes of shoulder replacement: shoulder prosthesis design is far from fully developed, and it is difficult to place these prostheses due to shoulder anatomy. The glenohumeral joint is the most complex joint of the human shoulder. Various treatments exist for shoulder failures, such as total shoulder arthroplasty and reverse total shoulder arthroplasty. Because its design is reversed with respect to normal shoulder anatomy, reverse total shoulder arthroplasty has different physiological and biomechanical properties. The post-operative success of this arthroplasty depends on an improved design of the reverse total shoulder prosthesis, and design success can be increased by biomechanical and computational analyses. In this study, data for both shoulders of a patient with a right-side fracture were collected by a 3D Computed Tomography (CT) machine in DICOM format. These data were transferred to 3D medical image processing software (Mimics, Materialise, Leuven, Belgium) to reconstruct the geometry of the patient's left and right shoulder bones. The resulting 3D geometry model of the fractured shoulder was used to construct a reverse total shoulder prosthesis in the 3-matic software. Finite element (FE) analysis was conducted to compare the intact and prosthetic shoulders in terms of stress distribution and displacements. A physiological reaction force of 800 N, corresponding to body weight, was applied, and the resultant values of the FE analysis were compared for both shoulders. The analysis of the performance of the reverse shoulder prosthesis could enhance knowledge of prosthetic design.

Keywords: reverse shoulder prosthesis, biomechanics, finite element analysis, 3D printing

Procedia PDF Downloads 142
555 Evaluation of the Dry Compressive Strength of Refractory Bricks Developed from Local Kaolin

Authors: Olanrewaju Rotimi Bodede, Akinlabi Oyetunji

Abstract:

Modeling the dry compressive strength of sodium silicate-bonded kaolin refractory bricks was studied. The materials used for this research work included refractory clay obtained from the Ijero-Ekiti kaolin deposit at coordinates 7º 49´N and 5º 5´E and sodium silicate obtained from the open market in Lagos at coordinates 6°27′11″N 3°23′45″E, all in the South Western part of Nigeria. The mineralogical composition of the kaolin clay was determined using an Energy Dispersive X-Ray Fluorescence Spectrometer (ED-XRF). The clay samples were crushed and sieved using a laboratory pulveriser, ball mill and sieve shaker, respectively, to obtain 100 μm diameter particles. A manual pipe extruder of dimensions 30 mm diameter by 43.30 mm height was used to prepare the samples, with the percentage volume of sodium silicate varied at 5 %, 7.5 %, 10 %, 12.5 %, 15 %, 17.5 %, 20 % and 22.5 % while kaolin and water were kept at 50 % and 5 % respectively for the compressive test. The samples were left to dry in the open laboratory atmosphere for 24 hours to remove moisture and were then fired in an electrically powered muffle furnace. Firing was done at the following temperatures: 700 ºC, 750 ºC, 800 ºC, 850 ºC, 900 ºC, 950 ºC, 1000 ºC and 1100 ºC. A compressive strength test was carried out on the dried samples using a Testometric Universal Testing Machine (TUTM) equipped with a computer and printer; an optimum compressive strength of 4.41 kN/mm² was obtained at 12.5 % sodium silicate. The experimental results were modeled with the MATLAB and Origin packages using polynomial regression equations that predicted estimated values for the dry compressive strength, later validated with Pearson's rank correlation coefficient, giving a very high positive correlation value of 0.97.
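The modeling-and-validation step can be illustrated with a short sketch: fit a polynomial regression of dry compressive strength against sodium silicate content, then correlate predicted with measured values. The data points below are hypothetical placeholders (chosen only to show an optimum near 12.5 %), not the study's measurements, and NumPy stands in for the MATLAB/Origin workflow.

```python
# Sketch: polynomial regression of strength vs. binder content, validated
# with a correlation coefficient. Data values are hypothetical.
import numpy as np

silicate_pct = np.array([5.0, 7.5, 10.0, 12.5, 15.0, 17.5, 20.0, 22.5])
strength = np.array([2.1, 2.9, 3.8, 4.41, 4.0, 3.3, 2.6, 2.0])  # hypothetical

# A quadratic captures a single optimum in binder content.
coeffs = np.polyfit(silicate_pct, strength, deg=2)
predicted = np.polyval(coeffs, silicate_pct)

# Correlation between measured and model-predicted values.
r = np.corrcoef(strength, predicted)[0, 1]
print(f"fitted coefficients: {coeffs}")
print(f"correlation r = {r:.3f}")
```

A high r between predicted and measured values, as the abstract reports (0.97), indicates the polynomial captures the strength-binder relationship well.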

Keywords: dry compressive strength, kaolin, modeling, sodium silicate

Procedia PDF Downloads 441
554 Stainless Steel Swarfs for Replacement of Copper in Non-Asbestos Organic Brake-Pads

Authors: Vishal Mahale, Jayashree Bijwe, Sujeet K. Sinha

Abstract:

Extensive research is currently underway in the field of friction materials (FMs) to develop eco-friendly brake materials by removing copper, a proven threat to aquatic organisms. Researchers are keen to find a solution for copper-free FMs using different metals or no metals at all. Steel wool is used as a reinforcement in non-asbestos organic (NAO) FMs mainly to increase thermal conductivity, but it most often affects wear adversely and also adds friction fluctuations. Copper and brass used to be the preferred choices because of their superior performance in almost every aspect except cost, but they are being phased out because of the proven threat to aquatic life. Keeping this in view, a series of realistic multi-ingredient FMs containing stainless steel (SS) swarfs as the theme ingredient in increasing amounts (0, 5, 10 and 15 wt. %- S₅, S₁₀, and S₁₅) was developed in the form of brake-pads. One more composite containing copper instead of SS swarfs (C₁₀) was developed. These composites were characterized for physical, mechanical, chemical and tribological performance, and were tribo-evaluated on a Chase machine with various test loops as per the SAE J661 standard. Various performance parameters, such as normal µ, hot µ, performance µ, fade µ, recovery µ, % fade, % recovery and wear resistance, were used to evaluate the role of the amount of SS swarfs in the FMs. It was concluded that SS swarfs proved successful in replacing Cu in almost all respects except wear resistance. With an increase in the amount of SS swarfs, most of the properties improved. The worn surfaces and wear mechanisms were studied using SEM and EDAX techniques.

Keywords: Chase type friction tester, copper-free, non-asbestos organic (NAO) friction materials, stainless steel swarfs

Procedia PDF Downloads 175
553 Assessment of Breeding Soundness by Comparative Radiography and Ultrasonography of Rabbit Testes

Authors: Adenike O. Olatunji-Akioye, Emmanual B Farayola

Abstract:

In order to improve the recommended daily intake of animal protein among Nigerians, there is an upsurge in the breeding of hitherto shunned food animals, one of which is the rabbit. Radiography and ultrasonography are tools for diagnosing disease and evaluating the anatomical architecture of parts of the body non-invasively. As the rabbit becomes a more important food animal, improved breeding requires that the best of the species form the breeding stock, which will usually depend on breeding soundness; this may be evaluated by assessing the male reproductive organs with these tools. Four intact male rabbits weighing between 1.2 and 1.5 kg were acquired and acclimatized for 2 weeks. Dorsoventral views of the testes were acquired using a digital radiographic machine, and a 5 MHz portable ultrasound scanner was used to acquire images of the testes in longitudinal, sagittal and transverse planes. The radiographic images revealed soft tissue images of the testes in all rabbits. The testes lie in individual scrotal sacs on both sides of the midline at the level of the caudal vertebrae and are thus superimposed by the caudal vertebrae and the caudal limits of the pelvic girdle. The ultrasonographic images revealed mostly homogeneously hypoechogenic testes and a hyperechogenic mediastinum testis. The dorsal and ventral poles of the testes were heterogeneously hypoechogenic and correspond to the epididymis and spermatic cord. The rabbit is unique in its ability to retract the testes, particularly when stressed, so careful and stress-free handling during the procedures is of paramount importance. Imaging of rabbit testes can be done safely with both methods, but ultrasonography is the better method for assessing and evaluating soundness for breeding.

Keywords: breeding soundness, rabbit, radiography, ultrasonography

Procedia PDF Downloads 115
552 The Role of Hypothalamus Mediators in Energy Imbalance

Authors: Maftunakhon Latipova, Feruza Khaydarova

Abstract:

Obesity is considered a chronic metabolic disease that occurs at any age. Regulation of body weight is carried out through the complex interaction of interrelated systems that control the body's energy balance. Energy imbalance, in which the supply of energy from food exceeds the energy needs of the body, is the cause of obesity and overweight. Obesity is closely related to impaired appetite regulation, and the hypothalamus is a key site for the neural regulation of food consumption. The nuclei of the hypothalamus are interconnected and interdependent in receiving, integrating and sending hunger signals to regulate appetite. Purpose of the study: to identify markers of eating behavior. Materials and methods: Screening was carried out to identify eating disorders in 200 men and women aged 18 to 35 years with overweight and obesity and to measure the markers Orexin A and Neuropeptide Y. Questionnaires covering eating disorders and hidden depression (on the Zung scale) were administered to the 200 participants. Anthropometry was measured by waist circumference, hip circumference, BMI, weight and height. Based on the collected data, three groups were formed: people with obesity, people with overweight, and a control group of healthy people. Results: Of the 200 persons analysed, 86% had eating disorders, and 60% of these eating disorders were associated with childhood. According to the Zung test results, about 37% were in a normal condition, 20% had a mild depressive disorder, 25% a moderate depressive disorder, and 18% suffered from a severe depressive disorder without knowing it. The first group, people with obesity, had eating disorders and moderate or severe depressive disorder; the second group was overweight with mild depressive disorder. According to the laboratory data, the first group had the lowest serum concentrations of Orexin A and Neuropeptide Y.
Conclusions: Overweight and obesity are the first signal of many diseases, and prevention and detection of these disorders will prevent various diseases, including type 2 diabetes. The etiology of obesity is associated with eating disorders and signal transmission in the orexinergic system of the hypothalamus.

Keywords: obesity, endocrinology, hypothalamus, overweight

Procedia PDF Downloads 58
551 Determination of Medians of Biochemical Maternal Serum Markers in Healthy Women Giving Birth to Normal Babies

Authors: Noreen Noreen, Aamir Ijaz, Hamza Akhtar

Abstract:

Background: Screening plays a major role in detecting chromosomal abnormalities, Down syndrome, neural tube defects and other inborn diseases of the newborn. Serum biomarkers in the second trimester are useful in determining the risk of the most common chromosomal anomalies; these tests include alpha-fetoprotein (AFP), human chorionic gonadotropin (hCG), unconjugated oestriol (uE3) and inhibin-A. The quadruple biomarker test is valuable in diagnosing congenital pathology during pregnancy, but these procedures do not form part of the routine health care of pregnant women in Pakistan, so median values are lacking for the Pakistani population. Objective: To determine median values of biochemical maternal serum markers in the local population during second-trimester maternal screening. Study settings: Department of Chemical Pathology and Endocrinology, Armed Forces Institute of Pathology (AFIP), Rawalpindi. Methods: A cross-sectional study for the estimation of reference values. By non-probability consecutive sampling, 155 healthy pregnant women of 30-40 years of age were included; as non-parametric statistics were used, the minimum sample size was 120. Results: In total, 155 women were enrolled in this study. Their ages ranged from 30 to 39 years, and 39 per cent were less than 34 years old. Mean maternal age was 33.46±2.35 (SD) years and mean maternal body weight was 54.98±2.88 kg. Median values of the quadruple markers were calculated from the 15th to the 18th week of gestation and will be used for the calculation of multiples of the median (MoM) for trisomy 21 screening in this gestational window.
The median values observed at 15 weeks of gestation were hCG 36650 mIU/ml, AFP 23.3 IU/ml, uE3 3.5 nmol/L and inhibin-A 198 ng/L; at 16 weeks, hCG 29050 mIU/ml, AFP 35.4 IU/ml, uE3 4.1 nmol/L and inhibin-A 179 ng/L; at 17 weeks, hCG 28450 mIU/ml, AFP 36.0 IU/ml, uE3 6.7 nmol/L and inhibin-A 176 ng/L; and at 18 weeks, hCG 25200 mIU/ml, AFP 38.2 IU/ml, uE3 8.2 nmol/L and inhibin-A 190 ng/L. All comparisons were significant (p-value < 0.005) with a 95% confidence interval (CI); the level of significance of the study was set at 5% based on the literature. Conclusion: The median values for these four biomarkers in Pakistani pregnant women can be used to calculate MoM.
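The MoM calculation these medians feed into can be sketched briefly: a marker measurement is divided by the population median for the same gestational week. The medians below are the 15-week values reported in the abstract; the patient measurement is hypothetical.

```python
# Sketch: multiples of the median (MoM) from gestational-age-specific medians.
# Medians are the reported 15-week values; the patient value is hypothetical.
medians_week15 = {
    "hCG": 36650.0,     # mIU/ml
    "AFP": 23.3,        # IU/ml
    "uE3": 3.5,         # nmol/L
    "InhibinA": 198.0,  # ng/L
}

def mom(marker, value, medians=medians_week15):
    """Return the multiple of the median for one marker measurement."""
    return value / medians[marker]

print(mom("AFP", 46.6))  # hypothetical AFP of 46.6 IU/ml -> 2.0 MoM
```

In practice, each gestational week gets its own median table, and risk algorithms interpret the resulting MoM values.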

Keywords: screening, down syndrome, quadruple test, second trimester, serum biomarkers

Procedia PDF Downloads 162
550 Markov Random Field-Based Segmentation Algorithm for Detection of Land Cover Changes Using Uninhabited Aerial Vehicle Synthetic Aperture Radar Polarimetric Images

Authors: Mehrnoosh Omati, Mahmod Reza Sahebi

Abstract:

Information on land use/land cover change plays an essential role in environmental assessment, planning and management in regional development. Remotely sensed imagery is widely used to provide information in many change detection applications. Polarimetric synthetic aperture radar (PolSAR) imagery, with its capability to discriminate between different scattering mechanisms, is a powerful tool for environmental monitoring applications. This paper proposes a new boundary-based segmentation algorithm as a fundamental step for land cover change detection. In this method, two PolSAR images are first segmented by integrating a marker-controlled watershed algorithm with a coupled Markov random field (MRF). Then, object-based classification is performed to determine changed/unchanged image objects. Compared with a pixel-based support vector machine (SVM) classifier, this novel segmentation algorithm significantly reduces the speckle effect in PolSAR images and improves the accuracy of binary classification at the object level. Experimental results on Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) polarimetric images show a 3% and 6% improvement in overall accuracy and kappa coefficient, respectively. The proposed method also correctly distinguishes homogeneous image parcels.
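The two accuracy metrics reported above can be computed from a binary (changed/unchanged) confusion matrix as follows. This is a generic sketch; the matrix counts below are hypothetical, not the UAVSAR results.

```python
# Sketch: overall accuracy and Cohen's kappa from a confusion matrix.
import numpy as np

def overall_accuracy(cm):
    # Fraction of correctly classified samples (diagonal over total).
    return np.trace(cm) / cm.sum()

def kappa(cm):
    # Agreement corrected for chance: (po - pe) / (1 - pe).
    n = cm.sum()
    po = np.trace(cm) / n                      # observed agreement
    pe = (cm.sum(0) * cm.sum(1)).sum() / n**2  # chance agreement
    return (po - pe) / (1 - pe)

cm = np.array([[80, 10],    # rows: reference class, columns: predicted class
               [5, 105]])   # hypothetical counts
print(overall_accuracy(cm), kappa(cm))
```

Kappa penalizes agreement that would occur by chance, which is why the reported kappa improvement (6%) can exceed the overall-accuracy improvement (3%).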

Keywords: coupled Markov random field (MRF), environment, object-based analysis, polarimetric SAR (PolSAR) images

Procedia PDF Downloads 203
549 Analysis of the Cutting Force with Ultrasonic Assisted Manufacturing of Steel (S235JR)

Authors: Philipp Zopf, Franz Haas

Abstract:

The manufacturing of very hard and refractory materials such as ceramics, glass or carbide poses particular challenges to tools and machines. Specifically for this application area, the company Sauer GmbH developed ultrasonic tool holders that work in a frequency range from 15 to 60 kHz and superimpose an oscillation on the common tool movement along the vertical axis. This technique causes a structural weakening in the contact area and facilitates machining. The possibility of reducing the force by up to 30 percent for these special materials, especially in the drilling of carbide with diamond tools, motivated the authors to try to expand the application range of this method. To make the results evaluable, the authors decided to start with existing processes in which the positive influence of the ultrasonic assistance is proven, in order to understand the mechanism. The grinding process the institute uses to machine the materials mentioned above and the machining of steel could not be more different: in the first case, the tools have geometrically undefined edges; in the second case, the edges are geometrically defined. To obtain valid results, the authors decided to investigate two manufacturing methods, drilling and milling. The main target of the investigation is to reduce the cutting force, measured with a force measurement platform underneath the workpiece. Owing to the direction of the ultrasonic assistance, the authors expect lower cutting forces and longer tool endurance in the drilling process. To verify the frequencies and amplitudes, an FFT analysis is performed. It shows increasing damping depending on the infeed rate of the tool, accompanied by a corresponding reduction in the amplitude of the cutting force.
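The FFT step mentioned above amounts to computing the amplitude spectrum of the sampled force signal and reading off the ultrasonic component. The sketch below uses a synthetic signal (assumed 200 kHz sampling, 20 kHz ultrasonic component) in place of data from the force measurement platform.

```python
# Sketch: single-sided amplitude spectrum of a sampled cutting-force signal.
# Sampling rate and signal content are assumptions for illustration.
import numpy as np

fs = 200_000                 # sampling rate in Hz (assumed)
t = np.arange(2000) / fs     # 10 ms window, 2000 samples
# Synthetic force: 100 N static load plus a 5 N, 20 kHz ultrasonic component.
force = 100.0 + 5.0 * np.sin(2 * np.pi * 20_000 * t)

spectrum = np.fft.rfft(force)
freqs = np.fft.rfftfreq(force.size, d=1 / fs)
amplitude = 2 * np.abs(spectrum) / force.size  # single-sided amplitude in N

peak_freq = freqs[np.argmax(amplitude[1:]) + 1]  # skip the DC bin
print(f"dominant oscillation at {peak_freq:.0f} Hz")
```

Tracking the amplitude of this spectral peak over different infeed rates is one way to quantify the damping the authors describe.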

Keywords: drilling, machining, milling, ultrasonic

Procedia PDF Downloads 258
548 Scheduling in a Single-Stage, Multi-Item Compatible Process Using Multiple Arc Network Model

Authors: Bokkasam Sasidhar, Ibrahim Aljasser

Abstract:

The problem of finding optimal schedules for each piece of equipment in a production process is considered, for a process that consists of a single manufacturing stage and can handle different types of products, where changeover from handling one type of product to another incurs certain costs. The machine capacity is determined by the upper limit on the quantity that can be processed for each of the products in a set-up. The changeover costs increase with the number of set-ups; hence, to minimize the costs associated with product changeover, planning should ensure that similar types of products are processed successively, so that the total number of changeovers, and in turn the associated set-up costs, is minimized. The problem of cost minimization is equivalent to the problem of minimizing the number of set-ups or, equivalently, maximizing the capacity utilization between every set-up, i.e., maximizing the total capacity utilization. Further, production is usually planned against customers' orders, and different customers' orders are generally assigned one of two priorities: "normal" or "priority". The problem of production planning in such a situation can be formulated as a Multiple Arc Network (MAN) model and solved sequentially using the algorithm for maximizing flow along a MAN and the algorithm for maximizing flow along a MAN with priority arcs. The model aims to provide an optimal production schedule with the objective of maximizing capacity utilization, so that the customer-wise delivery schedules are fulfilled while keeping the customer priorities in view. Algorithms are presented for solving the MAN formulation of production planning with customer priorities, and the application of the model is demonstrated through numerical examples.
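The network-flow view of the scheduling problem can be illustrated with a generic max-flow toy: orders on one side, machine set-ups on the other, with arc capacities bounding demand and per-set-up capacity. This is a minimal sketch using NetworkX, not the authors' multiple-arc formulation with priority arcs; all node names and capacities are hypothetical.

```python
# Sketch: capacity utilization as a max-flow problem.
# Source -> orders (demand), orders -> set-ups (compatibility),
# set-ups -> sink (machine capacity per set-up). Values are hypothetical.
import networkx as nx

G = nx.DiGraph()
G.add_edge("src", "order1", capacity=40)
G.add_edge("src", "order2", capacity=30)
G.add_edge("order1", "setupA", capacity=40)
G.add_edge("order2", "setupA", capacity=30)
G.add_edge("order2", "setupB", capacity=30)
G.add_edge("setupA", "sink", capacity=50)
G.add_edge("setupB", "sink", capacity=25)

flow_value, flow = nx.maximum_flow(G, "src", "sink")
print(flow_value)  # total quantity schedulable across the set-ups
```

In the authors' MAN model, a second pass with priority arcs would route the "priority" orders first before filling remaining capacity with "normal" orders.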

Keywords: scheduling, maximal flow problem, multiple arc network model, optimization

Procedia PDF Downloads 390
547 Modeling of Particle Reduction and Volatile Compounds Profile during Chocolate Conching by Electronic Nose and Genetic Programming (GP) Based System

Authors: Juzhong Tan, William Kerr

Abstract:

Conching is a critical procedure in chocolate processing, in which characteristic flavors develop and a smooth mouthfeel emerges as the particle size of the cocoa mass and other additives is reduced. Determining the particle size and the volatile compound profile of the cocoa mass is therefore important for chocolate manufacturers to ensure the quality of chocolate products. Currently, precise particle size measurement is usually done by laser scattering, which is expensive and inaccessible to small and medium-sized chocolate manufacturers; alternatives such as micrometers and microscopy provide poor measurements and little information. Volatile compound analysis of cocoa during conching has similar problems of high cost and limited accessibility. In this study, a self-made electronic nose consisting of gas sensors (TGS 800 and 2000 series) was inserted into a conching machine and used to monitor the volatile compound profile of chocolate during conching. A genetic programming model was established that correlates the volatile compound profiles, together with the cocoa content, sugar content, and temperature during conching, with the particle size of the chocolate particles. The model was used to predict particle size reduction for chocolates with different cocoa-mass-to-sugar ratios (1:2, 1:1, 1.5:1, 2:1) at eight conching times (15 min, 30 min, 1 h, 1.5 h, 2 h, 4 h, 8 h, and 24 h), and the predictions were compared to laser scattering measurements of the same chocolate samples. 91.3% of the predictions were within ±5% of the laser scattering measurement, and 99.3% were within ±10%.
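Genetic programming evolves symbolic expressions rather than fitting a fixed-form model. The authors' sensor-to-particle-size model is not reproduced in the abstract, so the following is a deliberately tiny sketch of the mechanism only (random expression trees, truncation selection, subtree mutation), with the function set, constants, and stand-in target data all invented for illustration:

```python
import random

random.seed(0)

OPS = {'+': lambda a, b: a + b, '-': lambda a, b: a - b, '*': lambda a, b: a * b}
TERMINALS = ['x', 1.0, 2.0]          # one input variable plus two constants

def random_tree(depth=3):
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def fitness(tree, data):
    try:
        return sum((evaluate(tree, x) - y) ** 2 for x, y in data)
    except OverflowError:            # runaway trees score worst
        return float('inf')

def mutate(tree):
    if not isinstance(tree, tuple) or random.random() < 0.3:
        return random_tree(2)        # replace this subtree entirely
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left), right)
    return (op, left, mutate(right))

def evolve(data, pop_size=50, generations=30):
    pop = [random_tree() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: fitness(t, data))
        survivors = pop[:pop_size // 2]          # truncation selection
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return min(pop, key=lambda t: fitness(t, data))

# stand-in target in place of real sensor data: "particle size" y = 2x + 1
data = [(float(x), 2.0 * x + 1.0) for x in range(6)]
best = evolve(data)
```

A production GP system would add crossover and multi-variable inputs (sensor readings, recipe factors, temperature), but the evolve-evaluate-select loop is the same.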

Keywords: cocoa bean, conching, electronic nose, genetic programming

Procedia PDF Downloads 236
546 Design and Implementation of Control System in Underwater Glider of Ganeshblue

Authors: Imam Taufiqurrahman, Anugrah Adiwilaga, Egi Hidayat, Bambang Riyanto Trilaksono

Abstract:

The autonomous underwater vehicle (AUV) glider is a recent development in underwater vehicles and is one of the autonomous underwater vehicles being developed in Indonesia. Gliding is achieved by controlling the buoyancy and attitude of the vehicle using actuators within the vehicle. The gliding motion mechanism is expected to reduce the vehicle's energy consumption and thereby increase its cruising range while performing missions. The control system on the vehicle consists of three parts: pitch attitude control, the buoyancy engine controller, and yaw control. The buoyancy and pitch controls operate sequentially, following a finite state machine with pitch angle and diving depth as inputs, to produce a gliding cycle, while yaw is controlled through the rudder for the guidance system. This research focuses on the design and implementation of an anti-windup PID control system for the AUV glider. The control system is implemented on an ARM TS-7250-V2 device together with a mathematical model of the vehicle in MATLAB, using the hardware-in-the-loop simulation (HILS) method. The TS-7250-V2 was chosen because it complies with industry standards and offers high computing capability with minimal power consumption. The results show that the control system in the HILS process can produce a gliding cycle with the desired depth and operating angle. From experiments with half-control and full-control modes, we conclude that full-control mode tracks the reference more precisely, while half-control mode is more efficient in carrying out the mission.
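The abstract names PID with anti-windup but not its exact structure. One common variant is back-calculation anti-windup, sketched below against a toy first-order "pitch" plant; the gains, limits, and plant model are illustrative assumptions, not the glider's actual parameters:

```python
class PIDAntiWindup:
    """Discrete PID with back-calculation anti-windup (illustrative gains)."""

    def __init__(self, kp, ki, kd, dt, out_min, out_max, kaw=1.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.out_min, self.out_max = out_min, out_max
        self.kaw = kaw              # back-calculation (anti-windup) gain
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        unsat = self.kp * error + self.ki * self.integral + self.kd * derivative
        sat = max(self.out_min, min(self.out_max, unsat))
        # bleed the integrator while the actuator output is saturated
        self.integral += (error + self.kaw * (sat - unsat)) * self.dt
        return sat

# toy first-order plant: pitch-angle rate equals the (saturated) command
pid = PIDAntiWindup(kp=2.0, ki=0.5, kd=0.0, dt=0.05, out_min=-1.0, out_max=1.0)
angle, u = 0.0, 0.0
for _ in range(400):                 # 20 s of simulated time
    u = pid.update(1.0, angle)       # track a setpoint of 1.0
    angle += u * 0.05
```

In a HILS setup, the plant update above would be replaced by the MATLAB vehicle model, with the controller running on the embedded board.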

Keywords: control system, PID, underwater glider, marine robotics

Procedia PDF Downloads 358
545 A Relative Entropy Regularization Approach for Fuzzy C-Means Clustering Problem

Authors: Ouafa Amira, Jiangshe Zhang

Abstract:

Clustering is an unsupervised machine learning technique; its aim is to extract data structures in which similar data objects are grouped in the same cluster, whereas dissimilar objects are grouped in different clusters. Clustering methods are widely utilized in fields such as image processing, computer vision, and pattern recognition. Fuzzy c-means (FCM) clustering is one of the best-known fuzzy clustering methods. It is based on solving an optimization problem that minimizes a given cost function; the minimization aims to decrease the dissimilarity inside clusters, where dissimilarity is measured by the distances between data objects and cluster centers. The degree of belonging of a data point to a cluster is measured by a membership function taking values in the interval [0, 1]. In FCM clustering, the membership degrees are constrained so that the sum of a data object's memberships across all clusters equals one. This constraint can cause several problems, especially when the data objects lie in a noisy space. Regularization has become part of the fuzzy c-means technique: it introduces additional information in order to solve an ill-posed optimization problem. In this study, we focus on regularization by a relative entropy term, where the optimization problem again aims to minimize the dissimilarity inside clusters. Our objective is to find an appropriate membership degree for each data object, because appropriate membership degrees lead to accurate clustering results. Our clustering results on synthetic data sets, Gaussian-based data sets, and real-world data sets show that the proposed model achieves good accuracy.
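The paper's exact relative-entropy formulation is not given in the abstract. The sketch below implements a common entropy-regularised FCM variant in which the constrained minimisation yields softmax memberships u_ij ∝ exp(-||x_j − c_i||² / γ) and membership-weighted centre updates; the data, bandwidth γ, and deterministic initialisation are illustrative:

```python
import math

def entropy_fcm(points, k, gamma=1.0, iters=50):
    """Entropy-regularised fuzzy c-means: memberships are a softmax of
    negative squared distances; centres are membership-weighted means."""
    # deterministic, spread-out initialisation for the sketch
    centers = [points[i * (len(points) // k)] for i in range(k)]
    memberships = []
    for _ in range(iters):
        # membership update: u_ij proportional to exp(-||x_j - c_i||^2 / gamma)
        memberships = []
        for x in points:
            w = [math.exp(-sum((a - b) ** 2 for a, b in zip(x, c)) / gamma)
                 for c in centers]
            s = sum(w)
            memberships.append([wi / s for wi in w])
        # centre update: membership-weighted mean of all points
        dims = range(len(points[0]))
        for i in range(k):
            total = sum(m[i] for m in memberships)
            centers[i] = tuple(
                sum(m[i] * x[d] for m, x in zip(memberships, points)) / total
                for d in dims)
    return centers, memberships

points = [(0.0, 0.0), (0.2, 0.1), (0.1, 0.2),
          (5.0, 5.0), (5.2, 4.9), (4.9, 5.1)]
centers, mem = entropy_fcm(points, 2, gamma=1.0)
```

Each membership row sums to one by construction, which is exactly the constraint the entropy regulariser softens relative to hard assignment.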

Keywords: clustering, fuzzy c-means, regularization, relative entropy

Procedia PDF Downloads 251
544 Fabrication of Cheap Novel 3d Porous Scaffolds Activated by Nano-Particles and Active Molecules for Bone Regeneration and Drug Delivery Applications

Authors: Mostafa Mabrouk, Basma E. Abdel-Ghany, Mona Moaness, Bothaina M. Abdel-Hady, Hanan H. Beherei

Abstract:

Tissue engineering became a promising field for bone repair and regenerative medicine in which cultured cells, scaffolds and osteogenic inductive signals are used to regenerate tissues. The annual cost of treating bone defects in Egypt has been estimated to be many billions, while enormous costs are spent on imported bone grafts for bone injuries, tumors, and other pathologies associated with defective fracture healing. The current study is aimed at developing a more strategic approach in order to speed-up recovery after bone damage. This will reduce the risk of fatal surgical complications and improve the quality of life of people affected with such fractures. 3D scaffolds loaded with cheap nano-particles that possess an osteogenic effect were prepared by nano-electrospinning. The Microstructure and morphology characterizations of the 3D scaffolds were monitored using scanning electron microscopy (SEM). The physicochemical characterization was investigated using X-ray diffractometry (XRD) and infrared spectroscopy (IR). The Physicomechanical properties of the 3D scaffold were determined by a universal testing machine. The in vitro bioactivity of the 3D scaffold was assessed in simulated body fluid (SBF). The bone-bonding ability of novel 3D scaffolds was also evaluated. The obtained nanofibrous scaffolds demonstrated promising microstructure, physicochemical and physicomechanical features appropriate for enhanced bone regeneration. Therefore, the utilized nanomaterials loaded with the drug are greatly recommended as cheap alternatives to growth factors.

Keywords: bone regeneration, cheap scaffolds, nanomaterials, active molecules

Procedia PDF Downloads 173
543 On the Existence of Homotopic Mapping Between Knowledge Graphs and Graph Embeddings

Authors: Jude K. Safo

Abstract:

Knowledge Graphs (KG) and their relation to Graph Embeddings (GE) represent a unique data structure in the landscape of machine learning, relative to image, text, and acoustic data. Unlike the latter, GEs are the only data structure sufficient for representing the hierarchically dense, semantic information needed for use cases like supply chain data and protein folding, where the search space exceeds the limits of traditional search methods (e.g., PageRank, Dijkstra's algorithm). While GEs are effective for compressing low-rank tensor data, at scale they introduce a new problem of data retrieval, which we also observe in large language models. Notable attempts such as TransE, TransR, and other prominent industry standards have shown peak performance just north of 57% on the WN18 and FB15K benchmarks, insufficient for practical industry applications, and they are limited in scope to next node/link prediction. Traditional linear methods like Tucker, CP, PARAFAC, and CANDECOMP quickly hit memory limits on tensors exceeding 6.4 million nodes. This paper outlines a topological framework for linear mappings between concepts in KG space and GE space that preserve cardinality. Most importantly, we introduce a traceable framework for composing dense linguistic structures, and we demonstrate the model's performance on the WN18 benchmark. The model does not rely on large language models (LLMs), though the applications are certainly relevant there as well.
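TransE, cited here as a baseline, scores a triple (h, r, t) by how nearly h + r ≈ t in the embedding space. A toy scoring-and-ranking sketch makes the mechanism concrete; the embeddings below are hand-set for illustration, not trained:

```python
def transe_score(h, r, t):
    """TransE plausibility: negative L1 distance ||h + r - t||_1 (higher is better)."""
    return -sum(abs(hi + ri - ti) for hi, ri, ti in zip(h, r, t))

# hand-set 2-d embeddings, chosen so that france + capital ~ paris
emb = {
    'france':  [0.9, 0.1],
    'paris':   [1.0, 0.9],
    'berlin':  [0.0, 0.5],
    'capital': [0.1, 0.8],          # the relation vector
}

def predict_tail(head, rel, candidates):
    """Link prediction: rank candidate tails by TransE score."""
    return max(candidates, key=lambda c: transe_score(emb[head], emb[rel], emb[c]))
```

The WN18/FB15K figures quoted in the abstract come from exactly this kind of ranking evaluation, applied over all candidate entities rather than a hand-picked pair.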

Keywords: representation theory, large language models, graph embeddings, applied algebraic topology, applied knot theory, combinatorics

Procedia PDF Downloads 57
542 Hybrid Method for Smart Suggestions in Conversations for Online Marketplaces

Authors: Yasamin Rahimi, Ali Kamandi, Abbas Hoseini, Hesam Haddad

Abstract:

Online/offline chat is a convenient feature of electronic markets for second-hand products, in which potential customers would like more information about the products to fill the information gap between buyers and sellers. Online peer-to-peer marketplaces are trying to create artificial-intelligence-based systems that help customers ask more informative questions more easily. In this article, we introduce a method for the question/answer system that we developed for the top-ranked electronic market in Iran, called Divar. When it comes to second-hand products, incomplete product information results in a loss for the buyer, and one way to balance buyer and seller information about a product is to help the buyer ask more informative questions when purchasing. A short time to start the conversation and reach its desired result was another of our main goals, and it was achieved according to A/B test results. In this paper, we propose and evaluate a method for suggesting questions and answers in the messaging platform of the e-commerce website Divar; such systems help users gather knowledge about a product more easily and quickly, all from the Divar database. We collected a dataset of around 2 million messages in colloquial Persian; for each product category, we gathered 500K messages, of which only 2K were tagged, so semi-supervised methods were used. To publish the proposed model to production, it must be fast enough to process 10 million messages daily on CPU processors; to reach that speed, faster and simpler models are preferred over deep neural models in many subtasks. The proposed method, which requires only a small amount of labeled data, is currently used in Divar production on CPU processors; 15% of buyers' and sellers' messages in conversations are chosen directly from our model output, and more than 27% of buyers have used the model's suggestions in at least one daily conversation.

Keywords: smart reply, spell checker, information retrieval, intent detection, question answering

Procedia PDF Downloads 174
541 Multi-Stage Classification for Lung Lesion Detection on CT Scan Images Applying Medical Image Processing Technique

Authors: Behnaz Sohani, Sahand Shahalinezhad, Amir Rahmani, Aliyu Aliyu

Abstract:

Recently, medical imaging, and specifically medical image processing, has become one of the most dynamically developing areas of medical science, leading to new approaches to the prevention, diagnosis, and treatment of various diseases. In diagnosing lung cancer, medical professionals rely on computed tomography (CT) scans, in which failure to correctly identify masses can lead to incorrect diagnosis or sampling of lung tissue. Identification and demarcation of masses, in terms of detecting cancer within lung tissue, are critical challenges in diagnosis. In this work, an image-processing segmentation system has been applied for detection purposes. In particular, the use and validation of a novel lung cancer detection algorithm is presented through simulation, performed on CT images using multilevel thresholding. The proposed technique consists of segmentation, feature extraction, and feature selection and classification; after feature extraction, the features carrying useful information are selected. The output image of lung cancer is obtained with 96.3% accuracy and 87.25%. The purpose of feature extraction in the proposed approach is to transform the raw data into a more usable form for subsequent statistical processing. Future steps will involve refining the current feature extraction method to achieve more accurate resulting images, including further details available to machine vision systems for recognising objects in lung CT scan images.
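The multilevel thresholding step can be illustrated with a brute-force two-threshold Otsu criterion, which chooses the pair of grey levels that maximises the between-class variance of the three resulting classes. This O(L²) sketch is illustrative, not the authors' implementation:

```python
def otsu_two_thresholds(pixels):
    """Brute-force two-threshold Otsu: pick (t1, t2) maximising the
    between-class variance of the three resulting grey-level classes."""
    n = len(pixels)
    mu = sum(pixels) / n                      # global mean intensity
    grey = sorted(set(pixels))
    best, best_t = -1.0, None
    for i, t1 in enumerate(grey):
        for t2 in grey[i + 1:]:
            classes = [[p for p in pixels if p <= t1],
                       [p for p in pixels if t1 < p <= t2],
                       [p for p in pixels if p > t2]]
            if any(not c for c in classes):
                continue                      # every class must be non-empty
            var = sum(len(c) / n * (sum(c) / len(c) - mu) ** 2 for c in classes)
            if var > best:
                best, best_t = var, (t1, t2)
    return best_t
```

On a real CT slice the loop would run over the 256-bin intensity histogram rather than raw pixels, and the resulting classes would seed the segmentation mask passed to feature extraction.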

Keywords: lung cancer detection, image segmentation, lung computed tomography (CT) images, medical image processing

Procedia PDF Downloads 78
540 Vehicle Speed Estimation Using Image Processing

Authors: Prodipta Bhowmik, Poulami Saha, Preety Mehra, Yogesh Soni, Triloki Nath Jha

Abstract:

In India, the smart city concept is growing day by day, and a better traffic management and monitoring system is an important requirement for smart city development. Road accidents are increasing as more vehicles take to the road, and reckless driving is responsible for a large share of them, so an efficient traffic management system is required for all kinds of roads to control traffic speed. The speed limit varies from road to road. Radar systems have been used previously, but their high cost and limited precision have kept them from becoming favored in traffic management systems. Traffic management faces different problems every day, and how to solve them has become a research topic. This paper proposes a computer vision and machine-learning-based automated system for detecting, tracking, and estimating the speed of multiple vehicles using image processing. Detecting vehicles and estimating their speed from real-time video is a difficult task, and the objective of this paper is to do both as accurately as possible. A real-time video is first captured, frames are extracted from the video, vehicles are detected in the frames, tracking of the vehicles begins, and finally the speed of the moving vehicles is estimated. The goal of this method is to develop a cost-friendly system that can detect multiple types of vehicles at the same time.
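Once a tracker (such as the centroid tracker named in the keywords) yields per-frame positions for a vehicle, speed follows from pixel displacement, frame rate, and a pixel-to-metre calibration. A minimal sketch, assuming the calibration constant is known for the camera scene:

```python
import math

def estimate_speed(track, fps, metres_per_pixel):
    """Speed in km/h from consecutive centroid positions (in pixels) of one
    tracked vehicle; metres_per_pixel is a scene-specific calibration."""
    dist_px = sum(math.dist(a, b) for a, b in zip(track, track[1:]))
    seconds = (len(track) - 1) / fps
    return dist_px * metres_per_pixel / seconds * 3.6   # m/s -> km/h
```

In the full pipeline, the `track` list would be produced by the detector (e.g., YOLOv3 via OpenCV) plus the centroid tracker, one list per vehicle ID.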

Keywords: OpenCV, Haar Cascade classifier, DLIB, YOLOV3, centroid tracker, vehicle detection, vehicle tracking, vehicle speed estimation, computer vision

Procedia PDF Downloads 64
539 Combining Diffusion Maps and Diffusion Models for Enhanced Data Analysis

Authors: Meng Su

Abstract:

High-dimensional data analysis often presents challenges in capturing the complex, nonlinear relationships and manifold structures inherent to the data. This article presents a novel approach that leverages the strengths of two powerful techniques, Diffusion Maps and Diffusion Probabilistic Models (DPMs), to address these challenges. By integrating the dimensionality reduction capability of Diffusion Maps with the data modeling ability of DPMs, the proposed method aims to provide a comprehensive solution for analyzing and generating high-dimensional data. The Diffusion Map technique preserves the nonlinear relationships and manifold structure of the data by mapping it to a lower-dimensional space using the eigenvectors of the graph Laplacian matrix. Meanwhile, DPMs capture the dependencies within the data, enabling effective modeling and generation of new data points in the low-dimensional space. The generated data points can then be mapped back to the original high-dimensional space, ensuring consistency with the underlying manifold structure. Through a detailed example implementation, the article demonstrates the potential of the proposed hybrid approach to achieve more accurate and effective modeling and generation of complex, high-dimensional data. Furthermore, it discusses possible applications in various domains, such as image synthesis, time-series forecasting, and anomaly detection, and outlines future research directions for enhancing the scalability, performance, and integration with other machine learning techniques. By combining the strengths of Diffusion Maps and DPMs, this work paves the way for more advanced and robust data analysis methods.
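The dimensionality-reduction half of the pipeline can be sketched directly from the construction described: Gaussian kernel, Markov normalisation (computed here via its symmetric conjugate for numerical stability), and leading nontrivial eigenvectors as coordinates. The bandwidth and toy data are illustrative assumptions, and the generative DPM half is omitted:

```python
import numpy as np

def diffusion_map(X, eps=1.0, dim=1):
    """Basic diffusion map: Gaussian kernel -> Markov normalisation (via the
    symmetric conjugate) -> leading nontrivial eigenvectors as coordinates."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    K = np.exp(-d2 / eps)
    d = K.sum(axis=1)
    A = K / np.sqrt(np.outer(d, d))        # symmetric conjugate of P = D^-1 K
    vals, vecs = np.linalg.eigh(A)         # eigenvalues in ascending order
    psi = vecs / np.sqrt(d)[:, None]       # back to right eigenvectors of P
    idx = np.argsort(vals)[::-1][1:dim + 1]   # skip the trivial eigenvalue-1 vector
    return psi[:, idx] * vals[idx]

# two well-separated point clouds; the first diffusion coordinate should
# separate them with opposite signs
X = np.array([[0.0, 0.0], [0.2, 0.0], [0.0, 0.2],
              [2.0, 2.0], [2.2, 2.0], [2.0, 2.2]])
Y = diffusion_map(X, eps=1.0, dim=1)[:, 0]
```

In the hybrid scheme described above, a DPM would then be trained in this low-dimensional coordinate system, with generated samples mapped back to the ambient space.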

Keywords: diffusion maps, diffusion probabilistic models (DPMs), manifold learning, high-dimensional data analysis

Procedia PDF Downloads 86
538 Evaluating the Satisfaction of Chinese Consumers toward Influencers at TikTok

Authors: Noriyuki Suyama

Abstract:

The progress and spread of digitalization have led to the provision of a variety of new services, and this recent progress can be attributed to rapid developments in science and technology. First, research on and diffusion of artificial intelligence (AI) has made dramatic progress. Around 2000, the third wave of AI research, which had been underway for about 50 years, arrived: machine learning and deep learning became possible, and AI's ability to acquire knowledge, define it, and update its own knowledge quantitatively made the use of big data practical even on commercial PCs. Meanwhile, with the spread of social media, information exchange has become more common in our daily lives, and the lending and borrowing of goods and services, in other words, the sharing economy, has become widespread. This trend is not limited to any one industry, and its momentum is growing as the SDGs take root. In addition, social networking services (SNS), a part of social media, have brought about an evolution of the retail business, and in the past few years SNS involving users or companies have especially flourished. The People's Republic of China (hereinafter "China") is a country that stimulates enormous consumption through its own unique SNS, different from those used in developed countries around the world. This paper focuses on the effectiveness and challenges of influencer marketing by examining the influence of influencers on users' behavior and satisfaction with Chinese SNS. Specifically, a quantitative survey of TikTok users living in China was conducted, with the aim of gaining new insights from the analysis and discussion. As a result, we obtained several important findings.

Keywords: customer satisfaction, social networking services, influencer marketing, Chinese consumers’ behavior

Procedia PDF Downloads 81
537 Urinary Volatile Organic Compound Testing in Fast-Track Patients with Suspected Colorectal Cancer

Authors: Godwin Dennison, C. E. Boulind, O. Gould, B. de Lacy Costello, J. Allison, P. White, P. Ewings, A. Wicaksono, N. J. Curtis, A. Pullyblank, D. Jayne, J. A. Covington, N. Ratcliffe, N. K. Francis

Abstract:

Background: Colorectal symptoms are common but only infrequently represent serious pathology, including colorectal cancer (CRC), and a large number of invasive tests are presently performed for reassurance. We investigated the feasibility of urinary volatile organic compound (VOC) testing as a potential triage tool in patients fast-tracked for assessment of possible CRC. Methods: A prospective, multi-centre, observational feasibility study was performed across three sites. Patients referred on NHS fast-track pathways for potential CRC provided a urine sample, which underwent gas chromatography mass spectrometry (GC-MS), field asymmetric ion mobility spectrometry (FAIMS), and selected ion flow tube mass spectrometry (SIFT-MS) analysis. Patients underwent colonoscopy and/or CT colonography and were grouped as CRC, adenomatous polyp(s), or controls to explore the diagnostic accuracy of the VOC output data, supported by an artificial neural network (ANN) model. Results: 558 patients participated, with 23 (4.1%) diagnosed with CRC. 59% of colonoscopies and 86% of CT colonographies showed no abnormalities. Urinary VOC testing was feasible, acceptable to patients, and applicable within the clinical fast-track pathway. GC-MS showed the highest clinical utility for CRC and polyp detection vs. controls (sensitivity = 0.878, specificity = 0.882, AUROC = 0.884). Conclusion: Urinary VOC testing and analysis are feasible within NHS fast-track CRC pathways. Clinically meaningful differences between patients with cancer, polyps, or no pathology were identified, suggesting that VOC analysis may have future utility as a triage tool. Acknowledgment: This work was funded by an NIHR Research for Patient Benefit grant (ref: PB-PG-0416-20022).

Keywords: colorectal cancer, volatile organic compound, gas chromatography mass spectrometry, field asymmetric ion mobility spectrometry, selected ion flow tube mass spectrometry

Procedia PDF Downloads 79
536 The Development of a Nanofiber Membrane for Outdoor and Activity Related Purposes

Authors: Roman Knizek, Denisa Knizkova

Abstract:

This paper describes the development of a nanofiber membrane for sport and outdoor use at the Technical University of Liberec (TUL) and the subsequent cooperation with a private Czech company that launched the product onto the market. For this membrane, polyurethane was electrospun on a Nanospider spinning machine using a wire string electrode. The resulting nanofiber membrane, with a fiber diameter of 150 nm, was subsequently hydrophobised using a low-vacuum plasma and a C6-type fluorocarbon monomer. After this hydrophobic treatment, the membrane's contact angle was higher than 125°, and its oleophobicity rating was 6. The last step was lamination of the nanofiber membrane with a woven or knitted fabric to create a three-layer laminate, using gravure printing technology and a polyurethane hot-melt adhesive; the gravure roller has a mesh of 17. The resulting three-layer laminate has a water vapour resistance Ret of 1.6 Pa·m²·W⁻¹ (measured in compliance with ISO 11092), is 100% windproof (measured in compliance with ISO 9237), and withstands a water column above 10,000 mm (measured in compliance with ISO 20811). The nanofiber membrane developed in the laboratories of the Technical University of Liberec was then produced industrially by a private company; a low-vacuum plasma line and a lamination line were needed for industrial production, and the process had to be fine-tuned to achieve the same parameters as those achieved in the TUL laboratories. The result of this work is a newly developed nanofiber membrane that offers much better properties, especially water vapour permeability, than competitive membranes. It is an example of product development and the consequent fine-tuning for industrial production, and of cooperation between a Czech state university and a private company.

Keywords: nanofiber membrane, start-up, state university, private company, product

Procedia PDF Downloads 123
535 Potential Use of Leaching Gravel as a Raw Material in the Preparation of Geo Polymeric Material as an Alternative to Conventional Cement Materials

Authors: Arturo Reyes Roman, Daniza Castillo Godoy, Francisca Balarezo Olivares, Francisco Arriagada Castro, Miguel Maulen Tapia

Abstract:

Mining waste-based geopolymers are a sustainable alternative to conventional cement materials: they contribute to the valorization of mining wastes and yield new construction materials with a reduced environmental footprint. The objective of this study was to determine the potential of leaching gravel (LG) from hydrometallurgical copper processing as a raw material in the manufacture of geopolymer. NaOH, Na2SiO3 (modulus 1.5), and LG were mixed, wetted with an appropriate amount of tap water, and stirred until a homogeneous paste was obtained; a liquid/solid ratio of 0.3 was used for preparing the mixtures. The paste was cast in 50 mm cubic moulds for the determination of compressive strength, left to dry for 24 h at room temperature, and unmoulded before analysis after 28 days of curing. The compressive test was conducted in a compression machine (15/300 kN). According to laser diffraction spectroscopy (LDS) analysis, 90% of LG particles were below 500 μm. X-ray diffraction (XRD) analysis identified crystalline phases of albite (30%), quartz (16%), anorthite (16%), and phillipsite (14%). X-ray fluorescence (XRF) determinations showed mainly 55% SiO2, 13% Al2O3, and 9% CaO. ICP-OES concentrations of Fe, Ca, Cu, Al, As, V, Zn, Mo, and Ni were 49,545; 24,735; 6,172; 14,152; 239.5; 129.6; 41.1; 15.1; and 13.1 mg kg-1, respectively. The geopolymer samples showed compressive strengths ranging between 2 and 10 MPa. Relative to the raw material, the amorphous fraction of the geopolymer was 35%, whereas the crystalline percentages of the main mineral phases decreased. Further studies are needed to find the optimal combination of materials to produce a more resistant and environmentally safe geopolymer; in particular, compressive strengths higher than 15 MPa are necessary for use as construction units such as bricks.

Keywords: mining waste, geopolymer, construction material, alkaline activation

Procedia PDF Downloads 86
534 Maritime English Communication Training for Japanese VTS Operators in the Congested Area Including the Narrow Channel of Akashi Strait

Authors: Kenji Tanaka, Kazumi Sugita, Yuto Mizushima

Abstract:

This paper introduces a noteworthy form of English communication training for the officers and operators of the Osaka-Bay Marine Traffic Information Service (Osaka MARTIS) of the Japan Coast Guard, who work in the congested area of the Akashi Strait in Hyogo Prefecture, Japan. The authors of this paper, Marine Technical College (MTC) English language instructors, have held about forty lectures and exercises in basic and standard Maritime English (ME) annually for several groups of MARTIS personnel at Osaka MARTIS since 2005. Trainees are expected to be qualified Maritime Third-Class Radio Operators, who are responsible for providing safety information to a daily average of seven to eight hundred vessels passing through the Akashi Strait, one of Japan's narrowest channels. As of 2022, the instructors are conducting 55 remote lessons at MARTIS, each 90 minutes long. All 26 trainees are given oral and written assessments and must pass the examination to remain qualified operators every year, which requires them to train and maintain their language levels even during the COVID-19 pandemic. The vessel traffic information provided by Osaka MARTIS in Maritime English is essential to the work, which relies on very high frequency (VHF) communication between MARTIS and vessels in the area. ME is the common language used on board merchant, fishing, and recreational vessels at sea; it was compiled and recommended by the International Maritime Organization in the 1970s, revised in 2002, and has undergone continual revision since. Conditions at the strait are much more demanding than on the open sea, so vessels need ME to receive guidance from the centre when passing through the narrow strait.
The imminent and challenging situations at the strait necessitate that the textbook's contents include the basics of the standard phrase book for seafarers as well as specific additional navigational information, pronunciation exercises, notes on keywords and phrases, explanations of collocations, sample sentences, and explanations of the differences between synonyms, focusing especially on the terminology needed for passing through the strait. Additionally, short Japanese-English translation quizzes on these topics, as well as prescribed readings about the maritime sector, are included in the textbook. All of these exercises have been conducted through the remote education system since the outbreak of COVID-19. According to the 2009 guidelines for ME, the lowest level necessary for seafarers is B1 (independent user) of the Common European Framework of Reference for Languages: Learning, Teaching, Assessment (CEFR); this vocational ME training at Osaka MARTIS therefore aims for its trainees to communicate at levels higher than B1. A noteworthy proof of the training's effectiveness is that most of the trainees have become qualified marine radio communication officers.

Keywords: akashi strait, B1 of CEFR, maritime english communication training, osaka martis

Procedia PDF Downloads 108
533 The Effect of Acute Toxicity and Thyroid Hormone Treatments on Hormonal Changes during Embryogenesis of Acipenser persicus

Authors: Samaneh Nazeri, Bagher Mojazi Amiri, Hamid Farahmand

Abstract:

Production of high-quality fish eggs with a reasonable hatching rate is key to success in the aquaculture industry and is influenced by environmental stimulators and inhibitors. Diazinon is a widely used pesticide in Golestan province (southern Caspian Sea, northern Iran) that is washed into the aquatic environment (3 mg/L in the river). Little is known about the effect of this pesticide on the embryogenesis of sturgeon, the valuable species of the Caspian Sea. The hormonal content of the egg is an important factor in guaranteeing successful passage through the embryonic stages. In this study, the fate of Persian sturgeon embryos after 24-, 48-, 72-, and 96-hour exposures to diazinon (LC50 dose) was tested. The effect of thyroid hormones (T3 and T4) on these embryos was also tested, concurrently with or separately from the diazinon LC50 dose. Fertilized eggs were exposed to T3 (low dose: 1 ng/ml; high dose: 10 ng/ml) and T4 (low dose: 1 ng/ml; high dose: 10 ng/ml). Six eggs were randomly selected from each treatment (with three replicates) at five developmental stages (two-cell division, neurulation, heart present, heart beating, and hatched larva). Changes in the T3, T4, and cortisol contents of the embryos were determined in all treated groups at every embryonic stage, and the hatching rate in the treated groups was assayed at the end of embryogenesis to clarify the effects of the thyroid hormones and diazinon. The results indicated significant differences in thyroid hormone contents, but no significant differences in cortisol levels, at various early life stages of the embryos. There was also a significant difference in thyroid hormones in the (T3, T4) + diazinon treated embryos (P˂0.05), while no significant difference in cortisol levels was observed between control and treatments. The highest hatching rate was recorded in the high-dose T3 treatment, and the lowest in the diazinon LC50 treatment. The results confirm that Persian sturgeon embryos are less sensitive to diazinon than teleost embryos, and thyroid hormones may increase the hatching rate even in the presence of diazinon.

Keywords: Persian sturgeon, diazinon, thyroid hormones, cortisol, embryo

Procedia PDF Downloads 289