Search results for: input processing
5006 Production and Distribution Network Planning Optimization: A Case Study of Large Cement Company
Authors: Lokendra Kumar Devangan, Ajay Mishra
Abstract:
This paper describes the implementation of a large-scale SAS/OR model with significant pre-processing, scenario analysis, and post-processing work done using SAS. A large cement manufacturer with ten geographically distributed manufacturing plants for two variants of cement, around 400 warehouses serving as transshipment points, and several thousand distributor locations generating demand needed to optimize this multi-echelon, multi-modal transport supply chain separately for planning and allocation purposes. For monthly planning as well as daily allocation, the demand is deterministic. Rail and road networks connect any two points in this supply chain, creating tens of thousands of such connections. Constraints include the plant’s production capacity, transportation capacity, and rail wagon batch size constraints. Each demand point has a minimum and maximum for shipments received. Price varies at demand locations due to local factors. A large mixed integer programming model built using proc OPTMODEL decides production at plants, demand fulfilled at each location, and the shipment route to demand locations to maximize the profit contribution. Using base SAS, we did significant pre-processing of data and created inputs for the optimization. Using outputs generated by OPTMODEL and other processing completed using base SAS, we generated several reports that went into their enterprise system and created tables for easy consumption of the optimization results by operations.Keywords: production planning, mixed integer optimization, network model, network optimization
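To make the structure of such a model concrete, the sketch below sets up a toy profit-maximizing plant–warehouse–distributor flow problem in Python with PuLP. All sets, capacities, costs and prices are invented, the rail-wagon batch (integer) constraints are omitted for brevity, and PuLP stands in for SAS PROC OPTMODEL; it mirrors the shape of the formulation described in the abstract, not the authors' actual model.

```python
# Toy plant -> warehouse -> distributor transshipment model in PuLP.
# Illustrative only: invented data, continuous flows (rail-wagon batch-size
# integer constraints omitted), and PuLP instead of SAS PROC OPTMODEL.
from pulp import LpProblem, LpVariable, LpMaximize, lpSum

plants = {"P1": 900, "P2": 700}                    # production capacity (tons)
warehouses = ["W1", "W2", "W3"]                    # transshipment points
demand = {"D1": (100, 400), "D2": (150, 500)}      # (min, max) received per demand point
price = {"D1": 95.0, "D2": 110.0}                  # local price per ton

cost_pw = {(p, w): 4.0 + 2.0 * i for i, p in enumerate(plants) for w in warehouses}
cost_wd = {(w, d): 3.0 + 1.5 * j for j, w in enumerate(warehouses) for d in demand}

m = LpProblem("cement_network", LpMaximize)
x = {k: LpVariable(f"pw_{k[0]}_{k[1]}", lowBound=0) for k in cost_pw}   # plant -> warehouse
y = {k: LpVariable(f"wd_{k[0]}_{k[1]}", lowBound=0) for k in cost_wd}   # warehouse -> demand

# Objective: revenue at demand points minus transport cost on both legs.
m += (lpSum(price[d] * y[(w, d)] for (w, d) in y)
      - lpSum(cost_pw[k] * x[k] for k in x)
      - lpSum(cost_wd[k] * y[k] for k in y))

for p, cap in plants.items():                                  # plant capacity
    m += lpSum(x[(p, w)] for w in warehouses) <= cap
for w in warehouses:                                           # flow balance at warehouses
    m += lpSum(x[(p, w)] for p in plants) == lpSum(y[(w, d)] for d in demand)
for d, (lo, hi) in demand.items():                             # min/max shipments received
    m += lpSum(y[(w, d)] for w in warehouses) >= lo
    m += lpSum(y[(w, d)] for w in warehouses) <= hi

m.solve()
print({v.name: v.value() for v in m.variables() if v.value()})
```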
Procedia PDF Downloads 67
5005 Geometallurgy of Niobium Deposits: An Integrated Multi-Disciplined Approach
Authors: Mohamed Nasraoui
Abstract:
Spatial ore distribution, ore heterogeneity and their links with the geological processes involved in niobium concentration are all factors for consideration when bridging field observations to an extraction scheme. Indeed, mineralogical changes of Nb-hosting phases and their textural relationships with hydrothermal or secondary minerals exert a key control over mineral processing. This study, based both on field work and ore characterization, presents data from several Nb-deposits related to carbonatite complexes. The results obtained by a wide range of analytical techniques, including XRD, XRF, ICP-MS, SEM, microprobe, spectro-CL, FTIR-DTA and Mössbauer spectroscopy, demonstrate how geometallurgical assessment, at all stages of mine development, can greatly assist in the design of a suitable extraction flowsheet and in data reconciliation.
Keywords: carbonatites, Nb-geometallurgy, Nb-mineralogy, mineral processing
Procedia PDF Downloads 165
5004 Method for Auto-Calibrate Projector and Color-Depth Systems for Spatial Augmented Reality Applications
Authors: R. Estrada, A. Henriquez, R. Becerra, C. Laguna
Abstract:
Spatial Augmented Reality is a variation of Augmented Reality where the Head-Mounted Display is not required. This variation of Augmented Reality is useful in cases where the need for a Head-Mounted Display itself is a limitation. To achieve this, Spatial Augmented Reality techniques substitute the technological elements of Augmented Reality; the virtual world is projected onto a physical surface. To create an interactive spatial augmented experience, the application must be aware of the spatial relations that exist between its core elements. In this case, the core elements are referred to as a projection system and an input system, and the process to achieve this spatial awareness is called system calibration. The Spatial Augmented Reality system is considered calibrated if the projected virtual world scale is similar to the real-world scale, meaning that a virtual object will maintain its perceived dimensions when projected to the real world. Also, the input system is calibrated if the application knows the relative position of a point in the projection plane and the RGB-depth sensor origin point. Any kind of projection technology can be used, light-based projectors, close-range projectors, and screens, as long as it complies with the defined constraints; the method was tested on different configurations. The proposed procedure does not rely on a physical marker, minimizing the human intervention on the process. The tests are made using a Kinect V2 as an input sensor and several projection devices. In order to test the method, the constraints defined were applied to a variety of physical configurations; once the method was executed, some variables were obtained to measure the method performance. It was demonstrated that the method obtained can solve different arrangements, giving the user a wide range of setup possibilities.Keywords: color depth sensor, human computer interface, interactive surface, spatial augmented reality
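As a rough illustration of the projection-system/input-system relationship the calibration establishes, the sketch below maps depth-sensor observations to projector-plane coordinates through a planar homography using OpenCV. The point correspondences are invented, and the paper's markerless auto-calibration procedure itself is not reproduced here.

```python
# Minimal sketch of relating RGB-depth sensor coordinates to projector-plane
# coordinates with a planar homography. Correspondences are invented; this is
# not the paper's Kinect V2 auto-calibration procedure.
import numpy as np
import cv2

# Four (or more) known projected points, in projector pixel coordinates...
proj_pts = np.array([[100, 100], [1800, 100], [1800, 1000], [100, 1000]], dtype=np.float32)
# ...and where the depth sensor observed them on the projection surface.
sensor_pts = np.array([[210, 180], [1650, 160], [1700, 940], [240, 980]], dtype=np.float32)

H, _ = cv2.findHomography(sensor_pts, proj_pts)   # sensor plane -> projector plane

# Map an arbitrary sensor observation (e.g. a touch point) into projector coordinates.
touch = np.array([[[900.0, 540.0]]], dtype=np.float32)
print(cv2.perspectiveTransform(touch, H))
```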
Procedia PDF Downloads 124
5003 Estimation of the Parameters of Muskingum Methods for the Prediction of the Flood Depth in the Moudjar River Catchment
Authors: Fares Laouacheria, Said Kechida, Moncef Chabi
Abstract:
The objective of the study was hydrological routing modelling for the continuous monitoring of the hydrological situation in the Moudjar river catchment, especially during floods, with the Hydrologic Engineering Center–Hydrologic Modelling System (HEC-HMS). HEC-GeoHMS was used to transfer data from a geographic information system (GIS) to HEC-HMS for delineating and modelling the river catchment in order to estimate the runoff volume, which is used as input to the hydrological routing model. Two hydrological routing models, namely the Muskingum and Muskingum-Cunge routing models, were used for conducting this study. A comparison between the parameters of the Muskingum and Muskingum-Cunge routing models in HEC-HMS was used for modelling flood routing in the Moudjar river catchment and for determining the relationship between these parameters and the physical characteristics of the river. The results indicate that the effects of input parameters such as the weighting factor "X" and the travel time "K" on the output results are significant, and that the Muskingum routing model was more sensitive to the input parameters than the Muskingum-Cunge routing model. This study can contribute to understanding and improving knowledge of the mechanisms of river floods, especially in ungauged river catchments.
Keywords: HEC-HMS, hydrological modelling, Muskingum routing model, Muskingum-Cunge routing model
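For reference, the Muskingum scheme referred to above reduces to the recurrence O2 = C0*I2 + C1*I1 + C2*O1, with the coefficients derived from K, X and the time step. The sketch below implements that recurrence in Python on an invented inflow hydrograph; it is illustrative only and uses none of the study's HEC-HMS data.

```python
# Minimal Muskingum channel-routing sketch. K (travel time) and X (weighting
# factor) are the two parameters discussed above; the inflow series is invented.
def muskingum_route(inflow, K, X, dt, outflow0=None):
    """Route an inflow hydrograph through a reach with Muskingum coefficients."""
    denom = 2.0 * K * (1.0 - X) + dt
    c0 = (dt - 2.0 * K * X) / denom
    c1 = (dt + 2.0 * K * X) / denom
    c2 = (2.0 * K * (1.0 - X) - dt) / denom
    out = [inflow[0] if outflow0 is None else outflow0]
    for i in range(1, len(inflow)):
        out.append(c0 * inflow[i] + c1 * inflow[i - 1] + c2 * out[-1])
    return out

inflow = [10, 30, 70, 120, 100, 60, 35, 20, 12, 10]   # m3/s, hypothetical flood wave
print(muskingum_route(inflow, K=2.0, X=0.2, dt=1.0))  # K and dt in hours
```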
Procedia PDF Downloads 278
5002 Modeling Standpipe Pressure Using Multivariable Regression Analysis by Combining Drilling Parameters and a Herschel-Bulkley Model
Authors: Seydou Sinde
Abstract:
The aims of this paper are to formulate mathematical expressions that can be used to estimate the standpipe pressure (SPP). The developed formulas take into account the main factors that, directly or indirectly, affect the behavior of SPP values. Fluid rheology and well hydraulics are some of these essential factors. Mud plastic viscosity, yield point, flow power, consistency index, flow rate, drillstring, and annular geometries are represented by the frictional pressure (Pf), which is one of the input independent parameters and is calculated, in this paper, using the Herschel-Bulkley rheological model. Other input independent parameters include the rate of penetration (ROP), applied load or weight on the bit (WOB), bit revolutions per minute (RPM), bit torque (TRQ), and hole inclination and direction coupled in the hole curvature or dogleg (DL). The technique of repeating parameters and the Buckingham Pi theorem are used to reduce the input independent parameters to the dimensionless revolutions per minute (RPMd), the dimensionless torque (TRQd), and the dogleg, which is already in the dimensionless form of radians. Multivariable linear and polynomial regression techniques using PTC Mathcad Prime 4.0 are used to analyze and determine the exact relationships between the dependent parameter, which is SPP, and the remaining three dimensionless groups. Three models proved sufficiently satisfactory to estimate the standpipe pressure: multivariable linear regression model 1 containing three regression coefficients for vertical wells; multivariable linear regression model 2 containing four regression coefficients for deviated wells; and a multivariable polynomial quadratic regression model containing six regression coefficients for both vertical and deviated wells. Although linear regression model 2 (with four coefficients) is relatively more complex and contains an additional term compared with linear regression model 1 (with three coefficients), the former did not add significant improvement over the latter except for some minor values. Thus, the effect of the hole curvature or dogleg is insignificant and can be omitted from the input independent parameters without significant loss of accuracy. The polynomial quadratic regression model is considered the most accurate model due to its relatively higher accuracy for most of the cases. Data from nine wells in the Middle East were used to run the developed models, and all of them provided satisfactory results, with the multivariable polynomial quadratic regression model giving the best and most accurate results. These models are useful not only for monitoring and predicting SPP values with accuracy but also for early control and checking of the integrity of the well hydraulics, and for taking corrective actions should any unexpected problems appear, such as pipe washouts, jet plugging, excessive mud losses, fluid gains, kicks, etc.
Keywords: standpipe, pressure, hydraulics, nondimensionalization, parameters, regression
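A minimal sketch of the two ingredients named above is given below: a Herschel-Bulkley stress law (yield point, consistency index, flow index) and a multivariable linear least-squares fit of SPP against dimensionless groups. The coefficient values and the synthetic data are assumptions for illustration, not the paper's regression models.

```python
# Illustrative sketch: Herschel-Bulkley stress law plus a multivariable linear
# fit of standpipe pressure against dimensionless groups. All data are invented.
import numpy as np

def herschel_bulkley_stress(shear_rate, tau_y, K, n):
    """Shear stress = yield point + consistency index * shear_rate**flow index."""
    return tau_y + K * shear_rate ** n

# Hypothetical observations: SPP (psi) vs. dimensionless RPM, dimensionless
# torque and dogleg (radians), mimicking "regression model 2" in structure.
rpm_d = np.array([0.8, 1.0, 1.2, 1.5, 1.7, 2.0])
trq_d = np.array([0.5, 0.6, 0.8, 0.9, 1.1, 1.3])
dl    = np.array([0.00, 0.02, 0.03, 0.05, 0.06, 0.08])
spp   = np.array([1450, 1600, 1820, 2050, 2240, 2500])

A = np.column_stack([np.ones_like(rpm_d), rpm_d, trq_d, dl])   # four regression coefficients
coef, *_ = np.linalg.lstsq(A, spp, rcond=None)
print("regression coefficients:", coef)
print("predicted SPP:", A @ coef)
print("HB stress at 300 1/s:", herschel_bulkley_stress(300.0, tau_y=8.0, K=0.4, n=0.7))
```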
Procedia PDF Downloads 84
5001 A Combined Approach Based on Artificial Intelligence and Computer Vision for Qualitative Grading of Rice Grains
Authors: Hemad Zareiforoush, Saeed Minaei, Ahmad Banakar, Mohammad Reza Alizadeh
Abstract:
The quality inspection of rice (Oryza sativa L.) during its various processing stages is very important. In this research, an artificial intelligence-based model coupled with computer vision techniques was developed as a decision support system for the qualitative grading of rice grains. For conducting the experiments, first, 25 samples of rice grains with different levels of percentage of broken kernels (PBK) and degree of milling (DOM) were prepared, and their qualitative grade was assessed by experienced experts. Then, the quality parameters of the same samples examined by the experts were determined using a machine vision system. A grading model was developed based on fuzzy logic theory in MATLAB software to relate the qualitative characteristics of the product to its quality. In total, 25 rules were used for qualitative grading, based on the AND operator and the Mamdani inference system. The fuzzy inference system consisted of two input linguistic variables, namely DOM and PBK, which were obtained by the machine vision system, and one output variable (quality of the product). The model output was finally defuzzified using the Center of Maximum (COM) method. In order to evaluate the developed model, the output of the fuzzy system was compared with the experts' assessments. It was revealed that the developed model can estimate the qualitative grade of the product with an accuracy of 95.74%.
Keywords: machine vision, fuzzy logic, rice, quality
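The sketch below shows, in compact form, the kind of Mamdani inference described above: two fuzzified inputs (DOM, PBK), min as the AND operator, max aggregation, and a center-of-maximum defuzzification. The membership functions, scales and three example rules are invented; the paper's 25-rule MATLAB system is not reproduced.

```python
# Hand-rolled Mamdani-style sketch with two inputs (DOM, PBK) and one output
# (quality grade). Membership functions and rules are invented for illustration.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def grade(dom, pbk):
    universe = np.linspace(0.0, 10.0, 501)          # output: quality score 0-10
    # Input fuzzification (percent scales assumed).
    dom_low, dom_high = tri(dom, 0, 0, 60), tri(dom, 40, 100, 100)
    pbk_low, pbk_high = tri(pbk, 0, 0, 20), tri(pbk, 10, 40, 40)
    # Output fuzzy sets.
    poor = tri(universe, 0, 0, 5)
    good = tri(universe, 2.5, 5, 7.5)
    excellent = tri(universe, 5, 10, 10)
    # Mamdani rules: AND = min, aggregation = max.
    agg = np.maximum.reduce([
        np.minimum(min(dom_high, pbk_low), excellent),   # well milled, few broken -> excellent
        np.minimum(min(dom_high, pbk_high), good),       # well milled, many broken -> good
        np.minimum(min(dom_low, pbk_high), poor),        # poorly milled, many broken -> poor
    ])
    # Defuzzification: center of maximum (mean of the plateau at the peak).
    return universe[agg == agg.max()].mean()

print(round(grade(dom=85.0, pbk=5.0), 2))
```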
Procedia PDF Downloads 419
5000 Making of Alloy Steel by Direct Alloying with Mineral Oxides during Electro-Slag Remelting
Authors: Vishwas Goel, Kapil Surve, Somnath Basu
Abstract:
In-situ alloying of steel during the electro-slag remelting (ESR) process has already been achieved by the addition of the necessary ferroalloys into the electro-slag remelting mold. However, the use of commercially available ferroalloys during ESR processing is often found to be financially less favorable in comparison with conventional alloying techniques. For this reason, a process of alloying steel with elements like chromium and manganese using the electro-slag remelting route, without any ferrochrome addition, is under development. The process utilizes in-situ reduction of refined mineral chromite (Cr₂O₃) and the resultant enrichment of chromium in the steel ingot produced. It was established in the course of this work that this process can become more advantageous than conventional alloying techniques, both economically and environmentally, for applications which inherently demand the use of the electro-slag remelting process, such as the manufacturing of superalloys. A key advantage is the lower overall CO₂ footprint of this process relative to the conventional route of production, storage, and addition of ferrochrome. In addition to experimentally validating the feasibility of the envisaged reactions, a mathematical model to simulate the reduction of chromium (III) oxide and the transfer of chromium to the molten steel droplets was also developed as part of the current work. The developed model helps to correlate the amount of chromite input with the magnitude of chromium alloying that can be achieved through this process. Experiments are in progress to validate the predictions made by this model and to fine-tune its parameters.
Keywords: alloying element, chromite, electro-slag remelting, ferrochrome
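As a back-of-the-envelope illustration of the chromite-input-to-chromium-alloying correlation the model targets, the sketch below converts a target Cr content into the required Cr₂O₃ addition from simple stoichiometry. The recovery factor and ingot mass are assumed values, not results from the paper.

```python
# Back-of-the-envelope mass balance for in-situ chromium alloying from Cr2O3.
# The recovery factor and ingot mass are assumptions, not the paper's results.
M_CR, M_O = 51.996, 15.999
CR_PER_CR2O3 = 2 * M_CR / (2 * M_CR + 3 * M_O)      # ~0.684 kg Cr per kg Cr2O3

def chromite_needed(ingot_mass_kg, target_cr_wt_pct, recovery=0.85):
    """Mass of refined chromite (as Cr2O3) to reach a target Cr content."""
    cr_required = ingot_mass_kg * target_cr_wt_pct / 100.0
    return cr_required / (CR_PER_CR2O3 * recovery)

# Example: 1.5 wt% Cr in a 500 kg ESR ingot at an assumed 85% reduction yield.
print(round(chromite_needed(500.0, 1.5), 1), "kg Cr2O3")
```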
Procedia PDF Downloads 223
4999 Purpose-Driven Collaborative Strategic Learning
Authors: Mingyan Hong, Shuozhao Hou
Abstract:
Collaborative Strategic Learning (CSL) teaches students to use learning strategies while working cooperatively. Student strategies include the following steps: defining the learning task and purpose; conducting ongoing negotiation of the learning materials by deciding "click" (I get it and I can teach it – green card; I get it – yellow card) or "clunk" (I don't get it – red card) at the end of each learning unit; "getting the gist" of the most important parts of the learning materials; and "wrapping up" key ideas. The approach shows how to help students of mixed achievement levels apply learning strategies while learning content-area material in small groups. The design of CSL is based on social constructivism and Vygotsky's best-known concept, the Zone of Proximal Development (ZPD). The ZPD is defined as the distance between the actual acquisition level, as determined by individual problem solving, and the potential acquisition level, similar to Krashen's (1980) i+1, as determined through problem solving under the facilitator's guidance or in group work with other, more capable members (Vygotsky, 1978). Vygotsky claimed that learners' ideal learning environment is in the ZPD. An ideal teacher or more-knowledgeable-other (MKO) should be able to recognize a learner's ZPD and facilitate development beyond it. The MKO can then withdraw support step by step until the learner can perform the task without aid. Stephen Krashen (1980) proposed the Input Hypothesis, including the i+1 hypothesis. The input hypothesis models are the application of the ZPD in second language acquisition and have been widely recognized until today. Krashen's (2019) optimal language learning environment further developed the application of the ZPD and added the component of strategic group learning. Strategic group learning is composed of desirable learning materials that learners are motivated to learn and desirable group members who are more capable and are therefore able to offer meaningful input to the learners. The Purpose-Driven Collaborative Strategic Learning Model is a strategic integration of the ZPD, the i+1 hypothesis model, and the Optimal Language Learning Environment Model. It is purpose driven to ensure that group members are motivated. It is collaborative so that an optimal learning environment can be generated, in which meaningful input arises from meaningful conversation. It is strategic because facilitators in the model strategically assign each member a meaningful and collaborative role (e.g., team leader, technician, problem solver, appraiser), offer a group learning instrument so that the learning process is structured, and integrate group learning and team building, ensuring the holistic development of each participant. Using data collected from college year one and year two students' English courses, this presentation will demonstrate how the purpose-driven collaborative strategic learning model is implemented in the second/foreign language classroom, drawing on qualitative data from questionnaires and interviews. In particular, this presentation will show how second/foreign language learners grow from functioning with the aid of a facilitator or a more capable peer to performing without aid. The implication of this research is that the purpose-driven collaborative strategic learning model can be used not only in language learning but also in any subject area.
Keywords: collaborative, strategic, optimal input, second language acquisition
Procedia PDF Downloads 127
4998 Enhancing Embedded System Efficiency with Digital Signal Processing Cores
Authors: Anil H. Dhanawade, Akshay S., Harshal M. Lakesar
Abstract:
This paper presents a comprehensive analysis of the performance advantages offered by DSP (Digital Signal Processing) cores compared to traditional MCU (Microcontroller Unit) cores in the execution of various functions critical to real-time applications. The focus is on the integration of DSP functionalities, specifically in the context of motor control applications such as Field-Oriented Control (FOC), trigonometric calculations, back-EMF estimation, digital filtering, and high-resolution PWM generation. Through comparative analysis, it is demonstrated that DSP cores significantly enhance processing efficiency, achieving faster execution times for complex mathematical operations essential for precise torque and speed control. The study highlights the capabilities of DSP cores, including single-cycle Multiply-Accumulate (MAC) operations and optimized hardware for trigonometric functions, which collectively reduce latency and improve real-time performance. In contrast, MCU cores, while capable of performing similar tasks, typically exhibit longer execution times due to reliance on software-based solutions and lack of dedicated hardware acceleration. The findings underscore the critical role of DSP cores in applications requiring high-speed processing and low-latency response, making them indispensable in the automotive, industrial, and robotics sectors. This work serves as a reference for future developments in embedded systems, emphasizing the importance of architecture choice in achieving optimal performance in demanding computational tasks.Keywords: CPU core, DSP, assembly code, motor control
Procedia PDF Downloads 18
4997 Early Diagnosis of Alzheimer's Disease Using a Combination of Images Processing and Brain Signals
Authors: E. Irankhah, M. Zarif, E. Mazrooei Rad, K. Ghandehari
Abstract:
Alzheimer's prevalence is on the rise, and the disease comes with problems like cessation of treatment, high cost of treatment, and the lack of early detection methods. The pathology of this disease causes the formation of protein deposits, called amyloid plaques, in the brains of patients. Generally, the disease is diagnosed by performing tests such as cerebrospinal fluid analysis, CT scans, MRI, mental status tests, and eye-tracking tests. In this paper, we tried to use the Medial Temporal Atrophy (MTA) method and the Leave One Out (LOO) cycle to extract the statistical properties of the three Fz, Pz, and Cz channels of ERP signals for early diagnosis of this disease. For the processing of CT scan images, the accuracy of the results is 81% for the healthy person and 88% for the severe patient. After ERP signal processing, the accuracy of the results for a healthy person is 81% in the delta band of the Cz channel and 90% in the alpha band of the Pz channel. For the severe patient, the signal processing results were 89% in the delta band of the Cz channel and 92% in the alpha band of the Pz channel.
Keywords: Alzheimer's disease, image and signal processing, LOO cycle, medial temporal atrophy
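A minimal sketch of the leave-one-out evaluation loop mentioned above is given below, using scikit-learn. The per-channel band features, labels and classifier are randomly generated stand-ins, not the study's ERP or CT data.

```python
# Sketch of a leave-one-out (LOO) evaluation over per-channel band features,
# analogous to the LOO cycle mentioned above. Features and labels are dummies.
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# 20 subjects x 3 features (e.g. delta-Cz, alpha-Pz, theta-Fz statistics).
X = np.vstack([rng.normal(0.0, 1.0, (10, 3)), rng.normal(1.0, 1.0, (10, 3))])
y = np.array([0] * 10 + [1] * 10)          # 0 = healthy, 1 = patient

hits = 0
for train_idx, test_idx in LeaveOneOut().split(X):
    clf = SVC(kernel="linear").fit(X[train_idx], y[train_idx])
    hits += int(clf.predict(X[test_idx])[0] == y[test_idx][0])

print("LOO accuracy:", hits / len(y))
```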
Procedia PDF Downloads 198
4996 Reading Comprehension in Profound Deaf Readers
Authors: S. Raghibdoust, E. Kamari
Abstract:
Research shows that reduced functional hearing has a detrimental influence on the ability of an individual to establish proper phonological representations of words, since the phonological representations are claimed to mediate the conceptual processing of written words. Word processing efficiency is expected to decrease with a decrease in functional hearing. In other words, it is predicted that hearing individuals would be more capable of word processing than individuals with hearing loss, as their functional hearing works normally. Studies also demonstrate that the quality of functional hearing affects reading comprehension via its effect on word processing skills. In other words, better hearing facilitates the development of phonological knowledge and can promote enhanced strategies for the recognition of written words, which in turn positively affect the higher-order processes underlying reading comprehension. The aims of this study were to investigate and compare the effect of deafness on the participants' abilities to process written words at the lexical and sentence levels through two online tests and one offline reading comprehension test. The performance of a group of 8 deaf male students (ages 8-12) was compared with that of a control group of normal-hearing male students. All the participants had normal IQ and visual status and came from an average socioeconomic background. None were diagnosed with a particular learning or motor disability. The language spoken in the homes of all participants was Persian. Two tests of word processing were developed and presented to the participants using OpenSesame software, in order to measure the speed and accuracy of their performance at the perceptual and conceptual levels. In the third, offline test of reading comprehension, which comprised semantically plausible and semantically implausible subject relative clauses, the participants had to select the correct answer out of two choices. The data derived from the statistical analysis using SPSS software indicated that hearing and deaf participants had a similar word processing performance, both in terms of the speed and the accuracy of their responses. The results also showed that there was no significant difference between the performance of the deaf and hearing participants in comprehending semantically plausible sentences (p > 0.05). However, a significant difference between the performances of the two groups was observed with respect to their comprehension of semantically implausible sentences (p < 0.05). In sum, the findings revealed that the seriously impoverished sentence reading ability characterizing the profoundly deaf subjects of the present research reflected their reliance on reading strategies based on insufficient or deviant structural knowledge, in particular in processing semantically implausible sentences, rather than a failure to efficiently process written words at the lexical level. This conclusion, of course, does not mean that deaf individuals may never experience deficits at the word processing level, deficits that impede their understanding of written texts. However, as stated in previous research, it seems reasonable to assume that the more familiar deaf individuals become with written words, the better they can recognize them, despite having a profound phonological weakness.
Keywords: deafness, reading comprehension, reading strategy, word processing, subject and object relative sentences
Procedia PDF Downloads 338
4995 Bamboo: A Trendy and New Alternative to Wood
Authors: R. T. Aggangan, R. J. Cabangon
Abstract:
Bamboo has been getting worldwide attention over the last 20 to 30 years due to its numerous uses, and it is regarded as the closest material that can be used as a substitute for wood. In the domestic market, high-quality bamboo products are sold in high-end markets, while lower-quality products are generally sold to medium- and low-income consumers. The global market in 2006 stood at about 7 billion US dollars and was projected to increase to US$17 billion between 2015 and 2020. The Philippines had been actively producing and processing bamboo products for the furniture, handicraft and construction industries. It was, however, in 2010 that the Philippine bamboo industry was formalized by virtue of Executive Order 879, which made Philippine bamboo industry development a priority program of the government and created the Philippine Bamboo Industry Development Council (PBIDC) to provide the overall policy and program directions for all stakeholders. At present, the most extensive use of bamboo is for the manufacture of engineered bamboo for school desks for all public schools, as mandated by EO 879. Also, engineered bamboo products are used for high-end construction and furniture as well as for handicrafts. Development of cheap adhesives, preservatives, and finishing chemicals from local species of plants, development of economical methods of drying and preservation, product development and processing of lesser-used species of bamboo, and development of processing tools, equipment and machinery are the strategies that will be employed to reduce the price and mainstream engineered bamboo products in the local and foreign markets. In addition, processing wastes from bamboo can be recycled into fuel products such as charcoal, which are already in use. The more exciting possibility, however, is the production of bamboo pellets that can be used as a substitute for wood pellets for heating, cooking and generating electricity.
Keywords: bamboo charcoal and light distillates, engineered bamboo, furniture and handicraft industries, housing and construction, pellets
Procedia PDF Downloads 248
4994 Performance Estimation of Two Port Multiple-Input and Multiple-Output Antenna for Wireless Local Area Network Applications
Authors: Radha Tomar, Satish K. Jain, Manish Panchal, P. S. Rathore
Abstract:
In the presented work, an inset-fed microstrip patch antenna (IFMPA) based two-port MIMO antenna system has been proposed, which is suitable for wireless local area network (WLAN) applications. The IFMPA has been designed and optimized for 2.4 GHz and applied for MIMO formation. The optimized parameters of the proposed IFMPA have been used for the fabrication of the antenna and the two-port MIMO in a laboratory. The designed MIMO antenna has been fabricated and tested experimentally for performance parameters like the Envelope Correlation Coefficient (ECC), Mean Effective Gain (MEG), Directive Gain (DG), Channel Capacity Loss (CCL), and Multiplexing Efficiency (ME), and the results are compared with the parameters extracted from the simulated S-parameters to validate the results. The simulated and experimentally measured plots and numerical values of these MIMO performance parameters resemble each other very closely, which shows the success of the MIMO antenna design methodology.
Keywords: multiple-input and multiple-output, wireless local area network, vector network analyzer, envelope correlation coefficient
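For reference, the ECC of a two-port antenna is commonly estimated from S-parameters as |S11*·S12 + S21*·S22|² / [(1 − |S11|² − |S21|²)(1 − |S22|² − |S12|²)]. The sketch below evaluates that expression, plus the usual diversity-gain estimate, on an invented S-parameter sample; it does not use the authors' measured data.

```python
# Envelope correlation coefficient (ECC) of a two-port MIMO antenna from
# S-parameters, plus a common diversity-gain estimate. Sample values invented.
import numpy as np

def ecc_from_s(s11, s12, s21, s22):
    num = abs(np.conj(s11) * s12 + np.conj(s21) * s22) ** 2
    den = (1 - abs(s11) ** 2 - abs(s21) ** 2) * (1 - abs(s22) ** 2 - abs(s12) ** 2)
    return num / den

s11, s22 = 0.10 * np.exp(1j * 0.3), 0.12 * np.exp(-1j * 0.5)   # good matching at 2.4 GHz
s12 = s21 = 0.05 * np.exp(1j * 1.1)                            # low mutual coupling

ecc = float(ecc_from_s(s11, s12, s21, s22))
print("ECC:", round(ecc, 4))
print("diversity gain:", round(10.0 * np.sqrt(1.0 - ecc ** 2), 3), "dB")
```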
Procedia PDF Downloads 56
4993 Parametrical Simulation of Sheet Metal Forming Process to Control the Localized Thinning
Authors: Hatem Mrad, Alban Notin, Mohamed Bouazara
Abstract:
The sheet metal forming process has multiple successive steps, starting from sheet fixation to sheet evacuation. Often, after the forming operation, the sheet has defects requiring additional correction steps. For example, in the drawing process, the formed sheet may have several defects such as springback, localized thinning and bends. All these defects are directly dependent on process, geometric and material parameters. The prediction and elimination of these defects require the control of the most sensitive parameters. The present study is concerned with a reliable parametric study of the deep forming process in order to control localized thinning. The proposed approach is based on the stochastic finite element method. In particular, the polynomial chaos expansion will be used to establish a reliable relationship between the input (process, geometric and material parameters) and output variables (sheet thickness). The commercial software Abaqus is used to conduct the numerical finite element simulations. Automated parametric modification is provided by coupling a FORTRAN routine, a Python script and Abaqus input files.
Keywords: sheet metal forming, reliability, localized thinning, parametric simulation
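A minimal, non-intrusive polynomial chaos sketch is given below: a Hermite expansion of a scalar thinning response is fitted by least squares to samples of one standardized uncertain input. The placeholder response function, the input distribution and the expansion order are assumptions standing in for the Abaqus/FORTRAN/Python workflow described above.

```python
# Minimal non-intrusive polynomial chaos sketch: fit a Hermite expansion of a
# scalar response (minimum sheet thickness) to samples of one uncertain input.
import math
import numpy as np
from numpy.polynomial import hermite_e as He

def fem_thickness(friction):
    """Placeholder for a finite-element evaluation of local thinning (mm)."""
    return 0.78 - 0.15 * friction - 0.05 * friction ** 2

rng = np.random.default_rng(1)
xi = rng.standard_normal(200)                    # standardized uncertain parameter
samples = fem_thickness(0.10 + 0.02 * xi)        # assumed friction ~ N(0.10, 0.02)

deg = 3
Psi = He.hermevander(xi, deg)                    # probabilists' Hermite basis
coeffs, *_ = np.linalg.lstsq(Psi, samples, rcond=None)

mean = coeffs[0]                                 # first PCE coefficient = mean response
var = sum(c ** 2 * math.factorial(k) for k, c in enumerate(coeffs[1:], start=1))
print("PCE coefficients:", np.round(coeffs, 5))
print(f"thickness mean ~ {mean:.4f} mm, std dev ~ {var ** 0.5:.5f} mm")
```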
Procedia PDF Downloads 423
4992 Energetic and Exergetic Evaluation of Box-Type Solar Cookers Using Different Insulation Materials
Authors: A. K. Areamu, J. C. Igbeka
Abstract:
The performance of box-type solar cookers has been reported by several researchers, but little attention has been paid to the effect of the type of insulation material on the energy and exergy efficiencies of these cookers. This research aimed at evaluating the energy and exergy efficiencies of box-type cookers containing different insulation materials. The energy and exergy efficiencies of five box-type solar cookers insulated with maize cob, air (control), maize husk, coconut coir and polyurethane foam, respectively, were obtained over a period of three years. The cookers were evaluated using water heating test procedures to determine the energy and exergy analyses. The results were subjected to statistical analysis using ANOVA. The results show that the average energy inputs for the five solar cookers were 245.5, 252.2, 248.7, 241.5 and 245.5 J, respectively, while their respective average energy losses were 201.2, 212.7, 208.4, 189.1 and 199.8 J. The average exergy inputs for the five cookers were 228.2, 234.4, 231.1, 224.4 and 228.2 J, respectively, while their respective average exergy losses were 223.4, 230.6, 226.9, 218.9 and 223.0 J. The energy and exergy efficiencies were highest in the cooker with coconut coir (37.35% and 3.90%, respectively) in the first year but were lowest for air (11% and 1.07%, respectively) in the third year. Statistical analysis showed a significant difference between the energy and exergy efficiencies over the years. These results reiterate the importance of a good insulating material for a box-type solar cooker.
Keywords: efficiency, energy, exergy, heating insolation
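The bookkeeping behind a water-heating test can be illustrated as below: energy efficiency compares the heat gained by the water with the insolation on the aperture, while exergy efficiency weights both by their work potential (here the Petela factor is assumed for the solar input). All numbers are assumed for illustration and are not the measured values reported above.

```python
# Worked example of energy/exergy efficiency for a water-heating test.
# Insolation, aperture, mass and temperatures are assumed illustrative values.
import math

G, A, dt = 850.0, 0.25, 1800.0                 # insolation (W/m2), aperture (m2), interval (s)
m, cp = 1.5, 4186.0                            # water mass (kg), specific heat (J/kg.K)
T1, T2, Ta, Ts = 303.0, 333.0, 305.0, 5800.0   # start, end, ambient, sun temperatures (K)

energy_in = G * A * dt
energy_gain = m * cp * (T2 - T1)
exergy_in = energy_in * (1 - (4 / 3) * (Ta / Ts) + (1 / 3) * (Ta / Ts) ** 4)   # Petela factor
exergy_gain = m * cp * ((T2 - T1) - Ta * math.log(T2 / T1))

print(f"energy efficiency: {100 * energy_gain / energy_in:.1f} %")
print(f"exergy efficiency: {100 * exergy_gain / exergy_in:.2f} %")
```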
Procedia PDF Downloads 367
4991 Evaluation of Condyle Alterations after Orthognathic Surgery with a Digital Image Processing Technique
Authors: Livia Eisler, Cristiane C. B. Alves, Cristina L. F. Ortolani, Kurt Faltin Jr.
Abstract:
Purpose: This paper proposes a technically simple diagnostic method for orthodontists and maxillofacial surgeons in order to evaluate discrete bone alterations. The methodology consists of a protocol to optimize the diagnosis and minimize the possibility of orthodontic and ortho-surgical retreatment. Materials and Methods: A protocol of image processing and analysis, through ImageJ software and its plugins, was applied to 20 pairs of lateral cephalometric images obtained from cone beam computed tomographies, before and 1 year after orthognathic surgery. The optical density of the images was analyzed in the condylar region to determine possible bone alteration after surgical correction. Results: Image density was shown to be altered in all image pairs, especially regarding the condyle contours. According to the measurements, the condyle showed a gender-related density reduction at p = 0.05, and the alterations of the condylar contours were registered in mm. Conclusion: A simple, viable and cost-effective technique can be applied to achieve a more detailed image-based diagnosis that does not depend on the human eye and, therefore, offers more reliable, quantitative results.
Keywords: bone resorption, computer-assisted image processing, orthodontics, orthognathic surgery
Procedia PDF Downloads 160
4990 The Structural Pattern: An Event-Related Potential Study on Tang Poetry
Authors: ShuHui Yang, ChingChing Lu
Abstract:
Measuring event-related potentials (ERPs) has been fundamental to our understanding of how people process language. One specific ERP component, the P600, has been hypothesized to be associated with syntactic reanalysis processes. We, however, propose that the P600 is not restricted to reanalysis processes but is an index of structural pattern processing. To investigate structural pattern processing, we utilized the effects of stimulus degradation in structural priming. To put it another way, there was no P600 effect if the structure of the prime was the same as the structure of the target, whereas there was a P600 effect if the structure differed between the prime and the target. In the experiment, twenty-two participants were presented with four sentences of Tang poetry. The first two sentences, serving as primes, were constructed with SVO+VP. The last two sentences, serving as targets, were divided into three types. Type one of the targets was SVO+VP. Type two of the targets was SVO+VPVP. Type three of the targets was VP+VP. The results showed that both of the targets SVO+VPVP and VP+VP elicited a positive-going brainwave, a P600 effect, in the 600–900 ms time window. Furthermore, the P600 component was larger for the target 'VP+VP' than for the target 'SVO+VPVP'. That is, the more dissimilar the structure was, the larger the P600 effect obtained. These results indicate that the P600 is an index of structural processing and that the intensity of the P600 effect varies with the degree of structural heterogeneity.
Keywords: ERPs, P600, structural pattern, structural priming, Tang poetry
Procedia PDF Downloads 140
4989 The Development and Future of Hong Kong Typography
Authors: Amic G. Ho
Abstract:
Language usage and typography in Hong Kong are unique, as can be seen clearly on the streets of the city. In contrast to many other parts of the world, where there is only one language, in Hong Kong many signs and billboards display two languages: Chinese and English. The language usage on signage, fonts and types used, and the designs in magazines and advertisements all demonstrate the unique features of Hong Kong typographic design, which reflect the multicultural nature of Hong Kong society. This study is the first step in investigating the nature and development of Hong Kong typography. The preliminary research explored how the historical development of Hong Kong is reflected in its unique typography. Following a review of historical development, a quantitative study was designed: Local Hong Kong participants were invited to provide input on what makes the Hong Kong typographic style unique. Their input was collected and analyzed. This provided us with information about the characteristic criteria and features of Hong Kong typography, as recognized by the local people. The most significant typographic designs in Hong Kong were then investigated and the influence of Chinese and other cultures on Hong Kong typography was assessed. The research results provide an indication to local designers on how they can strengthen local design outcomes and promote the values and culture of their mother town.Keywords: typography, Hong Kong, historical developments, multiple cultures
Procedia PDF Downloads 515
4988 Effect of Crystallographic Characteristics on Toughness of Coarse Grain Heat Affected Zone for Different Heat Inputs
Authors: Trishita Ray, Ashok Perka, Arnab Karani, M. Shome, Saurabh Kundu
Abstract:
Line pipe steels are used for long distance transportation of crude oil and gas under extreme environmental conditions. Welding is necessary to lay large scale pipelines. Coarse Grain Heat Affected Zone (CGHAZ) of a welded joint exhibits worst toughness because of excessive grain growth and brittle microstructures like bainite and martensite, leading to early failure. Therefore, it is necessary to investigate microstructures and properties of the CGHAZ for different welding heat inputs. In the present study, CGHAZ for two heat inputs of 10 kJ/cm and 50 kJ/cm were simulated in Gleeble 3800, and the microstructures were investigated in detail by means of Scanning Electron Microscopy (SEM) and Electron Backscattered Diffraction (EBSD). Charpy Impact Tests were also done to evaluate the impact properties. High heat input was characterized with very low toughness and massive prior austenite grains. With the crystallographic information from EBSD, the area of a single prior austenite grain was traced out for both the welding conditions. Analysis of the prior austenite grains showed the formation of high angle boundaries between the crystallographic packets. Effect of these packet boundaries on secondary cleavage crack propagation was discussed. It was observed that in the low heat input condition, formation of finer packets with a criss-cross morphology inside prior austenite grains was effective in crack arrest whereas, in the high heat input condition, formation of larger packets with higher volume of low angle boundaries failed to resist crack propagation resulting in a brittle fracture. Thus, the characteristics in a crystallographic packet and impact properties are related and should be controlled to obtain optimum properties.Keywords: coarse grain heat affected zone, crystallographic packet, toughness, line pipe steel
Procedia PDF Downloads 245
4987 Memory Retrieval and Implicit Prosody during Reading: Anaphora Resolution by L1 and L2 Speakers of English
Authors: Duong Thuy Nguyen, Giulia Bencini
Abstract:
The present study examined structural and prosodic factors on the computation of antecedent-reflexive relationships and sentence comprehension in native English (L1) and Vietnamese-English bilinguals (L2). Participants read sentences presented on the computer screen in one of three presentation formats aimed at manipulating prosodic parsing: word-by-word (RSVP), phrase-segment (self-paced), or whole-sentence (self-paced), then completed a grammaticality rating and a comprehension task (following Pratt & Fernandez, 2016). The design crossed three factors: syntactic structure (simple; complex), grammaticality (target-match; target-mismatch) and presentation format. An example item is provided in (1): (1) The actress that (Mary/John) interviewed at the awards ceremony (about two years ago/organized outside the theater) described (herself/himself) as an extreme workaholic). Results showed that overall, both L1 and L2 speakers made use of a good-enough processing strategy at the expense of more detailed syntactic analyses. L1 and L2 speakers’ comprehension and grammaticality judgements were negatively affected by the most prosodically disrupting condition (word-by-word). However, the two groups demonstrated differences in their performance in the other two reading conditions. For L1 speakers, the whole-sentence and the phrase-segment formats were both facilitative in the grammaticality rating and comprehension tasks; for L2, compared with the whole-sentence condition, the phrase-segment paradigm did not significantly improve accuracy or comprehension. These findings are consistent with the findings of Pratt & Fernandez (2016), who found a similar pattern of results in the processing of subject-verb agreement relations using the same experimental paradigm and prosodic manipulation with English L1 and L2 English-Spanish speakers. The results provide further support for a Good-Enough cue model of sentence processing that integrates cue-based retrieval and implicit prosodic parsing (Pratt & Fernandez, 2016) and highlights similarities and differences between L1 and L2 sentence processing and comprehension.Keywords: anaphora resolution, bilingualism, implicit prosody, sentence processing
Procedia PDF Downloads 152
4986 Financial Information and Collective Bargaining: Conflicting or Complementing
Authors: Humayun Murshed, Shibly Abdullah
Abstract:
The research conducted in early seventies apparently assumed the existence of a universal decision model for union negotiators and furthermore tended to regard financial information as a ‘neutral’ input into a rational decision-making process. However, research in the eighties began to question the neutrality of financial information as an input in collective bargaining rather viewing it as a potentially effective means for controlling the labour force. Furthermore, this later research also started challenging the simplistic assumptions relating particularly to union objectives which have underpinned the earlier search for universal union decision models. Despite the above developments there seems to be a dearth of studies in developing countries concerning the use of financial information in collective bargaining. This paper seeks to begin to remedy this deficiency. Utilising a case study approach based on two enterprises, one in the public sector and the other a multinational, the universal decision model is rejected and it is argued that the decision whether or not to use financial information is a contingent one and such a contingency is largely defined by the context and environment in which both union and management negotiators work. An attempt is also made to identify the factors constraining as well as promoting the use of financial information in collective bargaining, these being regarded as unique to the organizations within which the case studies are conducted.Keywords: collective bargaining, developing countries, disclosures, financial information
Procedia PDF Downloads 471
4985 Optimization Approach to Estimate Hammerstein–Wiener Nonlinear Blocks in Presence of Noise and Disturbance
Authors: Leili Esmaeilani, Jafar Ghaisari, Mohsen Ahmadian
Abstract:
The Hammerstein–Wiener model is a block-oriented model where a linear dynamic system is surrounded by two static nonlinearities at its input and output, and it can be used to model various processes. This paper presents an optimization-based approach for analysing the problem of Hammerstein–Wiener system identification. The method relies on reformulating the identification problem, solving it as a constrained quadratic problem, and analysing its solutions. During the formulation of the problem, the effects of adding noise to both the input and output signals of the nonlinear blocks, and of adding disturbance to the linear block, on the resulting equations are discussed. Additionally, a possible parametric form of the matrix operations to reduce the equation size is presented. To analyse the possible solutions to the mentioned system of equations, a method to reduce the difference between the number of equations and the number of unknown variables, by formulating and importing existing knowledge about the nonlinear functions, is presented. The obtained equations are applied to an example H–W system to validate the results and illustrate the proposed method.
Keywords: identification, Hammerstein-Wiener, optimization, quantization
Procedia PDF Downloads 257
4984 Neural Correlates of Arabic Digits Naming
Authors: Fernando Ojedo, Alejandro Alvarez, Pedro Macizo
Abstract:
In the present study, we explored electrophysiological correlates of Arabic digits naming to determine semantic processing of numbers. Participants named Arabic digits grouped by category or intermixed with exemplars of other semantic categories while the N400 event-related potential was examined. Around 350-450 ms after the presentation of Arabic digits, brain waves were more positive in anterior regions and more negative in posterior regions when stimuli were grouped by category relative to the mixed condition. Contrary to what was found in other studies, electrophysiological results suggested that the production of numerals involved semantic mediation.Keywords: Arabic digit naming, event-related potentials, semantic processing, number production
Procedia PDF Downloads 582
4983 The Initiation of Privatization, Market Structure, and Free Entry with Vertically Related Markets
Authors: Hung-Yi Chen, Shih-Jye Wu
Abstract:
The existing literature provides little discussion on why a public monopolist gives up its market-dominant position and allows private firms to enter the market. We argue that the privatization of a public monopolist in a vertically related market may induce the entry of private firms. We develop a model of a mixed oligopoly with vertically related markets to explain the change in the market from a public monopolist to a mixed oligopoly and to examine issues in privatizing the downstream public enterprise, both in the short run and in the long run, in vertically related markets. We first show that the welfare-maximizing public monopoly firm is suboptimal in vertically related markets. This is because privatization will reduce the input price charged by the upstream foreign monopolist. Further, privatization will induce the entry of private firms since the input price decreases after privatization. Third, we demonstrate that completely privatizing the public firm becomes a possible solution if the entry cost of private firms is low. Finally, we indicate that the public firm should be partially privatized if free entry of private firms is allowed. JEL classification: F12, F14, L32, L33
Keywords: free entry, mixed oligopoly, public monopoly, the initiation of privatization, vertically related markets
Procedia PDF Downloads 137
4982 Long Term Evolution Multiple-Input Multiple-Output Network in Unmanned Air Vehicles Platform
Authors: Ashagrie Getnet Flattie
Abstract:
Line-of-sight (LOS) information, data rates, good quality, and flexible network service are limited by the fact that, for the duration of any given connection, they experience severe variation in signal strength due to fading and path loss. Wireless system faces major challenges in achieving wide coverage and capacity without affecting the system performance and to access data everywhere, all the time. In this paper, the cell coverage and edge rate of different Multiple-input multiple-output (MIMO) schemes in 20 MHz Long Term Evolution (LTE) system under Unmanned Air Vehicles (UAV) platform are investigated. After some background on the enormous potential of UAV, MIMO, and LTE in wireless links, the paper highlights the presented system model which attempts to realize the various benefits of MIMO being incorporated into UAV platform. The performances of the three MIMO LTE schemes are compared with the performance of 4x4 MIMO LTE in UAV scheme carried out to evaluate the improvement in cell radius, BER, and data throughput of the system in different morphology. The results show that significant performance gains such as bit error rate (BER), data rate, and coverage can be achieved by using the presented scenario.Keywords: LTE, MIMO, path loss, UAV
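As a rough illustration of why the 4x4 scheme is expected to outperform the lower-order ones, the sketch below averages the classical MIMO capacity expression C = log2 det(I + (SNR/Nt)·HH^H) over random Rayleigh channels for 1x1, 2x2 and 4x4 arrays. The SNR, channel model and trial count are assumptions; this is not the paper's LTE/UAV simulation.

```python
# Average MIMO capacity C = log2 det(I + (SNR/Nt) H H^H) over random Rayleigh
# channels, to illustrate the expected spectral-efficiency gain of NxN MIMO.
import numpy as np

def avg_mimo_capacity(nt, nr, snr_db, trials=2000, seed=3):
    rng = np.random.default_rng(seed)
    snr = 10 ** (snr_db / 10)
    total = 0.0
    for _ in range(trials):
        H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
        det = np.linalg.det(np.eye(nr) + (snr / nt) * H @ H.conj().T)
        total += float(np.log2(det.real))
    return total / trials

for n in (1, 2, 4):
    print(f"{n}x{n} MIMO at 10 dB SNR: ~{avg_mimo_capacity(n, n, 10):.2f} bit/s/Hz")
```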
Procedia PDF Downloads 279
4981 Assessment Environmental and Economic of Yerba Mate as a Feed Additive on Feedlot Lamb
Authors: Danny Alexander R. Moreno, Gustavo L. Sartorello, Yuli Andrea P. Bermudez, Richard R. Lobo, Ives Claudio S. Bueno, Augusto H. Gameiro
Abstract:
Meat production is a significant sector for Brazil's economy; however, the agricultural segment has suffered censure regarding the negative impacts on the environment, which consequently results in climate change. Therefore, it is essential the implementation of nutritional strategies that can improve the environmental performance of livestock. This research aimed to estimate the environmental impact and profitability of the use of yerba mate extract (Ilex paraguariensis) as an additive in the feeding of feedlot lamb. Thirty-six castrated male lambs (average weight of 23.90 ± 3.67 kg and average age of 75 days) were randomly assigned to four experimental diets with different levels of inclusion of yerba mate extract (0, 1, 2, and 4 %) based on dry matter. The animals were confined for fifty-three days and fed with 60:40 corn silage to concentrate ratio. As an indicator of environmental impact, the carbon footprint (CF) was measured as kg of CO₂ equivalent (CO₂-eq) per kg of body weight produced (BWP). The greenhouse gas (GHG) emissions such as methane (CH₄) generated from enteric fermentation, were calculated using the sulfur hexafluoride gas tracer (SF₆) technique; while the CH₄, nitrous oxide (N₂O - emissions generated by feces and urine), and carbon dioxide (CO₂ - emissions generated by concentrate and silage processing) were estimated using the Intergovernmental Panel on Climate Change (IPCC) methodology. To estimate profitability, the gross margin was used, which is the total revenue minus the total cost; the latter is composed of the purchase of animals and food. The boundaries of this study considered only the lamb fattening system. The enteric CH₄ emission from the lamb was the largest source of on-farm GHG emissions (47%-50%), followed by CH₄ and N₂O emissions from manure (10%-20%) and CO₂ emission from the concentrate, silage, and fossil energy (17%-5%). The treatment that generated the least environmental impact was the group with 4% of yerba mate extract (YME), which showed a 3% reduction in total GHG emissions in relation to the control (1462.5 and 1505.5 kg CO₂-eq, respectively). However, the scenario with 1% YME showed an increase in emissions of 7% compared to the control group. In relation to CF, the treatment with 4% YME had the lowest value (4.1 kg CO₂-eq/kg LW) compared with the other groups. Nevertheless, although the 4% YME inclusion scenario showed the lowest CF, the gross margin decreased by 36% compared to the control group (0% YME), due to the cost of YME as a food additive. The results showed that the extract has the potential for use in reducing GHG. However, the cost of implementing this input as a mitigation strategy increased the production cost. Therefore, it is important to develop political strategies that help reduce the acquisition costs of input that contribute to the search for the environmental and economic benefit of the livestock sector.Keywords: meat production, natural additives, profitability, sheep
Procedia PDF Downloads 139
4980 Elevated Temperature Shot Peening for M50 Steel
Authors: Xinxin Ma, Guangze Tang, Shuxin Yang, Jinguang He, Fan Zhang, Peiling Sun, Ming Liu, Minyu Sun, Liqin Wang
Abstract:
As a traditional surface hardening technique, shot peening is widely used in industry. By shot peening, a residual compressive stress is formed in the surface, which is beneficial for improving the fatigue life of metallic materials. At the same time, very fine grains and a high density of defects are generated in the surface layer, which enhances the surface hardness as well. However, most of these processes are carried out at room temperature, and for a high-strength steel such as M50, the thickness of the strengthened layer is limited. In order to obtain a thick strengthened surface layer, elevated temperature shot peening was carried out in this work using Φ1 mm cast iron balls at a speed of 80 m/s. Considering that the tempering temperature of M50 steel is about 550 °C, the processing temperature was in the range from 300 to 500 °C. The effect of the processing temperature and processing time of shot peening on the distribution of residual stress and the surface hardness was investigated. As is known, the working temperature of M50 steel can be as high as 315 °C. Because the defects formed by shot peening are unstable at higher working temperatures, it is worthwhile to understand what happens during the shot peening process and what happens when the strengthened samples are kept at a certain temperature. In our work, the shot peening time was selected from 2 to 10 min, and after the strengthening process, the samples were annealed at various temperatures from 200 to 500 °C for up to 60 h. The results show that the maximum residual compressive stress is near 900 MPa. Compared with room temperature shot peening, the strengthening depth of the 500 °C shot peening sample is about 2 times deeper. The surface hardness increased with the processing temperature, and the saturation peening time decreased. After annealing, the residual compressive stress decreases; however, for the 500 °C peening sample, even after annealing at 500 °C for 20 h, the residual compressive stress is still over 600 MPa. Moreover, it is clear from SEM that the grain size of the surface layers is still very small.
Keywords: shot peening, M50 steel, residual compressive stress, elevated temperature
Procedia PDF Downloads 456
4979 Comparing Image Processing and AI Techniques for Disease Detection in Plants
Authors: Luiz Daniel Garay Trindade, Antonio De Freitas Valle Neto, Fabio Paulo Basso, Elder De Macedo Rodrigues, Maicon Bernardino, Daniel Welfer, Daniel Muller
Abstract:
Agriculture plays an important role in society since it is one of the main sources of food in the world. To support the production and yield of crops, precision agriculture makes use of technologies aimed at improving the productivity and quality of agricultural commodities. One of the problems hampering the quality of agricultural production is disease affecting crops. Failure to detect diseases within a short period of time can result in small or large damage to production, causing financial losses to farmers. In order to provide a map of the contributions devoted to the early detection of plant diseases and a comparison of the accuracy of the selected studies, a systematic literature review was performed, covering techniques for digital image processing and neural networks. We found 35 interesting tool support alternatives to detect disease in 19 plants. Our comparison of these studies resulted in an overall average accuracy of 87.45%, with two studies coming very close to 100%.
Keywords: pattern recognition, image processing, deep learning, precision agriculture, smart farming, agricultural automation
Procedia PDF Downloads 379
4978 Application of Deep Learning Algorithms in Agriculture: Early Detection of Crop Diseases
Authors: Manaranjan Pradhan, Shailaja Grover, U. Dinesh Kumar
Abstract:
The farming community in India, as well as in other parts of the world, is one of the most highly stressed communities due to reasons such as increasing input costs (cost of seeds, fertilizers, pesticides), droughts, and reduced revenue, leading to farmer suicides. The lack of an integrated farm advisory system in India adds to the farmers' problems. Farmers need the right information during the early stages of a crop's lifecycle to prevent damage and loss of revenue. In this paper, we use deep learning techniques to develop an early warning system for the detection of crop diseases using images taken by farmers with their smartphones. The research work leads to building a smart assistant using analytics and big data which could help farmers with early diagnosis of crop diseases and corrective actions. The classical approach to crop disease management has been to identify diseases at the crop level. Recently, ImageNet classification using convolutional neural networks (CNN) has been successfully used to identify diseases at the individual plant level. Our model uses convolution filters, max pooling, dense layers and dropouts (to avoid overfitting). The models are built for binary classification (healthy or not healthy) and multi-class classification (identifying which disease). Transfer learning is used to adapt the weights of parameters learnt on the ImageNet dataset and apply them to crop diseases, which reduces the number of epochs needed for learning. One-shot learning is used to learn from very few images, while data augmentation techniques such as rotation, zoom, shift and blurring are used to improve accuracy with images taken from farms. Models built using a combination of these techniques are more robust for deployment in the real world. Our model is validated using the tomato crop. In India, tomato is affected by 10 different diseases. Our model achieves an accuracy of more than 95% in correctly classifying the diseases. The main contribution of our research is to create a personal assistant for farmers for managing plant disease; although the model was validated using the tomato crop, it can be easily extended to other crops. The advancement of computing technology and the availability of large datasets have made possible the success of deep learning applications in computer vision, natural language processing, image recognition, etc. With these robust models and huge smartphone penetration, the feasibility of implementing these models is high, resulting in timely advice to the farmers and thus increasing farmers' income and reducing input costs.
Keywords: analytics in agriculture, CNN, crop disease detection, data augmentation, image recognition, one shot learning, transfer learning
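A hedged sketch of the kind of transfer-learning-plus-augmentation pipeline described above is given below in Keras. The pretrained backbone (MobileNetV2), image size, directory layout and hyperparameters are assumptions for illustration, not the authors' architecture.

```python
# Transfer learning with data augmentation for leaf-disease classification.
# Backbone, class count, folder layout and hyperparameters are assumptions.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(weights="imagenet", include_top=False,
                                          input_shape=(224, 224, 3))
base.trainable = False                      # reuse ImageNet features (transfer learning)

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),           # dropout to limit overfitting
    tf.keras.layers.Dense(10, activation="softmax"),   # e.g. 10 tomato disease classes
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

# Augmentation: rotation, zoom, shifts and flips on farm images.
datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    rescale=1.0 / 255, rotation_range=25, zoom_range=0.2,
    width_shift_range=0.1, height_shift_range=0.1, horizontal_flip=True,
    validation_split=0.2)

# Assumed folder layout: leaf_images/<class_name>/<image>.jpg
train = datagen.flow_from_directory("leaf_images", target_size=(224, 224),
                                    batch_size=32, subset="training")
val = datagen.flow_from_directory("leaf_images", target_size=(224, 224),
                                  batch_size=32, subset="validation")
model.fit(train, validation_data=val, epochs=5)
```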
Procedia PDF Downloads 120
4977 Using Autoencoder as Feature Extractor for Malware Detection
Authors: Umm-E-Hani, Faiza Babar, Hanif Durad
Abstract:
Malware detection approaches suffer from many limitations, due to which all anti-malware solutions have failed to be reliable enough for detecting zero-day malware. Signature-based solutions depend upon signatures that can be generated only when malware surfaces at least once in the cyber world. Another approach, which works by detecting the anomalies caused in the environment, can easily be defeated by diligently and intelligently written malware. Solutions that have been trained to observe behavior for detecting malicious files have failed to cater to malware capable of detecting a sandboxed or protected environment. Machine learning and deep learning based approaches suffer greatly when their models are trained with either an imbalanced dataset or an inadequate number of samples. AI-based anti-malware solutions that have been trained with enough samples have targeted a selected feature vector, thus ignoring the contribution of the leftover features to the maliciousness of malware, just to cope with the lack of underlying hardware processing power. Our research focuses on producing an anti-malware solution for detecting malicious PE files by circumventing the earlier-mentioned shortcomings. Our proposed framework, which is based on automated feature engineering through autoencoders, trains the model over a fairly large dataset. It focuses on the visual patterns of malware samples to automatically extract the meaningful parts of the visual pattern. Our experiment has successfully produced a state-of-the-art accuracy of 99.54% over the test data.
Keywords: malware, auto encoders, automated feature engineering, classification
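The sketch below illustrates the core idea of using an autoencoder bottleneck as an automatic feature extractor for malware "images", with a small classifier trained on the encoded features. The input size, layer widths, dummy data and downstream classifier are assumptions, not the authors' framework.

```python
# Dense autoencoder as automatic feature extractor, then a small classifier on
# the bottleneck features. Input size, widths and dummy data are assumptions.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(7)
x = rng.random((1000, 4096)).astype("float32")   # stand-in for 64x64 PE byte images
y = rng.integers(0, 2, 1000)                     # 0 = benign, 1 = malicious (dummy labels)

inp = tf.keras.Input(shape=(4096,))
enc = tf.keras.layers.Dense(512, activation="relu")(inp)
enc = tf.keras.layers.Dense(64, activation="relu")(enc)        # bottleneck features
dec = tf.keras.layers.Dense(512, activation="relu")(enc)
dec = tf.keras.layers.Dense(4096, activation="sigmoid")(dec)

autoencoder = tf.keras.Model(inp, dec)
encoder = tf.keras.Model(inp, enc)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(x, x, epochs=3, batch_size=64, verbose=0)      # unsupervised reconstruction

features = encoder.predict(x, verbose=0)                       # automated feature engineering
clf = tf.keras.Sequential([tf.keras.layers.Dense(1, activation="sigmoid")])
clf.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
clf.fit(features, y, epochs=3, batch_size=64, verbose=0)
print(clf.evaluate(features, y, verbose=0))
```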
Procedia PDF Downloads 72