Search results for: technique.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3065

95 Influence of Thermo-fluid-dynamic Parameters on Fluidics in an Expanding Thermal Plasma Deposition Chamber

Authors: G. Zuppardi, F. Romano

Abstract:

Thin film deposition technology is of interest in many engineering fields, from electronic manufacturing to corrosion-protective coating. A typical deposition process, like that developed at the University of Eindhoven, deposits a thin, amorphous film of C:H or of Si:H on the substrate using the Expanding Thermal arc Plasma technique. In this paper a computing procedure is proposed to simulate the flow field in a deposition chamber similar to that at the University of Eindhoven, and a sensitivity analysis is carried out in terms of precursor mass flow rate, electrical power supplied to the torch, and the fluid-dynamic characteristics of the plasma jet, using different nozzles. For this purpose a deposition chamber similar in shape, dimensions and operating parameters to the above-mentioned chamber is considered. Furthermore, a method is proposed for a very preliminary evaluation of the film thickness distribution on the substrate. The computing procedure relies on two codes working in tandem; the output from the first code is the input to the second one. The first code simulates the flow field in the torch, where Argon is ionized according to the Saha equation, and in the nozzle. The second code simulates the flow field in the chamber; due to the high rarefaction level, this is a (commercial) Direct Simulation Monte Carlo code. The gas is a mixture of 21 chemical species, and 24 chemical reactions from Argon plasma and Acetylene are implemented in both codes. The effects of the above-mentioned operating parameters are evaluated and discussed through 2-D maps and profiles of important thermo-fluid-dynamic parameters such as Mach number, velocity and temperature. The intensity, position and extension of the shock wave are evaluated, and the influence of the test conditions on the film thickness and uniformity of distribution is also assessed.
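As a rough illustration of the torch-side ionization model, the sketch below solves the Saha equation for the ionization fraction of Argon; the statistical-weight ratio, temperature and number density are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch: degree of Argon ionization in the torch from the
# Saha equation, assuming LTE and singly ionized Ar only.
import numpy as np
from scipy.constants import m_e, k, h, e

def saha_ionization_fraction(T, n_total, E_ion_eV=15.76, g_ratio=6.0):
    """Solve x**2 / (1 - x) = S(T) / n_total for the ionization fraction x."""
    S = 2.0 * g_ratio * (2.0 * np.pi * m_e * k * T / h**2) ** 1.5 \
        * np.exp(-E_ion_eV * e / (k * T))
    r = S / n_total
    return (-r + np.sqrt(r * r + 4.0 * r)) / 2.0

print(saha_ionization_fraction(T=12000.0, n_total=1e22))  # partially ionized
```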

Keywords: Deposition chamber, Direct Simulation Monte Carlo method (DSMC), Plasma chemistry, Rarefied gas dynamics.

94 The Effect of Discontinued Water Spray Cooling on the Heat Transfer Coefficient

Authors: J. Hrabovský, M. Chabičovský, J. Horský

Abstract:

Water spray cooling is a technique typically used in heat treatment and other metallurgical processes where controlled temperature regimes are required. Water spray cooling is used in static (without movement) or dynamic (with movement of the steel plate) regimes. The static regime is characterized by a fixed position of both the hot steel plate and the spray nozzle; it is typical for quenching systems focused on heat treatment of steel plates. The dynamic regime, by contrast, combines a static cooling section with a moving steel plate and is used in rolling and finishing mills. The fixed position of the cooling sections with nozzles and the movement of the steel plate produce a nonhomogeneous water distribution on the plate. The length of the cooling sections and the placement of the water nozzles, in combination with this nonhomogeneity, lead to discontinued or interrupted cooling conditions. The impact of the static and dynamic regimes on cooling intensity and on the heat transfer coefficient during the cooling of steel plates is therefore an important issue. Heat treatment of steel is also accompanied by oxide scale growth, and the oxide scale layers can significantly modify the cooling properties and intensity. The combination of static and dynamic (section) regimes with a variable thickness of the oxide scale layer on the steel surface affects the final cooling intensity. The influence of the oxide scale layers under the different cooling regimes was studied using experimental measurements and numerical analysis. The experimental measurements compared both types of cooling regimes as well as the cooling of scale-free and oxidized surfaces. A numerical analysis was prepared to simulate the cooling process under different section conditions and for samples with different oxide scale layers.
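To make the measured quantity concrete, the following minimal sketch estimates a heat transfer coefficient from a cooling curve under a lumped-capacitance assumption; the real inverse heat conduction task solved from experimental data is considerably more involved, and all values below are synthetic.

```python
# Illustrative sketch (not the authors' code): estimating a surface heat
# transfer coefficient from a measured cooling curve, assuming a
# lumped-capacitance energy balance for a thin plate.
import numpy as np

def htc_from_cooling_curve(t, T_surface, T_water, rho, cp, thickness):
    """h(t) = -rho*cp*L * dT/dt / (T_surface - T_water) for plate thickness L."""
    dTdt = np.gradient(T_surface, t)
    return -rho * cp * thickness * dTdt / (T_surface - T_water)

t = np.linspace(0.0, 10.0, 101)                # s
T = 900.0 - 60.0 * t                           # synthetic surface temperature, degC
h = htc_from_cooling_curve(t, T, T_water=20.0, rho=7850.0, cp=650.0,
                           thickness=0.01)
print(h.mean())                                # W/(m2.K), order-of-magnitude check
```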

Keywords: Heat transfer coefficient, numerical analysis, oxide layer, spray cooling.

93 Resting-State Functional Connectivity Analysis Using an Independent Component Approach

Authors: Eric Jacob Bacon, Chaoyang Jin, Dianning He, Shuaishuai Hu, Lanbo Wang, Han Li, Shouliang Qi

Abstract:

Refractory epilepsy is a complicated type of epilepsy that can be difficult to diagnose. Recent technological advancements have made resting-state functional magnetic resonance imaging (rsfMRI) a vital technique for studying brain activity. However, there is still much to learn about rsfMRI, and investigating rsfMRI connectivity may aid in the detection of abnormal activities. In this paper, we propose studying the functional connectivity of rsfMRI candidates to diagnose epilepsy. A total of 45 rsfMRI candidates, comprising 26 patients with refractory epilepsy and 19 healthy controls, were enrolled in this study. A data-driven approach known as Independent Component Analysis (ICA) was used to achieve our goal. First, rsfMRI data from both patients and healthy controls were analyzed using group ICA. The obtained components were then spatially sorted to identify and select meaningful ones. A two-sample t-test was used to identify abnormal networks in patients relative to healthy controls. Finally, based on the fractional amplitude of low-frequency fluctuations (fALFF), a chi-square test was used to distinguish the network properties of the patient and healthy control groups. The two-sample t-test yielded abnormal clusters in the default mode network, including the left superior temporal lobe and the left supramarginal gyrus. The right precuneus was found to be abnormal in the dorsal attention network. In addition, the frontal cortex showed an abnormal cluster in the medial temporal gyrus, while the temporal cortex showed abnormal clusters in the right middle temporal gyrus and the right fronto-operculum gyrus. Finally, the chi-square test was significant, with a p-value of 0.001. This study offers evidence that investigating rsfMRI connectivity provides an excellent diagnostic option for refractory epilepsy.
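The sketch below illustrates the shape of this pipeline on synthetic data: ICA to extract components, then a two-sample t-test between groups; the actual study applied group ICA to rsfMRI volumes, so the arrays here are stand-ins.

```python
# Minimal sketch of the analysis pipeline on synthetic data: ICA to extract
# components, then a two-sample t-test between patient and control groups.
import numpy as np
from sklearn.decomposition import FastICA
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
X = rng.standard_normal((45, 1000))            # 45 subjects x 1000 "voxels"
ica = FastICA(n_components=10, random_state=0)
sources = ica.fit_transform(X)                 # per-subject component loadings

patients, controls = sources[:26], sources[26:]
t, p = ttest_ind(patients, controls, axis=0)   # test each component
print(np.where(p < 0.05)[0])                   # components differing between groups
```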

Keywords: Independent Component Analysis, Resting State Network, refractory epilepsy, rsfMRI.

92 A Novel Multiplex Real-Time PCR Assay Using TaqMan MGB Probes for Rapid Detection of Trisomy 21

Authors: Mehrdad Hashemi, Mitra Behrooz Aghdam, Reza Mahdian, Ahmad Reza Kamyab

Abstract:

Cytogenetic analysis still remains the gold standard method for prenatal diagnosis of trisomy 21 (Down syndrome, DS). Nevertheless, conventional cytogenetic analysis needs live cultured cells and is too time-consuming for clinical application. In contrast, molecular methods such as FISH, QF-PCR, MLPA and quantitative real-time PCR are rapid assays with results available within 24 h. In the present study, we have successfully used a novel MGB TaqMan probe-based real-time PCR assay for rapid diagnosis of trisomy 21 status in Down syndrome samples, and we have compared the results of this molecular method with corresponding results obtained by cytogenetic analysis. Blood samples obtained from DS patients (n=25) and normal controls (n=20) were tested by quantitative real-time PCR in parallel with standard G-banding analysis. Genomic DNA was extracted from peripheral blood lymphocytes. A high-precision TaqMan probe quantitative real-time PCR assay was developed to determine the gene dosage of DSCAM (target gene on 21q22.2) relative to PMP22 (reference gene on 17p11.2). The DSCAM/PMP22 ratio was calculated according to the formula: ratio = 2^(−ΔΔCt). The quantitative real-time PCR was able to distinguish between trisomy 21 samples and normal controls, with gene ratios of 1.49±0.13 and 1.03±0.04, respectively (p value < 0.001). These results reflect the presence of three copies of the target gene in DS samples vs. two copies in normal controls. The results of quantitative real-time PCR were in complete agreement with the results of cytogenetic analysis. This study confirms previous reports regarding the successful implementation of quantitative real-time PCR for detection of trisomy 21; the assay has been improved by using MGB probes and more accurate data analysis. This assay, in particular when performed in combination with another molecular assay such as QF-PCR or MLPA, can be used as a reliable technique for rapid prenatal diagnosis of trisomy 21.
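The dosage calculation follows the standard 2^(−ΔΔCt) form; the sketch below shows it with illustrative Ct values (not the study's data), where trisomic samples should cluster near 1.5 and disomic samples near 1.0.

```python
# Sketch of the 2^(-ΔΔCt) gene-dosage calculation; Ct values below are
# illustrative, not the study's measurements.
def ddct_ratio(ct_target_sample, ct_ref_sample, ct_target_calib, ct_ref_calib):
    """DSCAM/PMP22 dosage ratio relative to a normal calibrator."""
    d_ct_sample = ct_target_sample - ct_ref_sample   # ΔCt of the test sample
    d_ct_calib = ct_target_calib - ct_ref_calib      # ΔCt of the calibrator
    return 2.0 ** -(d_ct_sample - d_ct_calib)        # ratio = 2^(-ΔΔCt)

# A trisomic sample should give a ratio near 1.5, a disomic one near 1.0:
print(ddct_ratio(24.1, 25.0, 24.7, 25.0))            # ~1.52
```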

Keywords: Trisomy 21, Real-time PCR, MGB-TaqMan Probes, Gene Dosage.

91 Influence of Online Sports Events on Betting among Nigerian Youth

Authors: B. O. Diyaolu

Abstract:

The opportunities provided by advances in technology with regard to sports betting are numerous. Nigerian youth are not left out, especially with the use of phones and visits to sports betting outlets. Today it is more difficult to identify a true fan, as quite a number of them became fans as a result of betting on live games. This study investigated the influence of online sports events on betting among Nigerian youth. A descriptive survey research design was used, and the population consists of all Nigerian youth who engage in betting and live within the southwest zone of Nigeria. A simple random sampling technique was used to pick three states from the southwest zone, and 2,500 respondents comprising males and females were sampled from the three states. A structured questionnaire on Online Sports Event Contribution to Sports Betting (OSECSB) was used. The instrument consists of three sections: Section A seeks information on the demographic data of the respondents, Section B seeks information on online sports events, and Section C was used to extract information on sports betting. The modified instrument, which consists of 14 items, has a reliability coefficient of 0.74. The hypothesis was tested at the 0.05 significance level. The completed questionnaires were collated, coded, and analyzed using descriptive statistics of frequency counts, percentages and pie charts, and inferential statistics of multiple regression. The findings of this study revealed that online sports events are a significant predictor of an increase in sports betting among Nigerian youth. The media and television, as well as globalization and the internet, coupled with social media and various online platforms, have all contributed to the immense increase in sports betting. The increase in advertisement of betting platforms during live matches, especially football, is becoming more alarming. In most organized international events, media attention as well as sponsorship rights are now being given to one or two betting platforms. There is a need for all stakeholders to put in place school-based intervention programs to reorientate our youth about the consequences of addiction to betting. Such programs must include meta-analyses and emotional control towards sports betting.

Keywords: Betting platform, Nigerian fans, Nigerian youth, sports betting.

90 Identifying a Drug Addict Person Using Artificial Neural Networks

Authors: Mustafa Al Sukar, Azzam Sleit, Abdullatif Abu-Dalhoum, Bassam Al-Kasasbeh

Abstract:

Use and abuse of drugs by teens is very common and can have dangerous consequences. Drugs contribute to physical and sexual aggression such as assault or rape. Some teenagers regularly use drugs to compensate for depression, anxiety or a lack of positive social skills. Teen smoking should not be minimized, because cigarettes can act as a "gateway drug" to other drugs (marijuana, cocaine, hallucinogens, inhalants, and heroin). The combination of teenagers' curiosity, risk-taking behavior, and social pressure makes it very difficult to say no, leading most teenagers to the question: "Will it hurt to try once?" Nowadays, technological advances are changing our lives very rapidly and providing many technologies that help track the risk of drug abuse, such as smartphones, Wireless Sensor Networks (WSNs), and the Internet of Things (IoT). These technologies may enable early discovery of drug abuse and thereby prevent an aggravation of the influence of drugs on the abuser. In this paper, we have developed a Decision Support System (DSS) for detecting drug abuse using an Artificial Neural Network (ANN); we used a Multilayer Perceptron (MLP) feed-forward neural network in developing the system. The input layer includes 50 variables, while the output layer contains one neuron which indicates whether the person is a drug addict. An iterative process is used to determine the number of hidden layers and the number of neurons in each one. We used multiple experimental models with the log-sigmoid transfer function. In particular, 10-fold cross-validation was used to assess the generalization of the proposed system. The experimental results show 98.42% classification accuracy for correct diagnosis in our system. The data were taken from 184 cases in Jordan, according to a set of questions compiled by specialists, and were obtained through the families of drug abusers.
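A minimal sketch of the described network, an MLP with log-sigmoid (logistic) activation evaluated with 10-fold cross-validation, is shown below; since the Jordanian questionnaire data are not public, the 184 x 50 input matrix is synthetic.

```python
# Sketch of the described MLP with logistic (log-sigmoid) activation and
# 10-fold cross-validation, using synthetic stand-in data (50 inputs,
# binary "addict / not addict" output).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((184, 50))             # 184 cases, 50 input variables
y = (X[:, 0] + X[:, 1] > 0).astype(int)        # synthetic label

clf = MLPClassifier(hidden_layer_sizes=(20,), activation="logistic",
                    max_iter=2000, random_state=0)
scores = cross_val_score(clf, X, y, cv=10)     # 10-fold cross-validation
print(scores.mean())
```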

Keywords: Artificial Neural Network, Decision Support System, drug abuse, drug addiction, Multilayer Perceptron.

89 Information Tree - Establishment of Lifestyle-Based IT Visual Model

Authors: Chiung-Hui Chen

Abstract:

Traditional service channels are losing their edge due to emerging service technology. To establish interaction with clients, the service industry is using effective mechanisms that give clients direct access to services through emerging technologies. Thus, as service science receives attention, special and unique consumption patterns evolve, leading to new market mechanisms and influencing attitudes toward life and consumption. The market demand for customized services is valued because of the emphasis on personal value, and it is gradually changing the demand-supply relationship in traditional industry. In the traditional interior design process, a designer converts the concept generated from the ideas and needs dictated by a user (client) into a concrete form, using his or her professional knowledge and drawing tools. The final product emerges through iterations of communication and modification, which is a very time-consuming process. Although this process has been accelerated with the help of computer graphics software, repeated discussions and confirmations with users are still required to complete the task. In view of the above, a space user's life model is analyzed with visualization techniques to create an interaction system modeled on interior design knowledge. The space user intuitively documents personal life experience in a model requirement chart, allowing a researcher to analyze the interrelation between analysis documents and to identify the logic and substance of the data conversion. The repeatedly documented data are then transformed into design information for reuse and sharing. A professional interior designer may thus sort out the correlation among a user's preferences, life pattern and design specification, and decide the critical design elements in the process of service design.

Keywords: Information Design, Life Model-Based, Aesthetic Computing, Communication.

88 Rapid Monitoring of Earthquake Damages Using Optical and SAR Data

Authors: Saeid Gharechelou, Ryutaro Tateishi

Abstract:

Earthquakes are inevitable catastrophic natural disasters. Damage to buildings and man-made structures, where most human activities occur, is the major cause of earthquake casualties. A comparison of optical and SAR data is presented for the Kathmandu valley, which was severely shaken by the 2015 Nepal earthquake. Many existing studies have used optical data for damage estimation or suggested the combined use of optical and SAR data for improved accuracy; however, cloud-free optical images are not assured when they are urgently needed. Therefore, this research concentrates on developing a SAR-based technique targeting rapid and accurate geospatial reporting. Considering the limited time available in a post-disaster situation, the approach offers quick computation based exclusively on two pairs of pre-seismic and co-seismic single look complex (SLC) images. The InSAR coherence of pre-seismic, co-seismic and post-seismic pairs was used to detect changes in the damaged area. In addition, ground truth data from the field were applied to the optical data through random forest classification for detection of the damaged area, and these ground truth data were also used to assess the accuracy of the supervised classification approach. A higher accuracy was obtained from the optical data than from the integrated optical-SAR data. Since cloud-free images are not assured when urgently needed after an earthquake event, further research on improving the SAR-based damage detection is suggested. It is expected that quick reporting of the post-disaster damage situation, quantified by rapid earthquake assessment, will assist in channeling rescue and emergency operations and in informing the public about the scale of damage.
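The coherence-change idea can be sketched as follows: damage is flagged where co-seismic coherence drops sharply relative to the pre-seismic pair; real coherence maps come from an InSAR processor applied to SLC pairs, so the arrays and the threshold below are illustrative only.

```python
# Illustrative coherence-change sketch: damage is flagged where co-seismic
# coherence drops sharply relative to the pre-seismic pair. The arrays are
# toys standing in for coherence maps estimated from SLC image pairs.
import numpy as np

def damage_map(coh_pre, coh_co, drop_threshold=0.4):
    """Binary damage proxy from the pre- to co-seismic coherence drop."""
    return (coh_pre - coh_co) > drop_threshold

coh_pre = np.random.default_rng(0).uniform(0.5, 1.0, (100, 100))
coh_co = coh_pre * np.random.default_rng(1).uniform(0.2, 1.0, (100, 100))
print(damage_map(coh_pre, coh_co).mean())      # fraction of pixels flagged
```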

Keywords: Sentinel-1A data, Landsat-8, earthquake damage, InSAR, rapid monitoring, 2015-Nepal earthquake.

87 Multidimensional Performance Tracking

Authors: C. Ardil

Abstract:

In this study, a model, together with a software tool that implements it, has been developed to determine the performance ratings of employees in an organization operating in the information technology sector, using indicators obtained from the employees' online study data. The Weighted Sum (WS) method and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), both based on a multidimensional decision making (MDDM) approach, were used in the study. WS and TOPSIS allow all dimensions to be evaluated together with specific weights, enabling an objective treatment of the online performance tracking problem. The application of the WS and TOPSIS mathematical methods, which can combine alternatives with a large number of dimensions and reach a simultaneous solution, has been implemented through online performance tracking software. In applying WS and TOPSIS, objective dimension weights were calculated using the entropy information (EI) and standard deviation (SD) methods from the data obtained by the online performance tracking method, a decision matrix was formed using performance scores for each employee, and a single performance score was calculated for each employee. Based on the calculated performance score, a performance evaluation decision was given for each employee. Pareto-set evidence and comparative mathematical analysis validate that the employees' performance preference rankings under the WS and TOPSIS methods are closely related. This suggests the compatibility, applicability, and validity of the proposed method for MDDM problems in which a large number of alternatives and dimension types are taken into account. With this study, an objective, realistic, feasible and understandable mathematical method, together with a software tool that implements it, has been demonstrated. This is considered preferable given the subjectivity, limitations and high cost of the methods traditionally used for measurement and performance appraisal in the information technology sector.
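A compact sketch of the described pipeline, entropy-based objective weights followed by TOPSIS ranking, is given below; the decision matrix is a toy stand-in for the employees' online performance indicators, and all criteria are assumed to be benefit criteria.

```python
# Sketch: entropy information weights, then TOPSIS ranking. The decision
# matrix (employees x indicators) is illustrative, not the study's data.
import numpy as np

X = np.array([[70., 80., 60.],
              [85., 75., 90.],
              [60., 95., 70.]])

P = X / X.sum(axis=0)                               # column-normalized shares
E = -(P * np.log(P)).sum(axis=0) / np.log(len(X))   # entropy per indicator
w = (1 - E) / (1 - E).sum()                         # entropy information weights

R = X / np.sqrt((X ** 2).sum(axis=0))               # vector-normalized matrix
V = R * w                                           # weighted normalized matrix
d_best = np.sqrt(((V - V.max(axis=0)) ** 2).sum(axis=1))
d_worst = np.sqrt(((V - V.min(axis=0)) ** 2).sum(axis=1))
score = d_worst / (d_best + d_worst)                # TOPSIS closeness coefficient
print(score.argsort()[::-1])                        # employee ranking
```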

Keywords: Weighted sum, entropy information, standard deviation, online performance tracking, performance evaluation, performance management, multidimensional decision making.

86 Relation of Optimal Pilot Offsets in the Shifted Constellation-Based Method for the Detection of Pilot Contamination Attacks

Authors: Dimitriya A. Mihaylova, Zlatka V. Valkova-Jarvis, Georgi L. Iliev

Abstract:

One possible approach to maintaining the security of communication systems relies on Physical Layer Security mechanisms. However, in wireless time division duplex systems, where uplink and downlink channels are reciprocal, the channel estimation procedure is exposed to attacks known as pilot contamination, whose aim is to have an enhanced data signal sent to the malicious user. The Shifted 2-N-PSK method involves two random legitimate pilots in the training phase, each belonging to a constellation shifted from the original N-PSK symbols by certain degrees. In this paper, the legitimate pilots' offset values and their influence on the detection capabilities of the Shifted 2-N-PSK method are investigated. As the implementation of the technique depends on the relation between the shift angles rather than on their specific values, the optimal interconnection between the two legitimate constellations is investigated. The results show that no regularity exists in the relation between the detection probability of pilot contamination attacks (PCA) and the choice of offset values. Therefore, an adversary who aims to obtain the exact offset values can only employ a brute-force attack, but the large number of possible combinations for the shifted constellations makes such an attack difficult to mount successfully. For this reason, the number of optimal shift value pairs is also studied for both 100% and 98% probabilities of detecting pilot contamination attacks. Although the Shifted 2-N-PSK method has been broadly studied in different signal-to-noise ratio scenarios, in multi-cell systems the interference from signals in other cells should also be taken into account. Therefore, the impact of inter-cell interference on the performance of the method is investigated by means of a large number of simulations. The results show that the detection probability of the Shifted 2-N-PSK decreases as the signal-to-interference-plus-noise ratio decreases.
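The shifted-constellation idea can be illustrated as below: two legitimate pilots drawn from N-PSK constellations rotated by offsets theta1 and theta2; the offset values used here are arbitrary, whereas the paper studies their optimal relation.

```python
# Toy illustration of shifted pilot constellations: two legitimate pilots
# drawn from N-PSK constellations rotated by different offsets. The offsets
# below are arbitrary examples, not the paper's optimal values.
import numpy as np

def shifted_npsk(N, theta_deg):
    """N-PSK constellation rotated by theta degrees."""
    k = np.arange(N)
    return np.exp(1j * (2 * np.pi * k / N + np.deg2rad(theta_deg)))

c1 = shifted_npsk(8, 15.0)                     # first legitimate constellation
c2 = shifted_npsk(8, 40.0)                     # second, shifted differently
pilot1 = np.random.default_rng(0).choice(c1)   # two random legitimate pilots
pilot2 = np.random.default_rng(1).choice(c2)
print(np.angle(pilot1, deg=True), np.angle(pilot2, deg=True))
```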

Keywords: Channel estimation, inter-cell interference, pilot contamination attacks, wireless communications.

85 Efficacy of Gamma Radiation on the Productivity of Bactrocera oleae Gmelin (Diptera: Tephritidae)

Authors: Mehrdad Ahmadi, Mohamad Babaie, Shiva Osouli, Bahareh Salehi, Nadia Kalantaraian

Abstract:

The olive fruit fly, Bactrocera oleae Gmelin (Diptera: Tephritidae), is one of the most serious pests in olive orchards in the olive-growing provinces of Iran. The females lay eggs in green olive fruit, and the larvae hatch inside the fruit, where they feed upon the fruit matter. One of the main ecologically friendly and species-specific systems of pest control is the sterile insect technique (SIT), which is based on the release of large numbers of sterilized insects. The objective of our work was to develop SIT against B. oleae by using gamma radiation for laboratory and field trials in Iran. Oviposition of females mated by irradiated males is one of the main parameters determining the success of SIT. To determine the sterilizing dose, pupae were exposed to 0 to 160 Gy of gamma radiation. The key factor in SIT is the productivity of females mated by irradiated males. The adults emerging from irradiated pupae were mated with untreated adults of the same age by confining them inside transparent cages. The fecundity of irradiated males mated with non-irradiated females decreased with increasing radiation dose. The number of eggs and the percentage of egg hatching were significantly (P < 0.05) affected in irradiated male x non-irradiated female (IM x NF) crosses compared with NM x NF crosses in the F1 generation at all doses. The statistical analysis also showed a significant difference (P < 0.05) in the mean number of eggs laid between irradiated and non-irradiated females crossed with irradiated males, which suggests that the males were susceptible to gamma radiation. The egg hatching percentage declined markedly with increasing radiation dose of the treated males in the mating trials, demonstrating that the egg hatch rate was dose-dependent. Our results indicate that gamma radiation affects the longevity of irradiated B. oleae larvae (established from irradiated pupae) and significantly increased their larval duration. The results show that gamma radiation and SIT can be used successfully against the olive fruit fly.

Keywords: Fertility, olive fruit fly, radiation, SIT.

84 Conservation Techniques for Soil Erosion Control in Tobacco-Based Farming System at Steep Land Areas of Progo Hulu Subwatershed, Central Java, Indonesia

Authors: Jaka Suyana, Komariah, Masateru Senge

Abstract:

This research was aimed at determining the impact of conservation techniques, including bench terraces, stone terraces, mulching, grass strips and intercropping, on soil erosion in a tobacco-based farming system at Progo Hulu subwatershed, Central Java, Indonesia. Research was conducted from September 2007 to September 2009 at Progo Hulu subwatershed, Central Java, Indonesia. The research site was divided into 27 land units, and the experimental fields were grouped based on soil type and slope, i.e. 30%, 45% and 70%, with the following treatments: 1) ST0 = stone terrace (control); 2) ST1 = stone terrace + Setaria spacelata grass strip on a 5 cm high dike at the terrace lips + tobacco stem mulch at a dose of 50% (7 ton/ha); 3) ST2 = stone terrace + Setaria spacelata grass strip on a 5 cm high dike at the terrace lips + tobacco stem mulch at a dose of 100% (14 ton/ha); 4) ST3 = stone terrace + tobacco and red bean intercropping + tobacco stem mulch at a dose of 50% (7 ton/ha); 5) BT0 = bench terrace (control); 6) BT1 = bench terrace + Setaria spacelata grass strip at the terrace lips + tobacco stem mulch at a dose of 50% (7 ton/ha); 7) BT2 = bench terrace + Setaria spacelata grass strip at the terrace lips + tobacco stem mulch at a dose of 100% (14 ton/ha); 8) BT3 = bench terrace + tobacco and red bean intercropping + tobacco stem mulch at a dose of 50% (7 ton/ha). The results showed that the actual erosion rates of the research site were higher than the tolerable erosion, with mean values of 89.08 ton/ha/year and 33.40 ton/ha/year, respectively. As a result, 69% of the total research site (5,119.15 ha) is highly degraded. Conservation technique ST2 was the most effective in suppressing soil erosion, by 42.87%, followed by BT2 at 30.63%; the others suppressed erosion by less than 21%.

Keywords: Steep land, subwatershed, conservation terrace, tolerance erosion.

83 Effect of Urea Deep Placement Technology Adoption on the Production Frontier: Evidence from Irrigation Rice Farmers in the Northern Region of Ghana

Authors: Shaibu Baanni Azumah, William Adzawla

Abstract:

Rice is an important staple crop, with current demand higher than domestic supply in Ghana. This has led to a high and unfavourable import bill. Therefore, recent policies and interventions in the agricultural sub-sector aim at promoting various improved agricultural technologies in order to improve domestic production and reduce rice importation. In this study, we examined the effect of the adoption of Urea Deep Placement (UDP) technology by rice farmers on the position of the production frontier. The study involved 200 farmers selected through a multi-stage sampling technique in the Northern region of Ghana. A Cobb-Douglas stochastic frontier model was fitted. The results showed that the adoption of UDP technology shifts the output frontier outward and also moves farmers closer to the frontier. Farmers were also operating under diminishing returns to scale, which calls for redress. Other factors that significantly influenced rice production were farm size, labour, use of certified seeds, and NPK fertilizer. Although there was room for improvement, the farmers were highly efficient (92%) compared to previous studies. Farmers' efficiency was improved through increased education, household size, experience, access to credit, and lack of extension service provision by MoFA. The study recommends the revision of Ghana's agricultural policy to include the UDP technology. Agricultural extension officers of the Ministry of Food and Agriculture (MoFA) should be trained on the UDP technology to support IFDC's drive to improve adoption by rice farmers. Rice farmers are also encouraged to expand their farmlands, improve plant population, and increase fertilizer usage to improve yields. Mechanisms through which credit can be made easily accessible and effectively utilised should be identified and promoted.
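As a hedged illustration of the model class, the sketch below fits a log-log (Cobb-Douglas) production function by OLS with a UDP-adoption dummy; a true stochastic frontier requires a composed error term with an inefficiency component (e.g., half-normal), and all variables here are synthetic.

```python
# Hedged sketch: Cobb-Douglas production function fitted by OLS with a
# UDP-adoption dummy. This approximates, but is not, a stochastic frontier.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
farm_size = rng.uniform(1, 5, n)
labour = rng.uniform(10, 50, n)
fert = rng.uniform(20, 100, n)
udp = rng.integers(0, 2, n)                    # UDP adoption dummy
ln_y = (0.4 * np.log(farm_size) + 0.3 * np.log(labour)
        + 0.2 * np.log(fert) + 0.15 * udp + rng.normal(0, 0.1, n))

X = sm.add_constant(np.column_stack([np.log(farm_size), np.log(labour),
                                     np.log(fert), udp]))
print(sm.OLS(ln_y, X).fit().params)            # elasticities sum < 1: DRS
```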

Keywords: Efficiency, rice farmers, stochastic frontier, UDP technology.

82 A Simple Chemical Precipitation Method of Titanium Dioxide Nanoparticles Using Polyvinyl Pyrrolidone as a Capping Agent and Their Characterization

Authors: V. P. Muhamed Shajudheen, K. Viswanathan, K. Anitha Rani, A. Uma Maheswari, S. Saravana Kumar

Abstract:

In this paper, a simple chemical precipitation route for the preparation of titanium dioxide nanoparticles, synthesized using titanium tetra-isopropoxide as a precursor and polyvinyl pyrrolidone (PVP) as a capping agent, is reported. Differential Scanning Calorimetry (DSC) and Thermogravimetric Analysis (TGA) of the samples were recorded, and the phase transformation temperature from titanium hydroxide, Ti(OH)4, to titanium dioxide, TiO2, was investigated. The as-prepared Ti(OH)4 precipitate was annealed at 800°C to obtain TiO2 nanoparticles. The thermal, structural, morphological and textural characterization of the TiO2 nanoparticle samples was carried out by different techniques such as DSC-TGA, X-Ray Diffraction (XRD), Fourier Transform Infrared spectroscopy (FTIR), micro-Raman spectroscopy, UV-Visible absorption spectroscopy (UV-Vis), photoluminescence spectroscopy (PL) and Field Emission Scanning Electron Microscopy (FESEM). DSC-TGA of the as-prepared precipitate confirmed a mass loss of around 30%. XRD results exhibited no diffraction peaks attributable to the anatase phase in the reaction products after solvent removal, indicating that the product is purely rutile. The vibrational frequencies of the two main absorption bands of the prepared samples are discussed based on the results of the FTIR analysis. The formation of nanospheres with diameters of the order of 10 nm was confirmed by FESEM. The optical band gap was found using the UV-Visible spectrum, and a strong emission was observed in the photoluminescence spectra. The obtained results suggest that this method provides a simple, efficient and versatile technique for preparing TiO2 nanoparticles, with the potential to be applied to other systems for photocatalytic activity.
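One common way to extract the optical band gap from UV-Vis data, which may or may not match the authors' exact procedure, is a Tauc plot; the sketch below assumes a direct allowed transition and synthetic absorbance data.

```python
# Illustrative Tauc-plot band-gap estimate from UV-Vis data, assuming a
# direct allowed transition ((alpha*h*nu)^2 vs. h*nu); data are synthetic.
import numpy as np

h_nu = np.linspace(2.5, 4.0, 200)              # photon energy, eV
Eg_true = 3.0                                  # synthetic rutile-like gap
alpha = np.where(h_nu > Eg_true, np.sqrt(h_nu - Eg_true) / h_nu, 0.0)

y = (alpha * h_nu) ** 2                        # Tauc variable
mask = (y > 0.1 * y.max()) & (y < 0.6 * y.max())   # approximately linear region
slope, intercept = np.polyfit(h_nu[mask], y[mask], 1)
print(-intercept / slope)                      # extrapolated Eg, ~3.0 eV
```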

Keywords: TiO2 nanoparticles, chemical precipitation route, phase transition, Fourier Transform Infrared spectroscopy, micro-Raman spectroscopy, UV-Visible absorption spectroscopy, photoluminescence spectroscopy, Field Emission Scanning Electron Microscopy.

81 High-Frequency Monitoring Results of a Piled Raft Foundation under Wind Loading

Authors: Laurent Pitteloud, Jörg Meier

Abstract:

Piled raft foundations represent an efficient and reliable technique for transferring high vertical and horizontal loads to the subsoil. Piled raft foundations have been successfully implemented for several high-rise buildings worldwide over the last decades. For the structural design of this foundation type, the stiffnesses of both the piles and the raft have to be determined for the static (e.g. dead load, live load) and the dynamic load cases (e.g. earthquake). In this context the question often arises as to what proportion of wind loads is to be considered dynamic. Usually a piled raft foundation has to be monitored in order to verify the design hypotheses. As an additional benefit, the analysis of this monitoring data may lead to a better understanding of the behaviour of this foundation type for future projects in similar subsoil conditions. If the measurement frequency is high enough, one may also draw conclusions on the effect of wind loading on the piled raft foundation. For a 41-storey office building in Basel, Switzerland, the preliminary design showed that a piled raft foundation was the best solution to satisfy both the design requirements and economic aspects. High-frequency monitoring of the foundation, including pile loads, vertical stresses under the raft, and pore water pressures, was performed over 5 years. In windy situations, the analysis of the measurements shows that the pile load increment due to wind consists of a static and a cyclic load term. As piles and raft react with different stiffnesses under static and dynamic loading, these measurements are useful for the correct definition of the stiffnesses of future piled raft foundations. This paper outlines the design strategy and the numerical modelling of the aforementioned piled raft foundation. The measurement results are presented and analysed. Based on the findings, comments and conclusions on the definition of pile and raft stiffnesses for vertical and wind loading are proposed.
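The separation of the wind-induced pile load into static and cyclic terms can be sketched with a simple low-pass filter, as below; the load record is synthetic, not the Basel monitoring data.

```python
# Sketch: separating a measured pile load into a static (slowly varying)
# term and a cyclic (wind-induced) term with a moving-average low-pass
# filter. The signal below is a synthetic stand-in for the monitoring data.
import numpy as np

t = np.linspace(0, 600, 6001)                  # 10 min sampled at 10 Hz
load = 5000 + 0.5 * t + 80 * np.sin(2 * np.pi * 0.2 * t)   # kN, toy record

win = 101                                      # ~10 s moving average
static = np.convolve(load, np.ones(win) / win, mode="same")
cyclic = load - static                         # wind-induced cyclic term
print(static[3000], cyclic.std())
```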

Keywords: Dynamic loading, high-frequency monitoring, piled raft foundations, wind loading.

80 Contextual SenSe Model: Word Sense Disambiguation Using Sense and Sense Value of Context Surrounding the Target

Authors: Vishal Raj, Noorhan Abbas

Abstract:

Ambiguity in NLP (Natural Language Processing) refers to the ability of a word, phrase, sentence, or text to have multiple meanings, giving rise to various kinds of ambiguity: lexical, syntactic, semantic, anaphoric and referential. This study focuses mainly on solving the issue of lexical ambiguity. Word Sense Disambiguation (WSD) is an NLP technique that aims to resolve lexical ambiguity by determining the correct meaning of a word within a given context. Most WSD solutions rely on raw words for training and testing, but we have used lemma and Part of Speech (POS) tokens of words: the lemma adds generality, and the POS adds word properties to the token. We have designed a method to create an affinity matrix that captures the affinity between any pair of lemma_POS tokens (a token in which the lemma and POS of a word are joined by an underscore) in a given training set. Additionally, we have devised an algorithm to create sense clusters of tokens using the affinity matrix under the hierarchy of the lemma's POS. Furthermore, three different mechanisms to predict the sense of a target word using the affinity/similarity values are devised. Each contextual token contributes some value to each sense of the target word, and the sense receiving the highest value becomes the sense of the target word. Contextual tokens thus play a key role in creating the sense clusters and in predicting the sense of the target word; hence the model is named the Contextual SenSe Model (CSM). CSM exhibits noteworthy simplicity and clarity of explanation, in contrast to contemporary deep learning models characterized by intricacy, time-intensive processes, and challenging explication. CSM is trained on SemCor training data and evaluated on the SemEval test dataset. The results indicate that, despite the simplicity of the method, it achieves promising results when compared to the Most Frequent Sense (MFS) baseline.
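A minimal sketch of the affinity idea follows: co-occurrence counts between lemma_POS tokens, then sense prediction by summing the affinity each context token contributes to hypothetical sense clusters; the corpus, clusters and scoring are toy stand-ins for CSM's actual construction.

```python
# Toy sketch of the affinity idea: co-occurrence counts between lemma_POS
# tokens, then sense prediction by summing context affinities. Corpus and
# sense clusters are invented for illustration.
from collections import defaultdict
from itertools import combinations

corpus = [["bank_NOUN", "river_NOUN", "water_NOUN"],
          ["bank_NOUN", "money_NOUN", "loan_NOUN"]]

affinity = defaultdict(float)
for sent in corpus:                            # window = whole sentence here
    for a, b in combinations(sent, 2):
        affinity[(a, b)] += 1.0
        affinity[(b, a)] += 1.0

sense_clusters = {"bank#water": ["river_NOUN", "water_NOUN"],
                  "bank#finance": ["money_NOUN", "loan_NOUN"]}

def predict_sense(target, context):
    """Each context token votes for the sense cluster it is affine to."""
    scores = {s: sum(affinity[(tok, m)] for tok in context for m in members)
              for s, members in sense_clusters.items()}
    return max(scores, key=scores.get)

print(predict_sense("bank_NOUN", ["river_NOUN", "boat_NOUN"]))  # bank#water
```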

Keywords: Word Sense Disambiguation, WSD, Contextual SenSe Model, Most Frequent Sense, part of speech, POS, Natural Language Processing, NLP, OOV, out of vocabulary, ELMo, Embeddings from Language Model, BERT, Bidirectional Encoder Representations from Transformers, Word2Vec, lemma_POS, Algorithm.

79 Dynamic Web-Based 2D Medical Image Visualization and Processing Software

Authors: Abdelhalim N. Mohammed, Mohammed Y. Esmail

Abstract:

In the course of recent decades, medical imaging has been dominated by the use of costly film media for the review and archival of medical investigations. However, due to developments in network technologies and the common acceptance of the Digital Imaging and Communications in Medicine (DICOM) standard, another approach based on the World Wide Web was produced. Web technologies have been used successfully in telemedicine applications, and here the combination of web technologies with DICOM is used to design a web-based, open source DICOM viewer. The web server allows query and retrieval of images, and the images are viewed and manipulated inside a web browser without the need to preinstall any software. The dynamic site pages for medical image visualization and processing were created using JavaScript and HTML5. The XAMPP Apache server is used to create a local web server for testing and deployment of the dynamic site. The web-based viewer is connected to multiple devices through a local area network (LAN) to distribute the images inside healthcare facilities. The system offers several advantages over ordinary picture archiving and communication systems (PACS): it is easy to install and maintain, platform-independent, allows images to be displayed and manipulated efficiently, and is user-friendly and easy to integrate with existing systems that already make use of web technologies. A wavelet-based image compression technique is employed, in which a 2-D discrete wavelet transform decomposes the image and the wavelet coefficients, after thresholding, are transmitted with entropy encoding to decrease transmission time and storage cost. The performance of the compression was estimated using image quality metrics such as mean square error (MSE), peak signal-to-noise ratio (PSNR) and compression ratio (CR); a compression ratio of 83.86% was achieved when the 'coif3' wavelet filter was used.
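The compression step can be sketched with PyWavelets: a 2-D DWT with the 'coif3' filter, hard thresholding of the coefficients, reconstruction, and a PSNR check; the image, threshold rule and level count below are illustrative assumptions.

```python
# Sketch of the described compression step with PyWavelets: 2-D DWT using
# 'coif3', hard-threshold the coefficients, reconstruct, report PSNR.
# The image is random stand-in data; threshold and level are assumptions.
import numpy as np
import pywt

img = np.random.default_rng(0).uniform(0, 255, (256, 256))

coeffs = pywt.wavedec2(img, "coif3", level=3)
arr, slices = pywt.coeffs_to_array(coeffs)
thr = 0.5 * np.abs(arr).std()
arr[np.abs(arr) < thr] = 0.0                   # hard thresholding
kept = np.count_nonzero(arr) / arr.size        # proxy for compression ratio

rec = pywt.waverec2(pywt.array_to_coeffs(arr, slices, output_format="wavedec2"),
                    "coif3")[: img.shape[0], : img.shape[1]]
mse = np.mean((img - rec) ** 2)
psnr = 10 * np.log10(255.0 ** 2 / mse)
print(kept, psnr)
```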

Keywords: DICOM, discrete wavelet transform, PACS, HIS, LAN.

78 Enhanced Disk-Based Databases Towards Improved Hybrid In-Memory Systems

Authors: Samuel Kaspi, Sitalakshmi Venkatraman

Abstract:

In-memory database systems are becoming popular due to the availability and affordability of sufficiently large RAM and processors in modern high-end servers, with the capacity to manage large in-memory database transactions. While fast and reliable in-memory systems are still being developed to overcome cache misses, CPU/IO bottlenecks and distributed transaction costs, disk-based data stores still serve as the primary persistence. In addition, with the recent growth in multi-tenancy cloud applications and associated security concerns, many organisations consider the trade-offs and continue to require fast and reliable transaction processing of disk-based database systems as an available choice. For these organisations, the only way of increasing throughput is by improving the performance of disk-based concurrency control. This warrants a hybrid database system with the ability to selectively apply enhanced disk-based data management within the context of in-memory systems, which would help improve overall throughput. The general view is that in-memory systems substantially outperform disk-based systems. We question this assumption and examine how a modified variation of access invariance, which we call enhanced memory access (EMA), can be used to allow very high levels of concurrency in the pre-fetching of data in disk-based systems. We demonstrate how this prefetching in disk-based systems can yield close to in-memory performance, which paves the way for improved hybrid database systems. This paper proposes the novel EMA technique and presents a comparative study between disk-based EMA systems and in-memory systems running on hardware configurations of equivalent power in terms of the number of processors and their speeds. The results of the experiments conducted clearly substantiate that, when used in conjunction with all concurrency control mechanisms, EMA can increase the throughput of disk-based systems to levels quite close to those achieved by in-memory systems. These promising results show that enhanced disk-based systems can facilitate improved hybrid data management within the broader context of in-memory systems.
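As a conceptual sketch only (EMA itself is a concurrency-control variation, not reproduced here), the snippet below shows the basic prefetch overlap of disk reads with computation using a thread pool; the file layout and page size are invented for illustration.

```python
# Conceptual sketch: overlapping disk reads with computation via a thread
# pool, in the spirit of high-concurrency pre-fetching. The data file and
# page size are invented so the sketch is self-contained.
import os
from concurrent.futures import ThreadPoolExecutor

PAGE = 8192

if not os.path.exists("datafile.bin"):         # build a toy 100-page file
    with open("datafile.bin", "wb") as f:
        f.write(os.urandom(PAGE * 100))

def read_page(page_id):
    """Stand-in for a disk read of one database page."""
    with open("datafile.bin", "rb") as f:
        f.seek(page_id * PAGE)
        return f.read(PAGE)

def process(page):
    return len(page)                           # stand-in for transaction work

# issue all reads up front (prefetch), then consume as results arrive
with ThreadPoolExecutor(max_workers=8) as pool:
    futures = [pool.submit(read_page, pid) for pid in range(100)]
    results = [process(f.result()) for f in futures]
print(sum(results))
```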

Keywords: Concurrency control, disk-based databases, in-memory systems, enhanced memory access (EMA).

77 A Review of Pharmacological Prevention of Peri-and Post-Procedural Myocardial Injury after Percutaneous Coronary Intervention

Authors: Syed Dawood Md. Taimur, Md. Hasanur Rahman, Syeda Fahmida Afrin, Farzana Islam

Abstract:

The concept of myocardial injury, although first recognized from animal studies, is now established as a clinical phenomenon that may result in microvascular damage, the no-reflow phenomenon, myocardial stunning, myocardial hibernation and ischemic preconditioning. The final consequence of this event is left ventricular (LV) systolic dysfunction, leading to increased morbidity and mortality. The typical clinical case of reperfusion injury occurs in acute myocardial infarction (MI) with ST segment elevation, in which an occlusion of a major epicardial coronary artery is followed by recanalization of the artery. This may occur spontaneously or by means of thrombolysis and/or primary percutaneous coronary intervention (PCI) with efficient platelet inhibition by aspirin (acetylsalicylic acid), clopidogrel and glycoprotein IIb/IIIa inhibitors. In recent years, PCI has become a well-established technique for the treatment of coronary artery disease. PCI improves symptoms in patients with coronary artery disease, and the safety of the procedures has been increasing. However, peri- and post-procedural myocardial injury, including angiographic slow coronary flow, microvascular embolization, and elevated levels of cardiac enzymes, such as creatine kinase and troponin-T and -I, has been reported even in elective cases. Furthermore, myocardial reperfusion injury at the onset of myocardial reperfusion, which causes tissue damage and cardiac dysfunction, may occur in cases of acute coronary syndrome. Because myocardial injury is related to larger myocardial infarction and a worse long-term prognosis, it is important to prevent myocardial injury during and/or after PCI in patients with coronary artery disease. To date, many studies have demonstrated that adjunctive pharmacological treatment suppresses myocardial injury and increases coronary blood flow during PCI procedures. In this review, we highlight the usefulness of pharmacological treatment in combination with PCI in attenuating myocardial injury in patients with coronary artery disease.

Keywords: Coronary artery disease, Percutaneous coronary intervention, Myocardial injury, Pharmacology.

76 Port Positions on the Mixing Efficiency of a Rotor-Type Mixer – A Numerical Study

Authors: Y. C. Liou, J. M. Miao, T. L. Liu, M. H. Ho

Abstract:

The purpose of this study was to explore the complex flow structure of a novel active-type micromixer based on the concept of the Wankel-type rotor. The characteristics of this micromixer are twofold: rapid mixing of reagents in a limited space due to the generation of multiple vortices, and a gradual increase in dynamic pressure as the mixed reagents are delivered to the output ports. The micromixer consists of a rotor shaped as a triangular column, a blending chamber, and several inlet and outlet ports. The geometry of the blending chamber is designed so that the rotor can rotate freely inside it with a constant eccentricity ratio. With the shape of the blending chamber and the rotor fixed, the effects of the rotational speed of the rotor and the relative locations of the ports on the mixing efficiency are studied numerically. The governing equations are the unsteady, two-dimensional incompressible Navier-Stokes equations, and the working fluid is water. The species concentration equation is also solved to reveal the mass transfer process of the reagents in various regions and thereby evaluate the mixing efficiency. The dynamic mesh technique was implemented to model the dynamic volume shrinkage and expansion of the three individual sub-regions of the blending chamber as the rotor completes a rotating cycle. Six types of port configurations are considered over a range of Reynolds numbers from 10 to 300. The rapid mixing process is accomplished through the multiple vortex structures within a tiny space due to the equilibrium of shear, viscous and inertial forces. Results showed that the highest mixing efficiency is attained with a two-inlet, two-outlet port configuration, with an included angle of 60 degrees between the two inlets and an included angle of 120 degrees between the inlet and outlet ports, at Re = 10.
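Mixing efficiency in such studies is often quantified by a variance-based index of the species concentration field; the sketch below shows one common form of such an index, which may differ from the authors' exact definition, on synthetic outlet samples.

```python
# Post-processing sketch: a variance-based mixing index computed from
# species concentration samples. This is a common form of the metric and
# may differ from the authors' exact definition; the samples are synthetic.
import numpy as np

def mixing_index(c):
    """1 = perfectly mixed, 0 = fully segregated, for concentrations in [0,1]."""
    c = np.asarray(c, dtype=float)
    var_max = c.mean() * (1.0 - c.mean())      # variance of a segregated field
    return 1.0 - np.sqrt(c.var() / var_max)

outlet = np.random.default_rng(0).uniform(0.45, 0.55, 500)   # near-mixed samples
print(mixing_index(outlet))                    # close to 1
```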

Keywords: active micro-mixer, CFD, mixing efficiency, ports configuration, Reynolds number, Wankel-type rotor

75 Enzyme Involvement in the Biosynthesis of Selenium Nanoparticles by Geobacillus wiegelii Strain GWE1 Isolated from a Drying Oven

Authors: Daniela N. Correa-Llantén, Sebastián A. Muñoz-Ibacache, Mathilde Maire, Jenny M. Blamey

Abstract:

The biosynthesis of nanoparticles by microorganisms, in contrast to chemical synthesis, is an environmentally friendly process with low energy requirements. In this investigation, we used the microorganism Geobacillus wiegelii, strain GWE1, an aerobic thermophile belonging to the genus Geobacillus, isolated from a drying oven. This microorganism has the ability to reduce selenite, evidenced by a change of color from colorless to red in the culture. Elemental analysis and composition of the particles were verified using transmission electron microscopy and energy-dispersive X-ray analysis. The nanoparticles have a defined spherical shape and consist of elemental selenium. Previous experiments showed that the presence of the whole microorganism was not necessary for the reduction of selenite; the results strongly suggested that an intracellular NADPH/NADH-dependent reductase mediates selenium nanoparticle synthesis under aerobic conditions. The enzyme was purified and identified by the mass spectrometry MALDI-TOF/TOF technique as a 1-pyrroline-5-carboxylate dehydrogenase. Histograms of nanoparticle sizes were obtained; the size distribution ranged from 40-160 nm, with 70% of the nanoparticles less than 100 nm in size. To analyse the effect of pH on the size and morphology of the nanoparticles, the synthesis was carried out at different pH values (4.0, 5.0, 6.0, 7.0, 8.0). For thermostability studies, samples were incubated at different temperatures (60, 80 and 100 ºC) for 1 h and 3 h. The size of all nanoparticles was less than 100 nm at pH 4.0; over 50% of the nanoparticles were less than 100 nm at pH 5.0; and at pH 6.0 and 8.0 over 90% were less than 100 nm in size. At neutral pH (7.0) the nanoparticles reached a size of around 120 nm, and only 20% were less than 100 nm. Regarding the temperature effect, the nanoparticles did not show a significant difference in size when incubated between 0 and 3 h at 60 ºC. At 80 °C, however, the nanoparticle suspension lost its homogeneity: a change in size was observed from 0 h of incubation, with a size range of 40-160 nm and 20% of particles over 100 nm, while after 3 h of incubation the size range changed to 60-180 nm with 50% over 100 nm. At 100 °C the nanoparticles aggregated, forming nanorod structures. In conclusion, these results indicate that it is possible to modulate the size and shape of biologically synthesized nanoparticles by modulating pH and temperature.

Keywords: Genus Geobacillus, NADPH/NADH-dependent reductase, Selenium nanoparticles.

74 Reutilization of Organic and Peat Soils by Deep Cement Mixing

Authors: Bee-Lin Tang, Ismail Bakar, Chee-Ming Chan

Abstract:

Limited infrastructure development on peat and organic soils is a serious geotechnical issue common to many countries of the world, especially Malaysia, where about 1.5 million ha of these problematic soils are distributed. These soils have high water content and organic content, exhibit different mechanical properties, and may also change chemically and biologically with time. Constructing structures on peaty ground involves the risk of ground failure and extreme settlement. Nowadays, much effort is needed to make peatlands usable for construction due to increased land use. The deep mixing method, employing cement as a binder, is generally used as a measure against peaty/organic ground failure. The technique is widely adopted because it can improve the ground considerably in a short period of time. An understanding of the geotechnical properties, such as the shear strength, stiffness and compressibility behavior of these soils, is required before construction can proceed on them. Therefore, 1-1.5 m peat soil samples from the state of Johor and an organic soil from Melaka, Malaysia, were investigated. Cement was added to the soil in the pre-mixing stage with water-cement ratios in the ranges 3.5, 7, 14 and 140 for the peats and 5, 10 and 30 for the organic soils, essentially to modify the original soil textures and properties. The mixtures, in slurry form, were poured into polyvinyl chloride (PVC) tubes and cured at a room temperature of 25°C for 7, 14 and 28 days. Laboratory experiments were conducted, including unconfined compressive strength and bender element tests, to monitor the improved strength and stiffness of the 'stabilised mixed soils'. In addition, scanning electron microscopy (SEM) observations were made to investigate changes in the microstructure of the stabilised soils and to evaluate the hardening effect of cement-stabilised peat and organic soils. This preliminary effort indicated that pre-mixing peat and organic soils contributes to gaining soil strength, while helping engineers to establish a new method for improving such problematic ground in further practical and long-term applications.

Keywords: peat soils, organic soils, cement stabilisation, strength, stiffness.

73 Hand Gesture Detection via EmguCV Canny Pruning

Authors: N. N. Mosola, S. J. Molete, L. S. Masoebe, M. Letsae

Abstract:

Hand gesture recognition is a technique used to locate, detect, and recognize a hand gesture. Detection and recognition are concepts of Artificial Intelligence (AI), applicable in Human-Computer Interaction (HCI), expert systems (ES), etc. Hand gesture recognition can be used in sign language interpretation. Sign language is a visual communication tool, used mostly by deaf communities and those with speech disorders. Communication barriers exist when people with speech disorders interact with others. This research aims to build a hand recognition system for interpretation between Lesotho's Sesotho and English languages, to help bridge the communication problems encountered by the mentioned communities. The system has various processing modules: a hand detection engine, an image processing engine, feature extraction, and sign recognition. Detection is the process of identifying an object. The proposed system uses Haar and Haarcascade detection algorithms with Canny pruning, which applies Canny edge detection, an optimal image processing algorithm for detecting the edges of an object, to prune the search. The system also employs a skin detection algorithm, which performs background subtraction and computes the convex hull and centroid to assist the detection process. Recognition is the process of gesture classification; template matching classifies each hand gesture in real time. The system was tested in various experiments. The results show that time, distance, and light are factors that affect the rate of detection and, ultimately, recognition. The detection rate is directly proportional to the distance of the hand from the camera. Different lighting conditions were considered: the higher the light intensity, the faster the detection rate. Based on the results obtained from this research, the applied methodologies are efficient and provide a plausible solution towards a lightweight, inexpensive system which can be used for sign language interpretation.
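An OpenCV (Python) analogue of the described EmguCV pipeline is sketched below: Haar cascade detection with the Canny-pruning flag plus crude HSV skin segmentation; the cascade file name is a placeholder, as a hand cascade must be trained or obtained separately.

```python
# OpenCV (Python) analogue of the EmguCV pipeline described: Haar cascade
# detection with Canny pruning, plus crude skin segmentation.
import cv2
import numpy as np

# "hand_cascade.xml" is a placeholder; a cascade trained for hands must be
# supplied (the paper's system trained its own via EmguCV).
cascade = cv2.CascadeClassifier("hand_cascade.xml")
frame = np.zeros((480, 640, 3), np.uint8)      # stand-in for a camera frame

# crude HSV skin segmentation supporting the detection stage
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
skin = cv2.inRange(hsv, (0, 30, 60), (20, 150, 255))

if not cascade.empty():                        # Haar detection with Canny pruning
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    hands = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4,
                                     flags=cv2.CASCADE_DO_CANNY_PRUNING)
    for (x, y, w, h) in hands:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
```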

Keywords: Canny pruning, hand recognition, machine learning, skin tracking.

72 Aircraft Gas Turbine Engines Technical Condition Identification System

Authors: A. M. Pashayev, C. Ardil, D. D. Askerov, R. A. Sadiqov, P. S. Abdullayev

Abstract:

In this paper it is shown that the application of probability-statistical methods, especially at the early stages of aviation gas turbine engine (GTE) technical condition diagnosing, when the flight information is fuzzy, limited and uncertain, is unfounded. Hence, the efficiency of applying the new Soft Computing technology at these diagnosing stages, using Fuzzy Logic and Neural Network methods, is considered. Fuzzy multiple linear and non-linear models (fuzzy regression equations), obtained on the basis of statistical fuzzy data, are trained with high accuracy. To build a more adequate model of GTE technical condition, the dynamics of changes in the skewness and kurtosis coefficients are analysed. Studies of the changes in skewness and kurtosis coefficient values show that the distributions of GTE work parameters have a fuzzy character; hence, consideration of fuzzy skewness and kurtosis coefficients is expedient. Investigation of the dynamics of changes in the basic characteristics of GTE work parameters leads to the conclusion that Fuzzy Statistical Analysis is necessary for preliminary identification of the engines' technical condition. Studies of changes in the correlation coefficient values also show their fuzzy character; therefore, for model choice, the application of Fuzzy Correlation Analysis results is offered. For checking model adequacy, the Fuzzy Multiple Correlation Coefficient of Fuzzy Multiple Regression is considered. When the information is sufficient, it is proposed to use a recurrent algorithm of aviation GTE technical condition identification (using Hard Computing technology) based on measurements of the input and output parameters of the multiple linear and non-linear generalised models in the presence of measurement noise (a new recursive Least Squares Method (LSM)). The developed GTE condition monitoring system provides stage-by-stage estimation of engine technical condition. As an application of the given technique, the temperature condition of a new operating aviation engine was estimated.
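The recursive least squares identification the paper mentions can be sketched on a toy linear model standing in for a GTE input-output parameter relation, as below; the forgetting factor is fixed at 1 and the data are synthetic.

```python
# Sketch of recursive least squares (RLS) identification applied to a toy
# linear model y = w.x + noise, standing in for a GTE parameter relation.
import numpy as np

rng = np.random.default_rng(0)
w_true = np.array([1.5, -0.7, 0.3])

P = np.eye(3) * 1e3                            # inverse information matrix
w = np.zeros(3)                                # parameter estimate
for _ in range(500):
    x = rng.standard_normal(3)                 # measured input parameters
    y = w_true @ x + 0.05 * rng.standard_normal()  # noisy output measurement
    K = P @ x / (1.0 + x @ P @ x)              # gain vector
    w = w + K * (y - w @ x)                    # update estimate
    P = P - np.outer(K, x @ P)                 # update covariance
print(w)                                       # converges to ~w_true despite noise
```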

Keywords: Gas turbine engines, neural networks, fuzzy logic, fuzzy statistics.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1903
71 Effect of Halo Protection Device on the Aerodynamic Performance of Formula Racecar

Authors: Mark Lin, Periklis Papadopoulos

Abstract:

This paper explores the aerodynamics of a formula racecar when a 'halo' driver-protection device is added to the chassis. The halo was introduced at the start of the 2018 racing season as a safety measure against foreign-object impacts that a driver may encounter when driving an open-wheel racecar. In the year since its introduction, the device has received wide acclaim for protecting drivers on two separate occasions, and the benefit of such a safety device certainly cannot be disputed. However, adding the halo to a car changes the airflow around the vehicle, most notably at the engine air intake and the rear wing. These negative effects on the air supply to the engine, and equally on the downforce created by the rear wing, are studied in this paper using numerical techniques, and the resulting CFD outputs are presented and discussed. Comparing racecar designs before and after the introduction of the halo, it is shown that the design of the air intake and the rear wing has not been updated since the device was added. The reduction of engine intake mass flow due to the halo is computed and presented over a range of car speeds. Because of the location of the halo relative to the air intake, airflow is directed away from the engine, making the engine perform below its optimum. The reduction is quantified to show the corresponding loss in engine output compared with a similar car without the halo. This paper shows, through aerodynamic arguments, that the engine in a halo car does not receive the unobstructed, clean airflow that a non-halo car does. Another negative effect is on the downforce created by the rear wing. Because the downforce created by the rear wing is influenced by every component upstream of it, adding a halo ahead of the rear wing obstructs the airflow, leaving less of it available to generate downforce. This reduction in downforce becomes especially dramatic as the speed increases; the paper presents a graph of downforce over a range of speeds for a car with and without the halo. Although driver safety is paramount, the negative effects of this safety device on the performance of the car should still be well understood, so that any redesign to mitigate them can be taken into account in next year's rule regulations.
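
Since downforce scales with the square of speed, F = 0.5 ρ v² (Cl·A), the widening gap between a halo and a non-halo car at high speed can be illustrated with the short sketch below; the effective Cl·A values are invented for illustration and do not reproduce the paper's CFD results.

import numpy as np

RHO = 1.225  # sea-level air density, kg/m^3

def downforce(v, cl_a):
    # F = 0.5 * rho * v^2 * (Cl * A): the quadratic speed term is why the
    # halo's downforce penalty grows so quickly at racing speeds.
    return 0.5 * RHO * v**2 * cl_a

for v in np.linspace(20, 90, 8):                       # m/s
    base, halo = downforce(v, 2.5), downforce(v, 2.3)  # hypothetical Cl*A values
    print(f"{v:5.1f} m/s  no halo {base:7.0f} N  halo {halo:7.0f} N  loss {base - halo:5.0f} N")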

Keywords: Automotive aerodynamics, halo device, downforce, engine intake.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1722
70 Normal and Peaberry Coffee Beans Classification from Green Coffee Bean Images Using Convolutional Neural Networks and Support Vector Machine

Authors: Hira Lal Gope, Hidekazu Fukai

Abstract:

The aim of this study is to develop a low-cost system that can identify and sort peaberries automatically for coffee producers in developing countries. This paper focuses on the classification of peaberries and normal coffee beans using image processing and machine learning techniques. A peaberry is not a defective bean, but neither is it a normal one: it forms as a single, relatively round seed inside a coffee cherry, instead of the usual flat-sided pair of beans, and it has its own value and flavor. To improve the taste of the coffee, the peaberries must be separated from the normal beans before the green beans are roasted; otherwise the flavors mix and the overall taste suffers. During roasting, the beans' shape, size, and weight should be uniform, or the larger beans will need more time to roast through. Peaberries differ in size and shape from normal beans even though they have the same weight, and they roast more slowly; therefore, simple mechanical sorting by size or weight does not provide a good way to select them. Defective beans, e.g., sour, broken, black, and faded beans, are easy to spot and pick out manually by hand. Peaberries, on the other hand, are very difficult to pick out even for trained specialists, because their shape and color are similar to those of normal beans. In this study, we use image processing and machine learning techniques to discriminate normal beans from peaberries as part of the sorting system. As a first step, we applied deep Convolutional Neural Networks (CNN) and a Support Vector Machine (SVM) to the discrimination task. Better performance was obtained with the CNN than with the SVM. The artificial neural network, trained on a high-performance CPU and GPU, will then simply be installed on an inexpensive, computationally limited Raspberry Pi system, since we assume the system will be used in developing countries. The study evaluates and compares the feasibility of the methods in terms of classification accuracy and processing speed.
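
A minimal sketch of the CNN/SVM comparison follows; the input size, network depth, and randomly generated placeholder data are assumptions for illustration, not the authors' dataset or architecture.

import numpy as np
from sklearn.svm import SVC
from tensorflow import keras

# Placeholder data: 64x64 grayscale bean crops, label 0 = normal, 1 = peaberry.
X = np.random.rand(500, 64, 64, 1).astype("float32")
y = np.random.randint(0, 2, 500)

cnn = keras.Sequential([
    keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 1)),
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(32, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),   # P(peaberry)
])
cnn.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
cnn.fit(X, y, epochs=5, batch_size=32, verbose=0)

# SVM baseline on flattened pixels; a practical pipeline would use engineered features.
svm = SVC(kernel="rbf").fit(X.reshape(len(X), -1), y)

A compact network of this size is the kind of model that could plausibly run for inference on a Raspberry Pi after being trained elsewhere.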

Keywords: Convolutional neural networks, coffee bean, peaberry, sorting, support vector machine.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1551
69 Financing-Scheduling Optimization for Construction Projects Using Genetic Algorithms

Authors: Hesham Abdel-Khalek, Sherif M. Hafez, Abdel-Hamid M. el-Lakany, Yasser Abuel-Magd

Abstract:

Investment in a constructed facility represents a cost in the short term that returns benefits only over the long-term use of the facility. The costs thus occur earlier than the benefits, and the owners of facilities must obtain the capital resources to finance the costs of construction. A project cannot proceed without adequate financing, and the cost of providing adequate financing can be quite large. For these reasons, attention to project finance is an important aspect of project management. Finance is also a concern to the other organizations involved in a project, such as the general contractor and material suppliers: unless an owner immediately and completely covers the costs incurred by each participant, these organizations face financing problems of their own. At a more general level, project finance is only one aspect of the general problem of corporate finance. If numerous projects are considered and financed together, the net cash flow requirements constitute the corporate financing problem for capital investment; whether project finance is performed at the project or the corporate level does not alter the basic financing problem. In this paper, we first consider facility financing from the owner's perspective, with due consideration of its interaction with the other organizations involved in a project. Later, we discuss the problems of construction financing, which are crucial to the profitability and solvency of construction contractors. The objective of this paper is to present the steps used to determine the best combination for minimum project financing. The proposed model considers financing, schedule, and maximum net area. It is called Project Financing and Schedule Integration using Genetic Algorithms (PFSIGA) and is intended to determine further steps (maximum net area) for any project with subprojects. An illustrative example demonstrates the features of this technique, and model verification and testing are taken into consideration.
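
The sketch below shows the skeleton of such a genetic algorithm on a toy problem: choosing activity start periods to reduce the peak financing need under a deadline. All durations, costs, and GA settings are invented for illustration and are not the PFSIGA model itself.

import random

DURATIONS = [2, 3, 1, 4]   # activity lengths, in periods
COSTS = [10, 20, 5, 15]    # outlay per period while each activity runs
DEADLINE = 8

def financing_need(starts):
    flow = [0.0] * DEADLINE
    for s, d, c in zip(starts, DURATIONS, COSTS):
        for t in range(s, min(s + d, DEADLINE)):
            flow[t] += c
    # Peak per-period outlay drives the loan size; penalize deadline overruns.
    penalty = 100 * sum(max(0, s + d - DEADLINE) for s, d in zip(starts, DURATIONS))
    return max(flow) + penalty

def ga(pop_size=40, gens=200):
    pop = [[random.randrange(DEADLINE) for _ in DURATIONS] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=financing_need)
        parents = pop[: pop_size // 2]               # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(DURATIONS))
            child = a[:cut] + b[cut:]                # one-point crossover
            if random.random() < 0.2:                # mutation
                child[random.randrange(len(child))] = random.randrange(DEADLINE)
            children.append(child)
        pop = parents + children
    return min(pop, key=financing_need)

best = ga()
print("best starts:", best, "objective:", financing_need(best))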

Keywords: Project management, large-scale construction projects, cash flow, interest, investment, loan, optimization, scheduling, financing, genetic algorithms.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2219
68 Gender Justice and Feminist Self-Management Practices in the Solidarity Economy: A Quantitative Analysis of the Factors that Impact Enterprises Formed by Women in Brazil

Authors: Maria de Nazaré Moraes Soares, Silvia Maria Dias Pedro Rebouças, José Carlos Lázaro

Abstract:

The Solidarity Economy (SE) acts to re-articulate the economic field with the other spheres of social action. The significant participation of women in SE led to the formation of a national network of self-managed enterprises in Brazil: the Solidarity and Feminist Economy Network (SFEN). The objective of this research is to identify factors of gender justice and feminist self-management practices that correspond to the reality of women in SE enterprises. The conceptual apparatus related to feminist studies in this research covers Nancy Fraser's approach to gender justice, Patricia Yancey Martin's approach to feminist management practices, and authors of postcolonial feminism, such as Mohanty and Maria Lugones, who bring the discussion to peripheral contexts, a necessary perspective when observing the women's movement in SE. The research is quantitative in its data collection and analysis phases. Data were collected from two sources: the database mapped in Brazil in 2010-2013 by the National Information System in Solidarity Economy, and 150 questionnaires answered by women from 16 SFEN enterprises in a state of the Brazilian northeast. The data were analyzed using the multivariate statistical technique of factor analysis. The results show that the factors defining gender justice and feminist self-management practices in SE are interrelated at several levels, statistically demonstrating the intersectional character of women's condition. The evidence from the quantitative analysis allowed us to understand how the dimensions of gender justice and feminist management practices intersect; in this sense, the unequal distribution of domestic work interferes with the representation of women in public spaces, especially in peripheral contexts. The study contributes important reflections to this area and can be complemented in the future by qualitative research approaching the perspective of women within the SE self-management paradigm.
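
For readers unfamiliar with the technique, the short sketch below runs a varimax-rotated factor analysis on placeholder survey data; the number of factors, items, and the random responses are assumptions, not the authors' questionnaire or results.

import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

# Placeholder matrix: 150 respondents x 12 Likert-type items (1-5).
rng = np.random.default_rng(0)
X = rng.integers(1, 6, size=(150, 12)).astype(float)

Z = StandardScaler().fit_transform(X)                 # standardize items
fa = FactorAnalysis(n_components=3, rotation="varimax").fit(Z)
loadings = fa.components_.T                           # items x factors
print(np.round(loadings, 2))                          # inspect which items load together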

Keywords: Feminist management practices, gender justice, self-management, solidarity economy.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 622
67 Reliability Levels of Reinforced Concrete Bridges Obtained by Mixing Approaches

Authors: Adrián D. García-Soto, Alejandro Hernández-Martínez, Jesús G. Valdés-Vázquez, Reyna A. Vizguerra-Alvarez

Abstract:

Reinforced concrete bridges designed by code are intended to achieve target reliability levels adequate for the geographical environment where the code applies. Several methods can be used to estimate such reliability levels; many of them require an explicit limit state function (LSF). When the LSF is not available in closed form, simulation techniques are often employed, but they are computationally intensive and time consuming. Note that if the reliability of real bridges designed by code is of interest, numerical schemes, the finite element method (FEM), or computational mechanics may be required. In these cases, it can be quite difficult (or impossible) to establish a closed form of the LSF, and simulation techniques may be necessary to compute reliability levels. To avoid a large number of simulations when no explicit LSF is available, the point estimate method (PEM) can be considered as an alternative; it has the advantage that only the probabilistic moments of the random variables are required. However, with the PEM, the resulting moments of the LSF must be fitted to a probability density function (PDF). The present study employs a very simple alternative that allows the assessment of reliability levels when no explicit LSF is available and without the need for extensive simulations. The alternative includes the use of the PEM, and its applicability is shown by assessing the reliability levels of reinforced concrete bridges in Mexico when a numerical scheme is required. Comparisons with results from the Monte Carlo simulation (MCS) technique are included. To overcome the problem of fitting the probabilistic moments from the PEM to a PDF, a well-known distribution is employed, and the approach mixes the PEM with another classic reliability method (the first-order reliability method, FORM). The results of the present study are in good agreement with those computed with the MCS. Therefore, mixing the reliability methods is a very valuable option for determining reliability levels when no closed form of the LSF is available, or when numerical schemes, the FEM, or computational mechanics are employed.
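
A minimal sketch of Rosenblueth's two-point estimate method, the classic PEM variant, is given below for uncorrelated, symmetric variables; the limit state g = R - S and its moments are invented for illustration, not the bridge models of the study.

import itertools
import numpy as np

def rosenblueth_pem(g, means, stds):
    """Evaluate g at mu_i +/- sigma_i for all 2^n sign combinations
    (equal weights 1/2^n) and return the mean and standard deviation of g."""
    vals = []
    for signs in itertools.product((-1.0, 1.0), repeat=len(means)):
        x = np.asarray(means) + np.asarray(signs) * np.asarray(stds)
        vals.append(g(x))
    vals = np.asarray(vals)
    return vals.mean(), vals.std()

# Toy limit state: resistance minus load effect; beta = mu_g / sigma_g.
g = lambda x: x[0] - x[1]
mu_g, sig_g = rosenblueth_pem(g, means=[1500.0, 900.0], stds=[150.0, 180.0])
print("reliability index beta ~", mu_g / sig_g)   # ~2.56 for these moments

When g must be evaluated numerically (e.g., one FEM run per point), these 2^n evaluations replace the thousands of samples an MCS would need, which is the attraction noted in the abstract.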

Keywords: Structural reliability, reinforced concrete bridges, mixing approaches, point estimate method, Monte Carlo simulation.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1409
66 Image-Based UAV Vertical Distance and Velocity Estimation Algorithm during the Vertical Landing Phase Using Low-Resolution Images

Authors: Seyed-Yaser Nabavi-Chashmi, Davood Asadi, Karim Ahmadi, Eren Demir

Abstract:

The landing phase of a UAV is very critical, as it involves many uncertainties that can easily lead to a hard landing or even a crash. In this paper, the estimation of the relative distance and velocity to the ground, one of the most important processes during the landing phase, is studied. Accurate measurement sensors can be very expensive, as with LIDAR, or limited in operational range, as with ultrasonic sensors; additionally, absolute positioning systems like GPS or an IMU cannot provide the distance to the ground independently. The focus of this paper is to determine whether the relative distance and velocity between the UAV and the ground can be measured during the landing phase using only low-resolution images taken by a monocular camera. The Lucas-Kanade feature detection technique is employed to extract the most suitable feature from a series of images taken during the UAV landing. Two different approaches based on the Extended Kalman Filter (EKF) are proposed, and their performance in estimating the relative distance and velocity is compared. The first approach uses the kinematics of the UAV as the process model and the calculated optical flow as the measurement. The second approach uses the feature's projection onto the camera plane (its pixel position) as the measurement, while employing both the kinematics of the UAV and the dynamics of the projected point as the process model, to estimate both the relative distance and the relative velocity. To verify the results, a sequence of low-quality images taken by a camera moving on a specifically developed testbed was used to compare the performance of the proposed algorithms. The case studies show that the image quality introduces considerable noise, which reduces the performance of the first approach. Using the projected feature position, on the other hand, is much less sensitive to the noise and estimates the distance and velocity with relatively high accuracy. This approach can also be used to predict the future projected feature position, which can drastically decrease the computational workload, an important criterion for real-time applications.
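
A minimal EKF sketch in the spirit of the second approach is given below: the state is [height, descent rate] and the measurement is the pixel position of a ground feature under a pinhole model u = f·d/z. The focal length, feature offset, time step, and noise levels are invented values, not the authors' testbed parameters.

import numpy as np

f, d, dt = 600.0, 1.0, 0.05   # focal length (px), feature ground offset (m), step (s)
F = np.array([[1.0, -dt],     # z_{k+1} = z_k - vz_k * dt (constant-descent kinematics)
              [0.0, 1.0]])
Q = np.diag([1e-4, 1e-3])     # process noise
R = np.array([[4.0]])         # pixel measurement noise (px^2)

x = np.array([10.0, 0.0])     # initial guess: 10 m altitude, hovering
P = np.diag([4.0, 1.0])

def ekf_step(x, P, u_meas):
    # Predict with the linear kinematics.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the nonlinear pinhole measurement h(x) = f*d/z.
    z = x[0]
    h = f * d / z
    H = np.array([[-f * d / z**2, 0.0]])   # Jacobian of h w.r.t. [z, vz]
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K * (u_meas - h)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

The same prediction step can extrapolate the future projected feature position, which is how the computational savings mentioned above could be obtained.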

Keywords: Automatic landing, multirotor, nonlinear control, parameter estimation, optical flow.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 525