Search results for: FLUKA Monte Carlo Method
13412 Elastic Collisions of Electrons with DNA and Water from 10 eV to 100 keV: SCAR Macro Investigation
Authors: Aouina Nabila Yasmina, Zine El Abidine Chaoui
Abstract:
Recently, understanding the interactions of electrons with the DNA molecule and its components has attracted considerable interest because DNA is the main site damaged by ionizing radiation. The interactions of radiation with DNA induce a variety of molecular damage such as single-strand breaks, double-strand breaks, base damage, cross-links between proteins and DNA, and others, or the formation of free radicals which, by chemical reactions with DNA, can also lead to strand breakage. One factor that can contribute significantly to these processes is the effect of hydration water on the formation and reaction of radiation-induced radicals in and/or around DNA. B-DNA requires about 30% water by weight to maintain its native conformation in the crystalline state. The conformational transformation depends on various factors such as sequence, ion composition, concentration, and water activity; partial dehydration converts it to A-DNA. The present study shows the results of theoretical calculations for electron and positron elastic scattering in DNA and water media over a broad energy range from 10 eV to 100 keV. Electron elastic cross sections and elastic mean free paths are calculated using a corrected form of the independent atom method, taking into account the geometry of the biomolecule (SCAR macro). Moreover, the elastic scattering of electrons and positrons by the atoms of the biomolecule was evaluated by means of relativistic (Dirac) partial wave analysis. Our calculated results are compared with theoretical data available in the literature in the absence of experimental data, in particular for positrons. As a central result, our electron elastic cross sections are in good agreement with existing theoretical data in the range of 10 eV to 1 keV.
Keywords: elastic cross section, elastic mean free path, SCAR macro method, electron collision
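As a minimal illustration of the central quantity in this abstract, the sketch below converts a total elastic cross section into an elastic mean free path via λ = 1/(Nσ) for liquid water; the cross-section value is a placeholder for illustration, not a result from the paper.

```python
import math

# Illustrative relation used in such calculations: the elastic mean free
# path (EMFP) follows from the total elastic cross section sigma and the
# number density N of scatterers as lambda = 1 / (N * sigma).
AVOGADRO = 6.022e23      # molecules per mole
rho_water = 1.0          # g/cm^3, liquid water
M_water = 18.015         # g/mol

N = AVOGADRO * rho_water / M_water   # molecules per cm^3

sigma = 1.0e-16          # cm^2, hypothetical elastic cross section

emfp_cm = 1.0 / (N * sigma)
print(f"number density        : {N:.3e} cm^-3")
print(f"elastic mean free path: {emfp_cm * 1e8:.2f} angstrom")
```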
Procedia PDF Downloads 65
13411 Designing an Exhaust Gas Energy Recovery Module Following Measurements Performed under Real Operating Conditions
Authors: Jerzy Merkisz, Pawel Fuc, Piotr Lijewski, Andrzej Ziolkowski, Pawel Czarkowski
Abstract:
The paper presents preliminary results of the development of an automotive exhaust gas energy recovery module. The aim of the performed analyses was to select the geometry of the heat exchanger that would ensure the highest possible transfer of heat at minimum heat flow losses. The starting point for the analyses was a straight portion of a pipe from which the exhaust system of the tested vehicle was made. The designed heat exchanger had a cylindrical cross-section, was 300 mm long, and was fitted with a diffuser and a confusor. The modeling work was performed for this geometry utilizing the finite volume method based on the Ansys CFX v12.1 and v14 software. This method consists in dividing the system into small control volumes for which the exhaust gas velocity and pressure calculations were performed using the Navier-Stokes equations. The heat exchange in the system was modeled based on the enthalpy balance. The temperature growth resulting from the acting viscosity was not taken into account. The heat transfer on the fluid/solid boundary in the wall layer with turbulent flow was modeled based on an arbitrarily adopted dimensionless temperature. The boundary conditions adopted in the analyses included the convective condition of heat transfer on the outer surface of the heat exchanger and the mass flow and temperature of the exhaust gas at the inlet. The mass flow and temperature of the exhaust gas were assumed based on measurements performed in actual traffic using portable PEMS analyzers. The research object was a passenger vehicle fitted with a 1.9 dm³, 85 kW diesel engine. The tests were performed in city traffic conditions.
Keywords: waste heat recovery, heat exchanger, CFD simulation, PEMS
Procedia PDF Downloads 574
13410 Neural Network Approach for Clustering Host Community: Based on Perceptions toward Tourism, Their Satisfaction Level and Demographic Attributes in Iran (Lahijan)
Authors: Nasibeh Mohammadpour, Ali Rajabzadeh, Adel Azar, Hamid Zargham Borujeni
Abstract:
Generally, the development of any industry depends on the support of its stakeholders and beneficiaries. Among the most important stakeholders in the tourism industry, which has become one of the most lucrative and employment-generating activities at the international level, are the host communities in tourist destinations, which both affect and are affected by the industry's development. Recognizing the host community and its segments can be important for obtaining their support for future decisions and policy making. In order to identify these segments, in this study, clustering of the residents has been done using tools that are designed to handle human complexity and that can model and generalize complex systems without any need for initial cluster seeds, unlike classic methods. Neural networks can meet these expectations. The research was planned to design a neural-network-based mathematical model for clustering the host community effectively according to multiple criteria and to identify differences among segments. In order to achieve this goal, the residents' segmentation has been done by demographic characteristics, their attitude towards tourism development, their level of satisfaction, and the type of their support in this field. The applied method is self-organizing neural networks, and the results have been compared with K-means. As the results show, the Self-Organizing Map (SOM) method provides much better results when considering the Cophenetic correlation and between-clusters variance coefficients. Based on these criteria, the host community is divided into five segments with unique and distinctive features, which are in the best condition (in comparison with the other modes) with a Cophenetic correlation coefficient of 0.8769 and a between-clusters variance of 0.1412.
Keywords: artificial neural network, clustering, resident, SOM, tourism
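A minimal sketch of the comparison described above, assuming hypothetical standardized survey features: a hand-rolled one-dimensional SOM clusters the residents into five segments and is compared against scikit-learn's K-means on a crude within-cluster compactness measure (the Cophenetic computation is omitted for brevity).

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical resident survey data: rows = residents, columns =
# standardized scores (attitude, satisfaction, support, age, income).
X = rng.normal(size=(200, 5))

# Minimal 1-D self-organizing map with 5 nodes (one per target segment).
n_nodes, n_iter, lr0, sigma0 = 5, 2000, 0.5, 1.5
W = rng.normal(size=(n_nodes, X.shape[1]))       # node weight vectors
for t in range(n_iter):
    x = X[rng.integers(len(X))]
    bmu = np.argmin(((W - x) ** 2).sum(axis=1))  # best-matching unit
    lr = lr0 * np.exp(-t / n_iter)
    sigma = sigma0 * np.exp(-t / n_iter)
    d = np.arange(n_nodes) - bmu                 # grid distance to BMU
    h = np.exp(-(d ** 2) / (2 * sigma ** 2))     # neighborhood function
    W += lr * h[:, None] * (x - W)               # pull nodes toward sample

som_labels = np.argmin(((X[:, None, :] - W) ** 2).sum(axis=2), axis=1)
km_labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)

def compactness(X, labels):
    # Size-weighted variance of each cluster's values, as a rough proxy.
    return sum(X[labels == k].var() * (labels == k).sum()
               for k in np.unique(labels)) / len(X)

print("SOM compactness    :", compactness(X, som_labels))
print("K-means compactness:", compactness(X, km_labels))
```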
Procedia PDF Downloads 183
13409 Entry Inhibitors Are Less Effective at Preventing Cell-Associated HIV-2 Infection than HIV-1
Authors: A. R. Diniz, P. Borrego, I. Bártolo, N. Taveira
Abstract:
Cell-to-cell transmission plays a critical role in the spread of HIV-1 infection in vitro and in vivo. Inhibition of HIV-1 cell-associated infection by antiretroviral drugs and neutralizing antibodies (NAbs) is more difficult compared to cell-free infection. Limited data exist on cell-associated infection by HIV-2 and its inhibition. In this work, we determined the ability of entry inhibitors to inhibit HIV-1 and HIV-2 cell-to-cell fusion as a proxy for cell-associated infection. We developed a method in which HeLa-CD4 cells are first transfected with a Tat-expressing plasmid (pcDNA3.1+/Tat101) and infected with recombinant vaccinia viruses expressing either the HIV-1 (vPE16: from isolate HTLV-IIIB, clone BH8, X4 tropism) or HIV-2 (vSC50: from HIV-2SBL/ISY, R5 and X4 tropism) envelope glycoproteins (M.O.I. = 1 PFU/cell). These cells are added to TZM-bl cells. When cell-to-cell fusion (syncytia) occurs, the Tat protein diffuses to the TZM-bl cells, activating the expression of a reporter gene (luciferase). We tested several entry inhibitors including the fusion inhibitors T1249, T20 and P3, the CCR5 antagonists MVC and TAK-779, the CXCR4 antagonist AMD3100, and several HIV-2 neutralizing antibodies (NAbs). All compounds inhibited HIV-1 and HIV-2 cell fusion, albeit to different levels. The maximum percentage of HIV-2 inhibition (MPI) was higher for fusion inhibitors (T1249: 99.8%; P3: 95%; T20: 90%) followed by co-receptor antagonists (MVC: 63%; TAK-779: 55%; AMD3100: 45%). NAbs from HIV-2 infected patients did not prevent cell fusion up to the tested concentration of 4 μg/ml. As for HIV-1, MPI reached 100% with TAK-779 and T1249. For the other antivirals, MPIs were: P3: 79%; T20: 75%; AMD3100: 61%; MVC: 65%. These results are consistent with published data. Maraviroc had the lowest IC50 both for HIV-2 and HIV-1 (IC50 HIV-2 = 0.06 μM; HIV-1 = 0.0076 μM). The highest IC50s were observed with T20 for HIV-2 (3.86 μM) and with TAK-779 for HIV-1 (12.64 μM). Overall, our results show that entry inhibitors in clinical use are less effective at preventing Env-mediated cell-to-cell fusion in HIV-2 than in HIV-1, which suggests that cell-associated HIV-2 infection will be more difficult to inhibit compared to HIV-1. The method described here will be useful to screen for new HIV entry inhibitors.
Keywords: cell-to-cell fusion, entry inhibitors, HIV, NAbs, vaccinia virus
Procedia PDF Downloads 310
13408 Multi-Criteria Assessment of Biogas Feedstock
Authors: Rawan Hakawati, Beatrice Smyth, David Rooney, Geoffrey McCullough
Abstract:
Targets have been set in the EU to increase the share of renewable energy consumption to 20% by 2020, but developments have not occurred evenly across the member states. Northern Ireland is almost 90% dependent on imported fossil fuels. With such high energy dependency, Northern Ireland is particularly susceptible to security-of-supply issues. Linked to fossil fuels are greenhouse gas emissions, and the EU plans to reduce emissions by 20% by 2020. The use of indigenously produced biomass could reduce both greenhouse gas emissions and external energy dependence. With a wide range of both crop and waste feedstock potentially available in Northern Ireland, anaerobic digestion has been put forward as a possible solution for renewable energy production, waste management, and greenhouse gas reduction. Not all feedstock, however, is the same, and an understanding of feedstock suitability is important for both plant operators and policy makers. The aim of this paper is to investigate biomass suitability for anaerobic digestion in Northern Ireland. It is also important that decisions are based on solid scientific evidence. For this reason, the methodology used is multi-criteria decision matrix analysis, which takes multiple criteria into account simultaneously and ranks alternatives accordingly. The model uses the weighted sum method (which follows the entropy method to measure uncertainty using probability theory) to decide on weights. The TOPSIS method is utilized to carry out the mathematical analysis and provide the final scores. Feedstock that is currently available in Northern Ireland was classified into two categories: wastes (manure, sewage sludge, and food waste) and energy crops, specifically grass silage. To select the most suitable feedstock, methane yield, feedstock availability, feedstock production cost, biogas production, calorific value, produced kilowatt-hours, dry matter content, and carbon-to-nitrogen ratio were assessed. The highest weight (0.249) corresponded to production cost, reflecting a variation from a £41/tonne gate fee to a £22/tonne cost. The calculated weights indicated that grass silage was the most suitable feedstock. A sensitivity analysis was then conducted to investigate the impact of the weights; it used the Pugh matrix method, which relies upon the Analytical Hierarchy Process and pairwise comparisons to determine a weighting for each criterion. The results showed that the highest weight (0.193) corresponded to biogas production, indicating that grass silage and manure are the most suitable feedstock. Introducing co-digestion of two or more substrates can boost the biogas yield due to a synergistic effect induced by the feedstock, favoring positive biological interactions. A further benefit of co-digesting manure is that the anaerobic digestion process also acts as a waste management strategy. From the research, it was concluded that energy from agricultural biomass is highly advantageous in Northern Ireland because it would increase the country's production of renewable energy, manage waste production, and limit the production of greenhouse gases (the current contribution from the agriculture sector is 26%). Decision-making methods based on scientific evidence aid policy makers in weighing multiple criteria in a logical mathematical manner in order to reach a resolution.
Keywords: anaerobic digestion, biomass as feedstock, decision matrix, renewable energy
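The weighting-plus-ranking pipeline described above can be condensed as follows. The decision matrix values and feedstock rows are hypothetical stand-ins, not the paper's data; entropy weighting and TOPSIS are implemented directly:

```python
import numpy as np

# Hypothetical decision matrix: rows = feedstocks, columns = criteria
# (methane yield, production cost, dry matter). Values are illustrative.
X = np.array([[210., 30., 25.],
              [260., 45., 22.],
              [450., 40., 27.],
              [350., 22., 30.]])
benefit = np.array([True, False, True])   # cost criterion is minimized

# Entropy weighting: criteria with more dispersion receive larger weights.
P = X / X.sum(axis=0)
E = -(P * np.log(P)).sum(axis=0) / np.log(len(X))
w = (1 - E) / (1 - E).sum()

# TOPSIS: closeness to the ideal vs. the anti-ideal solution.
V = (X / np.linalg.norm(X, axis=0)) * w    # weighted, vector-normalized
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
score = d_neg / (d_pos + d_neg)            # closeness coefficient

for name, s in zip(["manure", "sludge", "food waste", "grass silage"], score):
    print(f"{name:>12s}: {s:.3f}")
```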
Procedia PDF Downloads 462
13407 Comparative Study in Treatment of Distal Humerus Fracture with Lateral Column Plate Percutaneous Medial Screw and Intercondylar Screw
Authors: Sameer Gupta, Prant Gupta
Abstract:
Context: Fractures of the distal humerus are complex and challenging injuries for orthopaedic surgeons that can be effectively treated with open reduction and internal fixation. Aims: The study analyses clinical outcomes in patients with intra-articular distal humerus fractures (AO type 13 C3 excluded) treated using a different method of fixation (LCPMS). Subject and Methods: A study was performed, and the authors' personal experiences were reported. Thirty patients were treated using an intercondylar screw with lateral column plating and percutaneous medial column screw fixation. Detailed analysis was done for functional outcomes (average arc of motion, union rate, and complications). Statistical Analysis Used: SPSS software version 22.0 was used for statistical analysis. Results: In our study, at the end of 6 months, overall good to excellent results were achieved in 28 out of 30 patients based on the MEP score. The majority of patients regained a full arc of motion, achieved fracture union without any major complications, and were able to perform almost all activities of daily living (which require good elbow joint movements and functions). Conclusion: We concluded that this novel method provides adequate stability and anatomical reconstruction, with an early union rate observed at the end of 6 months. Excellent functional outcome was observed in almost all the patients because of the shorter operating time and initiation of early physiotherapy, as most of the patients experienced only mild pain post-surgery.
Keywords: intra-articular distal humerus fracture, percutaneous medial screw, lateral column plate, arc of motion
Procedia PDF Downloads 60
13406 Applying Wavelet Transform to Ferroresonance Detection and Protection
Authors: Chun-Wei Huang, Jyh-Cherng Gu, Ming-Ta Yang
Abstract:
Non-synchronous breakage or line failure in power systems with light or no loads can lead to core saturation in transformers or potential transformers. This can cause inductance and capacitance matching, resulting in the formation of resonant circuits, which trigger ferroresonance. This study employed the wavelet transform for the detection of ferroresonance. Simulation results demonstrate the efficacy of the proposed method.
Keywords: ferroresonance, wavelet transform, intelligent electronic device, transformer
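A sketch of the detection idea, assuming a synthetic voltage signal: the finest-scale detail coefficients of a discrete wavelet decomposition spike at the sharp switching transient that accompanies a (here simulated) ferroresonance onset. Signal shape, thresholds, and window sizes are assumptions for illustration.

```python
import numpy as np
import pywt

fs = 10_000                            # sampling rate, Hz
t = np.arange(0, 0.2, 1 / fs)
v = np.sin(2 * np.pi * 50 * t)         # nominal 50 Hz voltage signal
# Hypothetical ferroresonance onset at t = 0.1 s: a sharp switching
# transient plus sustained harmonic distortion (illustrative only).
v[1000:1005] += [1.5, -1.2, 0.9, -0.6, 0.3]
v[t > 0.1] += 0.5 * np.sin(2 * np.pi * 150 * t[t > 0.1])

# The finest-scale detail coefficients respond strongly to the sharp
# transient, which is what a wavelet-based detector keys on.
cD1 = pywt.wavedec(v, "db4", level=4)[-1]
win = len(cD1) // 10
energy = np.array([np.sum(cD1[i:i + win] ** 2)
                   for i in range(0, len(cD1) - win + 1, win)])
threshold = 10 * energy[:4].mean()     # baseline from the clean interval
print("windowed detail energy:", np.round(energy, 6))
print("flagged windows:", np.where(energy > threshold)[0])
```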
Procedia PDF Downloads 496
13405 Application of Regularized Spatio-Temporal Models to the Analysis of Remote Sensing Data
Authors: Salihah Alghamdi, Surajit Ray
Abstract:
Space-time data can be observed over irregularly shaped manifolds, which might have complex boundaries or interior gaps. Most of the existing methods do not consider the shape of the data, and as a result, it is difficult to model irregularly shaped data while accommodating the complex domain. We used a method that can deal with space-time data distributed over non-planar regions. The method is based on partial differential equations and finite element analysis. The model can be estimated using a penalized least squares approach with a regularization term that controls over-fitting. The model is regularized using two roughness penalties, which consider the spatial and temporal regularities separately. The integrated square of the second derivative of the basis function is used as the temporal penalty, while the spatial penalty consists of the integrated square of the Laplace operator, integrated exclusively over the domain of interest, which is determined using the finite element technique. In this paper, we applied a spatio-temporal regression model with partial differential equation regularization (ST-PDE) to analyze remote sensing data measuring the greenness of vegetation, quantified by an index called the enhanced vegetation index (EVI). The EVI data consist of measurements that take values between -1 and 1, reflecting the level of greenness of some region over a period of time. We applied the ST-PDE approach to an irregularly shaped region of the EVI data. The approach efficiently accommodates the irregularly shaped regions, taking into account the complex boundaries rather than smoothing across them. Furthermore, the approach succeeds in capturing the temporal variation in the data.
Keywords: irregularly shaped domain, partial differential equations, finite element analysis, complex boundary
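A one-dimensional analogue of the paper's temporal penalty, as a sketch: penalized least squares where the regularizer is the discrete version of the integrated squared second derivative. The data and smoothing parameter are made up for illustration; the full ST-PDE model adds the finite-element spatial penalty.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical noisy EVI-like time series on [0, 1].
t = np.linspace(0, 1, 100)
y = 0.4 * np.sin(2 * np.pi * t) + 0.05 * rng.normal(size=t.size)

# One coefficient per time point keeps the sketch short; the penalty is
# the discrete analogue of the integrated squared second derivative.
n = t.size
D = np.diff(np.eye(n), n=2, axis=0)        # second-difference operator
lam = 50.0                                 # smoothing parameter

# Penalized least squares: minimize ||y - f||^2 + lam * ||D f||^2
f_hat = np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

print("residual RMS    :", np.sqrt(np.mean((y - f_hat) ** 2)))
print("roughness of fit:", float(np.sum((D @ f_hat) ** 2)))
```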
Procedia PDF Downloads 141
13404 Moral Rights: Judicial Evidence Insufficiency in the Determination of the Truth and Reasoning in Brazilian Morally Charged Cases
Authors: Rainner Roweder
Abstract:
Theme: The present paper aims to analyze the specificity of judicial evidence linked to the subjects of dignity and personality rights, otherwise known as moral rights, in the determination of the truth and the formation of judicial reasoning in cases concerning these areas. This research is about the way courts in Brazilian domestic law search for truth and handle evidence in cases involving moral rights, which are abundant and important in Brazil. The main object of the paper is to analyze the effectiveness of evidence in the formation of judicial conviction in matters related to morally controverted rights, based on the Brazilian and, as a comparison, the Latin American legal systems. In short, the rights of dignity and personality are moral. However, the evidential legal system expects a rational demonstration of moral rights that generates judicial conviction or persuasion. Morality, in turn, tends to be difficult or impossible to demonstrate in court, generating the problem considered in this paper, that is, the study of the problem of demonstrating morality as proof in court. In this sense, the more closely a right is linked to morality, the more difficult it is to demonstrate in court, expanding the field of judicial discretion and generating legal uncertainty. More specifically, the new personality rights, such as gender and the possibility of its alteration, further amplify the problem, being essentially an intimate matter that has no place in an objective, rational evidential system, as normally occurs in other categories, such as contracts. Therefore, evidencing this legal category in court, with the level of certainty required by the law, is a herculean task. It becomes virtually impossible to use the same evidentiary system when judging the rights researched here; this generates the need for a new design of the evidential task regarding personality rights, which is the central effort of the present paper. Methodology: The method used in the investigation phase was inductive, together with the comparative law method; in the data treatment phase, the inductive method was also used. Doctrinal, legislative, and jurisprudential comparison was the research technique used. Results: In addition to the peculiar characteristics of personality rights that are not found in other rights, some of them are essentially linked to morality and are not objectively verifiable by design, and it is necessary to use specific argumentative theories for their secure confirmation, such as interdisciplinary support. The traditional pragmatic theory of proof, having an obviously objective character, aggravates decisionism and generates legal insecurity when applied to rights linked to morality; its reconstruction is therefore necessary for morally charged cases, with the possible use of "predictive theory" (and predictive facts) through algorithms in data collection and treatment.
Keywords: moral rights, proof, pragmatic proof theory, insufficiency, Brazil
Procedia PDF Downloads 109
13403 Fabrication and Characterization of Ceramic Matrix Composite
Authors: Yahya Asanoglu, Celaletdin Ergun
Abstract:
Ceramic-matrix composites (CMCs) have significant prominence in various engineering applications because of their heat resistance combined with an ability to withstand the brittle type of catastrophic failure. In this study, specific raw materials have been chosen in order to obtain a CMC material suitable for high-temperature dielectric applications. The CMC material will be manufactured through the polymer infiltration and pyrolysis (PIP) method. During the manufacturing process, vacuum infiltration and an autoclave will be applied so as to decrease porosity and obtain higher mechanical properties, although this advantage leads to a decrease in the electrical performance of the material. Adjusting the time and temperature parameters of pyrolysis produces significant differences in the properties of the resulting material. The mechanical and thermal properties will be investigated in addition to the measurement of the dielectric constant and tangent loss values within the Ku-band spectrum (12 to 18 GHz). Also, XRD and TGA/DTA analyses will be employed to prove the transition of the precursor to ceramic phases and to detect critical transition temperatures. Additionally, SEM analysis of the fracture surfaces will be performed to identify the failure mechanism, whether fiber pull-out, crack deflection, or other mechanisms that lead to ductility and toughness in the material. In this research, the cost-effectiveness and applicability of the PIP method will be proven in the manufacture of CMC materials, while the optimal pyrolysis time, temperature, and cycle count for specific materials are determined by experiment. Also, several resins will be shown to be potential raw materials for CMC radome and antenna applications. This research is distinguished from previous related papers by the fact that combinations of different precursors and fabrics are experimented with to specify the unique pros and cons of each combination. In this way, it is an experimental synthesis of previous works with unique PIP parameters and a guide to the manufacture of CMC radomes and antennas.
Keywords: CMC, PIP, precursor, quartz
Procedia PDF Downloads 160
13402 An Approach to Determine the In-Transit Vibration of Fresh Produce Using Long Range Radio (LORA) Wireless Transducers
Authors: Indika Fernando, Jiangang Fei, Roger Stanely, Hossein Enshaei
Abstract:
The ever-increasing demand for quality fresh produce by consumers has greatly increased the pressure on post-harvest supply chains in recent years. Mechanical injury to fresh produce is a critical factor in produce wastage, especially with the expansion of supply chains physically extending over thousands of miles. The impact of vibration damage in transit was identified as a specific area of focus, as it results in the wastage of a significant portion of fresh produce, at times ranging from 10% to 40% in some countries. Several studies have concentrated on quantifying the impact of vibration on fresh produce, and it has been a challenge to collect vibration impact data continuously due to limitations in battery life or the memory capacity of the devices. Therefore, study samples were limited to a stretch of the transit passage or a limited time of the journey. This may or may not give an accurate understanding of the vibration impacts encountered throughout the transit passage, which limits the accuracy of the results. Consequently, an approach that can extend the capacity and ability to determine vibration signals over the whole transit passage would contribute to accurately analyzing vibration damage along the post-harvest supply chain. A mechanism was developed to address this challenge, capable of measuring the in-transit vibration continuously throughout the transit passage, subject to a minimum acceleration threshold (0.1 g). A system consisting of six tri-axial vibration transducers installed in different locations inside the cargo (produce) pallets in the truck transmits vibration signals through LORA (Long Range Radio) technology to a central device installed inside the container. The central device processes and records the vibration signals transmitted by the portable transducers, along with the GPS location. This method economizes the power consumption of the portable transducers, maximizing the capability of measuring vibration impacts over transit passages extending to days in the distribution process. Trial tests conducted using the approach reveal that it is a reliable method to measure and quantify in-transit vibrations along the supply chain. The GPS capability enables identification of the locations in the supply chain where significant vibration impacts were encountered. This method contributes to determining the causes, susceptibility, and intensity of vibration impact damage to fresh produce in the post-harvest supply chain. By extension, the approach could be used to determine vibration impacts not only for fresh produce but for products in any supply chain, which may extend from a few hours to several days in transit.
Keywords: post-harvest, supply chain, wireless transducers, LORA, fresh produce
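The triggering logic at the heart of the power and memory economy can be sketched as follows; the sampling rate, noise level, and the simulated impact are assumptions, not the study's measurements.

```python
import numpy as np

G = 9.81
THRESHOLD = 0.1 * G        # 0.1 g trigger, as described in the abstract

rng = np.random.default_rng(2)
# Hypothetical tri-axial accelerometer stream (m/s^2): mostly quiet road
# noise with one injected shock event.
samples = 0.02 * rng.normal(size=(10_000, 3))
samples[3000:3010] += [0.0, 0.0, 2.5]      # simulated impact

log = []
for i, a in enumerate(samples):
    magnitude = np.linalg.norm(a)
    # Record only when acceleration exceeds the 0.1 g threshold, which is
    # what lets battery and memory last over multi-day transit passages.
    if magnitude > THRESHOLD:
        log.append((i, magnitude))

print(f"stored {len(log)} of {len(samples)} samples")
print("first logged event:", log[0])
```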
Procedia PDF Downloads 265
13401 Compensation Strategies and Their Effects on Employees' Motivation and Organizational Citizenship Behaviour in Some Manufacturing Companies in Lagos, Nigeria
Authors: Ade Oyedijo
Abstract:
This paper reports the findings of a study on the strategic and organizational antecedents and effects of two opposing pay patterns used by some manufacturing companies in Lagos, Nigeria, with particular reference to the behavioural correlates of the pay strategies considered. The assumed relationship between pay strategies and some organizational correlates, such as business and corporate strategies and firm size, was considered problematic in view of their likely implications for employee motivation, citizenship behaviour, and firm performance. The survey research method was used for the study. Structured, close-ended questions were used to collect primary data from the respondents. A multipart Likert scale was used to measure the pay orientations of the respondent firms and the job and organizational involvement of the respondent employees. Utilizing the hierarchical linear regression method and t-tests to analyze the data obtained from 48 manufacturing companies of various sizes and strategies, the dominant pattern of employee compensation in the sampled manufacturing companies was identified. The study also revealed that the choice of a pay strategy was strongly influenced by organizational size as well as the type of business and corporate-level strategies adopted by a firm. Firms pursuing a strategy of related and unrelated diversification are more likely to adopt the algorithmic compensation system than single-product firms because of their relatively larger size and scope. However, firms that pursue competitive advantage through a business-level strategy of cost efficiency are more likely to use the experiential, variable pay strategy. The study found that an algorithmic compensation strategy is as effective as an experiential compensation strategy in the promotion of organizational citizenship behaviour and motivation of employees.
Keywords: compensation, corporate strategy, business strategy, motivation, citizenship behaviour, algorithmic, experiential, organizational commitment, work environment
Procedia PDF Downloads 391
13400 Pre-Service Mathematics Teachers' Mental Construction in Solving Equations and Inequalities Using ACE Teaching Cycle
Authors: Abera Kotu, Girma Tesema, Mitiku Tadesse
Abstract:
This study investigated ACE-supported instruction and pre-service mathematics teachers' mental construction in solving equations and inequalities. A mixed approach with a concurrent parallel design was employed. It was conducted on two intact groups of regular first-year pre-service mathematics teachers at Fiche College of Teachers' Education, in which one group was assigned as the intervention group and the other as the comparison group using the lottery method. There were 33 participants in the intervention group and 32 in the comparison group. Six pre-service mathematics teachers were selected for interviews using purposive sampling based on pre-test results. Instruction supported by the ACE cycle was given to the intervention group for a duration of two weeks. Written tasks, interviews, and observations were used to collect data. Data collected from written tasks were analyzed quantitatively using an independent samples t-test and effect size. Data collected from interviews and observations were analyzed narratively. The findings of the study uncovered that ACE-supported instruction has a moderate effect on pre-service mathematics teachers' levels of conceptualization of action, process, object, and schema. Moreover, the ACE-supported group outscored and performed better than the group supported by the usual traditional method across the levels of conceptualization. The majority of pre-service mathematics teachers' levels of conceptualization were at the action and process levels, and their levels of conceptualization were linked with genetic decomposition more at the action and process levels than at the object and schema levels. The use of ACE-supported instruction is recommended to improve pre-service mathematics teachers' mental construction.
Keywords: ACE teaching cycle, APOS theory, mental construction, genetic decomposition
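A minimal sketch of the quantitative analysis named above: an independent samples t-test plus an effect size (here Cohen's d with a pooled standard deviation). The score distributions are simulated, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Hypothetical post-test scores for the intervention group
# (ACE-supported, n = 33) and the comparison group (n = 32).
ace = rng.normal(28, 5, 33)
traditional = rng.normal(24, 5, 32)

t, p = stats.ttest_ind(ace, traditional)

# Cohen's d with pooled standard deviation as the effect size.
n1, n2 = len(ace), len(traditional)
sp = np.sqrt(((n1 - 1) * ace.var(ddof=1) +
              (n2 - 1) * traditional.var(ddof=1)) / (n1 + n2 - 2))
d = (ace.mean() - traditional.mean()) / sp

print(f"t = {t:.3f}, p = {p:.4f}, Cohen's d = {d:.2f}")
```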
Procedia PDF Downloads 16
13399 Bioelectronic System for Continuous Monitoring of Cardiac Activity of Benthic Invertebrates for the Assessment of Surface Water Quality
Authors: Sergey Kholodkevich, Tatiana Kuznetsova
Abstract:
An objective assessment of the ecological state of water ecosystems is impossible without the use of biological methods of environmental monitoring capable of revealing, in an integrated way, changes in water quality that are negative for biota in their habitats. The biomarker approach is of considerable interest for the development of such environmental quality control methods. Measuring systems by means of which characteristics of cardiac activity are registered have received the name bioelectronic systems. Bioelectronic systems are information and measuring systems in which animals (namely, benthic invertebrates) are directly included in the structure of the primary converters, being an integral part of the electronic system registering particular physiological or behavioural biomarkers. Various characteristics of cardiac activity of the selected invertebrates have been used as physiological biomarkers in the bioelectronic system. Changes in cardiac activity are considered integrative measures of the physiological condition of organisms, which reflect the state of the environment in which they dwell. The greatest successes have been achieved in the development of tools for biological methods and technologies for assessing surface water quality in real time. An essential advantage of bioindication of water quality by such a tool is the possibility of an integrated assessment of the biological effects of pollution on biota, as well as the rapidity of the method and approaches used. In the report, the practical experience of the authors in biomonitoring and bioindication of the ecological condition of marine, brackish, and freshwater areas is discussed. The authors note that the method of non-invasive monitoring of the cardiac activity of selected invertebrates can be used not only for the advancement of biomonitoring but is also useful in solving general problems of the comparative physiology of invertebrates.
Keywords: benthic invertebrates, physiological state, heart rate monitoring, water quality assessment
Procedia PDF Downloads 718
13398 Generation of Research Ideas Through a Matrix in the Field of International Comparative Education
Authors: Saleh Alzahrani
Abstract:
Studies in the field of International Comparative Education in the Arab world and the Middle East are scarce. Some International Comparative Education researchers and postgraduates face a challenge in selecting a distinguished study that would improve their national education system; it requires considerable effort. Accordingly, the matrix of scientific research in comparative and international education is designed to help specialists, researchers, and graduate students generate a variety of research ideas in this field in a short time. The matrix was built by applying the content analysis method to comparative education research published in Arab journals from 1980 to 2017. Then, qualitative input with the in-depth focus analysis tool was utilized according to the root theory. The matrix consists of two axes: vertical (X) and horizontal (Y). The vertical axis comprises 6 domains, including 105 variables. The horizontal axis covers two fields: pre-university education, incorporating educational stages and contemporary formulations, with 23 variables; and university education in public universities and contemporary formulas, with 15 variables. The researcher can access topics, ideas, and research points through the matrix of scientific research in comparative and international education by selecting any subject on the vertical axis (X) from (1) to (105) and any subject on the horizontal axis (Y) from (B) to (U). The cell where the axes intersect for the chosen fields can generate an idea or a research point conveniently and easily through the words that have been monitored by the user. These steps can be repeated to generate new ideas and research points. Many graduate researchers have been trained in using this matrix, which gave them greater potential to generate an appropriate study serving national education.
Keywords: content analysis method, comparative education, international education, matrix, root theory
Procedia PDF Downloads 133
13397 Study of the Effect of Liquefaction on Buried Pipelines during Earthquakes
Authors: Mohsen Hababalahi, Morteza Bastami
Abstract:
Buried pipeline damage correlations are a critical part of loss estimation procedures applied to lifelines for future earthquakes. The vulnerability of buried pipelines to earthquakes and liquefaction has been observed during several previous earthquakes, and there are many comprehensive reports about these events. One of the main reasons for the impairment of buried pipelines during earthquakes is liquefaction. The necessary conditions for this phenomenon are loose sandy soil, saturation of the soil layer, and sufficient earthquake intensity. Because pipeline structures are very different from other structures (being long and having light mass), attention to the results of previous earthquakes and comparison with other structures make it evident that the liquefaction hazard for buried pipelines is not high unless governing parameters such as earthquake intensity and soil looseness are severe. Recent liquefaction research for buried pipelines includes experimental and theoretical work as well as damage investigations during actual earthquakes. The damage investigations have revealed that the pipeline damage ratio (number/km) has much larger values in liquefied ground compared with shaking ground without liquefaction, according to damage statistics from past severe earthquakes, and that damage to joints and to pipelines connected to manholes was remarkable. The purpose of this research is a numerical study of buried pipelines under the effect of liquefaction, with a case study of the 2013 Dashti (Iran) earthquake. The water supply and electrical distribution systems of this township were interrupted during the earthquake, and water transmission pipelines were damaged severely due to the occurrence of liquefaction. The model consists of a polyethylene pipeline, 100 meters long and 0.8 meters in diameter, covered by light sandy soil at a burial depth of 2.5 meters from the surface. Since the finite element method has been used relatively successfully to solve geotechnical problems, we used this method for the numerical analysis. Evaluating this case requires geotechnical information, classification of earthquake levels, determination of the parameters governing the probability of liquefaction, and three-dimensional numerical finite element modeling of the interaction between soil and pipelines. The results of this study indicate that the effect of liquefaction is a function of pipe diameter, soil type, and peak ground acceleration. There is a clear increase in the percentage of damage with increasing liquefaction severity. The results also indicate that although in this form of analysis the damage is always associated with a certain pipe material, the nominally defined "failures" include failures of particular components (joints, connections, fire hydrant details, crossovers, laterals) rather than material failures. Finally, some retrofit suggestions are given in order to decrease the risk of liquefaction damage to buried pipelines.
Keywords: liquefaction, buried pipelines, lifelines, earthquake, finite element method
Procedia PDF Downloads 513
13396 Hypersonic Flow of CO2-N2 Mixture around a Spacecraft during Atmospheric Reentry
Authors: Zineddine Bouyahiaoui, Rabah Haoui
Abstract:
The aim of this work is to analyze the flow around an axisymmetric blunt body, taking into account chemical and vibrational nonequilibrium. This work concerns the entry of a spacecraft into the atmosphere of the planet Mars. Since the equations involved are non-linear partial differential equations, the finite volume method is the only practical way to solve this problem. The choice of the mesh and the CFL number is a condition for convergence to the stationary solution.
Keywords: blunt body, finite volume, hypersonic flow, viscous flow
Procedia PDF Downloads 234
13395 Stable Time-Reversed Integration of the Navier-Stokes Equation Using an Adjoint Gradient Method
Authors: Jurriaan Gillissen
Abstract:
This work is concerned with stabilizing the numerical integration of the Navier-Stokes equation (NSE) backwards in time. Applications involve the detection of sources of, e.g., sound, heat, and pollutants. Stable reverse numerical integration of parabolic differential equations is also relevant for image de-blurring. While the literature addresses the reverse integration problem of the advection-diffusion equation, the problem of numerical reverse integration of the NSE has, to our knowledge, not yet been addressed. Owing to the presence of viscosity, the NSE is irreversible, i.e., when going backwards in time, the fluid behaves as if it had a negative viscosity. As an effect, perturbations from the perfect solution, due to round-off errors or discretization errors, grow exponentially in time, and reverse integration of the NSE is inherently unstable, regardless of using an implicit time integration scheme. Consequently, some sort of filtering is required in order to achieve a stable, numerical, reversed integration. The challenge is to find a filter with a minimal adverse effect on the accuracy of the reversed integration. In the present work, we explore an adjoint gradient method (AGM) to achieve this goal, and we apply this technique to two-dimensional (2D), decaying turbulence. The AGM solves for the initial velocity field u0 at t = 0 that, when integrated forward in time, produces a final velocity field u1 at t = 1 that is as close as feasibly possible to some specified target field v1. The initial field u0 defines a minimum of a cost functional J that measures the distance between u1 and v1. In the minimization procedure, u0 is updated iteratively along the gradient of J w.r.t. u0, where the gradient is obtained by transporting J backwards in time from t = 1 to t = 0, using the adjoint NSE. The AGM thus effectively replaces the backward integration by multiple forward and backward adjoint integrations. Since the viscosity is negative in the adjoint NSE, each step of the AGM is numerically stable. Nevertheless, when applied to turbulence, the AGM develops instabilities, which limit the backward integration to small times. This is due to the exponential divergence of phase space trajectories in turbulent flow, which produces a multitude of local minima in J when the integration time is large. As an effect, the AGM may select unphysical, noisy initial conditions. In order to improve this situation, we propose two remedies. First, we replace the integration by a sequence of smaller integrations, i.e., we divide the integration time into segments, where in each segment the target field v1 is taken as the initial field u0 from the previous segment. Second, we add an additional term (regularizer) to J, which is proportional to a high-order Laplacian of u0 and which dampens the gradients of u0. We show that suitable values for the segment size and for the regularizer allow a stable reverse integration of 2D decaying turbulence, with accurate results for more than O(10) turbulent integral time scales.
Keywords: time-reversed integration, parabolic differential equations, adjoint gradient method, two-dimensional turbulence
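A drastically reduced sketch of the AGM idea on a 1D periodic heat equation (linear and self-adjoint), where the gradient of J w.r.t. u0 is simply the residual transported back by the same diffusion operator. The small gradient penalty stands in for the paper's high-order Laplacian regularizer; all parameters are illustrative, not from the paper.

```python
import numpy as np

n, nt, dt, nu = 64, 100, 1e-4, 0.5
dx = 1.0 / n
r = nu * dt / dx**2                       # explicit-Euler diffusion number

def step(u):
    return u + r * (np.roll(u, 1) - 2 * u + np.roll(u, -1))

def forward(u0):
    u = u0.copy()
    for _ in range(nt):
        u = step(u)
    return u

x = np.linspace(0, 1, n, endpoint=False)
true_u0 = np.sin(2 * np.pi * x) + 0.5 * np.sin(6 * np.pi * x)
v1 = forward(true_u0)                     # target field at the final time

# Adjoint gradient descent: for this linear, self-adjoint operator the
# gradient of J = 0.5*||u1 - v1||^2 w.r.t. u0 is the residual diffused by
# the same number of steps. A small penalty on grad(u0) plays the role of
# the paper's high-order regularizer (a simplifying assumption).
u0 = np.zeros(n)
beta, lr = 1e-6, 2.0
for _ in range(500):
    residual = forward(u0) - v1
    grad = forward(residual)              # "adjoint" run (self-adjoint A)
    grad += beta * (2 * u0 - np.roll(u0, 1) - np.roll(u0, -1)) / dx**2
    u0 -= lr * grad

err = np.linalg.norm(u0 - true_u0) / np.linalg.norm(true_u0)
print(f"relative recovery error: {err:.3e}")
```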
Procedia PDF Downloads 224
13394 Increased Cytolytic Activity of Effector T-Cells against Cholangiocarcinoma Cells by Self-Differentiated Dendritic Cells with Down-Regulation of Interleukin-10 and Transforming Growth Factor-β Receptors
Authors: Chutamas Thepmalee, Aussara Panya, Mutita Junking, Jatuporn Sujjitjoon, Nunghathai Sawasdee, Pa-Thai Yenchitsomanus
Abstract:
Cholangiocarcinoma (CCA) is an aggressive malignancy of bile duct epithelial cells for which the standard treatments, including surgery, radiotherapy, chemotherapy, and targeted therapy, are only partially effective. Many solid tumors, including CCA, escape host immune responses by creating a tumor microenvironment and generating immunosuppressive cytokines such as interleukin-10 (IL-10) and transforming growth factor-β (TGF-β). These cytokines can inhibit dendritic cell (DC) differentiation and function, leading to decreased activation and response of effector CD4+ and CD8+ T cells for cancer cell elimination. To overcome the effects of these immunosuppressive cytokines and to increase the ability of DCs to activate effector CD4+ and CD8+ T cells, we generated self-differentiated DCs (SD-DCs) with down-regulation of IL-10 and TGF-β receptors. Human peripheral blood monocytes were initially transduced with lentiviral particles containing the genes encoding GM-CSF and IL-4 and then transduced a second time with lentiviral particles containing short-hairpin RNAs (shRNAs) to knock down the mRNAs of the IL-10 and TGF-β receptors. The generated SD-DCs showed up-regulation of MHC class II (HLA-DR) and co-stimulatory molecules (CD40 and CD86), comparable to those of DCs generated by the conventional method. Suppression of IL-10 and TGF-β receptors on SD-DCs by specific shRNAs significantly increased levels of IFN-γ and also increased the cytolytic activity of DC-activated effector T cells against CCA cell lines (KKU-213 and KKU-100), while having little effect on immortalized cholangiocytes (MMNK-1). Thus, SD-DCs with down-regulation of IL-10 and TGF-β receptors increased the activation of effector T cells, a recommended method to improve DC function in the preparation of DC-activated effector T cells for adoptive T-cell therapy.
Keywords: cholangiocarcinoma, IL-10 receptor, self-differentiated dendritic cells, TGF-β receptor
Procedia PDF Downloads 141
13393 Approach on Conceptual Design and Dimensional Synthesis of the Linear Delta Robot for Additive Manufacturing
Authors: Efrain Rodriguez, Cristhian Riano, Alberto Alvares
Abstract:
In recent years, robot manipulators with parallel architectures have been used in additive manufacturing processes (3D printing). These robots have advantages such as speed and lightness that make them suitable for improving the efficiency and productivity of these processes. Consequently, interest in the development of parallel robots for additive manufacturing applications has increased. This article deals with the conceptual design and dimensional synthesis of the linear delta robot for additive manufacturing. Firstly, a methodology based on structured processes for the development of products through the phases of informational design, conceptual design, and detailed design is adopted: a) in the informational design phase, the Mudge diagram and the QFD matrix are used to derive a set of technical requirements and to define the form, functions, and features of the robot; b) in the conceptual design phase, functional modeling of the system through an IDEF0 diagram is performed, and the solution principles for the requirements are formulated using a morphological matrix; this phase includes the description of the mechanical, electro-electronic, and computational subsystems that constitute the general architecture of the robot; c) in the detailed design phase, a digital model of the robot is drawn in CAD software, a list of commercial and manufactured parts is detailed, tolerances and adjustments are defined for some parts of the robot structure, and the necessary manufacturing processes and tools are listed, including milling, turning, and 3D printing. Secondly, a dimensional synthesis method applied to the design of the linear delta robot is presented. One of the most important factors in the design of a parallel robot is the useful workspace, which strongly depends on the joint space, the dimensions of the mechanism bodies, and the possible interferences between these bodies. The objective function is based on the verification of the kinematic model for a prescribed cylindrical workspace, considering geometric constraints that could possibly lead to singularities of the mechanism. The aim is to determine the minimum dimensional parameters of the mechanism bodies for the proposed workspace. A method based on genetic algorithms was used to solve this problem. The method uses a cloud of points with the cylindrical shape of the workspace and checks the kinematic model for each of the points within the cloud, as sketched below. The evolution of the population provides the optimal parameters for the design of the delta robot. The development process of the linear delta robot with optimal dimensions for additive manufacture is presented. The dimensional synthesis enabled the design of the robot mechanism as a function of the prescribed workspace. Finally, the implementation of the developed robotic platform, based on a linear delta robot, in an additive manufacturing application using the Fused Deposition Modeling (FDM) technique is presented.
Keywords: additive manufacturing, delta parallel robot, dimensional synthesis, genetic algorithms
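A sketch of the genetic-algorithm synthesis loop described above. The reachability test is a deliberate simplification (the horizontal distance from each rail to the point must not exceed the arm length, with unlimited carriage travel), not the full linear-delta kinematics, and all dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)

# Cylindrical workspace sampled as a point cloud (radius 0.1 m, height 0.2 m).
n_pts = 300
theta = rng.uniform(0, 2 * np.pi, n_pts)
rad = 0.1 * np.sqrt(rng.uniform(0, 1, n_pts))
cloud = np.column_stack([rad * np.cos(theta), rad * np.sin(theta),
                         rng.uniform(0, 0.2, n_pts)])

# Three vertical rails at 120 degrees; candidate = (rail radius R, arm L).
RAILS = np.array([[np.cos(a), np.sin(a)] for a in
                  (0, 2 * np.pi / 3, 4 * np.pi / 3)])

def reachable(params, p):
    R, L = params
    for rail in RAILS:
        # Carriage slides in z, so only the horizontal distance matters
        # in this simplified feasibility check.
        if np.linalg.norm(p[:2] - R * rail) > L:
            return False
    return True

def fitness(params):
    if any(not reachable(params, p) for p in cloud):
        return np.inf              # infeasible: workspace not covered
    return sum(params)             # minimize overall size R + L

# Minimal GA: truncation selection, blend crossover, Gaussian mutation.
pop = rng.uniform([0.05, 0.05], [0.5, 0.5], size=(40, 2))
for gen in range(60):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)][:10]
    children = []
    while len(children) < len(pop) - len(parents):
        a, b = parents[rng.integers(10, size=2)]
        child = 0.5 * (a + b) + rng.normal(0, 0.01, 2)
        children.append(np.clip(child, 0.01, 0.6))
    pop = np.vstack([parents, children])

best = pop[np.argmin([fitness(ind) for ind in pop])]
print(f"best rail radius R = {best[0]:.3f} m, arm length L = {best[1]:.3f} m")
```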
Procedia PDF Downloads 190
13392 Value Chain Network: A Social Network Analysis of the Value Chain Actors of Recycled Polymer Products in Lagos Metropolis, Nigeria
Authors: Olamide Shittu, Olayinka Akanle
Abstract:
Value chain analysis is a common method of examining the stages involved in the production of a product, mostly agricultural produce, from the input to the consumption stage, including the actors involved in each stage. Functional institutional analysis is the method most commonly employed in the literature to analyze the value chain of products. Apart from studying the relatively neglected phenomenon of recycled polymer products in Lagos Metropolis, this paper adopted social network analysis to attempt a grounded theory of the nature of the social network that exists among the value chain actors of the subject matter. The study adopted a grounded theory approach by conducting in-depth interviews, administering questionnaires, and conducting observations among the identified value chain actors of recycled polymer products in Lagos Metropolis, Nigeria. The thematic analysis of the collected data gave the researchers the background needed to formulate a truly representative network of the social relationships among the value chain actors of recycled polymer products in Lagos Metropolis. The paper introduced concepts such as transient and perennial social ties to explain the observed social relations among the actors. Some actors have more social capital than others as a result of the structural holes that exist in their triad networks. Households and resource recoverers are at a disadvantaged position in the network, as they have high constraints in their relationships with other actors. The study attempted to provide a new perspective in the study of the environmental value chain by analyzing the network of actors to bring about policy action points and improve recycling in Nigeria. Government and social entrepreneurs can exploit the structural holes that exist in the network for the socio-economic and sustainable development of the state.
Keywords: recycled polymer products, social network analysis, social ties, value chain analysis
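The structural-holes reading of the network can be reproduced in a few lines; the edge list below is a hypothetical stand-in for the observed ties, and Burt's constraint (high constraint = few structural holes = less brokerage opportunity) is computed with networkx.

```python
import networkx as nx

# Hypothetical value-chain network for recycled polymer products; edges
# represent observed exchange/social ties between actor categories.
G = nx.Graph()
G.add_edges_from([
    ("household", "resource_recoverer"),
    ("resource_recoverer", "waste_picker"),
    ("waste_picker", "middleman"),
    ("middleman", "recycler"),
    ("recycler", "manufacturer"),
    ("middleman", "manufacturer"),
])

# Burt's constraint: higher values mean the actor's contacts are densely
# interconnected, leaving fewer structural holes to broker across.
constraint = nx.constraint(G)
for actor, c in sorted(constraint.items(), key=lambda kv: kv[1]):
    print(f"{actor:>20s}: constraint = {c:.3f}")
```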
Procedia PDF Downloads 410
13391 Experimental Modal Analysis of a Suspended Composite Beam
Authors: Lahmar Lahbib, Abdeldjebar Rabiâ, Moudden B., Missoum L.
Abstract:
Vibration tests are used to identify the elasticity modulus in two directions. This strategy is applied to glass/polyester composite materials. Experimental results obtained on a specimen in free vibration showed the efficiency of this method. The obtained results were validated by comparison with results stemming from static tests.
Keywords: beam, characterization, composite, elasticity modulus, vibration
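A sketch of the identification step, assuming an Euler-Bernoulli free-free beam: the elasticity modulus follows from a measured first bending frequency. The dimensions, density, and frequency below are hypothetical, not the paper's measurements.

```python
import numpy as np

# Euler-Bernoulli free-free beam:
#   f_n = (lambda_n^2 / (2*pi*L^2)) * sqrt(E*I / (rho*A))
# Inverting for E from a measured first natural frequency.
L = 0.30              # beam length, m
b, h = 0.020, 0.003   # width and thickness, m
rho = 1850.0          # assumed density of glass/polyester laminate, kg/m^3
f1 = 95.0             # hypothetical measured first bending frequency, Hz
lam1 = 4.7300         # first free-free eigenvalue (lambda_1 * L)

A = b * h             # cross-section area
I = b * h**3 / 12.0   # second moment of area

E = (2 * np.pi * f1 * L**2 / lam1**2) ** 2 * rho * A / I
print(f"identified elasticity modulus: {E / 1e9:.2f} GPa")
```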
Procedia PDF Downloads 463
13390 Fabricating Method for Complex 3D Microfluidic Channel Using Soluble Wax Mold
Authors: Kyunghun Kang, Sangwoo Oh, Yongha Hwang
Abstract:
PDMS (polydimethylsiloxane)-based microfluidic devices have recently been applied to the areas of biomedical research, tissue engineering, and diagnostics because PDMS is low cost, nontoxic, optically transparent, gas-permeable, and, above all, biocompatible. Generally, PDMS microfluidic devices are fabricated by conventional soft lithography. Such microfabrication requires expensive cleanroom facilities and a lot of time, yet only two-dimensional or simple three-dimensional structures can be fabricated. In this study, we introduce a fabricating method for complex three-dimensional microfluidic channels using a soluble wax mold. Using the 3D printing technique, we first fabricated a three-dimensional mold consisting of a soluble wax material. The PDMS pre-polymer is cast around it, followed by PDMS curing. The three-dimensional casting mold was then removed from the PDMS by chemically dissolving it with methanol and acetone. In this work, two preliminary experiments were carried out. Firstly, the solubility of several waxes was tested using various solvents, such as acetone, methanol, hexane, and IPA, to find combinations of wax and solvent in which the solvent dissolves the wax. Next, side effects of the solvents were investigated during the curing process of the PDMS pre-polymer. While some solvents made PDMS swell drastically, methanol and acetone caused PDMS to swell by only 2% and 6%, respectively. Thus, methanol and acetone can be used to dissolve wax in PDMS without any serious impact. Based on the preliminary tests, three-dimensional PDMS microfluidic channels were fabricated using the mold printed with a 3D printer. With the proposed fabricating technique, PDMS-based microfluidic devices have the advantages of fast prototyping, low cost, and optical transparency, as well as complex three-dimensional geometry. Acknowledgements: This research was supported by a Korea University Grant and the Basic Science Research Program through the National Research Foundation of Korea (NRF).
Keywords: microfluidic channel, polydimethylsiloxane, 3D printing, casting
Procedia PDF Downloads 274
13389 A Low Cost Education Proposal Using Strain Gauges and Arduino to Develop a Balance
Authors: Thais Cavalheri Santos, Pedro Jose Gabriel Ferreira, Alexandre Daliberto Frugoli, Lucio Leonardo, Pedro Americo Frugoli
Abstract:
This paper presents a low-cost education proposal to be used in engineering courses. Providing engineering education with quality and affordability in the universities of a developing country that needs an increasing number of engineers poses a difficult problem. In Brazil, the political and economic scenario requires academic managers able to reduce costs without compromising the quality of education. Within this context, a method for teaching physics principles through the construction of an electronic balance is proposed. First, a method to develop and construct a load cell is proposed, through which the students can understand the physical principles of strain gauges and bridge circuits. The load cell structure was made of aluminum 6351-T6, with dimensions of 80 mm x 13 mm x 13 mm, and for its instrumentation, a complete Wheatstone bridge was assembled with 350-ohm strain gauges. Additionally, the process involves the use of a software tool to document the prototypes (circuit design), conditioning of the signal, a microcontroller, C-language programming, and the development of the prototype. The project also uses an open-source I/O board (Arduino microcontroller). To design the circuit, the Fritzing software is used, and to program the controller, the open-source Arduino IDE. A load cell was chosen because strain gauges are accurate and have many applications in industry. A prototype was developed for this study, and it confirmed the affordability of this educational idea. Furthermore, the goal of this proposal is to motivate students to understand the many possible high-technology applications of load cells and microcontrollers.
Keywords: Arduino, load cell, low-cost education, strain gauge
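A back-of-envelope check students can run against such a prototype; the gauge factor, excitation voltage, amplifier gain, and strain level are assumed values, not the paper's measurements.

```python
# Full Wheatstone bridge with four active gauges (two in tension, two in
# compression) has sensitivity V_out = V_ex * GF * strain.
GF = 2.0          # typical gauge factor of a 350-ohm foil strain gauge
V_EX = 5.0        # bridge excitation voltage, V
GAIN = 128.0      # assumed instrumentation-amplifier gain
strain = 200e-6   # 200 microstrain on the loaded cell

v_out = V_EX * GF * strain
v_amplified = v_out * GAIN

print(f"bridge output : {v_out * 1e3:.3f} mV")
print(f"after gain    : {v_amplified * 1e3:.1f} mV")

# Resolution seen by a 10-bit Arduino ADC with a 5 V reference:
lsb = 5.0 / 1024
print(f"ADC counts    : {v_amplified / lsb:.1f}")
```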
Procedia PDF Downloads 303
13388 Gaming Mouse Redesign Based on Evaluation of Pragmatic and Hedonic Aspects of User Experience
Authors: Thedy Yogasara, Fredy Agus
Abstract:
In designing a product, it is currently crucial to focus not only on the product's usability based on performance measures but also on user experience (UX), which includes the pragmatic and hedonic aspects of product use. These aspects play a significant role in the fulfillment of user needs, both functionally and psychologically. Pragmatic quality refers to the product's perceived ability to support the fulfillment of behavioral goals. It is closely linked to the functionality and usability of the product. In contrast, hedonic quality is the product's perceived ability to support the fulfillment of psychological needs. Hedonic quality relates to the pleasure of ownership and use of the product, including stimulation for personal development and communication of the user's identity to others through the product. This study evaluates the pragmatic and hedonic aspects of the gaming mice G600 and Razer Krait using the AttrakDiff tool to create an improved design that is able to generate positive UX. AttrakDiff is a method that measures the pragmatic and hedonic scores of a product on a scale between -3 and +3 through four attributes (i.e., Pragmatic Quality, Hedonic Quality-Identification, Hedonic Quality-Stimulation, and Attractiveness), represented by 28 pairs of opposite words. Based on data gathered from 15 participants, it was identified that the gaming mouse G600 needed to be redesigned because of its low grades (pragmatic score: -0.838; hedonic score: 1; attractiveness score: 0.771). The redesign process focused on the attributes with poor scores and took into account improvement suggestions collected from interviews with the participants. The redesigned G600 mouse was evaluated using the previous method. The result shows higher scores in pragmatic quality (1.929), hedonic quality (1.703), and attractiveness (1.667), indicating that the redesigned mouse is more capable of creating a pleasurable experience of product use.
Keywords: AttrakDiff, hedonic aspect, pragmatic aspect, product design, user experience
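The scoring underlying AttrakDiff reduces to scale means over the 28 word-pair ratings. The sketch below uses simulated ratings from one participant and assumes the common convention that an aggregate hedonic score is the mean of the HQ-I and HQ-S items.

```python
import numpy as np

rng = np.random.default_rng(5)
# Hypothetical ratings from one participant: 28 word pairs scored on a
# -3..+3 semantic differential, grouped into the four AttrakDiff scales
# (7 items each): PQ, HQ-I, HQ-S, ATT.
ratings = rng.integers(-3, 4, size=28)

scales = {"PQ": ratings[0:7], "HQ-I": ratings[7:14],
          "HQ-S": ratings[14:21], "ATT": ratings[21:28]}

for name, items in scales.items():
    print(f"{name:>5s}: {items.mean():+.3f}")

# Aggregate hedonic quality as the mean of the HQ-I and HQ-S items
# (an assumed convention for combining the two hedonic scales).
hq = np.concatenate([scales["HQ-I"], scales["HQ-S"]]).mean()
print(f"   HQ: {hq:+.3f}")
```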
Procedia PDF Downloads 157
13387 A Paradigm Shift in Patent Protection - Protecting Methods of Doing Business: Implications for Economic Development in Africa
Authors: Odirachukwu S. Mwim, Tana Pistorius
Abstract:
Since the early 1990s, political and economic pressures have been mounting on policy and law makers to increase patent protection by raising protection standards. The perception of the relation between patent protection and development, particularly economic development, has evolved significantly in the past few years. Debate on patent protection in the international arena has been significantly influenced by the perception that there is a strong link between patent protection and economic development, and that the level of patent protection determines the extent of development that can be achieved. Recently, there has been a paradigm shift, with much emphasis on extending patent protection to methods of doing business, generally referred to as business method patenting (BMP). The general perception among international organizations and the private sector also indicates that there is a strong correlation between BMP protection and economic growth. There are two diametrically opposed views of the relation between intellectual property (IP) protection and development and innovation. One school of thought promotes the view that IP protection improves economic development through the stimulation of innovation and creativity. The other school advances the view that IP protection is unnecessary for the stimulation of innovation and creativity and is in fact a hindrance to open access to the resources and information required for innovative and creative modalities. Therefore, different theories and policies attach different levels of protection to BMP, which have specific implications for economic growth. This study examines the impact of BMP protection on development by focusing on the challenges confronting economic growth in African communities as a result of the new paradigm in patent law. (Africa is treated as a single unit in this study, but this should not be construed as African homogeneity; rather, the views advanced in this study are used to address the common challenges facing many communities in Africa.) The study reviews, from the points of view of legal philosophers, policy makers, and decisions of competent courts, the relevant literature, patent legislation (particularly international treaties), policies, and legal judgments. Findings from this study suggest that, over and above the various criticisms levelled against the extremely liberal approach to the recognition of business methods as patentable subject matter, there are other specific implications associated with such an approach. The most critical implication of extending patent protection to business methods is the locking up of knowledge, which may hamper human development in general and economic development in particular. Locking up knowledge necessary for economic advancement and competitiveness may have a negative effect on economic growth by promoting economic exclusion, particularly in African communities. This study suggests that knowledge of BMP within the African context, and the extent of protection linked to it, is crucial to achieving sustainable economic growth in Africa. It also suggests that a balance be struck between the two diametrically opposed views.
Keywords: Africa, business method patenting, economic growth, intellectual property, patent protection
Procedia PDF Downloads 127
13386 CO₂ Absorption Studies Using Amine Solvents with Fourier Transform Infrared Analysis
Authors: Avoseh Funmilola, Osman Khalid, Wayne Nelson, Paramespri Naidoo, Deresh Ramjugernath
Abstract:
The increase in global atmospheric temperature is of great concern, and this has led to the development of technologies to reduce the emission of greenhouse gases into the atmosphere. Flue gas emissions from fossil fuel combustion are a major source of greenhouse gases. One way to reduce CO₂ emissions from flue gases is post-combustion capture, in which the gas is absorbed into suitable chemical solvents before the flue gas is released into the atmosphere. Alkanolamines are promising solvents for this capture process. The vapour-liquid equilibrium of CO₂-alkanolamine systems is often represented by the CO₂ loading and the partial pressure of CO₂ without considering the liquid phase, yet the liquid phase of this system is complex, comprising nine species. Online analysis of the process is therefore important for monitoring the concentrations of the reacting and product species in the liquid phase. Liquid-phase analysis of CO₂-diethanolamine (DEA) solutions was performed by attenuated total reflection Fourier transform infrared (ATR-FTIR) spectroscopy. A robust calibration was performed for the CO₂-aqueous DEA system prior to the online monitoring experiment, and the partial least squares (PLS) regression method was used to analyze the calibration spectra obtained. The resulting models were used to predict DEA and CO₂ concentrations in the online monitoring experiment, which was performed with a newly built recirculating experimental set-up consisting of a 750 ml equilibrium cell and an ATR-FTIR liquid flow cell. Measurements were performed at 40.0 °C. The results indicate that ATR-FTIR spectroscopy combined with PLS regression is an effective tool for online monitoring of speciation. Keywords: ATR-FTIR, CO₂ capture, online analysis, PLS regression
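The calibration step described above maps measured spectra to species concentrations. The sketch below shows the idea with scikit-learn's PLSRegression on synthetic spectra; the wavenumber grid, band shapes, concentration ranges and component count are all illustrative assumptions, not the authors' data or procedure.

```python
# A minimal PLS-calibration sketch: regress ATR-FTIR-like absorbance spectra
# onto (DEA, CO2) concentrations. Spectra are synthetic stand-ins built from
# two Gaussian "bands" via a Beer-Lambert-like mixture plus noise.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_wavenumbers = 60, 400

grid = np.linspace(0.0, 1.0, n_wavenumbers)
pure_dea = np.exp(-((grid - 0.3) ** 2) / 0.002)  # fake DEA band
pure_co2 = np.exp(-((grid - 0.7) ** 2) / 0.002)  # fake dissolved-CO2 band

# Illustrative concentrations (mol/L) and noisy mixture spectra.
C = rng.uniform([0.5, 0.0], [3.0, 1.5], size=(n_samples, 2))
X = C @ np.vstack([pure_dea, pure_co2])
X += 0.01 * rng.standard_normal((n_samples, n_wavenumbers))

X_train, X_test, C_train, C_test = train_test_split(X, C, random_state=0)
# The number of latent components would normally be chosen by cross-validation.
pls = PLSRegression(n_components=4)
pls.fit(X_train, C_train)
print("R^2 on held-out spectra:", pls.score(X_test, C_test))
```

Once fitted on calibration standards, such a model is applied to each spectrum acquired in the flow cell to track concentrations over time.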
Procedia PDF Downloads 198
13385 A Comparative Study of Cognitive Functions in Relapsing-Remitting Multiple Sclerosis Patients, Secondary-Progressive Multiple Sclerosis Patients and Normal People
Authors: Alireza Pirkhaefi
Abstract:
Background: Multiple sclerosis (MS) is one of the most common diseases of the central nervous system (brain and spinal cord). Given the importance of cognitive disorders in patients with multiple sclerosis, the present study compared cognitive functions (working memory, attention and concentration, and visual-spatial perception) in patients with relapsing-remitting multiple sclerosis (RRMS) and secondary progressive multiple sclerosis (SPMS). Method: The present study was a retrospective, ex post facto study. The sample consisted of 60 patients with multiple sclerosis (30 relapsing-remitting and 30 secondary progressive), selected by convenience sampling from a supported community of MS patients in Tehran; 30 normal persons were also selected as a comparison group. The Montreal Cognitive Assessment (MoCA) was used to assess cognitive functions, and the data were analyzed using multivariate analysis of variance (see the sketch below). Results: There were significant differences in cognitive functioning among patients with RRMS, patients with SPMS and normal individuals. No significant differences in working memory were found between the RRMS and SPMS groups, whereas both patient groups differed significantly from normal individuals on this variable. The results also showed significant differences in attention and concentration and in visual-spatial perception among the three groups. Conclusions: The results show differences between the cognitive functions of RRMS and SPMS patients, with RRMS patients performing better than SPMS patients. These findings can help improve cognitive functioning, reduce the factors causing disability due to cognitive impairment and, more broadly, promote the overall health of society. Keywords: multiple sclerosis, cognitive function, secondary-progressive, normal subjects
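For readers unfamiliar with the analysis, the sketch below runs a one-way MANOVA comparing three groups on several cognitive scores, as the abstract describes. The data frame, group means and score names are synthetic placeholders, not the study's data.

```python
# A minimal one-way MANOVA sketch: do mean vectors of cognitive scores differ
# across RRMS, SPMS and control groups? Uses synthetic, illustrative data.
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(1)
means = {  # illustrative group means for three cognitive scores
    "RRMS":    (4.0, 4.5, 3.8),
    "SPMS":    (3.2, 3.6, 3.0),
    "control": (5.0, 5.5, 4.6),
}
rows = []
for group, (wm, attn, vsp) in means.items():
    for _ in range(30):  # 30 participants per group, as in the study design
        rows.append({"group": group,
                     "working_memory": rng.normal(wm, 1.0),
                     "attention": rng.normal(attn, 1.0),
                     "visuospatial": rng.normal(vsp, 1.0)})
df = pd.DataFrame(rows)

# Wilks' lambda and related statistics test the overall group effect.
fit = MANOVA.from_formula("working_memory + attention + visuospatial ~ group",
                          data=df)
print(fit.mv_test())
```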
Procedia PDF Downloads 239
13384 Machine Learning Techniques in Bank Credit Analysis
Authors: Fernanda M. Assef, Maria Teresinha A. Steiner
Abstract:
The aim of this paper is to compare and discuss classifier algorithm options for credit risk assessment by applying different machine learning techniques. Using records from a Brazilian financial institution, this study draws on a database of 5,432 companies that are clients of the bank: 2,600 clients are classified as non-defaulters, 1,551 as defaulters and 1,281 as temporarily defaulters, meaning that these clients are overdue on their payments for up to 180 days. For each case, a total of 15 attributes was considered in a one-against-all assessment using four techniques: Artificial Neural Network Multilayer Perceptron (ANN-MLP), Artificial Neural Network Radial Basis Function (ANN-RBF), Logistic Regression (LR) and Support Vector Machines (SVM). The data were first encoded, with thermometer coding for numerical attributes and dummy coding for nominal attributes. Different parameter settings were then evaluated for each method, and the best result of each technique was compared in terms of accuracy, false positives, false negatives, true positives and true negatives. This comparison showed that the best method in terms of accuracy was ANN-RBF (79.20% for non-defaulter classification, 97.74% for defaulters and 75.37% for temporarily defaulters). However, the best accuracy does not always indicate the best technique: in the classification of temporary defaulters, for instance, ANN-RBF was surpassed in terms of false positives by SVM, which had the lowest false positive rate (0.07%). These details are discussed in light of the results found, and an overview of the study is given in the conclusion. Keywords: artificial neural networks (ANNs), classifier algorithms, credit risk assessment, logistic regression, machine learning, support vector machines
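The sketch below illustrates the one-against-all evaluation on synthetic data with the same class proportions; the bank's records, attributes and encodings are not public, so the features here are generic stand-ins. ANN-RBF is omitted because scikit-learn has no radial basis function network; the MLP, LR and SVM stand-ins mirror the other three techniques.

```python
# A minimal one-against-all sketch on synthetic data shaped like the study's
# database (5,432 cases, 15 attributes, 3 classes). Not the authors' pipeline.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Classes: 0 = non-defaulter, 1 = defaulter, 2 = temporarily defaulter,
# with weights approximating 2,600 / 1,551 / 1,281 out of 5,432.
X, y = make_classification(n_samples=5432, n_features=15, n_informative=8,
                           n_classes=3, weights=[0.48, 0.29, 0.23],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "ANN-MLP": MLPClassifier(hidden_layer_sizes=(30,), max_iter=500,
                             random_state=0),
    "LR": LogisticRegression(max_iter=1000),
    "SVM": SVC(kernel="rbf"),
}
for name, base in models.items():
    clf = OneVsRestClassifier(base).fit(X_tr, y_tr)  # one binary model per class
    y_hat = clf.predict(X_te)
    print(name, "accuracy:", round(accuracy_score(y_te, y_hat), 4))
    print(confusion_matrix(y_te, y_hat))  # rows: true class, cols: predicted
```

The confusion matrix yields the per-class true/false positive and negative counts the abstract compares, which is why accuracy alone can rank the methods differently from the false positive rate.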
Procedia PDF Downloads 103
13383 The Phenomenon of Rockfall in the TRACECA Corridor and the Choice of Engineering Measures to Combat It
Authors: I. Iremashvili, I. Pirtskhalaishvili, K. Kiknadze, F. Lortkipanidze
Abstract:
The paper deals with the causes of rockfall and its possible consequences on slopes adjacent to motorways and railways. A list of measures that hinder rockfall is given; these measures are directed at protecting roads from rockfalls rather than preventing them. From the standpoint of the local stability of slopes, the most effective measure is perhaps strengthening their surface by the method of filling, which will check or halt the processes of deformation, local slipping, sliding and the development of erosion. Keywords: rockfall, concrete spraying, heliodevices, railways
Procedia PDF Downloads 374