Search results for: mathematical row of numbers
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2883

513 Controlled Release of Glucosamine from Pluronic-Based Hydrogels for the Treatment of Osteoarthritis

Authors: Papon Thamvasupong, Kwanchanok Viravaidya-Pasuwat

Abstract:

Osteoarthritis affects many people worldwide. Local injection of glucosamine is one alternative treatment to replenish the natural lubrication of cartilage. However, repeated injections carry a risk of bacterial infection, so a drug delivery system is desired to reduce the frequency of injections. A hydrogel is one delivery system that can control the release of drugs. Thermo-reversible hydrogels are particularly beneficial for the local injection route because the formulation changes from liquid to gel after entering the human body. Once the gel is in the body, it slowly releases the drug in a controlled manner. In this study, various formulations of Pluronic-based hydrogels were synthesized for the controlled release of glucosamine. One of the challenges of the Pluronic controlled release system is its fast dissolution rate. To overcome this problem, alginate and calcium sulfate (CaSO4) were added to the polymer solution. The characteristics of the hydrogels were investigated, including the gelation temperature, gelation time, hydrogel dissolution and glucosamine release mechanism. Finally, a mathematical model of glucosamine release from the Pluronic-alginate-hyaluronic acid hydrogel was developed. Our results show that crosslinking the Pluronic gel with alginate did not significantly slow the dissolution of the gel. Moreover, the gel dissolution profiles and the glucosamine release mechanisms were best described by a zeroth-order kinetic model, indicating that the release of glucosamine was primarily governed by gel dissolution.
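
Under zeroth-order kinetics the cumulative release is linear in time, Q(t) = k0·t. A minimal sketch of fitting such a model, on hypothetical release data rather than the study's measurements:

```python
import numpy as np

# Hypothetical cumulative glucosamine release data (illustrative only,
# not the study's measurements): time in hours, release in percent.
t = np.array([0, 4, 8, 12, 24, 36, 48], dtype=float)
Q = np.array([0, 7, 15, 22, 46, 68, 91], dtype=float)

# Zeroth-order model: Q(t) = k0 * t  (release governed by gel dissolution).
k0 = np.sum(t * Q) / np.sum(t * t)          # least-squares slope through the origin
Q_fit = k0 * t
ss_res = np.sum((Q - Q_fit) ** 2)
ss_tot = np.sum((Q - Q.mean()) ** 2)
r2 = 1 - ss_res / ss_tot

print(f"k0 = {k0:.2f} %/h, R^2 = {r2:.3f}")
```

A high R² for the through-origin line, compared against first-order or Higuchi fits, is the usual evidence for dissolution-controlled release.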

Keywords: controlled release, drug delivery system, glucosamine, pluronic, thermoreversible hydrogel

Procedia PDF Downloads 262
512 Investigation of Crack Formation in Ordinary Reinforced Concrete Beams and in Beams Strengthened with Carbon Fiber Sheet: Theory and Experiment

Authors: Anton A. Bykov, Irina O. Glot, Igor N. Shardakov, Alexey P. Shestakov

Abstract:

This paper presents the results of experimental and theoretical investigations of the mechanisms of crack formation in reinforced concrete beams subjected to quasi-static bending. The boundary-value problem has been formulated in the framework of brittle fracture mechanics and solved using the finite-element method. Numerical simulation of the vibrations of an uncracked beam and of a beam with cracks of different sizes serves to determine the pattern of changes in the spectrum of eigenfrequencies observed during crack evolution. Experiments were performed on the sequential quasi-static four-point bending of the beam, leading to the formation of cracks in the concrete. At each loading stage, the beam was subjected to an impulse load to induce vibrations. Two stages of cracking were detected. In the first stage, deformation proceeds conservatively; the second stage is one of active cracking, marked by a sharp change in eigenfrequencies. The transition from one stage to the other is clearly registered. The vibration behavior was also examined for beams strengthened with carbon-fiber sheet before loading and at an intermediate stage of loading after the grouting of initial cracks. The obtained results show that the vibrodiagnostic approach is an effective tool for monitoring cracking and for assessing the quality of measures aimed at strengthening concrete structures.
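
The diagnostic signal here is the downward shift of eigenfrequencies as cracking reduces bending stiffness. A minimal illustration for a simply supported Euler-Bernoulli beam with a uniform (simplified) stiffness loss; all numbers are hypothetical, not the beam tested in the study:

```python
import numpy as np

# Simply supported Euler-Bernoulli beam: omega_n = (n*pi/L)^2 * sqrt(EI/(rho*A)).
# A real crack is local, so modes shift unevenly; a uniform EI reduction is
# the simplest stand-in for that effect.

def eigenfrequencies(EI, rho_A, L, modes=3):
    n = np.arange(1, modes + 1)
    omega = (n * np.pi / L) ** 2 * np.sqrt(EI / rho_A)
    return omega / (2 * np.pi)  # Hz

L = 3.0            # beam span, m (hypothetical)
EI = 8.0e6         # bending stiffness, N*m^2 (hypothetical)
rho_A = 300.0      # mass per unit length, kg/m (hypothetical)

f_intact = eigenfrequencies(EI, rho_A, L)
f_cracked = eigenfrequencies(0.85 * EI, rho_A, L)  # 15% stiffness loss from cracking

for n, (fi, fc) in enumerate(zip(f_intact, f_cracked), start=1):
    print(f"mode {n}: {fi:7.2f} Hz -> {fc:7.2f} Hz ({100*(fc-fi)/fi:+.1f}%)")
```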

Keywords: crack formation, experiment, mathematical modeling, reinforced concrete, vibrodiagnostics

Procedia PDF Downloads 295
511 Regional Dynamics of Innovation and Entrepreneurship in the Optics and Photonics Industry

Authors: Mustafa İlhan Akbaş, Özlem Garibay, Ivan Garibay

Abstract:

The economic entities in innovation ecosystems form various industry clusters, in which they compete and cooperate to survive and grow. Within a successful and stable industry cluster, the entities acquire different roles that complement each other. Universities and research centers are accepted to play a critical role in these systems in the creation and development of innovations. However, the real effect of research institutions on regional economic growth is difficult to assess. In this paper, we present our approach for identifying the impact of research activities on regional entrepreneurship for a specific high-tech industry: optics and photonics. Optics and photonics has been defined as an enabling industry, which combines high-tech photonics technology with the developing optics industry. The recent literature suggests that the growth of optics and photonics firms depends on three important factors: the embedded regional specializations in the labor market, the research and development infrastructure, and a dynamic small-firm network capable of absorbing new technologies, products and processes. Therefore, the role of each factor and the dynamics among them must be understood to identify the requirements of entrepreneurship activities in the optics and photonics industry. Our approach makes three main contributions. Recent studies show that innovation in the optics and photonics industry is mostly located around metropolitan areas, and several studies mention the importance of research center and university locations in the regional development of the industry; however, these studies are mostly limited to patent counts within short periods or to limited survey results. Therefore, the first contribution of our approach is a comprehensive analysis of the state and recent history of photonics and optics research in the US. For this purpose, both the research centers specialized in optics and photonics and the related research groups in various departments of institutions (e.g., Electrical Engineering, Materials Science) are identified, and a geographical study of their locations is presented. The second contribution is an analysis of regional entrepreneurship activities in optics and photonics in recent years. We use the membership data of the International Society for Optics and Photonics (SPIE) and the regional photonics clusters to identify the optics and photonics companies in the US. The profiles and activities of these companies are then gathered by extracting and integrating the related data from the National Establishment Time Series (NETS) database, the ES-202 database and the data sets of the regional photonics clusters. The number of start-ups, their employee numbers and their sales are examples of the extracted data. Our third contribution is the use of the collected data to investigate the impact of research institutions on regional optics and photonics industry growth and entrepreneurship. In this analysis, the regional and periodical conditions of the overall market are taken into consideration while discovering and quantifying the statistical correlations.
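
A minimal sketch of the correlation step described in the third contribution, using hypothetical regional counts rather than the NETS/ES-202 data; a rank correlation is shown because regional counts are rarely normally distributed:

```python
import numpy as np
from scipy import stats

# Hypothetical regional data (illustrative only): number of optics/photonics
# research groups per region vs. number of start-ups founded there.
research_groups = np.array([2, 5, 1, 9, 4, 12, 3, 7])
startups        = np.array([4, 9, 2, 15, 6, 22, 5, 11])

rho, p = stats.spearmanr(research_groups, startups)
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")
```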

Keywords: entrepreneurship, industrial clusters, optics, photonics, emerging industries, research centers

Procedia PDF Downloads 398
510 Statistical Time-Series and Neural Architecture of Malaria Patients Records in Lagos, Nigeria

Authors: Akinbo Razak Yinka, Adesanya Kehinde Kazeem, Oladokun Oluwagbenga Peter

Abstract:

Time series data are sequences of observations collected over a period of time. Such data can be used to predict health outcomes, such as disease progression, mortality and hospitalization. The statistical approach is based on mathematical models that capture the patterns and trends of the data, such as autocorrelation, seasonality and noise, while neural methods are based on artificial neural networks, computational models that mimic the structure and function of biological neurons. This paper compared parametric and non-parametric time series models of patients treated for malaria in Maternal and Child Health Centres in Lagos State, Nigeria. The forecasting methods considered were linear regression, integrated moving average, ARIMA and SARIMA modeling for the parametric approach, and the Multilayer Perceptron (MLP) and Long Short-Term Memory (LSTM) network for the non-parametric approach. The performance of each method was evaluated using the Mean Absolute Error (MAE), R-squared (R²) and Root Mean Square Error (RMSE) as criteria to determine the accuracy of each model. The study revealed that the best performance in terms of error was found in the MLP, followed by the LSTM and ARIMA models. In addition, the bootstrap aggregating technique was used to make robust forecasts when there are uncertainties in the data.
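
A minimal sketch of such a comparison on a synthetic monthly series (not the Lagos clinic records), fitting an ARIMA model and a lag-based MLP and scoring both with MAE, RMSE and R²:

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.neural_network import MLPRegressor
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)

# Synthetic monthly case counts with trend + seasonality (illustrative only).
t = np.arange(120)
y = 200 + 0.5 * t + 40 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 10, 120)
train, test = y[:108], y[108:]

# Parametric model: ARIMA(1,1,1)
arima = ARIMA(train, order=(1, 1, 1)).fit()
pred_arima = arima.forecast(steps=len(test))

# Non-parametric model: MLP on 12 lagged values
def lagged(series, lags=12):
    X = np.array([series[i - lags:i] for i in range(lags, len(series))])
    return X, series[lags:]

X_train, y_train = lagged(train)
mlp = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000,
                   random_state=0).fit(X_train, y_train)

# Recursive one-step-ahead forecasts over the test horizon
history = list(train[-12:])
pred_mlp = []
for _ in range(len(test)):
    yhat = mlp.predict(np.array(history[-12:]).reshape(1, -1))[0]
    pred_mlp.append(yhat)
    history.append(yhat)

for name, pred in [("ARIMA", pred_arima), ("MLP", np.array(pred_mlp))]:
    print(f"{name}: MAE={mean_absolute_error(test, pred):.1f} "
          f"RMSE={np.sqrt(mean_squared_error(test, pred)):.1f} "
          f"R2={r2_score(test, pred):.2f}")
```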

Keywords: ARIMA, bootstrap aggregation, MLP, LSTM, SARIMA, time-series analysis

Procedia PDF Downloads 65
509 Test Suite Optimization Using an Effective Meta-Heuristic BAT Algorithm

Authors: Anuradha Chug, Sunali Gandhi

Abstract:

Regression testing is a very expensive and time-consuming process carried out to ensure the validity of modified software. Because the resources available in a time-constrained environment are insufficient to re-execute all the test cases, efforts are ongoing to generate test data automatically, without human effort. Many search-based techniques have been proposed to generate efficient, effective and optimized test data, so that the overall cost of software testing can be minimized. The generated test data should be able to uncover all potential lapses that exist in the software or product. Inspired by the natural food-searching behavior of bats, the current study employed a meta-heuristic, search-based bat algorithm to optimize the test data on the basis of certain parameters without compromising their effectiveness. Mathematical functions are also applied that can effectively filter out redundant test data. As many as 50 Java programs were used to check the effectiveness of the proposed test data generation, and it was found that an 86% saving in testing effort can be achieved using the bat algorithm while covering 100% of the software code. The bat algorithm was found to be more efficient in terms of simplicity and flexibility when the results were compared with other nature-inspired algorithms such as the Firefly Algorithm (FA), Hill Climbing (HC) and Ant Colony Optimization (ACO). The output of this study would be useful to testers, as they can achieve 100% path coverage with a minimum number of test cases.
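
The core of the bat algorithm is a frequency-tuned velocity update toward the current best solution, with loudness and pulse-rate parameters steering between global and local search. A minimal sketch in which a placeholder fitness function stands in for the study's coverage-based objective:

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder objective: in the study this would score a candidate test-data
# vector by coverage and redundancy; here we simply minimise the sphere function.
def fitness(x):
    return np.sum(x ** 2)

n_bats, dim, iters = 20, 5, 200
f_min, f_max = 0.0, 2.0          # frequency range
A, r = 0.9, 0.1                  # loudness and pulse emission rate
alpha, gamma = 0.97, 0.1

x = rng.uniform(-5, 5, (n_bats, dim))       # bat positions (candidate solutions)
v = np.zeros((n_bats, dim))                 # velocities
best = x[np.argmin([fitness(b) for b in x])].copy()

for t in range(iters):
    for i in range(n_bats):
        freq = f_min + (f_max - f_min) * rng.random()
        v[i] += (x[i] - best) * freq        # pull toward the global best
        candidate = x[i] + v[i]
        if rng.random() > r:                # local random walk around the best bat
            candidate = best + 0.01 * A * rng.normal(size=dim)
        if fitness(candidate) < fitness(x[i]) and rng.random() < A:
            x[i] = candidate                # accept: grow quieter, pulse more often
            A, r = alpha * A, 0.1 * (1 - np.exp(-gamma * t))
        if fitness(x[i]) < fitness(best):
            best = x[i].copy()

print("best fitness:", fitness(best))
```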

Keywords: regression testing, test case selection, test case prioritization, genetic algorithm, bat algorithm

Procedia PDF Downloads 365
508 Designing Ecologically and Economically Optimal Electric Vehicle Charging Stations

Authors: Y. Ghiassi-Farrokhfal

Abstract:

The number of electric vehicles (EVs) is increasing worldwide. Replacing gas fueled cars with EVs reduces carbon emission. However, the extensive energy consumption of EVs stresses the energy systems, requiring non-green sources of energy (such as gas turbines) to compensate for the new energy demand caused by EVs in the energy systems. To make EVs even a greener solution for the future energy systems, new EV charging stations are equipped with solar PV panels and batteries. This will help serve the energy demand of EVs through the green energy of solar panels. To ensure energy availability, solar panels are combined with batteries. The energy surplus at any point is stored in batteries and is used when there is not enough solar energy to serve the demand. While EV charging stations equipped with solar panels and batteries are green and ecologically optimal, they might not be financially viable solutions, due to battery prices. To make the system viable, we should size the battery economically and operate the system optimally. This is, in general, a challenging problem because of the stochastic nature of the EV arrivals at the charging station, the available solar energy, and the battery operating system. In this work, we provide a mathematical model for this problem and we compute the return on investment (ROI) of such a system, which is designed to be ecologically and financially optimal. We also quantify the minimum required investment in terms of battery and solar panels along with the operating strategy to ensure that a charging station has enough energy to serve its EV demand at any time.
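
A minimal simulation sketch of the sizing trade-off: hourly solar output, stochastic EV demand, and a greedy battery dispatch. All profiles and prices are hypothetical, and the greedy rule is a stand-in for the optimal operating strategy derived in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# One year of hourly data (illustrative only): solar output and EV demand in kWh.
hours = 8760
solar = np.clip(50 * np.sin(np.pi * (np.arange(hours) % 24 - 6) / 12), 0, None)
solar *= rng.uniform(0.5, 1.0, hours)                  # cloud variability
demand = rng.poisson(3, hours) * 10.0                  # stochastic EV arrivals

def simulate(capacity_kwh):
    """Greedy operation: charge the battery with surplus, discharge to cover deficit."""
    soc, unmet = 0.0, 0.0
    for s, d in zip(solar, demand):
        surplus = s - d
        if surplus >= 0:
            soc = min(capacity_kwh, soc + surplus)
        else:
            draw = min(soc, -surplus)
            soc -= draw
            unmet += -surplus - draw
    return unmet

# Hypothetical economics: served energy valued at 0.25 $/kWh.
for cap in [0, 100, 200, 400]:
    unmet = simulate(cap)
    served = demand.sum() - unmet
    print(f"battery {cap:4d} kWh: unmet {unmet:9.0f} kWh, yearly revenue ${0.25*served:,.0f}")
```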

Keywords: solar energy, battery storage, electric vehicle, charging stations

Procedia PDF Downloads 210
507 An Analytical Approach to Assess and Compare the Vulnerability Risk of Operating Systems

Authors: Pubudu K. Hitigala Kaluarachchilage, Champike Attanayake, Sasith Rajasooriya, Chris P. Tsokos

Abstract:

Operating system (OS) security is a key component of computer security. Assessing and improving an OS's strength to resist vulnerabilities and attacks is a mandatory requirement, given the rate at which new vulnerabilities are discovered and attacks occur. The frequency and number of different kinds of vulnerabilities found in an OS can be considered an index of its information security level. In the present study, five widely used OSs, Microsoft Windows (Windows 7, Windows 8 and Windows 10), Apple's Mac and Linux, are assessed for their discovered vulnerabilities and the risk associated with each. Each discovered and reported vulnerability has an exploitability score assigned under the CVSS in the National Vulnerability Database. In this study, the risk from vulnerabilities in each of the five operating systems is compared. The risk indexes used are developed based on a Markov model to evaluate the risk of each vulnerability. The statistical methodology and underlying mathematical approach are described. Initially, parametric procedures were conducted and measured; there were, however, violations of some statistical assumptions, so the need for non-parametric approaches was recognized. A total of 6,838 recorded vulnerabilities were considered in the analysis. According to the risk associated with all the vulnerabilities considered, it was found that there is a statistically significant difference among average risk levels for some operating systems, indicating that, by our method and given its assumptions and limitations, some operating systems have been more risk-vulnerable than others. Relevant test results revealing a statistically significant difference in the risk levels of different OSs are presented.
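
When parametric assumptions fail, a rank-based test such as Kruskal-Wallis can compare risk distributions across OSs. A minimal sketch on hypothetical risk indexes; in the study, the indexes come from a Markov model over CVSS exploitability scores:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical per-vulnerability risk indexes for three OSs (illustrative only).
risk_os = {
    "Windows 10": rng.gamma(2.0, 1.2, 500),
    "Mac":        rng.gamma(2.3, 1.2, 500),
    "Linux":      rng.gamma(1.8, 1.2, 500),
}

# Non-parametric alternative to one-way ANOVA when its assumptions are violated.
h, p = stats.kruskal(*risk_os.values())
print(f"Kruskal-Wallis H = {h:.1f}, p = {p:.2e}")
```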

Keywords: cybersecurity, Markov chain, non-parametric analysis, vulnerability, operating system

Procedia PDF Downloads 176
506 Using Teachers' Perceptions of Science Outreach Activities to Design an 'Optimum' Model of Science Outreach

Authors: Victoria Brennan, Andrea Mallaburn, Linda Seton

Abstract:

Science outreach programmes connect school pupils with external agencies to provide activities and experiences that enhance their exposure to science. It can be argued that these programmes not only aim to support teachers with curriculum engagement and promote scientific literacy but also provide pivotal opportunities to spark scientific interest in students; a further objective is to increase awareness of career opportunities in the field. Although outreach work is often described as a fun and satisfying venture, many researchers caution against assuming that these processes succeed in increasing post-16 engagement in science. When researching the impact of outreach programmes, it is often student feedback on the activities, or enrolment numbers in particular post-16 science courses, that is generated and analysed. Although this is informative, the longevity of a programme's impact could be better gauged from teachers' perceptions, on which the evidence in the literature is far more limited. In addition, there are strong suggestions that teachers can have an indirect impact on a student's own self-concept. These themes shape the focus and importance of this ongoing research project, whose rationale is that teachers are an under-used resource in the design of science outreach programmes. The end result of the research will therefore be the presentation of an 'optimum' model of outreach, which should interest wider stakeholders such as universities and private or government organisations that design science outreach programmes in the hope of recruiting future scientists. During phase one, questionnaires (n=52) and interviews (n=8) generated both quantitative and qualitative data. These were analysed using the Wilcoxon non-parametric test to compare teachers' perceptions of science outreach interventions, and thematic analysis for the open-ended questions. Both research activities provide an opportunity to obtain a cross-section of teacher opinions of science outreach across all educational levels. An early draft of the 'optimum' model of science outreach delivery was then generated using both the wealth of literature and the primary data. The final (ongoing) phase aims to refine this model using teacher focus groups to provide constructive feedback on the proposed model. The analysis uses principles of modified Grounded Theory to ensure that focus group data further strengthen the model. The research therefore takes a pragmatist approach, drawing on the strengths of the different paradigms encountered so that the data collected provide the most suitable information for creating an improved model of sustainable outreach. The results discussed will focus on this 'optimum' model and on teachers' perceptions of the benefits and drawbacks of engaging with science outreach work. Although the model is still a work in progress, it provides insight both into how teachers feel outreach delivery can be a sustainable intervention tool within the classroom and into what providers of such programmes should consider when designing science outreach activities.
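
A minimal sketch of the Wilcoxon comparison step on hypothetical paired Likert ratings (not the questionnaire data), e.g., the same teachers rating two outreach formats:

```python
from scipy import stats

# Hypothetical paired Likert ratings (1-5) from twelve teachers for two
# outreach formats (illustrative only).
in_school  = [4, 3, 5, 4, 2, 4, 3, 5, 4, 3, 4, 2]
university = [3, 3, 4, 3, 2, 3, 2, 4, 3, 3, 3, 2]

stat, p = stats.wilcoxon(in_school, university)
print(f"Wilcoxon W = {stat}, p = {p:.3f}")
```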

Keywords: educational partnerships, science education, science outreach, teachers

Procedia PDF Downloads 114
505 In vivo Estimation of Mutation Rate of the Aleutian Mink Disease Virus

Authors: P.P. Rupasinghe, A.H. Farid

Abstract:

The Aleutian mink disease virus (AMDV, Carnivore amdoparvovirus 1) causes persistent infection, plasmacytosis, and the formation and deposition of immune complexes in various organs in adult mink, leading to glomerulonephritis, arteritis and sometimes death. The disease has no cure and no effective vaccine, and the identification and culling of mink positive for anti-AMDV antibodies have not succeeded in controlling the infection in many countries. The failure to eradicate the virus from infected farms may be caused by keeping false-negative individuals on the farm, or by virus transmission from wild animals or neighboring farms. The identification of sources of infection, which can be performed by comparing viral sequences, is important to the success of viral eradication programs. High mutation rates could cause inaccuracies when viral sequences are used to trace an infection back to its origin. There is no published information on the mutation rate of AMDV, either in vivo or in vitro. In vivo estimation is the most accurate method, but it is difficult to perform because of inherent technical complexities, namely infecting live animals, the unknown number of viral generations (i.e., infection cycles), the removal of deleterious mutations over time, and genetic drift. The objective of this study was to determine the mutation rate of AMDV, on which no information was available. A homogenate was prepared from the spleen of one naturally infected American mink (Neovison vison) from Nova Scotia, Canada (parental template). The near full-length genome of this isolate (91.6%, 4,143 bp) was bidirectionally sequenced. A group of black mink was inoculated with this homogenate (descendant mink). Spleen samples were collected from 10 descendant mink at 16 weeks post-inoculation (wpi) and from another 10 mink at 176 wpi, and their near full-length genomes were bidirectionally sequenced. Sequences of these mink were compared with each other and with the sequence of the parental template. The number of nucleotide substitutions at 176 wpi was 3.1 times greater than at 16 wpi (113 vs. 36), whereas the estimated mutation rate at 176 wpi was 3.1 times lower than at 16 wpi (9.13×10⁻⁴ vs. 2.85×10⁻³ substitutions/site/year), showing a decreasing trend in the mutation rate per unit of time. Although there is no reported in vivo estimate of the mutation rate of animal DNA viruses obtained by the same method used in the current study, these estimates are at the higher range of values reported for DNA viruses determined by various techniques. Such high estimates are logical given the wide range of diversity and pathogenicity of AMDV isolates. The results suggest that increases in the number of nucleotide substitutions over time, and the subsequent divergence, make it difficult to accurately trace AMDV isolates back to their origin when several years elapse between the two samplings.
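
A back-of-envelope check of these rates, assuming the substitution counts are totals across the 10 sequenced mink and the full 4,143 bp alignment; per-animal site coverage in the study may differ slightly, which would account for the small gap from the published 176 wpi value:

```python
# rate = substitutions per animal / sites / elapsed years
sites = 4143
mink = 10

for label, subs, weeks in [("16 wpi", 36, 16), ("176 wpi", 113, 176)]:
    years = weeks / 52
    rate = (subs / mink) / sites / years   # substitutions / site / year
    print(f"{label}: {rate:.2e} substitutions/site/year")
# -> roughly 2.8e-03 and 8.1e-04, in line with the reported
#    2.85e-03 and 9.13e-04.
```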

Keywords: Aleutian mink disease virus, American mink, mutation rate, nucleotide substitution

Procedia PDF Downloads 118
504 Estimating Estimators: An Empirical Comparison of Non-Invasive Analysis Methods

Authors: Yan Torres, Fernanda Simoes, Francisco Petrucci-Fonseca, Freddie-Jeanne Richard

Abstract:

Non-invasive sampling is an alternative to collecting genetic samples directly: non-invasive samples (e.g., scats, feathers and hairs) are collected without handling the animal. Nevertheless, the use of non-invasive samples has some limitations, the main issue being degraded DNA, which leads to poorer extraction efficiency and genotyping errors. Those errors delayed the widespread use of non-invasive genetic information for some years. Genotyping errors can be limited by using analysis methods that accommodate the errors and singularities of non-invasive samples. Genotype matching and population estimation algorithms stand out as important analysis tools that have been adapted to deal with those errors. Despite this recent development of analysis methods, empirical comparisons of their performance are still lacking. A comparison of methods on datasets differing in size and structure can be useful for future studies, since non-invasive samples are a powerful tool for obtaining information, especially on endangered and rare populations. To compare the analysis methods, four different datasets obtained from the Dryad digital repository were used. Three matching algorithms (Cervus, Colony and Error Tolerant Likelihood Matching - ETLM) were used for matching genotypes, and two algorithms (Capwire and BayesN) for population estimation. The three matching algorithms showed different patterns of results. ETLM produced fewer unique individuals and recaptures. A similarity between the genotypes matched by Colony and Cervus was observed, which is not surprising given the similarity of their pairwise likelihood and clustering algorithms. The matches produced by ETLM showed almost no similarity with the genotypes matched by the other methods. The different clustering system and error model of ETLM seem to lead to a more stringent selection, although ETLM had the longest processing time and the least friendly interface of the compared methods. The population estimators performed differently across the datasets, with a consensus between the different estimators for only one dataset. BayesN produced both higher and lower estimates than Capwire. Unlike Capwire, BayesN does not consider the total number of recaptures, only the recapture events, which makes the estimator sensitive to data heterogeneity (heterogeneity here meaning different capture rates between individuals). In these examples, homogeneity of capture seems to be crucial for BayesN to work properly. Both methods are user-friendly and have reasonable processing times. An extended analysis with simulated genotype data could clarify the sensitivity of the algorithms. The present comparison of the matching methods indicates that Colony seems to be the most appropriate for general use, considering the balance of time, interface and robustness. The heterogeneity of the recaptures strongly affected the BayesN estimates, leading to over- and underestimation of population numbers. Capwire is therefore advisable for general use, since it performs better over a wide range of situations.

Keywords: algorithms, genetics, matching, population

Procedia PDF Downloads 133
503 Hydrogen-Fueled Micro-Thermophotovoltaic Power Generator: Flame Regimes and Flame Stability

Authors: Hosein Faramarzpour

Abstract:

This work presents the optimum operational conditions for a hydrogen-based micro-scale power source, using a verified mathematical model that includes fluid dynamics and reaction kinetics. The stable operational flame regime is then pursued as a key factor in optimizing the design of micro-combustors. The results show that with increasing velocity, four H₂ flame regimes develop in the micro-combustor: 1) a periodic ignition-extinction regime, 2) a steady symmetric regime, 3) a pulsating asymmetric regime, and 4) a steady asymmetric regime. The first regime, which appears at an inlet velocity of 0.8 m/s, is a periodic ignition-extinction regime characterized by counter flows and tulip-shaped flames. For flow velocities above 0.2 m/s, the flame shifts downstream and the combustion regime switches to a steady symmetric flame, in which the temperature increases considerably due to the increased rate of incoming energy. Further elevation of the flow velocity up to 1 m/s leads to the formation of a pulsating asymmetric flame, associated with pulses in various flame properties such as temperature and species concentration. Ultimately, when the inlet velocity reaches 1.2 m/s, the last regime, a steady asymmetric flame, is observed.

Keywords: thermophotovoltaic generator, micro combustor, micro power generator, combustion regimes, flame dynamic

Procedia PDF Downloads 89
502 Design and Development of Optical Sensor Based Ground Reaction Force Measurement Platform for GAIT and Geriatric Studies

Authors: K. Chethana, A. S. Guru Prasad, S. N. Omkar, B. Vadiraj, S. Asokan

Abstract:

This paper describes the ab-initio design, development and calibration results of an Optical Sensor Ground Reaction Force Measurement Platform (OSGRFP) for gait and geriatric studies. The developed system employs an array of FBG sensors to measure the ground reaction forces along all three mutually perpendicular axes (X, Y and Z). The novelty of this work is twofold: first, it resolves the triaxial resultant forces during stance into the respective pure-axis loads; second, it applies inherently advantageous FBG sensors, which are well suited to biomechanical instrumentation. To validate the response of the FBG sensors installed in the OSGRFP and to measure the cross-sensitivity to forces applied in other directions, load sensors with indicators are used. Further, relevant mathematical formulations are presented for extracting the respective ground reaction forces from the wavelength shifts/strains of the FBG sensors on the OSGRFP. The device has implications for understanding foot function, identifying issues in the gait cycle and measuring discrepancies between the left and right foot. It also provides a method to quantify and compare the relative postural stability of different subjects under test, with implications for post-surgical rehabilitation, geriatrics and the optimization of training protocols for sports personnel.
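
The wavelength-to-strain step rests on the standard FBG relation Δλ/λ₀ = (1 − pₑ)ε, with pₑ the effective photo-elastic coefficient of silica (≈0.22); temperature compensation is omitted here. A minimal sketch with a hypothetical force calibration constant, not the paper's own formulations:

```python
# Standard FBG strain relation: d_lambda / lambda0 = (1 - p_e) * strain.
P_E = 0.22            # photo-elastic coefficient of silica fibre
LAMBDA0 = 1550.0e-9   # unstrained Bragg wavelength, m

def strain_from_shift(d_lambda):
    return d_lambda / (LAMBDA0 * (1 - P_E))

# Hypothetical calibration constant: force per unit strain, obtained in
# practice by loading the platform with known weights.
K_CAL = 2.0e6         # N per unit strain

d_lambda = 0.12e-9    # measured wavelength shift, m (0.12 nm)
eps = strain_from_shift(d_lambda)
print(f"strain = {eps*1e6:.0f} microstrain, force = {K_CAL*eps:.0f} N")
```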

Keywords: balance and stability, gait analysis, FBG applications, optical sensor ground reaction force platform

Procedia PDF Downloads 396
501 Interdisciplinarity as a Regular Pedagogical Practice in the Classrooms

Authors: Catarina Maria Neto Da Cruz, Ana Maria Reis D’Azevedo Breda

Abstract:

The world is changing and, consequently, young people need more sophisticated tools and skills to deal with the world's complexity. The Organisation for Economic Co-operation and Development (OECD) Learning Framework 2030 suggests interdisciplinary knowledge as a principle for the future of education systems. In the Portuguese curricular document on the profile of students leaving compulsory education, critical thinking and creative thinking are pointed out as skills to be developed, which imply the interconnection of different knowledge, applying it in different contexts and learning areas. Unlike primary school teachers, teachers specialized in a specific area have more difficulty implementing interdisciplinary approaches in the classroom and, despite the effort, interdisciplinarity is not a common practice in schools. Statements like "Mathematics is everywhere" are unquestionable; however, many mathematics teachers find it difficult to present such evidence in their classes. Mathematical modelling and problems in real contexts are promising avenues for developing interdisciplinary pedagogical practices, and in Portugal there is a continuous offer of training to support teachers in developing their pedagogical approaches. But when teachers find themselves in the classroom without support, do they feel able to implement interdisciplinary practices? In this communication, we approach this issue through a case study involving a group of mathematics teachers who attended training aimed at stimulating interdisciplinary practices in real contexts, namely contexts related to the COVID-19 pandemic.

Keywords: education, mathematics, teacher training, interdisciplinarity

Procedia PDF Downloads 79
500 Accidental U.S. Taxpayers Residing Abroad: Choosing between U.S. Citizenship or Keeping Their Local Investment Accounts

Authors: Marco Sewald

Abstract:

Due to the current enforcement of extraterritorial U.S. legislation, up to 9 million U.S. (dual) citizens residing abroad are subject to U.S. double and surcharge taxation and are at risk of losing access to otherwise basic financial services and investment opportunities abroad. The United States is the only OECD country that taxes non-resident citizens, lawful permanent residents and other non-resident aliens on their worldwide income, based on local U.S. tax laws. To enforce these policies, the U.S. has implemented 'saving clauses' in all tax treaties and several compliance provisions, including the Foreign Account Tax Compliance Act (FATCA), Qualified Intermediaries Agreements (QI) and Intergovernmental Agreements (IGA), which require Foreign Financial Institutions (FFIs) to implement these provisions in foreign jurisdictions. This policy creates systematic cases of double and surcharge taxation. The increased enforcement of compliance rules creates additional reporting burdens for U.S. persons abroad and for FFIs accepting such U.S. persons as customers, and FFIs in Europe are reacting with a growing denial of specific financial services to this population. The number of U.S. citizens renouncing citizenship has increased dramatically in recent years. A case study was chosen as the methodology and research method, being an empirical inquiry that investigates a contemporary phenomenon within its real-life context, where the boundaries between phenomenon and context are not clearly evident, and in which multiple sources of evidence are used. This evaluative approach tests whether the combination of policies works in practice, whether it is in accordance with desirable moral, political and economic aims, or whether it may serve other causes. The research critically evaluates the financial and non-financial consequences and develops strategies to avoid the undesired consequences of extraterritorial U.S. legislation. Three possible strategies result from the use cases: (1) duck and cover; (2) pay U.S. double/surcharge taxes and tax preparation fees and accept the imposed product limitations; and (3) renounce U.S. citizenship and pay any exit taxes, tax preparation fees and the requested $2,350 fee to renounce. While the first strategy is unlawful and therefore unsuitable, the second strategy is only suitable if the U.S. citizen residing abroad is planning to move to the U.S. in the future. The last strategy is the only reasonable and lawful way provided by the U.S. to limit exposure to U.S. double and surcharge taxation and to the limitations on financial products. The results are believed to add a perspective to the current academic discourse on U.S. citizenship-based taxation, currently dominated by U.S. scholars, while providing workable strategies for the affected population at the same time.

Keywords: citizenship based taxation, FATCA, FBAR, qualified intermediaries agreements, renounce U.S. citizenship

Procedia PDF Downloads 194
499 Use of PACER Application as Physical Activity Assessment Tool: Results of a Reliability and Validity Study

Authors: Carine Platat, Fatima Qshadi, Ghofran Kayed, Nour Hussein, Amjad Jarrar, Habiba Ali

Abstract:

Nowadays, smartphones are very popular, offering a variety of easy-to-use and free applications, among them step counters and fitness tests. The number of users is huge, making such applications a potentially efficient new strategy to encourage people to become more active. Nonetheless, data on their reliability and validity are very scarce and, when available, often negative and contradictory. Besides, weight status, which is likely to introduce a bias into physical activity assessment, has rarely been considered. Hence, the use of these applications as motivational tools, as assessment tools and in research is questionable. PACER is one such free step-counter application. Even though it is one of the best-rated free applications among users, it has never been tested for reliability and validity, and this should be investigated prior to any use of PACER. The objective of this work is to investigate the reliability and validity of the smartphone application PACER in measuring the number of steps and in assessing cardiorespiratory fitness via the six-minute walking test. Twenty overweight or obese students (10 male and 10 female), aged between 18 and 25 years, were recruited at the United Arab Emirates University. Reliability and validity were tested in real-life conditions and in controlled conditions using a treadmill. Test-retest experiments were done with PACER on two days separated by a week, in real-life conditions (24 hours each time) and in controlled conditions (30 minutes on a treadmill at 3 km/h). Validity was tested against the OMRON pedometer in the same conditions. During the treadmill test, video was recorded and step counts were compared between PACER, the pedometer and the video. The validity of PACER in estimating cardiorespiratory fitness (VO2max) as part of the six-minute walking test (6MWT) was studied against the 20 m shuttle run test. Reliability was studied by calculating intraclass correlation coefficients (ICC) with 95% confidence intervals (95%CI) and by Bland-Altman plots. Validity was studied by calculating Spearman correlation coefficients (rho) and Bland-Altman plots. PACER reliability was good in both males and females in real-life conditions (p ≤ 10⁻³) but only in females in controlled conditions (p = 0.01). PACER was valid against the OMRON pedometer in males and females in real-life conditions (rho = 0.94, p ≤ 10⁻³; rho = 0.64, p = 0.01, respectively). In controlled conditions, PACER was not valid against the pedometer, but it was valid against video in females (rho = 0.72, p ≤ 10⁻³). PACER was valid against the shuttle run test for estimating VO2max in males and females (rho = 0.66, p = 0.01; rho = 0.51, p = 0.04). This study provides data on the reliability and validity of PACER in overweight or obese male and female young adults. Overall, PACER was shown to be reliable and valid in real-life conditions for counting steps and assessing fitness in overweight or obese males and females. This supports the use of PACER to assess and promote physical activity in clinical follow-up and community interventions.
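
Agreement between the app and the reference pedometer can be summarized with a Bland-Altman computation: the mean bias and the 95% limits of agreement. A minimal sketch on hypothetical daily step counts, not the study's recordings:

```python
import numpy as np

# Hypothetical daily step counts from PACER and the OMRON pedometer.
pacer = np.array([8120, 10450, 6300, 9980, 7720, 11200, 9050, 8600])
omron = np.array([8000, 10210, 6550, 9700, 7900, 11050, 8880, 8840])

diff = pacer - omron
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)   # half-width of the 95% limits of agreement

print(f"bias = {bias:.0f} steps, "
      f"limits of agreement = [{bias-loa:.0f}, {bias+loa:.0f}]")
```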

Keywords: smartphone application, pacer, reliability, validity, steps, fitness, physical activity

Procedia PDF Downloads 440
498 Provide Adequate Protection to Avoid Secondary Victimization: Ensuring the Rights of the Child Victims in the Criminal Justice System

Authors: Muthukuda Arachchige Dona Shiroma Jeeva Shirajanie Niriella

Abstract:

The necessity of protecting the rights of victims of crime is a matter of concern today. In the criminal justice system, child victims who are subjected to sexual abuse or violence are more vulnerable than other crime victims. From the moment they go to the police to lodge a complaint until the end of the court proceedings, these victims are re-victimized in the criminal justice system. The rights of suspects, accused and convicts are recognized and guaranteed by the constitution under the fair trial norm, by contemporary penal laws in which crime is viewed as an offence against the State, and by the existing criminal justice systems of many jurisdictions, including Sri Lanka. Against this backdrop, a reasonable question arises as to whether the existing criminal justice system, especially one that follows the adversarial mode of judicial trial, protects the fair trial norm in the criminal justice process. This paper therefore discusses the rights of sexually abused child victims in the criminal justice system, in order to restore the imbalance between the rights of the wrongdoer and the victim, and suggests legal reforms to strengthen those rights, which is essential to end secondary victimization. The paper takes Sri Lanka as a sample to discuss this issue. It looks at how child victims are marginalized in the traditional adversarial model of the justice process, whether contemporary penal laws adequately protect the rights of these victims, and whether the current laws set out provisions to provide them sufficient assistance and protection. The study further deals with the important principles adopted in international human rights law relating to the protection of the rights of child victims in sexual offence cases. The rights of child victims at the investigation, trial and post-trial stages of the criminal justice process are assessed. This research involves extensive scrutiny of relevant international standards and local statutory provisions. Case law, books, journal articles and government publications, such as commission reports on this topic, are rigorously reviewed as secondary resources. Further, 25 randomly selected child victims of sexual offences from cases decided in the last two years, police officers from the 5 police divisions where the highest numbers of sexual offences were reported in the last two years, and judicial officers, both Magistrates and High Court Judges, from the same judicial zones were interviewed. These data are analyzed to find out the reasons for this specific sexual victimization, the needs of these victims at the various stages of the criminal justice system, the relationship between victimization and offending, and the difficulties and problems that these victims encounter in the criminal justice system. The author argues that child victims are considerably neglected and their rights are not adequately protected in the adversarial model of the criminal justice process.

Keywords: child victims of sexual violence, criminal justice system, international standards, rights of child victims, Sri Lanka

Procedia PDF Downloads 360
497 A Feasibility and Implementation Model of Small-Scale Hydropower Development for Rural Electrification in South Africa: Design Chart Development

Authors: Gideon J. Bonthuys, Marco van Dijk, Jay N. Bhagwan

Abstract:

Small-scale hydropower used to play a very important role in the provision of energy to urban and rural areas of South Africa. The national electricity grid, however, expanded and offered cheap, coal-generated electricity, and a large number of hydropower systems were decommissioned. Unfortunately, large numbers of households and communities will not be connected to the national electricity grid for the foreseeable future, due to the high cost of transmission and distribution systems to remote communities, the relatively low electricity demand within rural communities, and the allocation of current expenditure to upgrading and constructing new coal-fired power stations. This necessitates the development of feasible alternative power generation technologies. A feasibility and implementation model was developed to assist in designing and financially evaluating small-scale hydropower (SSHP) plants. Several sites were identified using the model. SSHP plants were designed for the selected sites, and the designs were priced using pricing models covering the civil, mechanical and electrical aspects. Following feasibility studies of the designed and priced SSHP plants, a feasibility analysis was done and a design chart developed for future similar potential SSHP plant projects. The methodology followed in the feasibility analysis for other potential sites consisted of developing cost and income/saving formulae, net present value (NPV) formulae, Capital Cost Comparison Ratio (CCCR) formulae and levelised cost formulae for SSHP projects for the different types of plant installation. It included setting up a model for the development of a design chart for an SSHP; calculating the NPV, CCCR and levelised cost for the different scenarios within the model by varying the parameters in the developed formulae; setting up the design chart for the different scenarios; and analyzing and interpreting the results. From the interpretation of the developed design charts for feasible SSHP, it can be seen that turbine and distribution line costs are the major influences on the cost and feasibility of SSHP, that high-head, short-transmission-line and islanded mini-grid SSHP installations are the most feasible, and that the levelised cost of SSHP is high for low-power-generation sites. The main conclusion of the study is that the levelised cost of SSHP for low energy generation is high compared to the levelised cost of grid-connected electricity supply; however, the remoteness of SSHP sites for rural electrification and the cost of infrastructure to connect remote rural communities to the local or national electricity grid provide a low CCCR and render SSHP for rural electrification feasible on this basis.
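
A minimal sketch of the levelised-cost and NPV arithmetic behind such a design chart, with all site figures hypothetical; the study's formulae additionally include the CCCR against grid extension:

```python
# Hypothetical SSHP site (illustrative only).
capex = 1_200_000.0          # turbine, civil and line costs, $
opex = 25_000.0              # yearly operation and maintenance, $
energy = 450_000.0           # yearly generation, kWh
tariff = 0.12                # value of delivered energy, $/kWh
rate, years = 0.08, 25       # discount rate and project life

disc = [(1 + rate) ** -t for t in range(1, years + 1)]

# Levelised cost: discounted lifetime costs over discounted lifetime energy.
lcoe = (capex + opex * sum(disc)) / (energy * sum(disc))

# NPV of the net yearly savings against the upfront investment.
npv = -capex + sum((energy * tariff - opex) * d for d in disc)

print(f"LCOE = {lcoe:.3f} $/kWh, NPV = ${npv:,.0f}")
```

With these made-up numbers the NPV is negative, illustrating the paper's point that the levelised cost of SSHP is high for low-generation sites unless the avoided grid-extension cost is counted.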

Keywords: cost, feasibility, rural electrification, small-scale hydropower

Procedia PDF Downloads 216
496 Project HDMI: A Hybrid-Differentiated Mathematics Instruction for Grade 11 Senior High School Students at Las Piñas City Technical Vocational High School

Authors: Mary Ann Cristine R. Olgado

Abstract:

Diversity in the classroom might make it difficult to promote individualized learning, but differentiated instruction that caters to students' various learning preferences may prove beneficial. Hence, this study examined the effectiveness of Hybrid-Differentiated Mathematics Instruction (HDMI) in improving students' academic performance in Mathematics. It employed a quasi-experimental research design with a comparative analysis of two groups: an experimental group and a control group. The learning styles of the students were identified using the Grasha-Riechmann Student Learning Style Scale (GRSLSS), which served as the basis for designing differentiated action plans in Mathematics. In addition, adapted survey questionnaires, pre-tests and post-tests were used to gather information, which was analyzed using descriptive and correlational statistics to find the relationships between variables. The experimental group received differentiated instruction for a month, while the control group received traditional teaching instruction. The study found that Hybrid-Differentiated Mathematics Instruction (HDMI) improved the academic performance of Grade 11-TVL students, with the experimental group performing better than the control group. The program effectively tailored the teaching methods to meet the diverse learning needs of the students, fostering a deeper understanding of mathematical concepts in Statistics & Probability, both within and beyond the classroom.

Keywords: differentiated instruction, hybrid, learning styles, academic performance

Procedia PDF Downloads 52
495 A Comparative Soft Computing Approach to Supplier Performance Prediction Using GEP and ANN Models: An Automotive Case Study

Authors: Seyed Esmail Seyedi Bariran, Khairul Salleh Mohamed Sahari

Abstract:

In multi-echelon supply chain networks, optimal supplier selection depends significantly on the accuracy of supplier performance prediction. Different multi-criteria decision-making methods, such as ANN, GA, fuzzy logic and AHP, have previously been used to predict supplier performance, but the 'black-box' characteristic of these methods remains a major concern. Therefore, the primary objective of this paper is to implement an artificial intelligence-based gene expression programming (GEP) model and to compare its prediction accuracy with that of an ANN. A full factorial design with a 95% confidence interval is initially applied to determine the appropriate set of criteria for supplier performance evaluation. A train-test approach is then utilized for the ANN and GEP models separately. The training results are used to find the optimal network architecture, and the testing data determine the prediction accuracy of each method based on the root mean square error (RMSE) and the coefficient of determination (R²). The results of a case study conducted at Supplying Automotive Parts Co. (SAPCO), with more than 100 local and foreign supply chain members, revealed that, in comparison with the ANN, gene expression programming shows a significant advantage in predicting supplier performance, as indicated by the respective RMSE and R² values. Moreover, using GEP, a mathematical function was derived, resolving the black-box limitation of the ANN in modeling the performance prediction.
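
The practical difference is that a GEP run ends with an explicit formula that can be inspected and evaluated directly. A minimal sketch with a made-up stand-in expression and hypothetical test data, scored with the same RMSE and R² measures:

```python
import numpy as np

rng = np.random.default_rng(4)

# Made-up stand-in for an evolved supplier-performance formula over two
# evaluation criteria x1 (e.g., quality) and x2 (e.g., delivery); a real GEP
# run would evolve this expression from training data.
def gep_model(x1, x2):
    return 0.6 * x1 + 0.3 * np.sqrt(x1 * x2) + 0.1 * x2

# Hypothetical held-out test data.
x1, x2 = rng.uniform(0, 10, 30), rng.uniform(0, 10, 30)
y_true = 0.6 * x1 + 0.3 * np.sqrt(x1 * x2) + 0.1 * x2 + rng.normal(0, 0.3, 30)

y_pred = gep_model(x1, x2)
rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
r2 = 1 - np.sum((y_true - y_pred) ** 2) / np.sum((y_true - y_true.mean()) ** 2)
print(f"RMSE = {rmse:.3f}, R^2 = {r2:.3f}")
```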

Keywords: supplier performance prediction, ANN, GEP, automotive, SAPCO

Procedia PDF Downloads 411
494 Pyramid of Deradicalization: Causes and Possible Solutions

Authors: Ashir Ahmed

Abstract:

Generally, radicalization happens when a person's thinking and behaviour become significantly different from how most members of their society and community view social issues and participate politically. Radicalization often leads to violent extremism, which refers to the beliefs and actions of people who support or use violence to achieve ideological, religious or political goals. Studies of radicalization negate the common myths that someone must be in a group to be radicalised or that anyone who experiences radical thoughts is a violent extremist. Moreover, it is erroneous to suggest that radicalisation is always linked to religion. The common motives for radicalization include ideological, issue-based, ethno-nationalist or separatist underpinnings, and a number of factors further increase the chances that someone becomes radicalised and chooses the path of violent extremism and possibly terrorism. Since numerous, and sometimes quite different, factors contribute to radicalization and violent extremism, it is highly unlikely that a single solution could deal effectively with radicalization, violent extremism and terrorism. The pathway to deradicalization, like the pathway to radicalisation, is different for everyone. Considering the need for customized deradicalization, this study proposes a multi-tier framework, called the 'pyramid of deradicalization', that first helps identify the stage at which an individual may be on the radicalization pathway and then proposes a customized strategy for that stage. The first tier addresses the broader community and proposes a 'universal approach', aiming to offer community-based design and delivery of educational programs to raise awareness and provide general information on the factors that can lead to radicalization and their remedies. The second tier focuses on the members of the community who are more vulnerable and are disengaged from the rest of the community; it proposes a 'targeted approach' that reaches vulnerable members through early intervention, such as anonymous help lines where people feel confident and comfortable seeking help without fearing the disclosure of their identity. The third tier focuses on people showing clear evidence of moving toward extremism or becoming radicalized. People falling within this tier are supported through an 'interventionist approach', which advocates community engagement and community policing, introduces deradicalization programmes to the targeted individuals, and looks after their physical and mental health issues. The fourth and last tier suggests strategies to deal with people who are actively breaking the law. The 'enforcement approach' includes strong law enforcement, fairness and accuracy in reporting radicalization events, unbiased treatment under the law regardless of gender, race, nationality or religion, and the strengthening of family connections. It is anticipated that the operationalization of the proposed 'pyramid of deradicalization' would help in categorising people according to their tendency to become radicalized and then offer an appropriate strategy to make them valuable and peaceful members of the community.

Keywords: deradicalization, framework, terrorism, violent extremism

Procedia PDF Downloads 256
493 Gas Network Noncooperative Game

Authors: Teresa Azevedo Perdicoúlis, Paulo Lopes dos Santos

Abstract:

The conceptualisation of the network optimisation problem as a noncooperative game sets up a holistic, interactive approach that brings together different network features (e.g., compressor stations, sources and pipelines, in the gas context) whose optimisation objectives differ, so that a single optimisation procedure becomes possible without having to feed results from diverse software packages into each other. A mathematical model of this type, in which independent entities take action, offers the ideal modularity and subsequent problem decomposition for designing a decentralised algorithm to optimise the operation and management of the network. In a game framework, compressor stations and sources are understood as players that communicate through the network connectivity constraints, i.e., the pipeline model. That is, in a scheme similar to tâtonnement, the players choose their best settings and then interact to check for network feasibility. The resulting degree of network infeasibility informs the players about the 'quality' of their settings, and this two-phase iterative scheme is repeated until a global optimum is obtained. Due to network transients, the optimisation needs to be assessed at different points of the control interval. For this reason, the proposed approach has two stages: (i) the first stage computes along the period of optimisation in order to fulfil the requirement just mentioned; (ii) the second stage is initialised with the solution found at the first stage and computes at the end of the period of optimisation to rectify that solution. The viability of the proposed scheme is demonstrated on an abstract prototype and three example networks.
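
A toy best-response iteration in the spirit of this scheme, with all numbers hypothetical: two "players" (e.g., compressor stations) each minimise their own quadratic cost plus a penalty on a shared connectivity constraint, and the residual infeasibility is the signal that tells them about the quality of their settings:

```python
# Shared constraint: x1 + x2 should meet the demand; the penalty term couples
# the players, as the pipeline model couples stations in the real problem.
demand, penalty = 10.0, 5.0
c1, c2 = 1.0, 2.0          # players' own cost coefficients (hypothetical)

x1, x2 = 0.0, 0.0
for it in range(50):
    # Player 1 minimises c1*x1^2 + penalty*(x1 + x2 - demand)^2 given x2:
    x1 = penalty * (demand - x2) / (c1 + penalty)
    # Player 2 responds likewise, given the new x1:
    x2 = penalty * (demand - x1) / (c2 + penalty)

infeasibility = abs(x1 + x2 - demand)
print(f"x1 = {x1:.3f}, x2 = {x2:.3f}, residual infeasibility = {infeasibility:.3f}")
```

The iteration converges to a fixed point; with a finite penalty some infeasibility remains, which in the two-phase scheme would be fed back to the players for the next round.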

Keywords: connectivity matrix, gas network optimisation, large-scale, noncooperative game, system decomposition

Procedia PDF Downloads 144
492 Improving the Biomechanical Resistance of a Treated Tooth via Composite Restorations Using Optimised Cavity Geometries

Authors: Behzad Babaei, B. Gangadhara Prusty

Abstract:

The objective of this study is to assess the hypotheses that a restored tooth with a class II occlusal-distal (OD) cavity can be strengthened by designing an optimized cavity geometry and by selecting a composite restoration with optimized elastic moduli when there is a sharp de-bonded edge at the interface of the tooth and restoration. Methods: A scanned human maxillary molar tooth was segmented into dentine and enamel parts. The dentine and enamel profiles were extracted and imported into finite element (FE) software. The enamel rod orientations were estimated virtually. Fifteen models of the restored tooth with different occlusal cavity depths (1.5, 2 and 2.5 mm) and internal cavity angles were generated. Using a semi-circular stone part, a 400 N load was applied to two contact points of the restored tooth model. The junctions between the enamel, dentine and restoration were considered perfectly bonded, and all parts of the model were considered homogeneous, isotropic and elastic. Quadrilateral and triangular elements were employed in the models. A mesh convergence analysis was conducted to verify that the element count did not influence the simulation results; based on a criterion of 5% error in stress, a total of over 14,000 elements resulted in stress convergence. A Python script was employed to automatically assign moduli of 2-22 GPa (in increments of 4 GPa) to the composite restorations, 18.6 GPa to the dentine, and two different elastic moduli to the enamel (72 GPa in the direction of the enamel rods and 63 GPa perpendicular to it). Linear, homogeneous, elastic material models were used for the dentine, enamel and composite restorations, and 108 FEA simulations were conducted in succession. Results: The internal cavity angle (α) significantly altered the peak maximum principal stress at the interface of the enamel and restoration. The strongest structures against the contact loads were observed in the models with α = 100° and 105°. Interestingly, even when the directional mechanical properties of the enamel rods were disregarded, the models with α = 100° and 105° exhibited the highest resistance to the mechanical loads. Regarding the effect of occlusal cavity depth, the models with 1.5 mm depth showed higher resistance to contact loads than the models with thicker cavities (2.0 and 2.5 mm). Moreover, composite moduli in the range of 10-18 GPa alleviated the stress levels in the enamel. Significance: For the class II OD cavity models in this study, the optimal geometries, composite properties and occlusal cavity depths were determined. Designing cavities with α ≥ 100° was significantly effective in minimizing peak stress levels. A composite restoration with optimized properties reduced the stress concentrations at critical points of the models. Additionally, when more enamel was preserved, a sturdier enamel-restoration interface against the mechanical loads was observed.
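
A minimal sketch of the kind of scripted parameter sweep the abstract describes; run_fea() is a placeholder for the actual FE solver call, and the angle list is an illustrative subset rather than the angles studied:

```python
from itertools import product

# Run matrix: composite moduli (2-22 GPa in 4 GPa steps) crossed with
# cavity depths and internal cavity angles.
moduli_gpa = range(2, 23, 4)          # 2, 6, 10, 14, 18, 22
depths_mm = [1.5, 2.0, 2.5]
angles_deg = [95, 100, 105]           # illustrative subset

def run_fea(e_composite, depth, angle):
    # Placeholder: in the study this would launch an FE simulation and
    # return the peak maximum principal stress at the interface.
    return None

runs = list(product(moduli_gpa, depths_mm, angles_deg))
print(f"{len(runs)} simulations queued")
for e, d, a in runs:
    run_fea(e, d, a)
```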

Keywords: dental composite restoration, cavity geometry, finite element approach, maximum principal stress

Procedia PDF Downloads 93
491 Wind Velocity Climate Zonation Based on Observation Data in Indonesia Using Cluster and Principal Component Analysis

Authors: I Dewa Gede Arya Putra

Abstract:

Principal Component Analysis (PCA) is a mathematical procedure that uses an orthogonal transformation to convert a set of possibly correlated variables into a set of uncorrelated components. This makes it useful for clustering wind speed characteristics in Indonesia. This study uses 30 years of daily wind speed observations from the meteorological station network. Multicollinearity tests were performed on all of these data before clustering with PCA. The results show that the first four principal components account for more than 80% of the total variance and were used for clustering. Clustering with Ward's method yielded three types of clusters. Cluster 1 covers the central part of Sumatra Island, northern Kalimantan, northern Sulawesi and northern Maluku, with a climatological wind speed pattern that has no annual cycle and remains weak throughout the year, with low speeds ranging from 0 to 1.5 m/s. Cluster 2 covers the northern part of Sumatra Island, South Sulawesi, Bali and northern Papua, with a climatological wind speed pattern that has annual cycle variations with low speeds ranging from 1 to 3 m/s. Cluster 3 covers the eastern part of Java Island, the Southeast Nusa Islands and the southern Maluku Islands, with a climatological wind speed pattern that has annual cycle variations with high speeds ranging from 1 to 4.5 m/s.
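
A minimal sketch of the pipeline: standardize the station features, keep the principal components explaining at least 80% of the variance, then cluster the scores with Ward's method. The station data here are a synthetic stand-in for the 30-year observation record:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)

# Synthetic stand-in for station-level wind statistics (stations x features).
X = rng.normal(size=(40, 12))

Xs = StandardScaler().fit_transform(X)
pca = PCA(n_components=0.80)            # keep components explaining >= 80% variance
scores = pca.fit_transform(Xs)
print(f"{pca.n_components_} components, "
      f"{pca.explained_variance_ratio_.sum():.0%} variance explained")

Z = linkage(scores, method="ward")      # Ward's minimum-variance clustering
labels = fcluster(Z, t=3, criterion="maxclust")
print("cluster sizes:", np.bincount(labels)[1:])
```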

Keywords: PCA, cluster, Ward's method, wind speed

Procedia PDF Downloads 184
490 Multifluid Computational Fluid Dynamics Simulation for Sawdust Gasification inside an Industrial Scale Fluidized Bed Gasifier

Authors: Vasujeet Singh, Pruthiviraj Nemalipuri, Vivek Vitankar, Harish Chandra Das

Abstract:

For the correct prediction of thermal and hydraulic performance (bed voidage, suspension density, pressure drop, heat transfer, and combustion kinetics), the correct parameters should be incorporated into the computational fluid dynamics simulation of a fluidized bed gasifier. Given the scarcity of fossil fuels and the energy demand of a growing population, researchers need to shift their attention to alternatives to fossil fuels. The current research work focuses on the hydrodynamic behavior and gasification of sawdust inside a 2D industrial-scale FBG using the Eulerian-Eulerian multifluid model. The present numerical model is validated with experimental data. This model is then extended to predict the gasification characteristics of sawdust by incorporating eight heterogeneous reactions (moisture release, volatile cracking, tar cracking, tar oxidation, char combustion, CO₂ gasification, steam gasification, and methanation) and five homogeneous reactions (oxidation of CO, CH₄, and H₂, and the forward and backward water gas shift (WGS) reactions). In the results section, the composition of the gasification products is analyzed, along with the hydrodynamics of the sawdust and sand phases, the heat transfer between the gas, sand, and sawdust, and the rates of the different homogeneous and heterogeneous reactions along the height of the domain.
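
As a generic illustration of how such reaction kinetics are typically encoded, the sketch below computes mass-action rates for the water gas shift pair from Arrhenius rate constants; the pre-exponential factors and activation energies are placeholders, not the kinetic constants used in this work.

import numpy as np

R_GAS = 8.314  # J/(mol K)

def arrhenius(A, Ea, T):
    """Generic Arrhenius rate constant k = A * exp(-Ea / (R T))."""
    return A * np.exp(-Ea / (R_GAS * T))

# Water gas shift: CO + H2O <-> CO2 + H2.
# A and Ea below are placeholders, NOT the paper's kinetic constants.
T = 1100.0  # K, a typical gasifier temperature
k_fwd = arrhenius(A=2.8e3, Ea=1.26e4, T=T)
k_bwd = arrhenius(A=9.6e4, Ea=4.66e4, T=T)

def wgs_net_rate(c_co, c_h2o, c_co2, c_h2):
    """Net volumetric WGS rate from mass-action kinetics (mol/(m^3 s))."""
    return k_fwd * c_co * c_h2o - k_bwd * c_co2 * c_h2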

Keywords: devolatilization, Eulerian-Eulerian, fluidized bed gasifier, mathematical modelling, sawdust gasification

Procedia PDF Downloads 98
489 Seismic Assessment of an Existing Dual System RC Buildings in Madinah City

Authors: Tarek M. Alguhane, Ayman H. Khalil, M. N. Fayed, Ayman M. Ismail

Abstract:

A 15-storey RC building, studied in this paper, is representative of a modern building type constructed in Madinah City, Saudi Arabia, about ten years ago. These buildings typically consist of a reinforced concrete skeleton, i.e., columns, beams, and flat slabs, as well as shear walls in the stair and elevator areas, arranged so as to provide a resisting system for lateral loads (wind and earthquake loads). In this study, the dynamic properties of the 15-storey RC building were identified using ambient motions recorded at several spatially distributed locations within the building. After updating the mathematical models of the building with the experimental results, a three-dimensional pushover analysis (nonlinear static analysis) was carried out using SAP2000 software, incorporating inelastic material properties for the concrete, infill, and steel. The effect of modeling the building with and without infill walls on the performance point, as well as on the capacity and demand spectra due to the earthquake design spectrum for the Madinah area, has been investigated. The response modification factor (R) for the 15-storey RC building is evaluated from the capacity and demand spectra (ATC-40). The purpose of this analysis is to evaluate the expected performance of structural systems by estimating strength and deformation demands in design and comparing these demands to the available capacities at the performance levels of interest. The results are summarized and discussed.
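
The capacity-spectrum step referred to above rests on the standard ATC-40 conversion of a pushover curve into ADRS coordinates; the sketch below implements that textbook conversion with illustrative numbers, not the studied building's values.

import numpy as np

def pushover_to_adrs(V, d_roof, W, alpha1, pf1_phi_roof):
    """Convert a pushover curve (base shear V, roof displacement d_roof)
    to the ADRS capacity spectrum per ATC-40:
        Sa = (V / W) / alpha1
        Sd = d_roof / (PF1 * phi_roof,1)
    alpha1 is the first-mode mass coefficient; pf1_phi_roof is the modal
    participation factor times the roof-level mode shape ordinate."""
    Sa = (np.asarray(V) / W) / alpha1
    Sd = np.asarray(d_roof) / pf1_phi_roof
    return Sa, Sd

# Illustrative numbers only (not the studied building's values):
V = [0.0, 4000.0, 7000.0, 8000.0]   # base shear, kN
d = [0.0, 0.05, 0.15, 0.30]         # roof displacement, m
Sa, Sd = pushover_to_adrs(V, d, W=60000.0, alpha1=0.78, pf1_phi_roof=1.35)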

Keywords: seismic assessment, pushover analysis, ambient vibration, modal update

Procedia PDF Downloads 384
488 Information Technology Approaches to Literature Text Analysis

Authors: Ayse Tarhan, Mustafa Ilkan, Mohammad Karimzadeh

Abstract:

Science was considered part of philosophy in ancient Greece. By the nineteenth century, it was understood that philosophy was very inclusive and that the social and human sciences, such as literature, history, and psychology, should be separated and perceived as autonomous branches of science. The computer, too, was first seen as a tool of mathematical science. Over time, computer science has grown to encompass every area in which technology exists, and its growth compelled the division of computer science into different disciplines, just as philosophy had been divided into different branches of science. Now there is almost no branch of science in which computers are not used. One of the newer autonomous disciplines of computer science is digital humanities, and one of the areas of digital humanities is literature. The material of literature is words, and thanks to software tools created with computer programming languages, analyses that would take a literature researcher months to complete can be carried out quickly and objectively. In this article, three different tools that literary researchers can use in their work are introduced. These tools were created with the computer programming languages Python and R and brought to the world of literature. The purpose of introducing the aforementioned tools is to set an example for the development of special tools or programs for Ottoman language and literature in the future and to support such initiatives. The first example to be introduced is the stylometry tool developed with the R language. The other is the Metrical Tool, which is used to analyze metrical data in poems and was developed with Python. The last literature analysis tool in this article is Voyant Tools, which is a multifunctional and easy-to-use tool.
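
The abstract does not detail the stylometry tool's internals, but a classic measure such tools often implement is Burrows's Delta (mean absolute difference of z-scored word frequencies). A self-contained Python sketch with toy texts follows; in practice the vocabulary would be the few hundred most frequent words of the corpus.

from collections import Counter
import numpy as np

def mfw_profile(text, vocab):
    """Relative frequencies of the chosen words in one text."""
    toks = text.lower().split()
    counts = Counter(toks)
    return np.array([counts[w] / len(toks) for w in vocab])

def burrows_delta(corpus_profiles, test_profile):
    """Burrows's Delta: mean |z-score difference| per candidate author."""
    mu = corpus_profiles.mean(axis=0)
    sd = corpus_profiles.std(axis=0) + 1e-12   # avoid division by zero
    z_corpus = (corpus_profiles - mu) / sd
    z_test = (test_profile - mu) / sd
    return np.abs(z_corpus - z_test).mean(axis=1)

texts = {  # toy training texts, purely illustrative
    "author_A": "the sea and the sky and the long grey line of the shore",
    "author_B": "a quick breeze over a quiet field under a pale moon",
}
test = "the grey sea under the pale sky"
vocab = sorted({w for t in texts.values() for w in t.lower().split()})
profiles = np.vstack([mfw_profile(t, vocab) for t in texts.values()])
scores = burrows_delta(profiles, mfw_profile(test, vocab))  # lower = closer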

Keywords: DH, literature, information technologies, stylometry, the metrical tool, voyant tools

Procedia PDF Downloads 142
487 Verification of the Supercavitation Phenomena: Investigation of the Cavity Parameters and Drag Coefficients for Different Types of Cavitator

Authors: Sezer Kefeli, Sertaç Arslan

Abstract:

Supercavitation is a pressure-dependent process which provides an opportunity to eliminate wetted-surface effects on an underwater vehicle arising from the differences in viscosity and velocity between the liquid (freestream) and gas phases. Cavitation occurs upon a rapid pressure drop or a temperature rise in the liquid phase. In this paper, pressure-based cavitation is investigated, since it is the type generally encountered in the underwater world. Basically, these vapor-filled, pressure-based cavities are unstable and harmful to any underwater vehicle, because the cavities (bubbles or voids) produce intense shock waves when collapsing. Supercavitation, on the other hand, is a desirable and stabilized phenomenon compared with general pressure-based cavitation. The supercavitation phenomenon offers a way to minimize form drag, and thus supercavitating vehicles have been revived. When the proper circumstances are set up, by either increasing the operating speed of the underwater vehicle or decreasing the pressure difference between the free stream and the cavity, continuous supercavitation is obtainable. There are two ways to obtain stable and continuous supercavitation, called natural and artificial supercavitation. To generate natural supercavitation, various mechanical structures, called cavitators, have been devised. In the literature, many cavitator types have been studied since the 1900s, either experimentally or numerically on CFD platforms, with the intent to observe natural supercavitation. In this paper, firstly, experimental results are obtained, and trend lines are generated for the supercavitation parameters in terms of the cavitation number (σ), the form drag coefficient (C_D), the dimensionless cavity diameter (d_m/d_c), and the dimensionless cavity length (L_c/d_c). After that, natural cavitation verification studies are carried out for disk- and cone-shaped cavitators. In addition, the supercavitation parameters are numerically analyzed at different operating conditions, and the CFD results are fitted to the trend lines of the experimental results. The aims of this paper are to generate one generally accepted drag coefficient equation for disk and cone cavitators at different cavitator half angles and to investigate the supercavitation parameters with respect to the cavitation number. Moreover, 165 CFD analyses are performed at different cavitation numbers in FLUENT version 21R2. Five different cavitator types are modeled in SCDM with respect to the cavitators' half angles. After that, a CFD database is generated from the numerical results, and new trend lines are generated for the supercavitation parameters. These trend lines are compared with the experimental results. Finally, the generally accepted drag coefficient equation and the equations for the supercavitation parameters are generated.
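
For reference, the cavitation number has the standard definition σ = (p∞ − p_c)/(½ρV²), and a commonly cited empirical fit for a disk cavitator is C_D ≈ C_D0(1 + σ) with C_D0 ≈ 0.82. The sketch below uses that generic literature fit, not the equation fitted in this paper.

def cavitation_number(p_inf, p_cavity, rho, v):
    """sigma = (p_inf - p_c) / (0.5 * rho * v**2)"""
    return (p_inf - p_cavity) / (0.5 * rho * v**2)

def disk_drag_coefficient(sigma, cd0=0.82):
    """Commonly cited empirical fit for a disk cavitator,
    C_D = C_D0 * (1 + sigma); NOT the paper's fitted equation."""
    return cd0 * (1.0 + sigma)

# Example: seawater at 50 m/s, 10 m depth, vapor pressure ~2.3 kPa.
rho = 1025.0                               # kg/m^3
p_inf = 101325.0 + rho * 9.81 * 10.0       # ambient static pressure, Pa
sigma = cavitation_number(p_inf, 2300.0, rho, 50.0)
print(sigma, disk_drag_coefficient(sigma))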

Keywords: cavity envelope, CFD, high speed underwater vehicles, supercavitation, supercavitating flows, supercavitation parameters, drag reduction, viscous force elimination, natural cavitation verification

Procedia PDF Downloads 124
486 Detection and Classification of Mammogram Images Using Principle Component Analysis and Lazy Classifiers

Authors: Rajkumar Kolangarakandy

Abstract:

Feature extraction and selection is the primary part of any mammogram classification algorithm. The choice of features, attributes, or measurements has an important influence on any classification system. Discrete Wavelet Transform (DWT) coefficients are among the prominent features for representing images in the frequency domain. The features obtained after decomposition of the mammogram images using wavelet transformations have a high dimension. Even though the features are high-dimensional, they are highly correlated and redundant in nature. Dimensionality reduction techniques play an important role in selecting the optimum number of features from such high-dimensional, highly correlated data. PCA is a mathematical tool that reduces the dimensionality of the data while retaining most of the variation in the dataset. In this paper, a multilevel classification of mammogram images using reduced discrete wavelet transform coefficients and lazy classifiers is proposed. The classification is accomplished at two different levels. At the first level, mammogram ROIs extracted from the dataset are classified as normal or abnormal. At the second level, all the abnormal mammogram ROIs are further classified as benign or malignant. A further classification is also accomplished based on the variation in structure and intensity distribution of the images in the dataset. The lazy classifiers Kstar, IBL, and LWL are used for classification. The classification results obtained with the reduced feature set are highly promising, and the result is also compared with the performance obtained without dimensionality reduction.
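
A minimal sketch of the DWT-then-PCA pipeline described above, using PyWavelets and scikit-learn on synthetic stand-in data. Kstar, IBL, and LWL are Weka classifiers, so a k-nearest-neighbour model stands in here as the lazy learner.

import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

def dwt_features(roi, wavelet="db4", level=2):
    """Flatten the multilevel 2-D DWT coefficients of a mammogram ROI."""
    coeffs = pywt.wavedec2(roi, wavelet=wavelet, level=level)
    parts = [coeffs[0].ravel()]
    for (cH, cV, cD) in coeffs[1:]:
        parts += [cH.ravel(), cV.ravel(), cD.ravel()]
    return np.concatenate(parts)

def train(rois, labels, n_components=5):
    """DWT features -> PCA reduction -> lazy (instance-based) classifier."""
    X = np.vstack([dwt_features(r) for r in rois])
    pca = PCA(n_components=n_components)
    X_red = pca.fit_transform(X)
    clf = KNeighborsClassifier(n_neighbors=3)  # k-NN as a lazy-learner stand-in
    clf.fit(X_red, labels)
    return pca, clf

# Synthetic ROIs and labels (0 = normal, 1 = abnormal), purely illustrative.
rng = np.random.default_rng(0)
rois = [rng.normal(size=(64, 64)) for _ in range(12)]
labels = [0, 1] * 6
pca, clf = train(rois, labels)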

Keywords: PCA, wavelet transformation, lazy classifiers, Kstar, IBL, LWL

Procedia PDF Downloads 329
485 Hybrid Intelligent Optimization Methods for Optimal Design of Horizontal-Axis Wind Turbine Blades

Authors: E. Tandis, E. Assareh

Abstract:

Designing the optimal shape of MW wind turbine blades has been accomplished in a number of cases through evolutionary algorithms associated with mathematical modeling (Blade Element Momentum theory). Among optimization methods, evolutionary algorithms enjoy many advantages, particularly stability. However, they usually need a large number of function evaluations. Since there are a large number of local extremes, the optimization method has to find the global extreme accurately. The present paper introduces a new population-based hybrid algorithm called the Genetic-Based Bees Algorithm (GBBA). This algorithm is meant to design the optimal shape of MW wind turbine blades. The current method employs crossover and neighborhood-searching operators taken from the Genetic Algorithm (GA) and the Bees Algorithm (BA), respectively, to provide a method with good accuracy and convergence speed. Different blade designs, twenty-one to be exact, were considered based on the chord length, twist angle, and tip speed ratio using GA results. They were compared with BA and GBBA optimum design results targeting the power coefficient and solidity. The results suggest that the final shape obtained by the proposed hybrid algorithm performs better than either BA or GA. Furthermore, accuracy and convergence speed increase when the GBBA is employed.
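
An illustrative Python sketch of the hybrid idea, GA-style crossover combined with BA-style neighborhood search around elite sites, follows. The parameter bounds and the toy objective are placeholders for the paper's BEM-based power coefficient evaluation.

import random

BOUNDS = [(0.5, 3.0), (-5.0, 20.0), (4.0, 10.0)]  # chord (m), twist (deg), TSR: placeholders

def fitness(x):
    # Stand-in objective; the paper instead evaluates the power
    # coefficient via Blade Element Momentum theory.
    return -sum((xi - (lo + hi) / 2.0) ** 2 for xi, (lo, hi) in zip(x, BOUNDS))

def crossover(a, b):
    """GA-style uniform crossover between two candidate designs."""
    return [ai if random.random() < 0.5 else bi for ai, bi in zip(a, b)]

def neighborhood_search(x, radius=0.05):
    """BA-style local search: perturb within bounds around an elite site."""
    return [min(hi, max(lo, xi + random.uniform(-radius, radius) * (hi - lo)))
            for xi, (lo, hi) in zip(x, BOUNDS)]

def random_candidate():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

pop = [random_candidate() for _ in range(30)]
for gen in range(100):
    pop.sort(key=fitness, reverse=True)
    elites = pop[:5]
    # BA step: best of several scouts around each elite site.
    scouts = [max((neighborhood_search(e) for _ in range(5)), key=fitness)
              for e in elites]
    # GA step: crossover among elites refills the population.
    children = [crossover(random.choice(elites), random.choice(elites))
                for _ in range(len(pop) - len(scouts))]
    pop = scouts + children
best = max(pop, key=fitness)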

Keywords: blade design, optimization, genetic algorithm, bees algorithm, genetic-based bees algorithm, large wind turbine

Procedia PDF Downloads 310
484 Fuzzy Optimization Multi-Objective Clustering Ensemble Model for Multi-Source Data Analysis

Authors: C. B. Le, V. N. Pham

Abstract:

In modern data analysis, multi-source data appears more and more often in real applications. Multi-source data clustering has emerged as an important issue in the data mining and machine learning community. Different data sources provide different information about the data, so linking multiple sources is essential to improve clustering performance. However, in practice, multi-source data is often heterogeneous, uncertain, and large, which is considered a major challenge of multi-source data analysis. Ensemble learning is a versatile machine learning model in which learning techniques can work in parallel on big data. Clustering ensembles have been shown to outperform standard clustering algorithms in terms of accuracy and robustness. However, most traditional clustering ensemble approaches are based on a single-objective function and single-source data. This paper proposes a new clustering ensemble method for multi-source data analysis: the fuzzy optimized multi-objective clustering ensemble method, called FOMOCE. Firstly, a clustering ensemble mathematical model based on the structure of a multi-objective clustering function, multi-source data, and dark knowledge is introduced. Then, rules for extracting dark knowledge from the input data, the clustering algorithms, and the base clusterings are designed and applied. Finally, a clustering ensemble algorithm is proposed for multi-source data analysis. The experiments were performed on standard sample data sets. The experimental results demonstrate the superior performance of the FOMOCE method compared to existing clustering ensemble methods and multi-source clustering methods.
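
FOMOCE itself is not specified here in enough detail to implement, but the basic clustering-ensemble idea can be illustrated with a generic co-association consensus sketch in Python (single-objective and single-source, unlike FOMOCE). For multi-source data, the base clusterings could be generated per source before building the co-association matrix.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def consensus_cluster(X, n_clusters=3, n_members=10, seed=0):
    """Generic co-association clustering ensemble: run several base
    clusterings, count how often each pair lands together, then cut a
    hierarchical tree built on the resulting consensus distance."""
    rng = np.random.RandomState(seed)
    n = X.shape[0]
    coassoc = np.zeros((n, n))
    for _ in range(n_members):
        k = rng.randint(2, 2 * n_clusters + 1)     # vary k across members
        labels = KMeans(n_clusters=k, n_init=5,
                        random_state=rng.randint(1 << 30)).fit_predict(X)
        coassoc += (labels[:, None] == labels[None, :])
    dist = 1.0 - coassoc / n_members               # consensus distance
    np.fill_diagonal(dist, 0.0)
    Z = linkage(squareform(dist, checks=False), method="average")
    return fcluster(Z, t=n_clusters, criterion="maxclust")

# Toy usage on synthetic blobs, purely illustrative.
X, _ = make_blobs(n_samples=150, centers=3, random_state=0)
labels = consensus_cluster(X, n_clusters=3)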

Keywords: clustering ensemble, multi-source, multi-objective, fuzzy clustering

Procedia PDF Downloads 177