Search results for: number of iteration
160 Grade and Maximum Tumor Dimension as Determinants of Lymphadenectomy in Patients with Endometrioid Endometrial Cancer (EEC)
Authors: Ali A. Bazzi, Ameer Hamza, Riley O’Hara, Kimberly Kado, Karen H. Hagglund, Lamia Fathallah, Robert T. Morris
Abstract:
Introduction: Endometrial cancer is a common gynecologic malignancy primarily treated with complete surgical staging, which may include complete pelvic and para-aortic lymphadenectomy. The role of lymphadenectomy is controversial, especially the intraoperative indications for the procedure. Three factors are important in the decision to proceed with lymphadenectomy: myometrial invasion, maximum tumor dimension (MTD), and histology. Many institutions incorporate these criteria in varying degrees in the decision to proceed with lymphadenectomy. This investigation assesses the use of intraoperatively measured MTD with and without preoperative histologic grade. Methods: This study retrospectively compared EEC patients with intraoperatively measured MTD ≤ 2 cm to those with MTD > 2 cm from January 1, 2002 to August 31, 2017. The assessment compared patients with MTD ≤ 2 cm and endometrial biopsy (EB) grade 1-2 to patients with MTD > 2 cm and EB grade 3. Lymph node metastasis (LNM), recurrence, and survival were compared between these groups. Results: This study reviewed 222 patient cases. In tumors > 2 cm, LNM occurred in 20% of cases, while in tumors ≤ 2 cm, LNM was found in 6% of cases (p=0.04). Recurrence and mean survival based on the last follow-up visit in these two groups were not statistically different (p=0.78 and 0.36, respectively). The data demonstrated a trend that, when combined with preoperative EB International Federation of Gynecology and Obstetrics (FIGO) grade, a higher proportion of patients with EB FIGO grade 3 and MTD > 2 cm had LNM compared to those with EB FIGO grade 1-2 and MTD ≤ 2 cm (43% vs. 11%, p=0.06). LNM was found in 15% of cases in which lymphadenectomy was performed based on current practices, whereas if the criteria of EB FIGO grade 3 and MTD > 2 cm had been used, the incidence of LNM would have been 44% of cases. However, using this criterion, two patients would not have had their nodal metastases detected.
Compared to the current practice, the sensitivity and specificity of the proposed criteria would be 60% and 81%, respectively. The positive predictive value (PPV) and negative predictive value (NPV) would be 43% and 90%, respectively. Conclusion: The results indicate that MTD combined with EB FIGO grade can detect LNM in a higher proportion of cases than current practice. MTD combined with EB FIGO grade may eliminate the need for frozen-section sampling in a substantial number of cases.
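The four screening metrics reported above all come from a standard 2x2 confusion matrix. A minimal Python sketch of the formulas follows; the counts used are illustrative values chosen only to approximate the reported percentages, not the study's actual patient counts.

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard confusion-matrix metrics for a binary diagnostic criterion."""
    return {
        "sensitivity": tp / (tp + fn),   # true positives among all diseased
        "specificity": tn / (tn + fp),   # true negatives among all healthy
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Hypothetical counts chosen to approximate the reported 60%/81%/43%/90%:
m = diagnostic_metrics(tp=3, fp=4, tn=17, fn=2)
print({k: round(v, 2) for k, v in m.items()})
```

Note how a criterion can have a high NPV while its sensitivity stays moderate, which is consistent with the two missed nodal metastases mentioned above.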
Keywords: Endometrial cancer, FIGO grade, lymphadenectomy, tumor size.
159 Using 3-Glycidoxypropyltrimethoxysilane Functionalized SiO2 Nanoparticles to Improve Flexural Properties of Glass Fibers/Epoxy Grid-Stiffened Composite Panels
Authors: Reza Eslami-Farsani, Hamed Khosravi, Saba Fayazzadeh
Abstract:
Lightweight, efficient structures aim to enhance the efficiency of components in various industries. To this end, composites are among the most widely used materials because of their durability, high strength and modulus, and low weight. One type of advanced composite is the grid-stiffened composite (GSC) structure, which has been extensively considered in the aerospace, automotive, and aircraft industries and is one of the top candidates for replacing some traditional components. Although there is a good number of published surveys on the design aspects and fabrication of GSC structures, to our knowledge little systematic work has been reported on modifying their materials to improve their properties. Matrix modification using nanoparticles is an effective method to enhance the flexural properties of fibrous composites. In the present study, a silane coupling agent (3-glycidoxypropyltrimethoxysilane/3-GPTS) was introduced onto the silica (SiO2) nanoparticle surface, and its effects on the three-point flexural response of isogrid E-glass/epoxy composites were assessed. Based on the Fourier transform infrared (FTIR) spectra, it was inferred that the 3-GPTS coupling agent was successfully grafted onto the surface of the SiO2 nanoparticles after modification. Flexural tests revealed improvements of 16%, 14%, and 36% in stiffness, maximum load and energy absorption of the isogrid specimen filled with 3 wt.% 3-GPTS/SiO2 compared to the neat one. It is worth mentioning that in these structures, considerable energy absorption was observed after the primary failure associated with the load peak. In addition, 3-GPTS functionalization had a positive effect on the flexural behavior of the multiscale isogrid composites.
In conclusion, this study suggests that the addition of modified silica nanoparticles is a promising method to improve the flexural properties of grid-stiffened fibrous composite structures.
Keywords: Isogrid-stiffened composite panels, silica nanoparticles, surface modification, flexural properties.
158 Result Validation Analysis of Steel Testing Machines
Authors: Wasiu O. Ajagbe, Habeeb O. Hamzat, Waris A. Adebisi
Abstract:
Structural failures occur for a number of reasons, which may include under-design, poor workmanship, substandard materials, misleading laboratory tests, and more. Reinforcing steel bar is an important construction material, so its properties must be accurately known before it is used in construction. Establishing these properties involves carrying out mechanical tests prior to design and during construction, using a steel testing machine that is often not readily available because of the project location. This study was conducted to determine the reliability of reinforcing steel testing machines. A reconnaissance survey was conducted to identify laboratories where yield and ultimate tensile strength tests could be carried out. Six laboratories were identified within Ibadan and its environs; however, only four were functional at the time of the study. Three steel samples were tested for yield and tensile strength, using a steel testing machine, at each of the four laboratories (LM, LO, LP and LS). The yield and tensile strength results obtained from the laboratories were compared with the manufacturer's specification using a reliability analysis programme. A structured questionnaire was administered to the operators in each laboratory to assess their impact on the test results. The average values of the manufacturer's tensile strength and yield strength are 673.7 N/mm2 and 559.7 N/mm2, respectively. The tensile strengths obtained from the four laboratories LM, LO, LP and LS are 579.4, 652.7, 646.0 and 649.9 N/mm2, respectively, while their yield strengths are 453.3, 597.0, 550.7 and 564.7 N/mm2, respectively. The minimum tensile-to-yield strength ratio is 1.08 for BS 4449:2005 and 1.15 for ASTM A615. The tensile-to-yield strength ratios from the four laboratories are 1.28, 1.09, 1.17 and 1.15 for LM, LO, LP and LS, respectively.
The tensile-to-yield strength ratios show that the results obtained from all the laboratories meet the requirements of the codes used for the test. The reliability test shows varying levels of reliability between the manufacturer's specification and the results obtained from the laboratories. Three of the laboratories, LO, LS and LP, have high reliability values with respect to the manufacturer, i.e. 0.798, 0.866 and 0.712, respectively. The fourth laboratory, LM, has a reliability value of 0.100. Steel tests should be carried out in a laboratory using the same code under which the structural design was carried out. More emphasis should be laid on the importance of code provisions.
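The tensile-to-yield ratio check described above is simple arithmetic on the reported strengths. A short Python sketch reproduces it from the figures quoted in this abstract (the BS 4449:2005 minimum of 1.08 is the code limit cited above; the pass/fail labels are only this sketch's convention):

```python
# Tensile and yield strengths (N/mm2) reported for the four laboratories.
labs = {
    "LM": (579.4, 453.3),
    "LO": (652.7, 597.0),
    "LP": (646.0, 550.7),
    "LS": (649.9, 564.7),
}

MIN_RATIO_BS4449 = 1.08  # minimum tensile/yield ratio per BS 4449:2005

for lab, (tensile, yield_strength) in labs.items():
    ratio = tensile / yield_strength
    status = "meets" if ratio >= MIN_RATIO_BS4449 else "fails"
    print(f"{lab}: ratio {ratio:.2f} -> {status} BS 4449:2005")
```

Running this reproduces the ratios 1.28, 1.09, 1.17 and 1.15 quoted in the abstract.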
Keywords: Reinforcing steel bars, reliability analysis, tensile strength, universal testing machine, yield strength.
157 Influence of Thermo-fluid-dynamic Parameters on Fluidics in an Expanding Thermal Plasma Deposition Chamber
Authors: G. Zuppardi, F. Romano
Abstract:
The technology of thin film deposition is of interest in many engineering fields, from electronic manufacturing to corrosion-protective coating. A typical deposition process, like that developed at the University of Eindhoven, considers the deposition of a thin, amorphous film of C:H or of Si:H on the substrate, using the Expanding Thermal arc Plasma technique. In this paper a computing procedure is proposed to simulate the flow field in a deposition chamber similar to that at the University of Eindhoven, and a sensitivity analysis is carried out in terms of precursor mass flow rate, electrical power supplied to the torch, and the fluid-dynamic characteristics of the plasma jet, using different nozzles. To this purpose, a deposition chamber similar in shape, dimensions and operating parameters to the above-mentioned chamber is considered. Furthermore, a method is proposed for a very preliminary evaluation of the film thickness distribution on the substrate. The computing procedure relies on two codes working in tandem; the output of the first code is the input to the second. The first code simulates the flow field in the torch, where argon is ionized according to Saha's equation, and in the nozzle. The second code simulates the flow field in the chamber; because of the high rarefaction level, this is a commercial Direct Simulation Monte Carlo (DSMC) code. The gas is a mixture of 21 chemical species, and 24 chemical reactions involving argon plasma and acetylene are implemented in both codes. The effects of the above-mentioned operating parameters are evaluated and discussed by means of 2-D maps and profiles of some important thermo-fluid-dynamic parameters, such as Mach number, velocity and temperature.
The intensity, position and extension of the shock wave are evaluated, and the influence of the above-mentioned test conditions on the film thickness and the uniformity of its distribution is also evaluated.
Keywords: Deposition chamber, Direct Simulation Monte Carlo method (DSMC), plasma chemistry, rarefied gas dynamics.
156 Biodegradation of PCP by the Rhizobacteria Isolated from Pentachlorophenol-tolerant Crop Species
Authors: Avita K. Marihal, K.S. Jagadeesh, Sarita Sinha
Abstract:
Pentachlorophenol (PCP) is a polychlorinated aromatic compound that is widespread in industrial effluents and is considered a serious pollutant. Among the variety of industrial effluents encountered, those from the tanning industry are very important and have serious pollution potential. PCP is also formed unintentionally in the effluents of the paper and pulp industries. It is highly persistent in soils and is lethal to a wide variety of beneficial microorganisms and insects, as well as to human beings and animals. The natural processes that break down toxic chemicals in the environment have become the focus of much attention in developing safe and environment-friendly deactivation technologies. Microbes and plants are among the most important biological agents that remove and degrade waste materials, enabling their recycling in the environment. The present investigation was carried out with the aim of developing a microbial system for the bioremediation of PCP-polluted soils. A number of plant species were evaluated for their ability to tolerate different concentrations of PCP in the soil. The experiment was conducted for 30 days under pot culture conditions. The toxic effect of PCP on plants was studied by monitoring seed germination, plant growth and biomass. As the concentration of PCP was increased to 50 ppm, the inhibition of seed germination, plant growth and biomass also increased. Although PCP had a negative effect on all plant species tested, maize and groundnut showed the maximum tolerance to PCP. Other tolerant crops included wheat, safflower, sunflower, and soybean. From the rhizosphere soil of the tolerant seedlings, twenty-seven PCP-tolerant bacteria were isolated: eight from soybean, three from sunflower, eight from safflower, two from maize, and three each from groundnut and wheat. They were screened for their PCP degradation potential. HPLC analyses of PCP degradation revealed that the isolate MAZ-2 degraded PCP completely.
The isolate MAZ-1 was the next best, with 90 per cent PCP degradation. These strains hold promise for use in the bioremediation of PCP-polluted soils.
Keywords: Biodegradation, pentachlorophenol, rhizobacteria.
155 Wasting Human and Computer Resources
Authors: Mária Csernoch, Piroska Biró
Abstract:
The legends about “user-friendly” and “easy-to-use” birotical tools (computer-related office tools) have been spreading and misleading end-users. This approach has led to an extremely high number of incorrect documents, causing serious financial losses in the creating, modifying, and retrieving processes. Our research proved that there are at least two sources of this underachievement. (1) The lack of a definition of correctly edited, formatted documents. Consequently, end-users do not know whether their methods and results are correct or not; they are unaware of their own ignorance, which prevents them from realizing their lack of knowledge. (2) The end-users' problem-solving methods. We have found that in non-traditional programming environments end-users apply, almost exclusively, surface-approach metacognitive methods to carry out their computer-related activities, which have proved less effective than deep-approach methods. Based on these findings, we have developed deep-approach methods which are based on and adapted from traditional programming languages. In this study, we focus on the most popular type of birotical document, the text-based document. We have provided a definition of correctly edited text and, based on this definition, adapted the debugging method known from programming. According to the method, before real text editing takes place, a thorough debugging of already existing texts and a categorization of errors are carried out. With this method, in advance of real text editing, users learn the requirements of text-based documents and of correctly formatted text. The method has proved much more effective than the previously applied surface-approach methods.
The advantages of the method are that real text handling requires far fewer human and computer resources than clicking aimlessly in the GUI (graphical user interface), and that data retrieval is much more effective than from error-prone documents.
Keywords: Deep approach metacognitive methods, error-prone birotical documents, financial losses, human and computer resources.
154 Identification of the Antimicrobial Effect of Liquorice Extracts on Gram-Positive Bacteria: Determination of Minimum Inhibitory Concentration and Mechanism of Action Using a luxABCDE Reporter Strain
Authors: Madiha El Awamie, Catherine Rees
Abstract:
Natural preservatives have been used as alternatives to traditional chemical preservatives; however, a limited number have been commercially developed, and many remain to be investigated as sources of safer and more effective antimicrobials. In this study, we investigated the antimicrobial activity of an extract of Glycyrrhiza glabra (liquorice) that was provided as a waste material from the production of liquorice flavourings for the food industry, to determine whether it retained the expected antimicrobial activity and so could be used as a natural preservative. The antibacterial activity of the liquorice extract was screened for evidence of growth inhibition against eight species of Gram-negative and Gram-positive bacteria, including Listeria monocytogenes, Listeria innocua, Staphylococcus aureus, Enterococcus faecalis and Bacillus subtilis. The Gram-negative bacteria tested included Pseudomonas aeruginosa, Escherichia coli and Salmonella typhimurium, but none of these were affected by the extract. In contrast, growth of all of the Gram-positive bacteria tested was inhibited, as monitored using optical density. However, parallel studies using viable counts indicated that the cells were not killed, meaning that the extract was bacteriostatic rather than bactericidal. The Minimum Inhibitory Concentration (MIC) and Minimum Bactericidal Concentration (MBC) of the extract were also determined, and a concentration of 50 µg ml-1 was found to have a strong bacteriostatic effect on Gram-positive bacteria. Microscopic analysis indicated that there were changes in cell shape, suggesting the cell wall was affected. In addition, the use of a reporter strain of Listeria transformed with the bioluminescence genes luxABCDE indicated that cell energy levels were reduced when treated with either 12.5 or 50 µg ml-1 of the extract, with the reduction in light output being proportional to the concentration of the extract used.
Together these results suggest that the extract inhibits the growth of Gram-positive bacteria only, by damaging the cell wall and/or membrane.
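An MIC such as the 50 µg ml-1 value above is conventionally read from a twofold dilution series as the lowest concentration at which no visible growth occurs. A minimal Python sketch of that read-out follows; the dilution series and growth readings are hypothetical, chosen only to be consistent with the reported MIC:

```python
def mic_from_dilutions(concentrations, growth):
    """Return the lowest concentration with no visible growth (the MIC),
    or None if growth occurred at every concentration tested."""
    inhibited = [c for c, grew in zip(concentrations, growth) if not grew]
    return min(inhibited) if inhibited else None

# Hypothetical twofold dilution series (ug/ml) and growth readings,
# consistent with a strong bacteriostatic effect at 50 ug/ml:
concs = [200, 100, 50, 25, 12.5]
growth = [False, False, False, True, True]
print(mic_from_dilutions(concs, growth))
```

Because the viable counts showed the cells were not killed, the MBC in such an assay would sit above the highest concentration tested even where the MIC is low.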
Keywords: Antibacterial activity, bioluminescence, Glycyrrhiza glabra, natural preservative.
153 Influence of Online Sports Events on Betting among Nigerian Youth
Authors: B. O. Diyaolu
Abstract:
The opportunities provided by advances in technology with regard to sports betting are numerous, and Nigerian youth are not left out, especially with the use of phones and visits to sports betting outlets. Today, it is more difficult to identify a true fan, as quite a number of them became fans as a result of betting on live games. This study investigated the influence of online sports events on betting among Nigerian youth. A descriptive survey research design was used, and the population consisted of all Nigerian youth who engage in betting and live within the southwest zone of Nigeria. A simple random sampling technique was used to pick three states from the southwest zone, and 2,500 respondents comprising males and females were sampled from the three states. A structured questionnaire on Online Sports Event Contribution to Sports Betting (OSECSB) was used. The instrument consists of three sections: Section A seeks information on the demographic data of the respondents, Section B seeks information on online sports events, and Section C was used to extract information on sports betting. The modified instrument, which consists of 14 items, has a reliability coefficient of 0.74. The hypothesis was tested at the 0.05 significance level. The completed questionnaires were collated, coded, and analyzed using descriptive statistics (frequency counts, percentages and pie charts) and the inferential statistics of multiple regression. The findings of this study revealed that online sports events are a significant predictor of the increase in sports betting among Nigerian youth. The media and television, as well as globalization and the internet, coupled with social media and various online platforms, have all contributed to the immense increase in sports betting. The increase in advertisements for betting platforms during live matches, especially football, is becoming ever more alarming.
In most organized international events, media attention as well as sponsorship rights are now being given to one or two betting platforms. There is a need for all stakeholders to put in place school-based intervention programs to reorientate our youth about the consequences of addiction to betting. Such programs must include meta-analyses and emotional control towards sports betting.
Keywords: Betting platform, Nigerian fans, Nigerian youth, sports betting.
152 Ideal Disinfectant Characteristics According to Data in Published Literature
Authors: Saimir Heta, Ilma Robo, Rialda Xhizdari, Kers Kapaj
Abstract:
The stability of an ideal disinfectant should be constant regardless of changes in the atmospheric conditions of the environment where it is kept. If conditions such as temperature or humidity change, corresponding changes may also be needed in the holding materials, such as plastic or glass bottles, with the aim of protecting the disinfectant from, for example, excessive lighting of the environment, which can also translate into an increase in the temperature of the disinfectant as a fluid. In this study, an attempt was made to find the most recent published data about the best possible combination of disinfectants indicated for use after dental procedures. This purpose was realized by comparing the basic literature studied by dentistry students with the data most recently published on this topic. Each disinfectant is represented by a number, its specific constant, which different factors can increase or reduce and which remains specific to that disinfectant. Changes in the atmospheric conditions where the disinfectant is deposited and stored are known to affect its stability as a fluid; this fact is known and even cited in the leaflets accompanying the manufactured boxes of disinfectants. It is this care, in the form of advice, that is aimed not only at the preservation of the disinfectant but also at its application, in order to achieve the desired clinical result. Aldehydes have the highest constant among the types of disinfectants, followed by acids. The lowest value of the constant belongs to the class of glycols, preceded by the halogens, a class in which there are some representatives with disinfection applications. The classes of phenols and acids have almost the same intervals of constants.
If the goal were to find the ideal disinfectant among the large variety of disinfectants produced, a good starting point would be to find a fixed, unchanging element on the basis of which the properties of different disinfectants can be compared. Precisely on the basis of the results of this study, the role of the specific constant of each disinfectant is highlighted. Finding an ideal disinfectant, like finding the ideal medication or antibiotic, is an ongoing but unattainable goal.
Keywords: Different disinfectants, phenols, aldehydes, specific constant, dental procedures.
151 Microbiological Profile of UTI along with Their Antibiotic Sensitivity Pattern with Special Reference to Nitrofurantoin
Authors: Rupinder Bakshi, Geeta Walia, Anita Gupta
Abstract:
Urinary tract infections are considered among the most common bacterial infections, with an estimated annual global incidence of 150 million. Antimicrobial drug resistance is one of the major threats due to the widespread use of uncontrolled antibiotics. In this study, a total of 9,149 urine samples were collected from R.H. Patiala and processed in the Department of Microbiology, G.M.C. Patiala (January 2013 to December 2013). Urine samples were inoculated on MacConkey and blood agar plates and incubated at 37°C for 24 hrs. The organisms were identified by colony characters, Gram staining, and biochemical reactions. Antimicrobial susceptibility of the isolates was determined against various antimicrobial agents (Hi-Media, Mumbai, India) by the Kirby-Bauer disk diffusion method on Mueller-Hinton agar plates. Most patients were in the age group of 21-30 yrs, followed by 31-40 yrs. Males (34%) are less prone to urinary tract infections than females (66%). Culture was positive in 25% of the samples. Escherichia coli was the most common isolate (60.3%), followed by Klebsiella pneumoniae (13.5%), Proteus spp. (9%) and Staphylococcus aureus (7.6%). Most of the urinary isolates were sensitive to carbapenems, aztreonam, amikacin, and piperacillin + tazobactam. All the isolates showed good sensitivity towards nitrofurantoin (82%). ESBL production was found in 70.6% of Escherichia coli and 29.4% of Klebsiella pneumoniae. Susceptibility of ESBL producers to imipenem, nitrofurantoin and amikacin was found to be 100%, 76%, and 75%, respectively. Uropathogens are increasingly showing resistance to many antibiotics, making empiric management of outpatient UTIs challenging. Ampicillin, cotrimoxazole and ciprofloxacin should not be used in empiric treatment. Nitrofurantoin could be used in lower urinary tract infection.
Knowledge of uropathogens and their antimicrobial susceptibility pattern in a geographical region will help in appropriate and judicious antibiotic usage in a health care setup.
Keywords: Urinary Tract Infection, UTI, antibiotic susceptibility pattern, ESBL.
150 Assessment of Path Loss Prediction Models for Wireless Propagation Channels at L-Band Frequency over Different Micro-Cellular Environments of Ekiti State, Southwestern Nigeria
Authors: C. I. Abiodun, S. O. Azi, J. S. Ojo, P. Akinyemi
Abstract:
The design of accurate and reliable mobile communication systems depends largely on the suitability of path loss prediction methods and their adaptability to the various environments of interest. In this research, results on the adaptability of radio channel behavior are presented based on practical measurements carried out in the 1800 MHz frequency band. The measurements were carried out in typical urban, suburban and rural environments in Ekiti State, in the southwestern part of Nigeria. A total of seven base stations of the MTN GSM service located in the studied environments were monitored. Path loss and break-point distances were deduced from the measured received signal strength (RSS), and a practical path loss model is proposed based on the deduced break-point distances. The proposed two-slope model, a regression line and four existing path loss models were compared with the measured path loss values. The standard deviation of each model with respect to the measured path loss was estimated for each base station. The proposed model and the regression line exhibited the lowest standard deviations, followed by the COST 231-Hata model, when compared with the Erceg, Ericsson and SUI models. Generally, the proposed two-slope model shows the closest agreement with the measured values, with mean error values of 2 to 6 dB. These results show that either the proposed two-slope model or the COST 231-Hata model may be used to predict path loss values for mobile micro-cell coverage in the considered environments. Information from this work will be useful for the link design of microwave-band wireless access systems in the region.
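A two-slope (break-point) model of the kind proposed above applies one path loss exponent up to the break-point distance and a second, steeper one beyond it, with the two segments joined at the break point. The Python sketch below illustrates the general form; all parameter values are illustrative, not the exponents fitted from the Ekiti State measurements.

```python
import math

def two_slope_path_loss(d, d0, pl0, n1, n2, d_bp):
    """Two-slope path loss in dB at distance d.

    pl0 is the loss at reference distance d0; the loss rises with exponent
    n1 up to the break point d_bp and with exponent n2 beyond it. The two
    segments are anchored at d_bp, so the curve is continuous there."""
    if d <= d_bp:
        return pl0 + 10 * n1 * math.log10(d / d0)
    pl_bp = pl0 + 10 * n1 * math.log10(d_bp / d0)
    return pl_bp + 10 * n2 * math.log10(d / d_bp)

# Illustrative parameters: d0 = 1 m, 40 dB reference loss,
# free-space-like exponent 2.0 before the break point, 4.0 after it.
for d in (50, 200, 500):
    print(d, "m ->", round(two_slope_path_loss(d, 1.0, 40.0, 2.0, 4.0, 200.0), 1), "dB")
```

Fitting n1, n2 and d_bp to the measured RSS (for example by piecewise least squares) is what yields the low standard deviations reported for the proposed model.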
Keywords: Break-point distances, path loss models, path loss exponent, received signal strength.
149 Traffic Congestion on Highways in Nigeria: Causes, Effects and Remedies
Authors: Popoola M. O., Abiola S. O., Adeniji W. A.
Abstract:
This study investigates the causes, effects and remedies of traffic congestion, which has become a common sight on most highways in Nigeria; the Mowe/Ibafo section of the Lagos-Ibadan expressway was used as the case study. 300 structured questionnaires were distributed among road users comprising drivers (private and commercial), passengers, pedestrians, traffic officers, church congregations, community leaders, Mowe/Ibafo residents, and other users of the road.
Of the 300 questionnaires given out, 276 well-completed returned questionnaires formed the basis of the study and were analyzed using the Relative Importance Index (R.I.I.). The results showed the causes of traffic congestion to be inadequate road capacity, poor road pavement, poor traffic management, poor drainage systems, poor driving habits, poor parking habits, poorly designed junctions/roundabouts, the presence of heavy trucks, lack of pedestrian facilities, lack of road furniture, lack of parking facilities, and others. Effects of road congestion identified by the study are wasted time, delayed movement, stress, accidents, inability to forecast travel time, fuel consumption, road rage, relocation, night driving, and environmental pollution. To drastically reduce these negative effects, there must be provision of adequate parking space, construction of proper drainage, widening of the road, rehabilitation of all roads needing attention, public enlightenment, traffic education, demolition of all illegal buildings/shops built on the right of way (ROW), creation of a separate/alternative route for trucks and heavy vehicles, provision of pedestrian facilities, in-depth training of transport/traffic personnel, a ban on all forms of road trading/hawking, and a reduction in the number of bus stops where necessary. It is hoped that this study will become the foundation of further research in the area of improved road traffic management on our major highways.
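The Relative Importance Index used to rank the causes and effects above is commonly computed as RII = ΣW / (A × N), where W are the Likert weights assigned by respondents, A is the highest possible weight, and N is the number of respondents. A minimal Python sketch follows; the ratings shown are hypothetical, not the study's data:

```python
def relative_importance_index(responses, max_weight=5):
    """RII = sum(W) / (A * N): values near 1 mark the most important items."""
    return sum(responses) / (max_weight * len(responses))

# Hypothetical 5-point Likert ratings from ten respondents for one cause:
ratings = [5, 4, 5, 3, 4, 5, 2, 4, 5, 3]
print(round(relative_importance_index(ratings), 2))
```

Computing the RII for each listed cause, effect and remedy and sorting in descending order yields the ranking the study reports.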
Keywords: Highways, Congestion, Traffic, Traffic congestion, traffic management, Nigeria.
148 Comparison of Data Reduction Algorithms for Image-Based Point Cloud Derived Digital Terrain Models
Authors: M. Uysal, M. Yilmaz, I. Tiryakioğlu
Abstract:
A Digital Terrain Model (DTM) is a digital numerical representation of the Earth's surface. DTMs have been applied to a diverse range of tasks, such as urban planning, military applications, glacier mapping, and disaster management. To express the Earth's surface as a mathematical model, an infinite number of point measurements would be needed. Because this is impossible, points at regular intervals are measured to characterize the Earth's surface, and a DTM of the Earth is generated. Hitherto, classical measurement techniques and photogrammetry have been in widespread use for the construction of DTMs; at present, RADAR, LiDAR, and stereo satellite images are also used. In recent years, especially because of its advantages, Airborne Light Detection and Ranging (LiDAR) has seen increased use in DTM applications, with a 3D point cloud created by obtaining numerous point data. More recently, with developments in image mapping methods, the use of unmanned aerial vehicles (UAVs) for photogrammetric data acquisition has increased DTM generation from image-based point clouds. The accuracy of a DTM depends on various factors such as the data collection method, the distribution of elevation points, the point density, the properties of the surface and the interpolation methods. In this study, the random data reduction method is evaluated for DTMs generated from image-based point cloud data. The original image-based point cloud data set (100%) is reduced to a series of subsets by a random algorithm, representing 75, 50, 25 and 5% of the original data set. Over the ANS campus of Afyon Kocatepe University as the test area, the DTM constructed from the original image-based point cloud data set is compared with DTMs interpolated from the reduced data sets by the Kriging interpolation method.
The results show that the random data reduction method can be used to reduce image-based point cloud datasets to the 50% density level while still maintaining the quality of the DTM.
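The random reduction step described above amounts to sampling a fixed fraction of the point cloud without replacement before re-interpolating the DTM. A minimal Python sketch of that step follows (the synthetic planar cloud is an assumption for illustration; a real run would load the UAV-derived points):

```python
import random

def reduce_point_cloud(points, fraction, seed=0):
    """Randomly keep `fraction` of the points, sampled without replacement,
    mirroring the 75/50/25/5% subsets compared in the study."""
    rng = random.Random(seed)  # fixed seed for a reproducible subset
    k = max(1, int(len(points) * fraction))
    return rng.sample(points, k)

# Synthetic (x, y, z) cloud of 1000 points on a tilted plane:
cloud = [(x, y, 0.1 * (x + y)) for x in range(100) for y in range(10)]
for frac in (0.75, 0.50, 0.25, 0.05):
    print(f"{frac:.0%} subset: {len(reduce_point_cloud(cloud, frac))} points")
```

Each subset would then be interpolated (e.g. by Kriging) and the resulting surface differenced against the full-density DTM to assess quality loss.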
Keywords: DTM, unmanned aerial vehicle, UAV, random, Kriging.
147 A Case Study on Vocational Teachers’ Perceptions on Their Linguistically and Culturally Responsive Teaching
Authors: Kirsi Korkealehto
Abstract:
In Finland, the transformation from a homogeneous culture into a multicultural one, driven by heavy immigration, has been rapid in recent decades. As multilingualism and multiculturalism are growing features of our society, teachers at all educational levels need to be competent in encounters with students from diverse cultural backgrounds. Consequently, the number of multicultural and multilingual vocational school students has also increased, which has not been sufficiently taken into account in teacher education. To bridge this gap between teachers' competences and the requirements of the contemporary school world, the Finnish Ministry of Culture and Education established the DivEd project. The aim of the project is to prepare all teachers to work in the linguistically and culturally diverse world they live in, to develop and increase culturally sustaining and linguistically responsive pedagogy in Finland, to raise awareness among teacher educators working with preservice teachers, and to raise awareness of and provide specific strategies to in-service teachers. The partners in the nationwide project are six universities and two universities of applied sciences. In this research, the linguistically and culturally sustainable teaching practices developed within the DivEd project are tested in practice. This research aims to explore vocational teachers' perceptions of these multilingual and multicultural educational practices. The participants of this study are vocational teachers from different fields. The data were collected through individual, face-to-face interviews and analyzed through content analysis. The findings indicate that the vocational teachers feel they lack knowledge of linguistically and culturally responsive pedagogy. Moreover, they regard themselves as, to some extent, incompetent in incorporating multilingually and multiculturally sustainable pedagogy into their everyday teaching.
Therefore, they feel they need more training pertaining to multicultural and multilingual knowledge, competences, and suitable pedagogical methods for teaching students from diverse linguistic and cultural backgrounds.
Keywords: Multicultural, multilingual, teacher competences, vocational school.
146 Modal Approach for Decoupling Damage Cost Dependencies in Building Stories
Authors: Haj Najafi Leila, Tehranizadeh Mohsen
Abstract:
Dependencies between the diverse factors involved in probabilistic seismic loss evaluation are recognized as a critical issue in obtaining accurate loss estimates. Dependencies among component damage costs can be taken into account by considering the two distinct limiting states of independent or perfectly dependent component damage states; however, to the best of our knowledge, no procedure is available to account for loss dependencies at the story level. This paper presents a method called the "modal cost superposition method" for decoupling story damage costs under earthquake ground motions. The method is formulated through closed-form differential equations relating damage cost to engineering demand parameters, which must be solved as a coupled system covering all stories' cost equations by means of the introduced "substituted matrices of mass and stiffness". Costs are treated as probabilistic variables with definite medians and standard deviations and a presumed probability distribution. To illustrate the proposed procedure and demonstrate the straightforwardness of its application, a benchmark study was conducted. Acceptable agreement was found between the damage costs estimated by the newly proposed modal approach and by the frequently used stochastic approach for the entire building; at the story level, however, a single modification factor proved insufficient to incorporate occurrence-probability dependencies between stories, owing to the discrepant degrees of dependency between the damage costs of different stories. A greater dependency contribution to the occurrence probability of loss could also be concluded from the closer agreement of loss results in the higher stories than in the lower ones, whereas truncating the set of cost modes to a limited number still provides an acceptable level of accuracy and avoids time-consuming calculations when many cost modes are present.
Keywords: Dependency, story-cost, cost modes, engineering demand parameter.
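The abstract's "substituted matrices of mass and stiffness" are specific to the proposed method and are not given here, but the modal machinery the method borrows is the classical generalized eigenproblem K·φ = λ·M·φ, whose eigenvectors diagonalize both matrices and thereby decouple the coupled story equations. A minimal sketch, with toy two-story values assumed purely for illustration:

```python
import numpy as np
from scipy.linalg import eigh

# Toy 2-story shear building: unit masses, unit interstory stiffness.
M = np.eye(2)
K = np.array([[2.0, -1.0],
              [-1.0, 1.0]])  # assembled story stiffness matrix

# Solve K @ phi = lam * M @ phi; scipy's eigh handles the generalized form.
lam, phi = eigh(K, M)

# The modal transformation decouples the coupled story equations:
# phi.T @ M @ phi is the identity and phi.T @ K @ phi is diagonal.
Mm = phi.T @ M @ phi
Km = phi.T @ K @ phi
print(np.round(lam, 4))  # squared modal frequencies, ascending
print(np.round(Km, 4))   # diagonal in modal coordinates
```

The same diagonalization idea, applied to the substituted cost matrices, is what lets the paper superpose independent "cost modes" instead of solving the fully coupled story-cost system.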
145 The Importance of Changing the Traditional Mode of Higher Education in Bangladesh: Creating Huge Job Opportunities for Home and Abroad
Authors: M. M. Shahidul Hassan, Omiya Hassan
Abstract:
Bangladesh has set its goal to reach upper-middle-income country status by 2024. To attain this status, the country must satisfy the World Bank requirement of achieving a minimum Gross National Income (GNI). The number of young job seekers in the country is increasing, and university graduates are looking for decent jobs. The vital issue for this country is therefore to understand how GNI and jobs can be increased. The objective of this paper is to address these issues and find ways to create more job opportunities for youths at home and abroad, which will increase the country's GNI. The paper studies the proportions of different goods Bangladesh exports, as well as the percentage of employment in different sectors. The data used here for the purpose of analysis have been collected from the available literature, then plotted and analyzed. Through these studies, it is concluded that growth in sectors such as agriculture, ready-made garments (RMG), jute, and fisheries is declining, and that the business community is not interested in setting up capital-intensive industries. Under this situation, the country needs to explore other business opportunities for a higher economic growth rate. Knowledge can substitute for physical resources. Since the country has a large youth population, higher education will play a key role in economic development. It now needs graduates with higher-order skills and innovative quality. Such dispositions demand changes in university curricula, teaching, and assessment methods that will turn the young generation into active learners and creators. By bringing these changes into higher education, a knowledge-based society can be created. The application of such knowledge and creativity will then become a commodity for Bangladesh, helping it reach its goal of upper-middle-income status.
Keywords: Bangladesh, economic sectors, economic growth, higher education, knowledge-based economy, massification of higher education, teaching and learning, universities’ role in society.
144 The Role of Knowledge Management in Innovation: Spanish Evidence
Authors: María Jesús Luengo-Valderrey, Mónica Moso-Díez
Abstract:
In the knowledge-based economy, innovation is considered essential for organizations to survive and grow. Knowledge management, in turn, is currently understood as one of the keys to the innovation process, and both factors are generally acknowledged as generators of competitive advantage in organizations. Specifically, R&D&I activities and those that generate internal knowledge have a positive influence on innovation results; this paper examines this effect and quantifies whether it is uniform across firms. We focus on the impact that the proportion of knowledge workers, R&D&I investment, and the amounts devoted to ICTs and innovation training have on the variation in tangible and intangible returns in the Spanish high- and medium-technology sector. To do this, we performed an empirical analysis of the results of questionnaires about innovation in Spanish enterprises, collected by the National Statistics Institute. First, using a cluster methodology, the behavior of these enterprises regarding knowledge management is identified. Then, using structural equation modeling (SEM), we studied, for each cluster, the cause-effect relationships among constructs defined through the variables, establishing their type and quantification. The cluster analysis yields four groups, of which clusters 1 and 3 present the best innovation performance, with differentiating nuances between them, while clusters 2 and 4 obtained divergent results from a similar innovative effort. However, the SEM results for each cluster show that, in all cases, knowledge workers are the factor that most affects innovation performance, regardless of the level of investment, and that there is a strong correlation between knowledge workers and investment in knowledge generation.
The main finding is that Spanish high- and medium-technology companies improve their innovation performance by investing in internal knowledge generation measures, especially R&D activities, while underinvesting in external ones. This, together with the strong correlation between knowledge workers and the set of activities that promote knowledge generation, should be taken into account by company managers when making investment decisions for innovation, since these factors are key to improving their opportunities in the global market.
Keywords: High and medium technology sector, innovation, knowledge management, Spanish companies.
143 Designing Creative Events with Deconstructivism Approach
Authors: Maryam Memarian, Mahmood Naghizadeh
Abstract:
Deconstruction is an approach entirely at odds with traditional, prevailing architecture: it sets architecture in sharp contrast with its opposite events, attends to the neglected and missing aspects of architecture, and deconstructs its stable structures. It also boldly proceeds beyond existing frameworks, intending to create a different and more effective prospect for space. The aim of deconstruction architecture is to satisfy both prospective and retrospective visions, as well as to take into account all tastes of the present, in order to transcend time. Likewise, it ventures to fragment the facts and symbols of the past and to extract from within them new concepts that fit today's circumstances. Since this approach is an attempt to surpass the limits of prevalent architecture, it can be employed to design places in which creative events occur and imagination and ambition flourish. Thought-provoking artistic events can grow and mature in such places and be represented in the best way possible to all people. The concept of event proposed in the plan grows out of the interaction between space and creation. In addition to triggering surprise and strong impressions, it is also conceived as a bold journey into the suspended realms of the traditional conflicts in architecture, such as architecture-landscape, interior-exterior, center-margin, product-process, and stability-instability. In this project, recognition and organization first take place through the interpretive-historical research method, examination of the inputs, and data collection; the obtained data are then evaluated using deductive reasoning and finally interpreted.
Given that the research topic is in its infancy, with no similar case in Iran and only a limited number of corresponding instances across the world, the selected topic helps shed light on unrevealed and neglected aspects of architecture. Similarly, criticizing, investigating, and comparing notable, highly prized cases in other countries with the project under study can serve as an introduction to this architectural style.
Keywords: Creativity, deconstruction, event.
142 Enhanced Disk-Based Databases Towards Improved Hybrid In-Memory Systems
Authors: Samuel Kaspi, Sitalakshmi Venkatraman
Abstract:
In-memory database systems are becoming popular due to the availability and affordability of sufficiently large RAM and processors in modern high-end servers, with the capacity to manage large in-memory database transactions. While fast and reliable in-memory systems are still being developed to overcome cache misses, CPU/IO bottlenecks, and distributed transaction costs, disk-based data stores still serve as the primary persistence layer. In addition, with the recent growth in multi-tenancy cloud applications and associated security concerns, many organisations weigh the trade-offs and continue to require the fast and reliable transaction processing of disk-based database systems as an available choice. For these organizations, the only way of increasing throughput is by improving the performance of disk-based concurrency control. This warrants a hybrid database system with the ability to selectively apply enhanced disk-based data management within the context of in-memory systems, which would help improve overall throughput. The general view is that in-memory systems substantially outperform disk-based systems. We question this assumption and examine how a modified variation of access invariance, which we call enhanced memory access (EMA), can be used to allow very high levels of concurrency in the prefetching of data in disk-based systems. We demonstrate how this prefetching in disk-based systems can yield close to in-memory performance, which paves the way for improved hybrid database systems. This paper proposes a novel EMA technique and presents a comparative study between disk-based EMA systems and in-memory systems running on hardware configurations of equivalent power in terms of the number of processors and their speeds. The results of the experiments conducted clearly substantiate that, when used in conjunction with all concurrency control mechanisms, EMA can increase the throughput of disk-based systems to levels quite close to those achieved by in-memory systems.
The promising results of this work show that enhanced disk-based systems can facilitate improved hybrid data management within the broader context of in-memory systems.
Keywords: Concurrency control, disk-based databases, in-memory systems, enhanced memory access (EMA).
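The abstract does not spell out how EMA's prefetching works; the following is a purely hypothetical sketch (the `PrefetchingStore` class and page naming are inventions for illustration) of the general idea that predicting a transaction's read set and fetching it ahead of execution lets a disk-based engine serve the transaction itself at memory speed:

```python
# Hypothetical illustration only: if a transaction's read set can be predicted
# before it runs, those pages can be fetched from disk ahead of time, so the
# transaction itself executes entirely out of memory.

DISK = {f"page{i}": f"data{i}" for i in range(100)}  # stand-in for the disk store

class PrefetchingStore:
    def __init__(self):
        self.cache = {}
        self.disk_reads_during_txn = 0

    def prefetch(self, predicted_keys):
        # Issued before the transaction starts; in a real engine these reads
        # would run concurrently and overlap with other work.
        for k in predicted_keys:
            self.cache[k] = DISK[k]

    def read(self, key):
        if key in self.cache:            # memory-speed path
            return self.cache[key]
        self.disk_reads_during_txn += 1  # slow path: synchronous disk I/O
        self.cache[key] = DISK[key]
        return self.cache[key]

store = PrefetchingStore()
store.prefetch(["page1", "page2", "page3"])        # predicted access set
txn_result = [store.read(k) for k in ("page1", "page2", "page3")]
print(store.disk_reads_during_txn)                 # no disk I/O inside the txn
```

The hard part, which the paper addresses and this sketch does not, is keeping such prefetching correct at very high levels of concurrency.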
141 Comparison of Detached Eddy Simulations with Turbulence Modeling
Authors: Muhammad Amjad Sohail, Prof. Yan Chao, Mukkarum Husain
Abstract:
The flow field around hypersonic vehicles is very complex and difficult to simulate. The boundary layers are squeezed between the shock layer and the body surface, and resolving the boundary layer, the shock wave, and the turbulent regions where flow-field gradients are high is difficult. Detached eddy simulation (DES) is a modification of a RANS model in which the model switches to a subgrid-scale formulation in regions fine enough for LES calculations. Regions near solid body boundaries, where the turbulent length scale is less than the maximum grid dimension, are assigned the RANS mode of solution; as the turbulent length scale exceeds the grid dimension, the regions are solved using the LES mode. The grid resolution is therefore not as demanding as for pure LES, considerably cutting down the cost of the computation. In this research study, hypersonic flow is simulated at Mach 8 and different angles of attack to resolve the boundary layers and discontinuities properly, and the flow is also simulated in the long wake regions. The mesh differs somewhat from that of RANS simulations: it is made dense near the boundary layers and in the wake regions to resolve them properly. Hypersonic blunt cone-cylinder bodies with frusta at angles of 5° and 10° are simulated, and an aerodynamic study is performed to calculate the aerodynamic characteristics of the different geometries. The results are then compared with experimental data as well as with a turbulence model (the SA model). The results achieved with the DES simulation show very good resolution and excellent agreement with experimental and other available data. Unsteady DES calculations are performed using the dual time stepping (implicit time stepping) method. The simulations are performed at Mach number 8 and angles of attack from 0° to 10° for all these cases.
The results and resolution obtained with the DES model were found to be much better than those of the SA turbulence model.
Keywords: Detached eddy simulation, dual time stepping, hypersonic flow, turbulence modeling.
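The RANS/LES switching rule summarized in the abstract corresponds, in the standard DES97 formulation for the Spalart-Allmaras model (not stated explicitly in the abstract), to replacing the wall distance $d$ with a hybrid length scale:

```latex
\tilde{d} = \min\!\left(d,\; C_{DES}\,\Delta\right), \qquad
\Delta = \max\!\left(\Delta x,\ \Delta y,\ \Delta z\right), \qquad
C_{DES} \approx 0.65,
```

so cells close to the wall ($d < C_{DES}\,\Delta$) run in RANS mode, while cells whose turbulent length scale exceeds the local grid dimension switch to the LES branch.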
140 An Anthropometric Index Capable of Differentiating Morbid Obesity from Obesity and Metabolic Syndrome in Children
Authors: Mustafa M. Donma
Abstract:
Circumference measurements may give meaningful information about the varying stages of obesity, and formulas may be derived from a number of body circumference measurements to estimate body fat. Waist (WC), hip (HC), and neck (NC) circumferences are currently the most frequently used measurements. The aim of this study was to develop a formula derived from these three anthropometric measurements for the differential diagnosis of morbid obesity with and without metabolic syndrome (MetS), MOMetS+ and MOMetS-, respectively. 187 children were recruited from the pediatrics outpatient clinic of Tekirdag Namik Kemal University, Faculty of Medicine. Signed informed consent forms were obtained from the participants. The study was carried out according to the Helsinki Declaration, and the study protocol was approved by the institutional non-interventional ethics committee of Tekirdag Namik Kemal University Medical Faculty. The study population was divided into four groups: normal body mass index (N-BMI) (n = 35), obese (OB) (n = 44), morbid obese (MO) (n = 75), and MetS (n = 33). Age- and gender-adjusted BMI percentile values were used for the classification of the groups, and the children in the MetS group were selected based on the MetS components described in the MetS criteria. Anthropometric measurements, laboratory analyses, and statistical evaluations were performed on the study population. BMI values were calculated, and a circumference index, the advanced Donma circumference index (ADCI), was defined as WC*HC/NC. Statistical significance was set at p < 0.05. BMI values were 17.7 ± 2.8, 24.5 ± 3.3, 28.8 ± 5.7, and 31.4 ± 8.0 kg/m² for the N-BMI, OB, MO, and MetS groups, respectively (p = 0.001). An increasing trend from N-BMI to MetS was observed; however, the increase in the MetS group compared to the MO group was not significant. For the new index, significant differences were obtained between the N-BMI group and the OB, MO, and MetS groups (p = 0.001).
A significant difference between the MO and MetS groups was also detected (p = 0.043), and a significant correlation was found between BMI and ADCI. In conclusion, despite the strong correlation between BMI and ADCI values when all groups were considered, it was ADCI, not BMI, that was capable of differentiating cases of morbid obesity alone from cases of morbid obesity with MetS.
Keywords: Anthropometry, body mass index, childhood obesity, body circumference, metabolic syndrome.
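Since the abstract defines the index explicitly as WC*HC/NC, computing it is a one-liner; the measurement values below are illustrative only, not data from the study:

```python
def adci(wc_cm, hc_cm, nc_cm):
    """Advanced Donma circumference index, defined in the abstract as WC*HC/NC."""
    return wc_cm * hc_cm / nc_cm

# Illustrative (non-study) values: waist 70 cm, hip 85 cm, neck 31 cm.
print(round(adci(70, 85, 31), 1))
```

Note the index has units of cm (cm·cm/cm), so cutoff values would be measurement-unit dependent.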
139 Snails and Fish as Pollution Biomarkers in Lake Manzala and Laboratory C: Laboratory Exposed Snails to Chemical Mixtures
Authors: Hanaa M. M. El-Khayat, Hoda Abdel-Hamid, Kadria M. A. Mahmoud, Hanan S. Gaber, Hoda M. A. Abu Taleb, Hassan E. Flefel
Abstract:
Snails are considered suitable diagnostic organisms for heavy metal-contaminated sites. Biomphalaria alexandrina snails are used in this work as pollution bioindicators after exposure to chemical mixtures consisting of heavy metals (HM): zinc (Zn), copper (Cu), and lead (Pb); and persistent organic pollutants: decabromodiphenyl ether 98% (D) and Aroclor 1254 (A). The impacts of these tested chemicals, individually and as mixtures, on liver and kidney functions, antioxidant enzymes, complete blood picture, and tissue histology were studied. Results showed that Cu was more toxic to the snails than Zn and Pb, with LC50 values of 1.362, 213.198, and 277.396 ppm, respectively. Also, B. alexandrina snails exposed to the HM mixture (¼ LC5 Cu, Pb, and Zn) showed the highest bioaccumulation of Cu and Zn in their whole tissue, the most significant increase in AST, ALT, and ALP activities, and the most significant elevations in total protein, albumin, and globulin levels. Results showed significant alterations in CAT activity in snail tissue extracts, while snails in most experimental exposures showed a significant increase in GST activity. Snails exposed to HM mixtures showed a significant decrease in total hemocyte count, while those exposed to mixtures containing A and D showed a significant increase in total hemocytes and hyalinocytes. Histopathological examination of snails exposed to individual HM and their mixtures for 4 weeks showed degeneration, edema, hypertrophy, and vacuolation in the head-foot muscle, degeneration and necrotic changes in the digestive gland, and accumulation in most tested organs. Also, the hermaphrodite gland showed mature ova with irregular shapes and a reduction in sperm number. In conclusion, the resulting damage and alterations in the studied B. alexandrina parameters can be used as bioindicators of the presence of pollutants in its habitats.
Keywords: Biomphalaria, Zn, Cu, Pb, AST, ALT, ALP, total protein, albumin, globulin, CAT, histopathology.
138 Effects of Supplementation with Annatto (Bixa orellana)-Derived δ-Tocotrienol on the Nicotine-Induced Reduction in Body Weight and 8-Cell Preimplantation Embryonic Development in Mice
Authors: M. H. Rajikin, S. M. M. Syairah, A. R. Sharaniza
Abstract:
The effects of nicotine on pre-partum body weight and preimplantation embryonic development have been reported previously. The present study was conducted to determine the effects of annatto (Bixa orellana)-derived delta-tocotrienol (δ-TCT) (with 10% gamma-TCT isomer present) on the nicotine-induced reduction in body weight and 8-cell embryonic growth in mice. Twenty-four 6-8 week old (23-25 g) female balb/c mice were randomly divided into four groups (G1-G4; n=6), which were subjected to the following treatments for 7 consecutive days: G1 (control) was gavaged with 0.1 ml tocopherol-stripped corn oil; G2 was subcutaneously (s.c.) injected with 3 mg/kg/day of nicotine; G3 received concurrent treatment with nicotine (3 mg/kg/day) and 60 mg/kg/day of the δ-TCT mixture (containing 90% delta and 10% gamma isomers); and G4 was given 60 mg/kg/day of the δ-TCT mixture alone. Body weights were recorded daily during the treatment. On Day 8, females were superovulated with 5 IU Pregnant Mare's Serum Gonadotropin (PMSG) for 48 hours followed by 5 IU human Chorionic Gonadotropin (hCG) before being mated with males at a 1:1 ratio. Females were sacrificed by cervical dislocation for embryo collection 48 hours post-coitum, and the collected embryos were cultured in vitro. Results showed that from Day 1 to Day 7, the body weight of the nicotine-treated group (G2) was significantly lower (p<0.05) than that of G1, G3, and G4. Intervention with the δ-TCT mixture (G3) raised body weight close to that of the control group, as was also observed in the group treated with the δ-TCT mixture alone (G4). The development of 8-cell embryos following in vitro culture (IVC) was totally inhibited in G2. Intervention with the δ-TCT mixture (G3) resulted in the production of 8-cell embryos, although not up to the level of the control group, while treatment with the δ-TCT mixture alone (G4) caused a significant increase in the average number of 8-cell embryos produced compared to G1.
The present data indicate that the δ-TCT mixture was able to reverse the body weight loss in nicotine-treated mice, and the development of 8-cell embryos was also improved. Further analysis of embryo quality is needed to confirm the effects of the δ-TCT mixture on preimplantation embryos.
Keywords: δ-tocotrienol, body weight, nicotine, preimplantation embryonic development.
137 Phelipanche ramosa (L. - Pomel) Control in Field Tomato Crop
Authors: Disciglio G., Lops F., Carlucci A., Gatta G., Tarantino A., Frabboni L., Carriero F., Cibelli F., Raimondo M. L., Tarantino E.
Abstract:
The tomato is a very important crop whose cultivation in the Mediterranean basin is severely affected by the phytoparasitic weed Phelipanche ramosa. The semiarid regions of the world are considered the main areas where this parasitic weed is established, causing heavy infestations, as it is able to produce large numbers of seeds (up to 500,000 per plant) that remain viable for extended periods (more than 20 years). This paper reports the results of eleven treatments for controlling this parasitic weed, including chemical, agronomic, biological, and biotechnological methods, compared with an untreated control under two plowing depths (30 and 50 cm). A split-plot design with three replicates was adopted. In 2014, a trial was performed in Foggia province (southern Italy) on processing tomato (cv Docet) grown in a field infested by Phelipanche ramosa. Tomato seedlings were transplanted on May 5 into a clay-loam soil. During the growing cycle of the tomato crop, at 56-78 and 92 days after transplantation, the number of parasitic shoots emerged in each plot was recorded. At tomato harvest, on August 18, the major quantitative and qualitative yield parameters were determined (marketable yield, mean fruit weight, dry matter, pH, soluble solids, and fruit color). All data were subjected to analysis of variance (ANOVA), and the means were compared by Tukey's test. No treatment studied provided complete control of Phelipanche ramosa. However, among the different methods tested, some of them, namely Fusarium, glyphosate, the radicon biostimulant, and the Red Setter tomato cv (an improved genotype obtained by TILLING technology), proved under deeper plowing (50 cm depth) to mitigate the virulence of the Phelipanche ramosa attacks. It is assumed that these effects can be improved by combining some of these treatments with one another, especially for a gradual and continuing reduction of the parasite's "seed bank" in the soil.
Keywords: Control methods, Phelipanche ramosa, tomato crop.
136 Ethically Integrating Robots in Elder Care
Authors: Suresh Lokiah, Samarth Suresh, Yashaswini Vismaya, Sudha Jamthe
Abstract:
The emerging trend of integrating robots into elder care, particularly for assisting patients with dementia, holds the potential to greatly transform the sector. Assisted living facilities, which house a significant number of elderly individuals and dementia patients, constantly strive to engage their residents in stimulating activities; however, due to staffing shortages, they often rely on volunteers to introduce new activities, and despite the availability of social interaction, residents are in desperate need of additional support. Robots designed for elder care are categorized based on their design and functionality into Companion Robots, Telepresence Robots, Health Monitoring Robots, and Rehab Robots. However, the integration of such robots raises significant ethical concerns, notably regarding privacy, autonomy, and the risk of dehumanization. Privacy issues arise when robots must continually monitor patient activities. There is also a risk of patients becoming overly dependent on these robots, potentially undermining their autonomy. Furthermore, the replacement of human touch with robotic interaction can lead to the dehumanization of care. This position paper delves into the ethical considerations of incorporating robotic assistance in elder care. It proposes a series of guidelines and strategies to ensure the ethical deployment of these robots: involving patients in the design and development process, emphasizing the critical need for human oversight to respect the dignity and rights of elderly and dementia patients, and implementing robust privacy measures, including secure data transmission and data anonymization. In conclusion, this paper offers a thorough examination of the ethical implications of using robotic assistance in elder care.
It provides a strategic roadmap to ensure this technology is utilized ethically, thereby maximizing its potential benefits and minimizing any potential harm.
Keywords: Robots for eldercare, ethics, human-robot interaction, assisted living.
135 Ethno-Botanical Diversity and Conservation Status of Medicinal Flora at High Terrains of Garhwal (Uttarakhand) Himalaya, India: A Case Study in Context to Multifarious Tourism Growth and Peri-Urban Encroachments
Authors: Aravind Kumar
Abstract:
The high terrains of the Garhwal (Uttarakhand) Himalaya are the niches of a number of rare and endemic plant species of great therapeutic importance. However, the wild flora of the area is under constant threat from a rapid upsurge in human interference, especially through multifarious tourism growth and peri-urban encroachment. After being granted 'Special State' status at its inception in the year 2000, this newly born state underwent very rapid infrastructural growth and development. Consequently, its townships started expanding in an unmanaged way, absorbing nearby agricultural lands and forest areas into peri-urban landscapes. Simultaneously, a boom in tourism and pilgrimage in the state, and the infrastructural facilities raised by the government for tourists and pilgrims, are destroying its biodiversity. A field survey revealed 242 plant species of therapeutic significance growing naturally in the area and utilized by local inhabitants as traditional medicines. On the conservation scale, 6 species (2.2%) were identified as critically endangered, 19 (7.1%) as endangered, 8 (3.0%) as rare, 17 (6.4%) as threatened, and 14 (5.2%) as vulnerable. The Government of India has brought the mega-biodiversity hot spots of the state under Biosphere Reserves, National Parks, etc., restricting all kinds of human interference; however, the two most sacred shrines of Hindus and Sikhs, viz. Shri Badrinath and Shri Hemkunt Sahib, and two great tourist attractions, viz. the Valley of Flowers and the Auli-Joshimath Skiing Track, oblige the government to maintain an equilibrium between the entry of visitors and biodiversity conservation in the high terrains of the Uttarakhand Himalaya.
Keywords: Biodiversity conservation, ethno-botany, Garhwal (Uttarakhand) Himalaya, peri-urban encroachment, pilgrimage and tourism.
134 Reliability Levels of Reinforced Concrete Bridges Obtained by Mixing Approaches
Authors: Adrián D. García-Soto, Alejandro Hernández-Martínez, Jesús G. Valdés-Vázquez, Reyna A. Vizguerra-Alvarez
Abstract:
Reinforced concrete bridges designed by code are intended to achieve target reliability levels adequate for the geographical environment where the code is applicable. Several methods can be used to estimate such reliability levels, and many of them require the establishment of an explicit limit state function (LSF). When such an LSF is not available as a closed-form expression, simulation techniques are often employed; however, simulation methods are computationally intensive and time-consuming. Note that if the reliability of real bridges designed by code is of interest, numerical schemes, the finite element method (FEM), or computational mechanics could be required. In these cases, it can be quite difficult (or impossible) to establish a closed form of the LSF, and simulation techniques may be necessary to compute reliability levels. To overcome the need for a large number of simulations when no explicit LSF is available, the point estimate method (PEM) can be considered as an alternative. It has the advantage that only the probabilistic moments of the random variables are required. However, in the PEM, the resulting moments of the LSF must be fitted to a probability density function (PDF). In the present study, a very simple alternative is employed which allows the assessment of reliability levels when no explicit LSF is available, without the need for extensive simulations. The alternative includes the use of the PEM, and its applicability is shown by assessing reliability levels of reinforced concrete bridges in Mexico when a numerical scheme is required; comparisons with results obtained using the Monte Carlo simulation (MCS) technique are included. To overcome the problem of fitting the probabilistic moments from the PEM to a PDF, a well-known distribution is employed. The approach mixes the PEM with another classic reliability method (the first-order reliability method, FORM).
The results in the present study are in good agreement with those computed with the MCS. Therefore, mixing the reliability methods is a very valuable option for determining reliability levels when no closed form of the LSF is available, or when numerical schemes, the FEM, or computational mechanics are employed.
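The PEM variant most often used in this setting is Rosenblueth's two-point estimate method: the response function is evaluated at the 2^K combinations of mean ± standard deviation of the K random variables, and the weighted results give the first two moments of the LSF, from which a first-order reliability index follows. The abstract does not give the authors' exact formulation, so the sketch below is a minimal illustration under the usual assumptions (uncorrelated, symmetrically distributed variables, normal fit of the LSF moments), with a hypothetical resistance-minus-load limit state:

```python
from itertools import product
from math import erf, sqrt

def rosenblueth_pem(g, means, stds):
    """Two-point estimate method (Rosenblueth) for uncorrelated,
    symmetric random variables: evaluate g at the 2^K combinations
    of mu_i +/- sigma_i, each with weight 1/2^K, and return the
    mean and standard deviation of g."""
    n = len(means)
    m1 = m2 = 0.0
    w = 1.0 / 2 ** n
    for signs in product((-1.0, 1.0), repeat=n):
        x = [m + s * sd for m, s, sd in zip(means, signs, stds)]
        val = g(x)
        m1 += w * val          # first moment
        m2 += w * val ** 2     # second raw moment
    return m1, sqrt(max(m2 - m1 ** 2, 0.0))

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Hypothetical limit state: resistance R minus load effect S;
# g could equally be a black-box FEM response with no closed form.
g = lambda x: x[0] - x[1]
mu_g, sigma_g = rosenblueth_pem(g, means=[30.0, 20.0], stds=[3.0, 4.0])
beta = mu_g / sigma_g   # first-order reliability index
pf = phi(-beta)         # failure probability under the normal fit
```

Because `g` is only called at the 2^K sample points, the same routine works when each evaluation is an expensive numerical model run, which is exactly the situation the abstract targets.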
Keywords: Structural reliability, reinforced concrete bridges, mixing approaches, point estimate method, Monte Carlo simulation.
133 Price Prediction Line, Investment Signals and Limit Conditions Applied for the German Financial Market
Authors: Cristian Păuna
Abstract:
In the first decades of the 21st century, in the electronic trading environment, algorithmic capital investments became the primary tool for making a profit by speculation in financial markets. A significant number of traders, private and institutional investors, participate in the capital markets every day using automated algorithms. Autonomous trading software is today a considerable part of the business intelligence system of any modern financial activity. Trading decisions and orders are made automatically by computers using different mathematical models. This paper presents one of these models, called the Price Prediction Line. A mathematical algorithm is revealed to build a reliable trend line, which is the base for limit conditions and automated investment signals, the core of a computerized investment system. The paper shows how to apply these tools to generate entry and exit investment signals, how to build limit conditions as a mathematical filter for investment opportunities, and how to integrate all of these into automated investment software. The paper also presents trading results obtained for the leading German financial market index with the presented methods, in order to analyze and compare different automated investment algorithms. It was found that a specific mathematical algorithm can be optimized and integrated into an automated trading system with good and sustained results for the leading German market. Investment results are compared in order to qualify the presented model. In conclusion, a risk-to-reward ratio of 1:6.12 was obtained by applying the trigonometric method to the DAX Deutscher Aktienindex over a 24-month investment period. These results are superior to those obtained with other similar models, as this paper reveals.
The general idea sustained by this paper is that the Price Prediction Line model presented is a reliable capital investment methodology that can be successfully applied to build an automated investment system with excellent results.
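The abstract does not disclose the Price Prediction Line formula itself, so the following is only a minimal stand-in sketch of the general pattern it describes: fit a trend line to recent prices, then emit entry/exit signals when the latest price deviates from that line beyond a limit condition. The least-squares line, the `band` threshold, and the signal rules are all illustrative assumptions, not the paper's method:

```python
def trend_line(prices):
    """Least-squares linear trend over a price series:
    returns (slope, intercept) so that line(t) = a*t + b."""
    n = len(prices)
    ts = range(n)
    mean_t = sum(ts) / n
    mean_p = sum(prices) / n
    cov = sum((t - mean_t) * (p - mean_p) for t, p in zip(ts, prices))
    var = sum((t - mean_t) ** 2 for t in ts)
    a = cov / var
    return a, mean_p - a * mean_t

def signal(prices, band=0.01):
    """Toy limit condition: 'buy' if the last price sits more than
    `band` (fractional) below an up-trending line, 'sell' if it sits
    above a down-trending line, otherwise 'hold'."""
    a, b = trend_line(prices)
    line = a * (len(prices) - 1) + b
    dev = (prices[-1] - line) / line
    if a > 0 and dev < -band:
        return "buy"
    if a < 0 and dev > band:
        return "sell"
    return "hold"

print(signal([100, 102, 104, 106, 100]))  # price dips below an up-trend
```

In a real system of the kind the paper describes, such signals would be generated on each new bar and passed through further filters before order execution.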
Keywords: Algorithmic trading, automated investment system, DAX Deutscher Aktienindex.
132 Understanding the Notion between Resiliency and Recovery through a Spatial-Temporal Analysis of Section 404 Wetland Alteration Permits before and after Hurricane Ike
Authors: Md Y. Reja, Samuel D. Brody, Wesley E. Highfield, Galen D. Newman
Abstract:
Historically, wetlands in the United States have been lost to agriculture, anthropogenic activities, and rapid urbanization along the coast. Such losses of wetlands have resulted in high flooding risk for coastal communities over time. In addition, alteration of wetlands via Section 404 Clean Water Act permits can increase the flooding risk from future hurricane events, as the cumulative impact of this program is poorly understood and under-accounted for. Further, recovery after hurricane events encourages new development and reconstruction activities that convert wetlands under the wetland alteration permitting program. This study investigates the degree to which hurricane recovery activities in coastal communities are undermining the ability of these places to absorb the impacts of future storm events. Specifically, this work explores how and to what extent wetlands were affected by the federal permitting program after Hurricane Ike in 2008. Wetland alteration patterns are examined across three counties (Harris, Galveston, and Chambers) along the Texas Gulf Coast over a 10-year period, 2004-2013 (five years before and after Hurricane Ike), by conducting descriptive spatial analyses. Results indicate that after Hurricane Ike, the number of permits substantially increased in Harris and Chambers County. The vast majority of individual and nationwide type permits were issued within the 100-year floodplain, storm surge zones, and areas damaged by Ike flooding, suggesting that recovery after the hurricane is compromising the ecological resiliency on which coastal communities depend. The authors expect that the findings of this study can raise awareness among policy makers and hazard mitigation planners of how to manage wetlands during a long-term recovery process so that they maintain their natural functions for future flood mitigation.
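The descriptive part of the analysis reduces to tabulating permits by county and by pre-/post-Ike period. A minimal sketch of that tabulation, using hypothetical permit records rather than the study's actual Section 404 data, might look like:

```python
from collections import Counter

# Hypothetical (county, issue-year) permit records; the study's actual
# data are Section 404 permit files for 2004-2013.
permits = [
    ("Harris", 2005), ("Harris", 2010), ("Galveston", 2006),
    ("Chambers", 2011), ("Harris", 2012), ("Chambers", 2012),
]

# Tag each permit as issued before or after Hurricane Ike (September 2008)
# and count permits per county in each period.
counts = Counter(
    (county, "pre-Ike" if year < 2008 else "post-Ike")
    for county, year in permits
)
```

The spatial side of the study (overlaying permits on the 100-year floodplain and surge zones) would additionally require GIS layers and a point-in-polygon test, which this count-only sketch omits.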
Keywords: Ecological resiliency, Hurricane Ike, recovery, Section 404 permitting, wetland alteration.
131 Acute and Chronic Effect of Biopesticide on Infestation of Whitefly Bemisia tabaci (Gennadius) on the Culantro Cultivation
Authors: U. Pangnakorn, S. Chuenchooklin
Abstract:
Acute and chronic effects of biopesticide from an entomopathogenic nematode (Steinernema thailandensis n. sp.), bacteria ISR (Pseudomonas fluorescens), wood vinegar, and fermented organic substances from plants (neem Azadirachta indica + citronella grass Cymbopogon nardus Rendle + bitter bush Chromolaena odorata L.) were tested on culantro (Eryngium foetidum L.). The biopesticide was investigated for reduction of infestation by the major insect pest, the whitefly (Bemisia tabaci (Gennadius)). The experimental plots were located at a farm in Nakhon Sawan Province, Thailand, and the study was undertaken during the drought season (late November to May). Effectiveness of each treatment was evaluated in terms of acute and chronic effects. For the acute effect, whitefly populations were observed and recorded with insect nets and yellow sticky traps every hour for up to 3 hours after the treatments were applied. The results showed that bacteria ISR had the highest effectiveness for controlling whitefly infestation on culantro: whitefly numbers on insect nets were 12.5, 10.0, and 7.5 after 1 hr, 2 hr, and 3 hr, respectively, while those on yellow sticky traps were 15.0, 10.0, and 10.0 after 1 hr, 2 hr, and 3 hr, respectively. For the chronic effect, whiteflies were continuously collected and recorded at weekly intervals; the bacteria ISR treatment gave average whitefly numbers of only 8.06 and 11.0 on insect nets and sticky traps, respectively, followed by the nematode treatment with averages of 9.87 and 11.43 on insect nets and sticky traps, respectively. In addition, minor insect pests were also observed and collected. The biopesticide reduced the numbers of minor insect pests (red spider mites, beet armyworm, short-horned grasshoppers, pygmy locusts, etc.), with only a few found in the culantro cultivation.
Keywords: Whitefly (Bemisia tabaci Gennadius), Culantro (Eryngium foetidum L.), Entomopathogenic nematode (Steinernema thailandensis n. sp.), Bacteria ISR (Pseudomonas fluorescens), wood vinegar, fermented organic substances.