Search results for: energy performance gap
10503 Computer Aided Diagnosis Bringing Changes in Breast Cancer Detection
Authors: Devadrita Dey Sarkar
Abstract:
Regardless of the many technologic advances in the past decade, increased training and experience, and the obvious benefits of uniform standards, the false-negative rate in screening mammography remains unacceptably high. A computer-aided neural network classification of regions of suspicion (ROS) on digitized mammograms is presented in this abstract, which employs features extracted by a new technique based on independent component analysis. CAD is a concept established by taking into account equally the roles of physicians and computers, whereas automated computer diagnosis is a concept based on computer algorithms only. With CAD, the performance of computers does not have to be comparable to or better than that of physicians, but needs to be complementary to it. In fact, a large number of CAD systems have been employed for assisting physicians in the early detection of breast cancers on mammograms. A CAD scheme that makes use of lateral breast images has the potential to improve the overall performance in the detection of breast lumps. Because breast lumps can be detected reliably by computer on lateral breast mammograms, radiologists’ accuracy in the detection of breast lumps would be improved by the use of CAD, and thus early diagnosis of breast cancer would become possible. In the future, many CAD schemes could be assembled as packages and implemented as a part of PACS. For example, the package for breast CAD may include the computerized detection of breast nodules, as well as the computerized classification of benign and malignant nodules. In order to assist in the differential diagnosis, it would be possible to search for and retrieve images (or lesions) with these CAD systems, which would be a reliable and useful method for quantifying the similarity of a pair of images for visual comparison by radiologists.
Keywords: CAD (computer-aided diagnosis), lesions, neural network, ROS (region of suspicion)
Procedia PDF Downloads 455
10502 Discriminant Analysis of Pacing Behavior on Mass Start Speed Skating
Abstract:
The mass start speed skating (MSSS) is a new event for the 2018 PyeongChang Winter Olympics and will be an official race for the 2022 Beijing Winter Olympics. Considering that the event rankings were based on points gained on laps, it is worthwhile to investigate the pacing behavior on each lap that directly influences the ranking of the race. The aim of this study was to detect the pacing behavior and performance in MSSS with regard to skaters’ level (SL), competition stage (semi-final/final) (CS) and gender (G). All the men’s and women’s races in the World Cup and World Championships in the 2018-2019 and 2019-2020 seasons were analyzed. As a result, a total of 601 skaters from 36 games were observed. ANOVA for repeated measures was applied to compare the pacing behavior on each lap, and three-way ANOVA for repeated measures was used to identify the influence of SL, CS, and G on pacing behavior and total time spent. In general, the results showed that the laps, ordered from fast to slow, grouped into cluster 1—laps 4, 8, 12, 15, 16; cluster 2—laps 5, 9, 13, 14; cluster 3—laps 3, 6, 7, 10, 11; and cluster 4—laps 1 and 2 (p=0.000). For CS, the total time spent in the final was less than in the semi-final (p=0.000). For SL, top-level skaters spent less total time than middle-level and low-level skaters (p≤0.002), while there was no significant difference between the middle-level and low-level (p=0.214). For G, the men’s skaters spent less total time than the women on all laps (p≤0.048). This study could help coaching staff better understand pacing behavior with regard to SL, CS, and G, providing a reference for improving pacing strategy and decision-making before and during the race.
Keywords: performance analysis, pacing strategy, winning strategy, winter Olympics
Procedia PDF Downloads 192
10501 Predictive Analytics in Oil and Gas Industry
Authors: Suchitra Chnadrashekhar
Abstract:
Once viewed as a support function within an organization, information technology has now become a critical utility for managing daily operations. Organizations are processing huge amounts of data that would have been unimaginable a few decades ago. This has opened the opportunity for the IT sector to help industries across domains handle data in the most intelligent manner. The presence of IT has given the oil and gas industry the leverage to store, manage and process data in the most efficient way possible, thus deriving economic value in day-to-day operations. Proper synchronization between the operational data system and the information technology system is the need of the hour. Predictive analytics supports oil and gas companies by addressing the challenges of critical equipment performance, life cycle, integrity and security, and by increasing equipment utilization. Predictive analytics goes beyond early warning by providing insights into the roots of problems. To reach their full potential, oil and gas companies need to take a holistic or systems approach towards asset optimization and thus have the functional information at all levels of the organization in order to make the right decisions. This paper discusses how the use of predictive analysis in the oil and gas industry is redefining the dynamics of this sector. The paper is also supported by real-time data and an evaluation of the data for a given oil production asset on an application tool, SAS. The reason for using SAS as the application for our analysis is that SAS provides an analytics-based framework to improve uptime, performance and availability of crucial assets while reducing the amount of unscheduled maintenance, thus minimizing maintenance-related costs and operational disruptions. With state-of-the-art analytics and reporting, we can predict maintenance problems before they happen and determine root causes in order to update processes for future prevention.
Keywords: hydrocarbon, information technology, SAS, predictive analytics
Procedia PDF Downloads 359
10500 Hybrid Model: An Integration of Machine Learning with Traditional Scorecards
Authors: Golnush Masghati-Amoli, Paul Chin
Abstract:
Over the past few years, with rapid increases in data availability and computing power, Machine Learning (ML) techniques have been called on in a range of different industries for their strong predictive capability. However, the use of Machine Learning in commercial banking has been limited due to a special challenge imposed by numerous regulations that require lenders to be able to explain their analytic models, not only to regulators but often to consumers. In other words, although Machine Learning techniques enable better prediction with a higher level of accuracy, in comparison with other industries they are adopted less frequently in commercial banking, especially for scoring purposes. This is due to the fact that Machine Learning techniques are often considered a black box and fail to provide information on why a certain risk score is given to a customer. In order to bridge this gap between the explainability and performance of Machine Learning techniques, a Hybrid Model is developed at Dun and Bradstreet that is focused on blending Machine Learning algorithms with traditional approaches such as scorecards. The Hybrid Model maximizes the efficiency of traditional scorecards by merging their practical benefits, such as explainability and the ability to input domain knowledge, with the deep insights of Machine Learning techniques, which can uncover patterns scorecard approaches cannot. First, through development of Machine Learning models, engineered features, latent variables and feature interactions that demonstrate high information value in the prediction of customer risk are identified. Then, these features are employed to introduce observed non-linear relationships between the explanatory and dependent variables into traditional scorecards. Moreover, instead of directly computing the Weight of Evidence (WoE) from good and bad data points, the Hybrid Model tries to match the score distribution generated by a Machine Learning algorithm, which ends up providing an estimate of the WoE for each bin. This capability helps to build powerful scorecards for sparse cases that cannot be achieved with traditional approaches. The proposed Hybrid Model is tested on different portfolios where a significant gap is observed between the performance of traditional scorecards and Machine Learning models. The result of the analysis shows that the Hybrid Model can improve the performance of traditional scorecards by introducing non-linear relationships between explanatory and target variables from Machine Learning models into traditional scorecards. It is also observed that in some scenarios the Hybrid Model can be almost as predictive as the Machine Learning techniques while being as transparent as traditional scorecards. Therefore, it is concluded that, with the use of the Hybrid Model, Machine Learning algorithms can be used in the commercial banking industry without concern over difficulties in explaining the models for regulatory purposes.
Keywords: machine learning algorithms, scorecard, commercial banking, consumer risk, feature engineering
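The WoE estimation step described above can be illustrated with a short sketch. The abstract does not give the exact distribution-matching procedure, so the sketch below simply treats the ML-predicted bad probabilities as expected good/bad counts per bin; the function name, binning scheme, and synthetic data are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
import pandas as pd

def woe_from_ml_scores(feature, p_bad, n_bins=10):
    """Estimate per-bin Weight of Evidence using ML-predicted bad probabilities
    as expected good/bad counts, instead of observed labels (a sketch)."""
    df = pd.DataFrame({"x": feature, "p_bad": p_bad})
    df["bin"] = pd.qcut(df["x"], q=n_bins, duplicates="drop")
    grouped = df.groupby("bin", observed=True)["p_bad"].agg(["sum", "count"])
    exp_bad = grouped["sum"]                      # expected bad count per bin
    exp_good = grouped["count"] - grouped["sum"]  # expected good count per bin
    dist_good = exp_good / exp_good.sum()
    dist_bad = exp_bad / exp_bad.sum()
    return np.log(dist_good / dist_bad)           # conventional WoE definition

# Synthetic example: a feature and stand-in ML risk scores
rng = np.random.default_rng(0)
x = rng.normal(size=5000)
p = 1 / (1 + np.exp(-(0.8 * x + 0.3 * x**2 - 1.5)))
print(woe_from_ml_scores(x, p))
```

The estimated WoE values can then be assigned to the scorecard bins in place of the empirically computed ones, which is what allows scorecards to be built for sparse segments.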
Procedia PDF Downloads 132
10499 Bracing Applications for Improving the Earthquake Performance of Reinforced Concrete Structures
Authors: Diyar Yousif Ali
Abstract:
Braced frames, besides other structural systems such as shear walls or moment-resisting frames, have been a valuable and effective technique for strengthening structures against seismic loads. Under wind or seismic excitation, diagonal members act as truss web elements that carry tension or compression stresses. This study considers the effect of the bracing diagonal configuration on the base shear and displacement of the building. Two models were created, and nonlinear pushover analysis was implemented. Results show that bracing members enhance the lateral load performance of the Concentric Braced Frame (CBF) considerably. The purpose of this article is to study the nonlinear response of reinforced concrete structures which contain hollow steel pipe braces as the major structural elements against earthquake loads. A five-storey reinforced concrete structure was selected in this study; two different reinforced concrete frames were considered. The first system was an un-braced frame, while the second was a braced frame with diagonal bracing. Analytical models of the bare frame and braced frame were built in SAP 2000. The performances of all structures were evaluated using nonlinear static analyses, from which the base shear and displacements were compared. Results are plotted in diagrams and discussed extensively; the analyses showed that the braced frame was capable of carrying more lateral load and had higher stiffness and lower roof displacement in comparison with the bare frame.
Keywords: reinforced concrete structures, pushover analysis, base shear, steel bracing
Procedia PDF Downloads 88
10498 The Analysis of Drill Bit Optimization by the Application of New Electric Impulse Technology in Shallow Water Absheron Peninsula
Authors: Ayshan Gurbanova
Abstract:
Despite the fact that the drill bit, which is the smallest part of the bottom hole assembly, costs only between 10% and 15% of the total expenses made, it is the first piece of equipment in contact with the formation itself. Hence, it is consequential to choose the appropriate type and dimension of drill bit, which will prevent the majority of problems by not demanding many tripping procedures. With the advance of technology, it is now possible to gain benefits in terms of many considerations such as operating time, energy, expenditure, power and so forth. With the intention of applying the method to Azerbaijan, the Shallow Water Absheron Peninsula field has been suggested, where the mainland is located 15 km away from the wildcat well, named “NKX01”, with a water depth of 22 m as indicated. In 2015 and 2016, 2D and 3D seismic survey analyses were conducted in the contract area as well as at onshore shallow-water locations. With the aim of clear elucidation, soil stability, possible subsea hazard scenarios, geohazard and bathymetry surveys were carried out as well. From the seismic analysis results, the exact locations of the exploration wells were determined and, along with this, the measurement decisions were made to divide the land into three productive zones. As for the method, Electric Impulse Technology (EIT) is based on the discharge of electrical energy within the rock. Put simply, a very high voltage is created within a range of nanoseconds and is sent to the rock through the electrodes. These electrodes, one high-voltage powered and one grounded, are placed on the formation, which may be submerged in liquid. With this design, it is easier to drill a horizontal well thanks to the advantage of loose contact with the formation. There is also little wear, as no combustion or mechanical power is involved. In the case of energy, conventional drilling consumes about 1000 J/cm3, whereas this value is between 100 and 200 J/cm3 for EIT. Last but not least, the test analysis yielded a rate of penetration (ROP) of more than 2 m/hr throughout 15 days. Taking everything into consideration, the comparative data analysis indicates that this method is highly applicable to the fields of Azerbaijan.
Keywords: drilling, drill bit cost, efficiency, cost
Procedia PDF Downloads 72
10497 Organizational Challenges Facing a Small Recruitment Agency: Case Study of a Firm Based in South India
Authors: Anirban Sengupta
Abstract:
The recruitment industry plays a critical role in connecting employers with talent. While there are many big recruitment firms, and big organizations can afford to have their own recruitment teams, small recruitment agencies form an essential part of the ecosystem, serving a vast majority of small and medium-sized clients. These clients utilize the services of recruitment agencies to be able to scale their operations. However, there are significant organizational challenges that a small recruitment agency faces in building a sustainable and growing business. This case study explores the organizational challenges faced by a small recruitment agency in South India in an increasingly competitive landscape. Through this paper, the authors hope to understand, analyze and share the challenges faced by this firm and to suggest a systematic approach to addressing them. The study uses both qualitative and quantitative data collected from the agency’s management and employees during 2024. The findings reveal that the agency struggles with limited resources, unpredictable clients, and a lack of scalable processes and systems, which impacts not only business outcomes but also key areas like employee performance management, compensation and benefits, and employee well-being. Based on these insights, the study proposes several strategies for overcoming these challenges, such as implementing scalable systems and processes. This research contributes to the understanding of the specific obstacles faced by small recruitment agencies in regional contexts and offers actionable recommendations for improving their organizational health, which may, in turn, positively impact their competitiveness.
Keywords: recruitment, organizational challenges, performance management, recruitment technology
Procedia PDF Downloads 6
10496 Physical and Physiological Characteristics of Young Soccer Players in Republic of Macedonia
Authors: Sanja Manchevska, Vaska Antevska, Lidija Todorovska, Beti Dejanova, Sunchica Petrovska, Ivanka Karagjozova, Elizabeta Sivevska, Jasmina Pluncevic Gligoroska
Abstract:
Introduction: A number of positive effects on a player’s physical status, including the body mass components, are attributed to the training process. As young soccer players grow up, qualitative and quantitative changes appear and contribute to better performance. A player’s anthropometric and physiological characteristics are recognized as important determinants of performance. Material: A sample of 52 soccer players with an age span from 9 to 14 years was divided into two groups differentiated by age. The younger group consisted of 25 boys under 11 years (mean age 10.2), and the second group consisted of 27 boys with a mean age of 12.64 years. Method: The set of basic anthropometric parameters was analyzed: height, weight, BMI (Body Mass Index) and body mass components. Maximal oxygen uptake was tested using the treadmill protocol by Brus. Results: The group aged under 11 years showed the following anthropometric and physiological features: average height = 143.39 cm, average weight = 44.27 kg, BMI = 18.77, Err = 5.04, Hb = 13.78 g/l, VO2 = 37.72 ml O2/kg. For the participants aged 12 to 14 years, the average values of the analyzed parameters were as follows: height = 163.7 cm, weight = 56.3 kg, BMI = 19.6, VO2 = 39.52 ml/kg, Err = 5.01, Hb = 14.3 g/l. Conclusion: Physiological parameters (maximal oxygen uptake, erythrocytes and Hb) were insignificantly higher in the older group compared to the younger group. There were no statistically significant differences in the analyzed anthropometric parameters between the two groups except for the basic measurements (height and weight).
Keywords: body composition, young soccer players, BMI, physical status
Procedia PDF Downloads 399
10495 Improving the Safety Performance of Workers by Assessing the Impact of Safety Culture on Workers’ Safety Behaviour in Nigeria Oil and Gas Industry: A Pilot Study in the Niger Delta Region
Authors: Efua Ehiaguina, Haruna Moda
Abstract:
Interest in the development of an appropriate safety culture in the oil and gas industry has taken centre stage among stakeholders in the industry. Human behaviour has been identified as a major contributor to occupational accidents, where abnormal activities associated with safety management are taken as normal behaviour. Poor safety culture is one of the major factors that influence employees’ safety behaviour at work, which may consequently result in injuries and accidents; strengthening such a culture can improve workers’ safety performance. The Nigerian oil and gas industry has contributed to the growth and development of the country in diverse ways. However, in terms of the safety and health of workers, this industry is a dangerous place to work, as workers are often exposed to occupational safety and health hazards. To ascertain the state of employees’ safety culture and how it impacts health and safety compliance within the local industry, an online safety culture survey targeting frontline workers within the industry was administered, covering major subjects that include perception of management commitment and style of leadership; safety communication methods and their resultant impact on employees’ behaviour; and employee safety commitment and training needs. The preliminary results revealed that 54% of the participants feel that there is a lack of motivation from the management to work safely. In addition, 55% of participants revealed that employers place more emphasis on work delivery than on employees’ safety on the installation. It is expected that the study outcome will provide measures aimed at strengthening and sustaining safety culture in the Nigerian oil and gas industry.
Keywords: oil and gas safety, safety behaviour, safety culture, safety compliance
Procedia PDF Downloads 141
10494 The Feasibility of Anaerobic Digestion at 45°C
Authors: Nuruol S. Mohd, Safia Ahmed, Rumana Riffat, Baoqiang Li
Abstract:
Anaerobic digestion at mesophilic and thermophilic temperatures has been widely studied and evaluated by numerous researchers. Limited research has been conducted on anaerobic digestion in the intermediate zone of 45°C, mainly due to the notion that limited microbial activity occurs within this zone. The objectives of this research were to evaluate the performance and capability of anaerobic digestion at 45°C in producing Class A biosolids, in comparison to mesophilic and thermophilic anaerobic digestion systems operated at 35°C and 55°C, respectively. In addition, possible inhibition factors affecting the performance of the digestion system at this temperature were investigated. The 45°C anaerobic digestion systems were not able to achieve methane yield and effluent quality comparable to the mesophilic system, even though the systems produced biogas with about 62-67% methane. The 45°C digesters suffered from high acetate accumulation, but sufficient buffering capacity was observed, as the pH, alkalinity and volatile fatty acids (VFA)-to-alkalinity ratio were within recommended values. The accumulation of acetate observed in the 45°C systems was presumably due to the high temperature, which contributed to a high hydrolysis rate. Consequently, a large amount of toxic salts was produced that combined with the substrate, making it not readily available to be consumed by methanogens. Acetate accumulation, even though it contributed to a 52 to 71% reduction in the acetate degradation process, could not be considered completely inhibitory. Additionally, at 45°C, no ammonia inhibition was observed, and the digesters were able to achieve a volatile solids (VS) reduction of 47.94±4.17%. The pathogen counts were less than 1,000 MPN/g total solids, thus producing Class A biosolids.
Keywords: 45°C anaerobic digestion, acetate accumulation, class A biosolids, salt toxicity
Procedia PDF Downloads 303
10493 A Numerical Study on Semi-Active Control of a Bridge Deck under Seismic Excitation
Authors: A. Yanik, U. Aldemir
Abstract:
This study investigates the benefits of implementing semi-active devices, relative to passive viscous damping, in the context of seismically isolated bridge structures. Since the intrinsically nonlinear nature of semi-active devices prevents the direct evaluation of Laplace transforms, frequency response functions are compiled from the computed time-history response to sinusoidal and pulse-like seismic excitation. A simple semi-active control policy is considered alongside passive linear viscous damping and an optimal non-causal semi-active control strategy. The control strategy requires optimization, and Euler-Lagrange equations are solved numerically during this procedure. The optimal closed-loop performance is evaluated for an idealized controllable dashpot. A simplified single-degree-of-freedom model of an isolated bridge is used as the numerical example. Two bridge cases are investigated: a bridge deck without the isolation bearing and a bridge deck with the isolation bearing. To compare the performances of the passive and semi-active control cases, frequency-dependent acceleration, velocity and displacement response transmissibility ratios Ta(ω), Tv(ω), and Td(ω) are defined. To fully investigate the behavior of the structure subjected to the sinusoidal and pulse-type excitations, different damping levels are considered. Numerical results showed that, under the effect of external excitation, the bridge deck with semi-active control showed better structural performance than the passive bridge deck case.
Keywords: bridge structures, passive control, seismic, semi-active control, viscous damping
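For reference, a textbook-style form of the transmissibility ratios for a linear single-degree-of-freedom system under harmonic base excitation is sketched below; the authors' exact definitions for the isolated bridge model may differ.

\[
T_d(\omega) = \frac{|X(\omega)|}{|X_g(\omega)|}, \qquad
T_v(\omega) = \frac{|\dot{X}(\omega)|}{|\dot{X}_g(\omega)|}, \qquad
T_a(\omega) = \frac{|\ddot{X}(\omega)|}{|\ddot{X}_g(\omega)|},
\]
and, for a passive linear system with damping ratio \(\zeta\) and frequency ratio \(r = \omega/\omega_n\), the classical displacement transmissibility is
\[
T_d(\omega) = \sqrt{\frac{1+(2\zeta r)^2}{(1-r^2)^2+(2\zeta r)^2}} .
\]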
Procedia PDF Downloads 239
10492 The Nexus between Manpower Training and Corporate Compliance
Authors: Timothy Wale Olaosebikan
Abstract:
The most active resource in any organization is its manpower. Every other resource remains inactive unless there is competent manpower to handle it. Manpower training is needed to enhance productivity and the overall performance of organizations. This is due to the recognition of the important role of manpower training in the attainment of organizational goals. Corporate compliance conjures visions of an incomprehensible matrix of laws and regulations that defy logic and control by even the most seasoned manpower training professionals. Similarly, corporate compliance can be viewed as one of the most significant problems faced in the manpower training process of any organization and therefore commands relevant attention and comprehension. Consequently, this study investigated the nexus between manpower training and corporate compliance. Data for the study were collected through the use of a questionnaire with a sample size of 265 drawn by stratified random sampling. The data were analyzed using descriptive and inferential statistics. The findings of the study show that about 75% of the respondents agree that there is a strong relationship between manpower training and corporate compliance, which brings out the organizational attainment from any training process. The findings further show that most organisations do not totally comply with the rules guiding the manpower training process, thereby making the process less effective on organizational performance, which may affect overall profitability. The study concludes that the formulation of, and compliance with, adequate rules and guidelines for manpower training will produce effective results for both employees and the organization at large. The study recommends that leaders of organizations, industries and institutions must ensure total compliance on the part of both the employees and the organization with manpower training rules. Organizations and stakeholders should also ensure that strict policies on corporate compliance with manpower training form the heart of their cardinal mission.
Keywords: corporate compliance, manpower training, nexus, rules and guidelines
Procedia PDF Downloads 141
10491 Design and Implementation of Low-code Model-building Methods
Authors: Zhilin Wang, Zhihao Zheng, Linxin Liu
Abstract:
This study proposes a low-code model-building approach that aims to simplify the development and deployment of artificial intelligence (AI) models. With an intuitive drag-and-drop interface for connecting components, users can easily build complex models and integrate multiple algorithms for training. After training is completed, the system automatically generates a callable model service API. This method not only lowers the technical threshold of AI development and improves development efficiency, but also enhances the flexibility of algorithm integration and simplifies the model deployment process. The core strength of this method lies in its ease of use and efficiency. Users do not need to have a deep programming background and can complete the design and implementation of complex models with a simple drag-and-drop operation. This feature greatly expands the reach of AI technology, allowing more non-technical people to participate in the development of AI models. At the same time, the method performs well in algorithm integration, supporting many different types of algorithms working together, which further improves the performance and applicability of the model. In the experimental part, we performed several performance tests on the method. The results show that, compared with traditional model construction methods, this method makes more efficient use of computing resources and greatly shortens model training time. In addition, the system-generated model service interface has been optimized for high availability and scalability and can adapt to the needs of different application scenarios.
Keywords: low-code, model building, artificial intelligence, algorithm integration, model deployment
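The abstract does not specify the shape of the generated model service API, so the snippet below is a purely illustrative sketch of how such an auto-generated prediction endpoint might be called; the URL, route, and payload fields are hypothetical.

```python
import requests

# Hypothetical endpoint and payload: the generated API's actual route and schema
# are not described in the abstract, so the names below are illustrative only.
API_URL = "http://localhost:8080/models/churn-classifier/v1/predict"
payload = {"instances": [{"age": 42, "tenure_months": 18, "monthly_spend": 57.9}]}

resp = requests.post(API_URL, json=payload, timeout=10)
resp.raise_for_status()
print(resp.json())  # e.g. {"predictions": [{"label": 1, "score": 0.83}]}
```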
Procedia PDF Downloads 28
10490 Structural Characterization and Hot Deformation Behaviour of Al3Ni2/Al3Ni in-situ Core-shell intermetallic in Al-4Cu-Ni Composite
Authors: Ganesh V., Asit Kumar Khanra
Abstract:
An in-situ powder metallurgy technique was employed to create Ni-Al3Ni/Al3Ni2 core-shell-shaped aluminum-based intermetallic reinforced composites. The impact of Ni addition on the phase composition, microstructure, and mechanical characteristics of Al-4Cu-xNi (x = 0, 2, 4, 6, 8, 10 wt.%) in relation to various sintering temperatures was investigated. Microstructure evolution was extensively examined using X-ray diffraction (XRD), scanning electron microscopy with energy-dispersive X-ray spectroscopy (SEM-EDX), and transmission electron microscopy (TEM) techniques. Under the initial sintering conditions, the formation of "Single Core-Shell" structures was observed, consisting of a Ni core with an Al3Ni2 intermetallic shell, whereas samples sintered at 620°C exhibited both "Single Core-Shell" and "Double Core-Shell" structures containing Al3Ni2 and Al3Ni intermetallics formed between the Al matrix and the Ni reinforcements. The composite achieved a high compressive yield strength of 198.13 MPa and an ultimate strength of 410.68 MPa, with 24% total elongation for the sample containing 10 wt.% Ni. Additionally, there was a substantial increase in hardness, reaching 124.21 HV, which is 2.4 times higher than that of the base aluminum. Nanoindentation studies showed hardness values of 1.54, 4.65, 21.01, 13.16, 5.52, 6.27, and 8.39 GPa corresponding to the α-Al matrix, Ni, Al3Ni2, the Ni and Al3Ni2 interface, Al3Ni, and their respective interfaces. Even at 200°C, the composite retained 54% of its room-temperature strength (90.51 MPa). To investigate the deformation behavior of the composite material, experiments were conducted at deformation temperatures ranging from 300°C to 500°C, with strain rates varying from 0.0001 s-1 to 0.1 s-1. A sine-hyperbolic constitutive equation was developed to characterize the flow stress of the composite, which exhibited a hot deformation activation energy of 231.44 kJ/mol, significantly higher than that for self-diffusion of pure aluminum. The formation of Al2Cu intermetallics at grain boundaries and Al3Ni2/Al3Ni within the matrix hindered dislocation movement, leading to an increase in activation energy, which might have an adverse effect on high-temperature applications. Two models, the strain-compensated Arrhenius model and an Artificial Neural Network (ANN) model, were developed to predict the composite's flow behavior. The ANN model outperformed the strain-compensated Arrhenius model with a lower average absolute relative error of 2.266%, a smaller root mean square error of 1.2488 MPa, and a higher correlation coefficient of 0.9997. Processing maps revealed that the optimal hot working conditions for the composite were in the temperature range of 420-500°C and strain rates between 0.0001 s-1 and 0.001 s-1. The changes in the composite microstructure were successfully correlated with the theory of processing maps, considering temperature and strain rate conditions. The uneven distribution in the shape and size of the core-shell/Al3Ni intermetallic compounds influenced the flow stress curves, leading to dynamic recrystallization (DRX), followed by partial dynamic recovery (DRV), and ultimately strain hardening. This composite material shows promise for applications in the automobile and aerospace industries.
Keywords: core-shell structure, hot deformation, intermetallic compounds, powder metallurgy
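The sine-hyperbolic constitutive equation referred to above is usually written in the Sellars-Tegart (Zener-Hollomon) form shown below; the fitted constants for this composite, other than the reported activation energy, are not given in the abstract.

\[
\dot{\varepsilon} = A\left[\sinh(\alpha\sigma)\right]^{n}\exp\!\left(-\frac{Q}{RT}\right),
\qquad
Z = \dot{\varepsilon}\exp\!\left(\frac{Q}{RT}\right) = A\left[\sinh(\alpha\sigma)\right]^{n},
\]
where \(\dot{\varepsilon}\) is the strain rate, \(\sigma\) the flow stress, \(T\) the absolute temperature, \(R\) the gas constant, \(Q\) the activation energy (231.44 kJ/mol reported here), and \(A\), \(\alpha\), \(n\) are material constants; in the strain-compensated variant these constants are fitted as polynomial functions of strain.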
Procedia PDF Downloads 17
10489 Improving Vocabulary and Listening Comprehension via Watching French Films without Subtitles: Positive Results
Authors: Yelena Mazour-Matusevich, Jean-Robert Ancheta
Abstract:
This study is based on more than fifteen years of experience of teaching a foreign language, in my case French, to English-speaking students. It represents qualitative research on foreign language learners’ reactions and their gains in terms of vocabulary and listening comprehension through repeatedly viewing foreign feature films with the original soundtrack but without English subtitles. The initial idea emerged upon the realization that the first challenge faced by my students when they find themselves in a francophone environment has been their lack of listening comprehension. Their inability to understand colloquial speech affects not only their academic performance, but their psychological health as well. To remedy this problem, I have designed and applied for many years my own teaching method based on one particular French film, exceptionally suited, for the reasons described in detail in the paper, to intermediate-advanced level foreign language learners. This project, conducted together with my undergraduate assistant and mentee J-R Ancheta, aims at showing how paralinguistic features, such as characters’ facial expressions, settings, music, historical background, and images provided before the actual viewing, offer crucial support and enhance students’ listening comprehension. The study, based on students’ interviews, also offers special pedagogical techniques, such as ‘anticipatory’ vocabulary lists and exercises, drills, quizzes and composition topics, that have proven to boost students’ performance. For this study, only the listening proficiency and vocabulary gains of the interviewed participants were assessed.
Keywords: comprehension, film, listening, subtitles, vocabulary
Procedia PDF Downloads 623
10488 Response of Local Cowpea to Intra Row Spacing and Weeding Regimes in Yobe State, Nigeria
Authors: A. G. Gashua, T. T. Bello, I. Alhassan, K. K. Gwiokura
Abstract:
Weeds are known to interfere seriously with crop growth, thereby affecting the productivity and quality of crops. Crops are also known to compete for natural growth resources if they are not adequately spaced, which also affects the performance of the growing crop. Farmers grow cowpea in mixtures with cereals, and this is known to affect its yield. For this reason, a field experiment was conducted at the Yobe State College of Agriculture Gujba, Damaturu station, in the 2014 and 2015 rainy seasons to determine the appropriate intra-row spacing and weeding regime for optimum growth and yield of cowpea (Vigna unguiculata L.) in pure stand in the Sudan Savanna ecology. The treatments consisted of three levels of spacing within rows (20 cm, 30 cm and 40 cm) and four weeding regimes (none; once at 3 weeks after sowing (WAS); twice at 3 and 6 WAS; thrice at 3 WAS, 6 WAS and 9 WAS), arranged in a Randomized Complete Block Design (RCBD) and replicated three times. The variety used was the local cowpea variety (white, early and spreading) commonly grown by farmers. The growth and yield data were collected and subjected to analysis of variance using SAS software, and the significant means were ranked by the Student-Newman-Keuls (SNK) test. The findings of this study revealed better crop performance in 2015 than in 2014 despite poor soil conditions. Intra-row spacing significantly influenced vegetative growth, especially the number of main branches, leaves and canopy spread at 6 WAS and 9 WAS, with the highest values obtained at the wider spacing (40 cm). The values obtained in 2015 doubled those obtained in 2014 in most cases. Spacing also significantly affected the number of pods in 2015, seed weight in both years and grain yield in 2014, with the highest values obtained when the crop was spaced at 30-40 cm. Similarly, weeding regime significantly influenced almost all the growth attributes of cowpea, with higher values obtained where cowpea was weeded three times at 3-week intervals, though statistically similar results were obtained even where cowpea was weeded twice. Weeding also affected the overall yield and yield components in 2015, with the highest values obtained with increased weeding. Based on these findings, it is recommended that spreading cowpea varieties be grown at 40 cm (or wider) spacing within rows and weeded twice at three-week intervals for better crop performance in related ecologies.
Keywords: intra-row spacing, local cowpea, Nigeria, weeding
Procedia PDF Downloads 217
10487 An Experimental Study on Greywater Reuse for Irrigating a Green Wall System
Authors: Mishadi Herath, Amin Talei, Andreas Hermawan, Clarina Chua
Abstract:
Green walls are vegetated structures on a building’s walls that are considered part of sustainable urban design. They have been shown to provide many micro-climate benefits, such as a reduction in indoor temperature, noise attenuation, and improvement in air quality. On the other hand, several studies have also been conducted on the potential reuse of greywater in urban water management. Greywater is relatively clean when compared to blackwater; therefore, this study aimed to assess its potential reuse for irrigating a green wall system. In this study, the campus of Monash University Malaysia, located in Selangor state, was considered as the study site, where a total of 48 samples of greywater were collected from 7 toilet hand-wash basins and 5 pantries during a 3-month period. The samples were tested to characterize the quality of greywater at the study site and compare it with the local standard for irrigation water. The pH and concentrations of heavy metals, nutrients, Total Suspended Solids (TSS), Biochemical Oxygen Demand (BOD), Chemical Oxygen Demand (COD), total coliform and E. coli were measured. Results showed that greywater could be directly used for irrigation with minimal treatment. Since the effluent of the system was to be drained to the stormwater drainage system, the effluent needed to meet certain quality requirements. Therefore, a biofiltration system was proposed to host the green wall plants and also treat the greywater (used as irrigation water) to the required level. To assess the performance of the proposed system, an experimental setup consisting of Polyvinyl Chloride (PVC) soil columns with sand-based filter media was prepared. Two different local creeper plants were chosen considering several factors, including fast growth, low maintenance requirements, and aesthetic aspects. Three replicates of each plant were used to ensure the validity of the findings. The growth of the creeping plants and their survivability were monitored for 6 months, while monthly sampling and testing of the effluent were conducted to evaluate effluent quality. An analysis was also conducted to estimate the potential cost and benefit of such a system, considering the water and energy savings in the system. Results showed that the proposed system can work efficiently over a long period of time with minimal maintenance requirements. Moreover, the biofiltration-green wall system was found to be successful in reusing greywater as irrigation water, while the effluent met all the requirements for being drained to the stormwater drainage system.
Keywords: biofiltration, green wall, greywater, sustainability
Procedia PDF Downloads 213
10486 Single Pass Design of Genetic Circuits Using Absolute Binding Free Energy Measurements and Dimensionless Analysis
Authors: Iman Farasat, Howard M. Salis
Abstract:
Engineered genetic circuits reprogram cellular behavior to act as living computers with applications in detecting cancer, creating self-controlling artificial tissues, and dynamically regulating metabolic pathways. Phenomenological models are often used to simulate and design genetic circuit behavior towards a desired behavior. While such models assume that each circuit component’s function is modular and independent, even small changes in a circuit (e.g. a new promoter, a change in transcription factor expression level, or even a new media) can have significant effects on the circuit’s function. Here, we use statistical thermodynamics to account for the several factors that control transcriptional regulation in bacteria, and experimentally demonstrate the model’s accuracy across 825 measurements in several genetic contexts and hosts. We then employ our first-principles model to design, experimentally construct, and characterize a family of signal-amplifying genetic circuits (genetic OpAmps) that expand the dynamic range of cell sensors. To develop these models, we needed a new approach to measuring the in vivo binding free energies of transcription factors (TFs), a key ingredient of statistical thermodynamic models of gene regulation. We developed a new high-throughput assay to measure RNA polymerase and TF binding free energies, requiring the construction and characterization of only a few constructs and data analysis (Figure 1A). We experimentally verified the assay on 6 TetR-homolog repressors and a CRISPR/dCas9 guide RNA. We found that our binding free energy measurements quantitatively explain why changing TF expression levels alters circuit function. Altogether, by combining these measurements with our biophysical model of translation (the RBS Calculator) as well as other measurements (Figure 1B), our model can account for changes in TF binding sites, TF expression levels, circuit copy number, host genome size, and host growth rate (Figure 1C). Model predictions correctly accounted for how these 8 factors control a promoter’s transcription rate (Figure 1D). Using the model, we developed a design framework for engineering multi-promoter genetic circuits that greatly reduces the number of degrees of freedom (8 factors per promoter) to a single dimensionless unit. We propose the Ptashne (Pt) number to encapsulate the 8 co-dependent factors that control transcriptional regulation into a single number. Therefore, a single number controls a promoter’s output rather than these 8 co-dependent factors, and designing a genetic circuit with N promoters requires specification of only N Pt numbers. We demonstrate how to design genetic circuits in Pt number space by constructing and characterizing 15 two-repressor OpAmp circuits that act as signal amplifiers when within an optimal Pt region. We experimentally show that OpAmp circuits using different TFs and TF expression levels will only amplify the dynamic range of input signals when their corresponding Pt numbers are within the optimal region. Thus, the use of the Pt number greatly simplifies genetic circuit design, which is particularly important as circuits employ more TFs to perform increasingly complex functions.
Keywords: transcription factor, synthetic biology, genetic circuit, biophysical model, binding energy measurement
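As background to the statistical-thermodynamic framework mentioned above, a textbook-style expression for promoter occupancy in the presence of a single repressor is sketched below; the authors' full model incorporates additional factors (copy number, genome size, growth rate), and the definition of the Pt number itself is not given in the abstract.

\[
p_{\mathrm{bound}} =
\frac{\frac{P}{N_{NS}}\, e^{-\beta\,\Delta\varepsilon_{pd}}}
     {1 + \frac{P}{N_{NS}}\, e^{-\beta\,\Delta\varepsilon_{pd}}
        + \frac{R}{N_{NS}}\, e^{-\beta\,\Delta\varepsilon_{rd}}},
\qquad \beta = \frac{1}{k_B T},
\]
where \(P\) and \(R\) are the numbers of RNA polymerases and repressors, \(N_{NS}\) the number of non-specific genomic binding sites, and \(\Delta\varepsilon_{pd}\), \(\Delta\varepsilon_{rd}\) the polymerase and repressor binding free energies; the transcription rate is then taken to be proportional to \(p_{\mathrm{bound}}\).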
Procedia PDF Downloads 472
10485 Effect of Temperature on the Properties of Cement Paste Modified with Nanoparticles
Authors: Karine Pimenta Teixeira, Jessica Flores, Isadora PerdigãO Rocha, Leticia De Sá Carneiro, Mahsa Kamali, Ali Ghahremaninezhad
Abstract:
The advent of nanotechnology has enabled innovative solutions towards improving the behavior of infrastructure materials. Nanomaterials have the potential to revolutionize the construction industry by improving the performance and durability of construction materials, as well as imparting new functionalities to these materials. Due to variability in the environmental temperature during mixing and curing of cementitious materials in practice, it is important to understand how curing temperature influences the behavior of cementitious materials. In addition, high-temperature curing is relevant in applications such as oil well cement and the precast industry. Knowledge of the influence of temperature on the performance of cementitious materials modified with nanoparticles is important for the nanoengineering of cementitious materials in applications such as oil well cement and the precast industry. This presentation aims to investigate the influence of temperature on the hydration, mechanical properties and durability of cementitious materials modified with TiO2 nanoparticles. It was found that elevated temperature improved early hydration. The cement pastes cured at high temperatures showed an increase in compressive strength at early age, but the strength gain decreased at late ages. The electrical resistivity of the cement pastes cured at high temperatures was shown to decrease more noticeably at late ages compared to that of the room-temperature-cured cement paste. SEM examination indicated that the hydration product was more uniformly distributed in the microstructure of the cement paste cured at room temperature compared to the cement pastes cured at high temperature.
Keywords: cement paste, nanoparticles, temperature, hydration
Procedia PDF Downloads 315
10484 Feasibility of Applying a Hydrodynamic Cavitation Generator as a Method for Intensification of Methane Fermentation Process of Virginia Fanpetals (Sida hermaphrodita) Biomass
Authors: Marcin Zieliński, Marcin Dębowski, Mirosław Krzemieniewski
Abstract:
The anaerobic degradation of substrates is limited especially by the rate and effectiveness of the first (hydrolytic) stage of fermentation. This stage may be intensified through pre-treatment of the substrate aimed at disintegration of the solid phase and destruction of substrate tissues and cells. The most frequently applied criterion for evaluating disintegration outcomes is the increase in biogas recovery, owing to the possibility of its use for energy purposes and, simultaneously, recovery of the input energy consumed for pre-treatment of the substrate before fermentation. Hydrodynamic cavitation is one of the methods for organic substrate disintegration that has a high implementation potential. Cavitation is the phenomenon of the formation of discontinuity cavities filled with vapor or gas in a liquid, induced by a pressure drop to a critical value. It is induced by a varying pressure field: a void occurs in the flow where the pressure first drops to a value close to the saturated vapor pressure and then increases. The process of cavitation conducted under controlled conditions was found to significantly improve the effectiveness of anaerobic conversion of organic substrates having various characteristics. This phenomenon allows effective damage and disintegration of cellular and tissue structures. Disintegration of structures and release of organic compounds to the dissolved phase has a direct effect on the intensification of biogas production in the process of anaerobic fermentation, on reduced dry matter content in the post-fermentation sludge, as well as on a high degree of its hygienization and its increased susceptibility to dehydration. A device whose efficiency has been confirmed both in laboratory conditions and in systems operating at the technical scale is the hydrodynamic cavitation generator. Cavitators, agitators and emulsifiers constructed and tested worldwide so far have been characterized by low efficiency and high energy demand. Many of them proved effective under laboratory conditions but failed under industrial ones. The only task successfully realized by these appliances and utilized on a wider scale is the heating of liquids; for this reason, their usability was limited to the function of heating installations. The design of the presented cavitation generator allows satisfactory energy efficiency to be achieved and enables its use under industrial conditions in depolymerization processes of biomass with various characteristics. Investigations conducted on the laboratory and industrial scale confirmed the effectiveness of applying cavitation in the process of biomass destruction. The use of the cavitation generator in laboratory studies for the disintegration of sewage sludge allowed biogas production to be increased by ca. 30% and the treatment process to be shortened by ca. 20-25%. The shortening of the technological process and the increase in wastewater treatment plant effectiveness may delay investments aimed at increasing system output. The use of a mechanical cavitator and application of a repeated cavitation process (4-6 times) enables significant acceleration of the biogas production process. In addition, mechanical cavitation accelerates increases in COD and VFA levels.
Keywords: hydrodynamic cavitation, pretreatment, biomass, methane fermentation, Virginia fanpetals
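The onset condition described above is commonly quantified with the dimensionless cavitation number; this standard definition is added here for reference and is not stated explicitly in the abstract.

\[
\sigma_c = \frac{p - p_v}{\tfrac{1}{2}\rho v^2},
\]
where \(p\) is the local static pressure, \(p_v\) the saturated vapor pressure of the liquid, \(\rho\) the liquid density, and \(v\) the characteristic flow velocity; hydrodynamic cavitation is expected when \(\sigma_c\) falls below a device-dependent critical value, typically of order 1.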
Procedia PDF Downloads 432
10483 Exercise Training for Management Hypertensive Patients: A Systematic Review and Meta-Analysis
Authors: Noor F. Ilias, Mazlifah Omar, Hashbullah Ismail
Abstract:
Exercise training has been shown to improve functional capacity and is recommended as a therapy for the management of blood pressure. Our purpose was to establish whether different exercise capacities produce different effect sizes for cardiorespiratory fitness (CRF) and systolic (SBP) and diastolic (DBP) blood pressure in patients with hypertension. An appropriate exercise characteristic is required in order to obtain optimal benefit from training, but the optimal exercise capacity has not yet been established. A MEDLINE search (1985 to 2015) was conducted for exercise-based rehabilitation trials in hypertensive patients. Thirty-seven studies met the selection criteria. Of these, 31 (83.7%) involved aerobic exercise and 6 (16.3%) aerobic with additional resistance exercise, providing a total of 1318 exercise subjects and 819 controls; the total number of subjects was 2137. We calculated exercise volume and energy expenditure from the description of exercise characteristics. Four studies (18.2%) were 451-900 kcal, 12 (54.5%) were 900-1350 kcal and 6 (27.3%) were >1351 kcal per week. Peak oxygen consumption (peak VO2) increased by a mean difference of 1.44 ml/kg/min (95% confidence interval [CI]: 1.08 to 1.79 ml/kg/min; p = 0.00001) with a weighted mean of 21.2% for aerobic exercise, compared to 4.50 ml/kg/min (95% CI: 3.57 to 5.42 ml/kg/min; p = 0.00001) with a weighted mean of 14.5% for aerobic with additional resistance exercise. SBP was clinically reduced for both aerobic and aerobic-with-resistance training, by a mean difference of -4.66 mmHg (95% CI: -5.68 to -3.63 mmHg; p = 0.00001), a weighted mean 6% reduction, and -5.06 mmHg (95% CI: -7.32 to -2.8 mmHg; p = 0.0001), a weighted mean 5% reduction, respectively. DBP was clinically reduced for aerobic exercise by a mean difference of -1.62 mmHg (95% CI: -2.09 to -1.15 mmHg; p = 0.00001), a weighted mean 4% reduction, and for aerobic with resistance training by a mean difference of -3.26 mmHg (95% CI: -4.87 to -1.65 mmHg; p = 0.0001), a weighted mean 6% reduction. The optimum exercise capacity of 451-900 kcal showed the greatest improvement in peak VO2 and SBP, by 2.76 ml/kg/min (95% CI: 1.47 to 4.05 ml/kg/min; p = 0.0001) with a weighted mean of 40.6% and -16.66 mmHg (95% CI: -21.72 to -11.60 mmHg; p = 0.00001) with a weighted mean 9.8% reduction, respectively. Our data demonstrated that aerobic exercise with a total volume of 451-900 kcal/week energy expenditure may elicit greater changes in cardiorespiratory fitness and blood pressure in hypertensive patients. A higher weekly exercise capacity does not seem to produce better results in the management of hypertensive patients.
Keywords: blood pressure, exercise, hypertension, peak VO2
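For readers unfamiliar with how pooled mean differences such as those above are obtained, the sketch below shows a fixed-effect, inverse-variance pooling of study-level mean differences; the study values in it are placeholders, not the trial data from this review.

```python
import math

# Illustrative fixed-effect, inverse-variance pooling of mean differences.
# The study means/CIs below are placeholders, not data from this meta-analysis.
studies = [
    # (mean difference in mmHg, 95% CI lower, 95% CI upper)
    (-4.2, -6.1, -2.3),
    (-5.5, -8.0, -3.0),
    (-3.9, -6.5, -1.3),
]

weights, weighted_md = [], []
for md, lo, hi in studies:
    se = (hi - lo) / (2 * 1.96)      # standard error recovered from the 95% CI
    w = 1.0 / se**2                  # inverse-variance weight
    weights.append(w)
    weighted_md.append(w * md)

pooled = sum(weighted_md) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))
ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
print(f"Pooled MD = {pooled:.2f} mmHg, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```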
Procedia PDF Downloads 281
10482 An Object-Based Image Resizing Approach
Authors: Chin-Chen Chang, I-Ta Lee, Tsung-Ta Ke, Wen-Kai Tai
Abstract:
Common methods for resizing images include scaling and cropping. However, these two approaches have some quality problems for reduced images. In this paper, we propose an image resizing algorithm that separates the main objects from the background. First, we extract two feature maps, namely an enhanced visual saliency map and an improved gradient map, from an input image. After that, we integrate these two feature maps into an importance map. Finally, we generate the target image using the importance map. The proposed approach can obtain the desired results for a wide range of images.
Keywords: energy map, visual saliency, gradient map, seam carving
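A minimal sketch of the general pipeline is given below. The paper's "enhanced visual saliency map" and "improved gradient map" are not specified in the abstract, so a precomputed saliency input and a plain gradient magnitude stand in, and the resizing step is illustrated with a single classic seam-carving removal; the weights and function names are illustrative assumptions.

```python
import numpy as np

def importance_map(gray, saliency, w_saliency=0.5, w_gradient=0.5):
    """Blend a (precomputed) saliency map with a gradient-magnitude map.
    Weights and min-max normalization are illustrative choices."""
    gy, gx = np.gradient(gray.astype(float))
    grad = np.hypot(gx, gy)
    grad = (grad - grad.min()) / (np.ptp(grad) + 1e-8)
    sal = (saliency - saliency.min()) / (np.ptp(saliency) + 1e-8)
    return w_saliency * sal + w_gradient * grad

def remove_vertical_seam(image, imp):
    """Remove one minimum-importance vertical seam (classic seam-carving step)."""
    h, w = imp.shape
    cost = imp.copy()
    for i in range(1, h):                      # dynamic-programming cost map
        for j in range(w):
            lo, hi = max(j - 1, 0), min(j + 2, w)
            cost[i, j] += cost[i - 1, lo:hi].min()
    seam = np.zeros(h, dtype=int)              # backtrack from the bottom row
    seam[-1] = int(np.argmin(cost[-1]))
    for i in range(h - 2, -1, -1):
        j = seam[i + 1]
        lo, hi = max(j - 1, 0), min(j + 2, w)
        seam[i] = lo + int(np.argmin(cost[i, lo:hi]))
    mask = np.ones((h, w), dtype=bool)
    mask[np.arange(h), seam] = False
    return image[mask].reshape(h, w - 1)

rng = np.random.default_rng(1)
img = rng.random((60, 80))                     # toy grayscale "image"
imp = importance_map(img, saliency=np.ones_like(img))
print(img.shape, "->", remove_vertical_seam(img, imp).shape)  # (60, 80) -> (60, 79)
```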
Procedia PDF Downloads 475
10481 Evaluation of Pile Performance in Different Layers of Soil
Authors: Orod Zarrin, Mohesn Ramezan Shirazi, Hassan Moniri
Abstract:
The pile foundation technique has been developed to support structures and buildings on soft soil. The most important dynamic load that can affect a pile structure is earthquake vibration. Observations of pile foundations during earthquake excitation indicate that piles are subject to damage that affects the integrity and serviceability of the superstructure. During an earthquake, two types of stresses can damage the pile head: inertial load caused by the superstructure, and deformation caused by the surrounding soil. Soil deformation and inertial load are associated with the acceleration developed in an earthquake. The acceleration amplitude at the ground surface depends on the earthquake magnitude, soil properties and distance from the seismic source. According to the investigation, the damage occurs at the boundaries between liquefiable and non-liquefiable layers and also between soft and stiff layers. This damage crushes the pile head by increasing the inertial load applied by the superstructure. On the other hand, the cracks in the piles due to the surrounding soil are directly related to the soil profile and range from small to large. The reasons for the large cracks include liquefaction, lateral spreading and inertial load. In design, the elastic response of piles in liquefiable soil is always a challenge for the designer, achieved by allowing deflection at the top of the piles. Moreover, the absence of plastic hinges in piles should be ensured, because damage in the piles cannot be observed directly. In this study, the performance and behavior of pile foundations during liquefaction and lateral spreading are investigated. In addition, emphasis is placed on soil behavior in the liquefiable and non-liquefiable layers with respect to different aspects of pile damage, such as the ranking, location and degree of damage.
Keywords: pile, earthquake, liquefaction, non-liquefiable, damage
Procedia PDF Downloads 300
10480 Performance of the Aptima® HIV-1 Quant Dx Assay on the Panther System
Authors: Siobhan O’Shea, Sangeetha Vijaysri Nair, Hee Cheol Kim, Charles Thomas Nugent, Cheuk Yan William Tong, Sam Douthwaite, Andrew Worlock
Abstract:
The Aptima® HIV-1 Quant Dx Assay is a fully automated assay on the Panther system. It is based on transcription-mediated amplification and real-time detection technologies. The assay is intended for monitoring HIV-1 viral load in plasma specimens and for the detection of HIV-1 in plasma and serum specimens. Nine hundred and seventy-nine specimens selected at random from routine testing at St Thomas’ Hospital, London were anonymised and used to compare the performance of the Aptima HIV-1 Quant Dx assay and the Roche COBAS® AmpliPrep/COBAS® TaqMan® HIV-1 Test, v2.0. Two hundred and thirty-four specimens gave quantitative HIV-1 viral load results in both assays. The quantitative results reported by the Aptima assay were comparable to those reported by the Roche COBAS AmpliPrep/COBAS TaqMan HIV-1 Test, v2.0, with a linear regression slope of 1.04 and an intercept of -0.097. The Aptima assay detected HIV-1 in more samples than the Roche assay. This was not due to a lack of specificity of the Aptima assay, because this assay gave 99.83% specificity on testing plasma specimens from 600 HIV-1-negative individuals. To understand the reason for this higher detection rate, a side-by-side comparison of low-level panels made from the HIV-1 3rd International Standard (NIBSC 10/152) and clinical samples of various subtypes was performed in both assays. The Aptima assay was more sensitive than the Roche assay. The good sensitivity, specificity and agreement with other commercial assays make the HIV-1 Quant Dx Assay appropriate for both viral load monitoring and detection of HIV-1 infections.
Keywords: HIV viral load, Aptima, Roche, Panther system
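Expressed as an equation, and assuming the regression was performed on log10-transformed viral loads (the usual convention for such assay comparisons, though the scale is not stated above), the reported slope and intercept correspond to:

\[
\log_{10}\!\left(\mathrm{VL}_{\mathrm{Aptima}}\right) \approx 1.04\,\log_{10}\!\left(\mathrm{VL}_{\mathrm{Roche}}\right) - 0.097 .
\]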
Procedia PDF Downloads 372
10479 Performance Analysis of VoIP Coders for Different Modulations Under Pervasive Environment
Authors: Jasbinder Singh, Harjit Pal Singh, S. A. Khan
Abstract:
This paper presents a comparison of speech signals encoded by different VoIP narrow-band and wide-band codecs under different modulation schemes. The simulation results indicate that the codec has an impact on speech quality, which is also affected by the modulation scheme.
Keywords: VoIP, coders, modulations, BER, MOS
Procedia PDF Downloads 514
10478 Cyclic Stress and Masing Behaviour of Modified 9Cr-1Mo at RT and 300 °C
Authors: Preeti Verma, P. Chellapandi, N.C. Santhi Srinivas, Vakil Singh
Abstract:
Modified 9Cr-1Mo steel is widely used for structural components such as heat exchangers, pressure vessels and steam generators in nuclear reactors. It is also a candidate material for future metallic-fuel sodium-cooled fast breeder reactors because of its high thermal conductivity, lower thermal expansion coefficient, microstructural stability, high resistance to irradiation void swelling and higher resistance to stress corrosion cracking in water-steam systems compared to austenitic stainless steels. The components of steam generators that operate at elevated temperatures are often subjected to repeated thermal stresses as a result of temperature gradients which occur on heating and cooling during start-ups and shutdowns or during variations in the operating conditions of a reactor. These transient thermal stresses give rise to low cycle fatigue (LCF) damage. In the present investigation, strain-controlled low cycle fatigue tests were conducted at room temperature and 300 °C in the normalized and tempered condition using total strain amplitudes in the range from ±0.25% to ±0.5% at a strain rate of 10^-2 s^-1. The cyclic stress response at high strain amplitudes (±0.31% to ±0.5%) showed initial softening followed by hardening up to a few cycles and subsequent softening until failure. The extent of softening increased with increase in strain amplitude and temperature. Depending on the strain amplitude of the test, the stress-strain hysteresis loops displayed Masing behaviour at higher strain amplitudes and non-Masing behaviour at lower strain amplitudes at both temperatures. This is quite the opposite of the usual Masing and non-Masing behaviour reported earlier for different materials. Low cycle fatigue damage was evaluated in terms of the plastic strain and plastic strain energy approaches at room temperature and 300 °C. The plastic strain energy approach was found to match the experimental fatigue lives more closely, particularly at 300 °C, where dynamic strain aging was observed.
Keywords: modified 9Cr-1Mo steel, low cycle fatigue, Masing behavior, cyclic softening
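For reference, the two life-prediction approaches mentioned above are commonly written in the Coffin-Manson and Morrow-type power-law forms below; the fitted constants for this steel are not given in the abstract.

\[
\frac{\Delta\varepsilon_p}{2} = \varepsilon_f'\,(2N_f)^{c},
\qquad
\Delta W_p = W_f'\,(2N_f)^{\beta},
\]
where \(\Delta\varepsilon_p/2\) is the plastic strain amplitude, \(\Delta W_p\) the plastic strain energy dissipated per cycle (the hysteresis loop area), \(2N_f\) the number of reversals to failure, and \(\varepsilon_f'\), \(c\), \(W_f'\), \(\beta\) are fitted fatigue constants.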
Procedia PDF Downloads 443
10477 Modelling Optimal Control of Diabetes in the Workplace
Authors: Eunice Christabel Chukwu
Abstract:
Introduction: Diabetes is a chronic medical condition characterized by high levels of glucose in the blood and urine; it is usually diagnosed by means of a glucose tolerance test (GTT). Diabetes can cause a range of health problems if left unmanaged, as it can lead to serious complications. It is essential to manage the condition effectively, particularly in the workplace, where the impact on work productivity can be significant. This paper discusses the modelling of optimal control of diabetes in the workplace using a control theory approach. Background: Diabetes mellitus is a condition caused by too much glucose in the blood. Insulin, a hormone produced by the pancreas, controls the blood sugar level by regulating the production and storage of glucose. In diabetes, there may be a decrease in the body’s ability to respond to insulin or a decrease in the insulin produced by the pancreas, which leads to abnormalities in the metabolism of carbohydrates, proteins, and fats. In addition to the health implications, the condition can also have a significant impact on work productivity, as employees with uncontrolled diabetes are at risk of absenteeism, reduced performance, and increased healthcare costs. While several interventions are available to manage diabetes, the most effective approach is to control blood glucose levels through a combination of lifestyle modifications and medication. Methodology: The control theory approach involves modelling the dynamics of the system and designing a controller that can regulate the system to achieve optimal performance. In the case of diabetes, the system dynamics can be modelled using a mathematical model that describes the relationship between insulin, glucose, and other variables. The controller can then be designed to regulate glucose levels to maintain them within a healthy range. Results: The modelling of optimal control of diabetes in the workplace using a control theory approach has shown promising results. The model has been able to predict the optimal dose of insulin required to maintain glucose levels within a healthy range, taking into account the individual’s lifestyle, medication regimen, and other relevant factors. The approach has also been used to design interventions that can improve diabetes management in the workplace, such as regular glucose monitoring and education programs. Conclusion: The modelling of optimal control of diabetes in the workplace using a control theory approach has significant potential to improve diabetes management and work productivity. By using a mathematical model and a controller to regulate glucose levels, the approach can help individuals with diabetes to achieve optimal health outcomes while minimizing the impact of the condition on their work performance. Further research is needed to validate the model and develop interventions that can be implemented in the workplace.
Keywords: mathematical model, blood, insulin, pancreas, model, glucose
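The abstract does not specify which glucose-insulin model or controller is used, so the sketch below pairs the classic Bergman minimal model with a simple proportional insulin-infusion rule purely as an illustration of the control-theory workflow; all parameter values are textbook-style approximations, not the paper's.

```python
# Illustrative only: the paper's model and controller are not specified in the
# abstract, so the classic Bergman minimal model is used here as a stand-in,
# paired with a simple proportional insulin-infusion rule.
p1, p2, p3 = 0.028, 0.025, 1.3e-5   # glucose effectiveness / insulin action (approx.)
n, Gb, Ib = 0.09, 90.0, 11.0        # insulin clearance rate, basal glucose, basal insulin
G_target = 100.0                    # controller set-point, mg/dL
Kp = 0.05                           # proportional gain (arbitrary illustrative choice)

def step(G, X, I, u, dt):
    """One explicit-Euler step of the minimal model with insulin infusion u."""
    dG = -(p1 + X) * G + p1 * Gb          # plasma glucose dynamics
    dX = -p2 * X + p3 * (I - Ib)          # remote insulin action
    dI = -n * (I - Ib) + u                # plasma insulin with infusion input
    return G + dG * dt, X + dX * dt, I + dI * dt

G, X, I = 180.0, 0.0, Ib                  # start from a hyperglycaemic state
dt, t_end = 1.0, 240.0                    # minutes
for _ in range(int(t_end / dt)):
    u = max(0.0, Kp * (G - G_target))     # infuse insulin only above the set-point
    G, X, I = step(G, X, I, u, dt)
print(f"Glucose after {t_end:.0f} min: {G:.1f} mg/dL")
```

An optimal-control formulation would replace the proportional rule with a dose trajectory obtained by minimizing a cost on glucose deviation and insulin use, but the structure of the simulation loop remains the same.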
Procedia PDF Downloads 59
10476 Laser Beam Bending via Lenses
Authors: Remzi Yildirim, Fatih. V. Çelebi, H. Haldun Göktaş, A. Behzat Şahin
Abstract:
This study is about a single-component cylindrical structured lens with a gradient curve which we used for bending laser beams. It operates under atmospheric conditions and bends the laser beam independently of temperature, pressure, polarity, polarization, magnetic field, electric field, radioactivity, and gravity. A single-piece cylindrical lens that can bend laser beams has been invented. Lenses are made of transparent, tinted or colored glasses and are used for attenuating or absorbing the energy of the laser beams.
Keywords: laser, bending, lens, light, nonlinear optics
Procedia PDF Downloads 486
10475 Dominant Correlation Effects in Atomic Spectra
Authors: Hubert Klar
Abstract:
High double excitation of two-electron atoms has been investigated using hyperspherical coordinates within a modified adiabatic expansion technique. This modification creates a novel fictitious force leading to a spontaneous exchange symmetry breaking at high double excitation. The Pauli principle must therefore be regarded as an approximation valid only at low excitation energy. Threshold electron scattering from high Rydberg states shows an unexpected time-reversal symmetry breaking. At the threshold for double escape, we discover a broad (few eV) Cooper pair.
Keywords: correlation, resonances, threshold ionization, Cooper pair
Procedia PDF Downloads 346
10474 Laser Light Bending via Lenses
Authors: Remzi Yildirim, Fatih V. Çelebi, H. Haldun Göktaş, A. Behzat Şahin
Abstract:
This study is about a single-component cylindrical structured lens with a gradient curve which we used for bending laser beams. It operates under atmospheric conditions and bends the laser beam independently of temperature, pressure, polarity, polarization, magnetic field, electric field, radioactivity, and gravity. A single-piece cylindrical lens that can bend laser beams is invented. Lenses are made of transparent, tinted or colored glasses and are used for attenuating or absorbing the energy of the laser beams.
Keywords: laser, bending, lens, light, nonlinear optics
Procedia PDF Downloads 700