Search results for: noise field measurement
10032 Chiral Molecule Detection via Optical Rectification in Spin-Momentum Locking
Authors: Jessie Rapoza, Petr Moroshkin, Jimmy Xu
Abstract:
Chirality is omnipresent in nature, in life, and in physics. One intriguing example is homochirality, which has remained one of the great secrets of life. Another is the pairs of mirror-image molecules, enantiomers. They are identical in atomic composition and therefore indistinguishable in their scalar physical properties. Yet they can be either therapeutic or toxic, depending on their chirality. Recent studies suggest a potential link between abnormal levels of certain D-amino acids and some serious health impairments, including schizophrenia, amyotrophic lateral sclerosis, and potentially cancer. Although indistinguishable in its scalar properties, the chirality of a molecule reveals itself in interaction with surroundings of a certain chirality or, more generally, of broken mirror symmetry. In this work, we report on a system for chiral molecule detection in which the mirror symmetry is doubly broken, first by the asymmetric structuring of a nanopatterned plasmonic surface and then by the incidence of circularly polarized light (CPL). In this system, the incident circularly polarized light induces a surface plasmon polariton (SPP) wave propagating along the asymmetric plasmonic surface. This SPP field is itself chiral, evanescently bound to a near-field zone on the surface (~10 nm thick), but with an amplitude greatly intensified (by up to 10^4) over that of the incident light. It hence probes just the molecules on the surface instead of those in the volume. In coupling to molecules along its path on the surface, the chiral SPP wave favors one chirality over the other, allowing for chirality detection via the change in an optical rectification current measured at the edges of the sample. The asymmetrically structured surface converts the high-frequency plasmonic electron oscillations in the SPP wave into a net DC drift current, measurable at the edge of the sample, via the mechanism of optical rectification.
The measured results validate these design concepts and principles. The observed optical rectification current exhibits a clear differentiation between a pair of enantiomers. Experiments were performed by focusing 1064 nm CW laser light on the sample, a gold grating microchip submerged in an approximately 1.82 M solution of either L-arabinose or D-arabinose in water. The current output was then recorded under both right and left circularly polarized light. Measurements were taken at various angles of incidence to optimize the coupling between the spin momentum of the incident light and that of the SPP, that is, spin-momentum locking. To suppress the background, the photocurrent values for right CPL are subtracted from those for left CPL. Comparison between the two arabinose enantiomers reveals a preferential response of one enantiomer to left CPL and of the other to right CPL. In sum, this work reports the first experimental evidence of the feasibility of chiral molecule detection via optical rectification in a metal meta-grating. This nanoscale interfaced electrical detection technology is advantageous over other detection methods in its size, cost, ease of use, and ability to integrate with read-out electronic circuits for data processing and interpretation.
Keywords: Chirality, detection, molecule, spin
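The background-suppression step described above (subtracting the right-CPL photocurrent from the left-CPL photocurrent, with the sign of the differential signal distinguishing the enantiomers) can be sketched as follows. All numbers and the classification rule below are illustrative assumptions, not measured values or the authors' actual analysis code.

```python
# Hedged sketch of the L-CPL minus R-CPL background suppression described
# above. The photocurrents (nA) and the sign-based rule are illustrative.

def differential_photocurrent(i_left, i_right):
    """Per-angle differential current Delta I = I_LCP - I_RCP."""
    return [l - r for l, r in zip(i_left, i_right)]

def classify_enantiomer(delta_i):
    """Classify by the sign of the summed differential response (toy rule)."""
    return "L-type response" if sum(delta_i) > 0 else "D-type response"

# Illustrative photocurrents at several angles of incidence.
i_lcp = [5.2, 6.1, 7.0, 6.4]
i_rcp = [4.8, 5.5, 6.2, 5.9]

delta = differential_photocurrent(i_lcp, i_rcp)
print(delta)
print(classify_enantiomer(delta))
```

The point of the subtraction is that achiral background contributions appear equally under both polarizations and cancel, leaving only the chirality-dependent part of the signal.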
Procedia PDF Downloads 92
10031 The Breast Surgery Movement: A 50 Year Development of the Surgical Specialty
Authors: Lauren Zammerilla Westcott, Ronald C. Jones, James W. Fleshman
Abstract:
The surgical treatment of breast cancer has rapidly evolved over the past 50 years, progressing from Halsted’s radical mastectomy to a public campaign of surgical options, aesthetic reconstruction, and patient empowerment. This article examines the events that led to the transition of breast surgery from a subset of general surgery to its own specialized field. Sparked by the research of Dr. Bernard Fisher and the first National Surgical Adjuvant Breast and Bowel Project trial in 1971, the field of breast surgery underwent significant growth over the next several decades, enabling general surgeons to limit their practices to the breast. High surgical volumes eventually led to the development of the first formal breast surgical oncology fellowship in a large community-based hospital, at Baylor University Medical Center in 1982. The establishment of the American Society of Breast Surgeons, as well as several landmark clinical trials and public campaign efforts, further contributed to the advancement of breast surgery, making it the specialized field it is today.
Keywords: breast cancer, breast fellowship, breast surgery, surgical history
Procedia PDF Downloads 133
10030 The Superhydrophobic Surface Effect on Laminar Boundary Layer Flows
Authors: Chia-Yung Chou, Che-Chuan Cheng, Chin Chi Hsu, Chun-Hui Wu
Abstract:
This study investigates boundary layer flow over a superhydrophobic surface. The superhydrophobic surface is assembled into an observation channel for fluid experiments. The fluid in the channel is seeded with flow-visualization particles, pumped by a syringe pump, and introduced into the observation channel through the pipeline. Under polarized light illumination, the movement of the particles in the channel is captured by a high-speed camera, and the particle velocities are analyzed in MATLAB to determine the changes the surface induces in the boundary layer velocity field. This study found that the superhydrophobic surface effectively increases the velocity near the wall, and the effect grows as the flow rate increases. The superhydrophobic surface also exhibits a longer slip length than the plain surface. In the drag coefficient calculation, the superhydrophobic surface produces a lower drag coefficient, and the difference becomes more significant as the Reynolds number of the flow decreases.
Keywords: hydrophobic, boundary layer, slip length, friction
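The slip length reported above can be estimated from a near-wall velocity profile via the Navier slip condition, u(0) = b · (du/dy): fit a line to the measured near-wall velocities and divide the extrapolated wall velocity by the shear rate. The sketch below is a minimal illustration with a synthetic, exactly linear profile; it is not the study's MATLAB analysis, and the units and numbers are assumptions.

```python
# Hedged sketch: estimate the Navier slip length b = u_wall / (du/dy) from a
# near-wall velocity profile by least-squares line fit. Data are synthetic.

def fit_line(y, u):
    """Ordinary least-squares fit u = a + s*y; returns (a, s)."""
    n = len(y)
    my, mu = sum(y) / n, sum(u) / n
    sxy = sum((yi - my) * (ui - mu) for yi, ui in zip(y, u))
    sxx = sum((yi - my) ** 2 for yi in y)
    slope = sxy / sxx
    return mu - slope * my, slope

def slip_length(y, u):
    """Distance below the wall at which the extrapolated velocity reaches 0."""
    u_wall, shear = fit_line(y, u)
    return u_wall / shear

# Synthetic near-wall profile in SI units: u = 0.002 + 100*y,
# i.e. slip velocity 2 mm/s and shear rate 100 1/s -> b = 20 micrometres.
y = [0.00002 * i for i in range(1, 6)]        # wall-normal positions (m)
u = [0.002 + 100.0 * yi for yi in y]          # streamwise velocities (m/s)
print(slip_length(y, u))
```

With noisy PIV data the same fit applies; only the residuals grow.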
Procedia PDF Downloads 146
10029 Tip-Enhanced Raman Spectroscopy with Plasmonic Lens Focused Longitudinal Electric Field Excitation
Authors: Mingqian Zhang
Abstract:
Tip-enhanced Raman spectroscopy (TERS) is a scanning probe technique for investigating individual objects and structured surfaces that provides a wealth of enhanced spectral information with nanoscale spatial resolution and high detection sensitivity. It has become a powerful and promising method for detecting chemical and physical information at the nanometer scale. The TERS technique uses a sharp metallic tip regulated in the near field of a sample surface, which is illuminated with an incident beam meeting the excitation condition of wave-vector matching. The local electric field, and consequently the Raman scattering from the sample in the vicinity of the tip apex, are both greatly tip-enhanced owing to the excitation of localized surface plasmons and the lightning-rod effect. Typically, a TERS setup is composed of a scanning probe microscope, excitation and collection optics, and a Raman spectrometer. In the illumination configuration, an objective lens or a parabolic mirror usually serves as the key component that focuses the incident beam on the tip apex for excitation. In this research, a novel TERS setup was built by introducing a plasmonic lens into the excitation optics as the focusing device. A plasmonic lens with symmetry-breaking semi-annular slits corrugated on a gold film was designed to generate concentrated sub-wavelength light spots with a strong longitudinal electric field. Compared to conventional far-field optical components, the designed plasmonic lens not only focuses an incident beam to a sub-wavelength light spot, but also produces a strong z-component that dominates the electric-field illumination, which is ideal for exciting the tip enhancement. Therefore, using a plasmonic lens in the illumination configuration of TERS improves the detection sensitivity by both reducing the far-field background and effectively exciting the localized electric field enhancement.
The FDTD method was employed to investigate the optical near-field distribution resulting from the light-nanostructure interaction, and the optical field distribution was characterized using a scattering-type scanning near-field optical microscope to demonstrate the focusing performance of the lens. The experimental result agrees with the theoretical calculation, verifying the focusing performance of the plasmonic lens. The optical field distribution shows a bright elliptical spot in the lens center and several arc-like side lobes on both sides. After the focusing performance was experimentally verified, the designed plasmonic lens was used as the focusing component in the excitation configuration of the TERS setup to concentrate the incident energy and generate a longitudinal optical field. A collimated, linearly polarized laser beam, with polarization along the x-axis, was incident from the bottom glass side on the plasmonic lens. The incident light focused by the plasmonic lens interacted with the silver-coated tip apex and locally enhanced the Raman signal of the sample. The plasmonic-lens-based setup was then employed to investigate carbon nanotubes in a TERS experiment; the scattered Raman signal was gathered by a parabolic mirror and detected with a Raman spectrometer. Experimental results indicate that the Raman signal is considerably enhanced, which proves that the novel TERS configuration is feasible and promising.
Keywords: longitudinal electric field, plasmonics, Raman spectroscopy, tip-enhancement
Procedia PDF Downloads 373
10028 Estimating the Effect of Fluid in Pressing Process
Authors: A. Movaghar, R. A. Mahdavinejad
Abstract:
To analyze the effect of various fluid parameters on material properties such as surface and depth defects and/or cracks, the influence of the pressure field on these characteristics can be determined. Stress tensor analysis can also identify the points where defect creation is most probable. Moreover, the pressure field makes it possible to analyze how fluid properties such as viscosity and density affect the defects created in the material. In this research, the relevant boundary conditions are analyzed first; then the solution grid and the stencil used are described. After the governing equations for the fluid flow between the notch and the matrix are established and discretized according to the imposed boundary conditions, they can be solved. Finally, by varying fluid parameters such as density and viscosity, the effect of these variations on the pressure field can be determined. To this end, the flowchart and solution algorithm, together with their results as vorticity and stream function contours for the two conditions most common in the pressing process, are introduced and discussed.
Keywords: pressing, notch, matrix, flow function, vortex
Procedia PDF Downloads 290
10027 Multiple Institutional Logics and the Ability of Institutional Entrepreneurs: An Analysis in the Turkish Education Field
Authors: Miraç Savaş Turhan, Ali Danişman
Abstract:
Recently, scholars of new institutional theory have used the institutional logics perspective to explain contradictory practices in modern Western societies. Accordingly, distinct institutional logics are embedded in central institutions such as the market, state, democracy, family, and religion. They guide individual and organizational actors and constrain their behavior in a particular organizational field. Through this perspective, actors are assumed to have a situated, embedded, boundedly intentional, and adaptive role with respect to structure in their social, cultural, and political context. On the other hand, for over a decade an emerging line of work has focused on the role of actors in creating, maintaining, and changing institutions. These attempts brought forth the concept of institutional entrepreneurs to explain the role of individual actors in relation to institutions. Institutional entrepreneurs are individuals, groups of individuals, organizations, or groups of organizations that are able to initiate actions to build, maintain, or change institutions. While recent studies in the institutional logics perspective have attempted to explain the roles of entrepreneurial actors who have resources and skills, little is known about the effects of multiple institutional logics on the ability of institutional entrepreneurs. In this study, we aim to find out how multiple institutional logics affect the ability of institutional entrepreneurs during the process of institutional change. We examine this issue in the Turkish education field.
While the institutional logics were identified based on previous studies in the education field, the actions taken by the Turkish National Education Ministry from 2003 to 2013 were examined through content analysis. The early results indicate remarkable shifts and contradictions in the ability of institutional entrepreneurs to take actions to change the field, in relation to shifts in the balance of power among the carriers of institutional logics.
Keywords: institutional theory, institutional logics, institutional entrepreneurs, Turkish national education
Procedia PDF Downloads 352
10026 Prototype of Over Dimension Over Loading (ODOL) Freight Transportation Monitoring System Based on Arduino Mega 'Sabrang': A Case Study in Klaten, Indonesia
Authors: Chairul Fajar, Muhammad Nur Hidayat, Muksalmina
Abstract:
The issue of Over Dimension Over Loading (ODOL) in Indonesia remains a significant challenge, causing traffic accidents, disrupting traffic flow, accelerating road damage, and potentially leading to bridge collapses. Klaten Regency, located on the slopes of Mount Merapi along the Woro River in Kemalang District, has potential Class C excavation materials such as sand and stone. Data from the Klaten Regency Transportation Department indicates that ODOL violations account for 72%, while non-violating vehicles make up only 28%. ODOL involves modifying factory-standard vehicles beyond the limits specified in the Type Test Registration Certificate (SRUT) to save costs and travel time. This study aims to develop a prototype ‘Sabrang’ monitoring system based on Arduino Mega to control and monitor ODOL freight transportation in the mining of Class C excavation materials in Klaten Regency. The prototype is designed to automatically measure the dimensions and weight of objects using a microcontroller. The data analysis techniques used in this study include the Normality Test and Paired T-Test, comparing sensor measurement results on scaled objects. The study results indicate differences in measurement validation under room temperature and ambient temperature conditions. Measurements at room temperature showed that the majority of H0 was accepted, meaning there was no significant difference in measurements when the prototype tool was used. Conversely, measurements at ambient temperature showed that the majority of H0 was rejected, indicating a significant difference in measurements when the prototype tool was used. In conclusion, the ‘Sabrang’ monitoring system prototype is effective for controlling ODOL, although measurement results are influenced by temperature conditions. 
This study is expected to assist in the monitoring and control of ODOL, thereby enhancing traffic safety and road infrastructure.
Keywords: over dimension over loading, prototype, microcontroller, Arduino, normality test, paired t-test
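The paired t-test used above (comparing the prototype's sensor readings against reference values and accepting or rejecting H0) can be sketched as follows. The measurements and the critical value lookup below are illustrative assumptions, not the study's data.

```python
# Hedged sketch of a paired t-test: compute the t statistic on the paired
# differences and compare |t| with a tabulated critical value.
# The prototype/reference readings below are invented for illustration.

import math

def paired_t(x, y):
    """Return (t statistic, degrees of freedom) for paired samples x, y."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean = sum(d) / n
    var = sum((di - mean) ** 2 for di in d) / (n - 1)   # sample variance
    t = mean / math.sqrt(var / n)
    return t, n - 1

# Illustrative data: prototype readings vs. reference dimensions (cm).
prototype = [50.1, 49.8, 50.3, 50.0, 49.9, 50.2]
reference = [50.0] * 6

t, df = paired_t(prototype, reference)
# Two-sided critical value for df = 5 at alpha = 0.05 is about 2.571;
# |t| below that means H0 (no significant difference) is accepted.
print(round(t, 3), df, abs(t) < 2.571)
```

This mirrors the abstract's decision rule: H0 accepted means the prototype measures the object without a significant difference from the reference.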
Procedia PDF Downloads 34
10025 A Generative Pretrained Transformer-Based Question-Answer Chatbot and Phantom-Less Quantitative Computed Tomography Bone Mineral Density Measurement System for Osteoporosis
Authors: Mian Huang, Chi Ma, Junyu Lin, William Lu
Abstract:
Introduction: Bone health has attracted more attention recently, and an intelligent question-and-answer (QA) chatbot for osteoporosis is helpful for science popularization. With Generative Pretrained Transformer (GPT) technology developing, we build an osteoporosis corpus dataset and then fine-tune LLaMA, a well-known open-source GPT foundation large language model (LLM), on our self-constructed osteoporosis corpus. As evaluated by clinical orthopedic experts, our fine-tuned model outperforms vanilla LLaMA on the osteoporosis QA task in Chinese. Three-dimensional quantitative computed tomography (QCT) measurement of bone mineral density (BMD) has come to be considered more accurate than DXA in recent years. We develop an automatic phantom-less QCT (PL-QCT) that is more efficient for BMD measurement, since it needs no external phantom for calibration. Combined with the LLM on osteoporosis, our PL-QCT provides efficient and accurate BMD measurement for our chatbot users. Material and Methods: We build an osteoporosis corpus containing about 30,000 Chinese articles whose titles are related to osteoporosis. The whole process is automatic, including crawling articles in .pdf format, localizing text/figure/table regions with a layout segmentation algorithm, and recognizing text with an OCR algorithm. We train our model by continuous pre-training with Low-rank Adaptation (LoRA, rank = 10) to adapt the LLaMA-7B model to the osteoporosis domain; the basic principle is to mask the next word in the text and make the model predict that word. The loss function is defined as the cross-entropy between the predicted and ground-truth words. The experiment was run on a single NVIDIA A800 GPU for 15 days. Our automatic PL-QCT BMD measurement adopts an AI-assisted region-of-interest (ROI) generation algorithm for localizing a vertebra-parallel cylinder in cancellous bone. With no phantom available for BMD calibration, we calculate the ROI BMD from the CT values of the patient's own muscle and fat.
Results & Discussion: Clinical orthopaedic experts were invited to design 5 osteoporosis questions in Chinese to evaluate the performance of vanilla LLaMA and our fine-tuned model. Our model outperforms LLaMA on over 80% of these questions, understanding ‘Expert Consensus on Osteoporosis’, ‘QCT for osteoporosis diagnosis’, and ‘Effect of age on osteoporosis’. Detailed results are shown in the appendix. Future work may train a larger LLM on the whole of orthopaedics with more high-quality domain data, or a multi-modal GPT that combines and understands X-ray images and medical text for orthopaedic computer-aided diagnosis. However, the GPT model sometimes gives unexpected outputs, such as repetitive text or seemingly normal but wrong answers (so-called ‘hallucinations’). Even when the GPT model gives correct answers, they cannot be considered valid clinical diagnoses in place of those of clinical doctors. The PL-QCT BMD system, provided by Bone's QCT (Bone's Technology (Shenzhen) Limited), achieves a mean absolute error (MAE) of 0.1448 mg/cm2 (spine) and 0.0002 mg/cm2 (hip), with linear correlation coefficients R2 = 0.9970 (spine) and R2 = 0.9991 (hip) (compared to QCT-Pro (Mindways)), on 155 patients in a three-center clinical trial in Guangzhou, China. Conclusion: This study builds a Chinese osteoporosis corpus and develops a fine-tuned, domain-adapted LLM as well as a PL-QCT BMD measurement system. Our fine-tuned GPT model shows better capability than the LLaMA model on most testing questions on osteoporosis. Combined with our PL-QCT BMD system, we look forward to providing science popularization and early screening for potential osteoporotic patients.
Keywords: GPT, phantom-less QCT, large language model, osteoporosis
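The LoRA idea used in the training above can be illustrated with plain matrices: the frozen weight W is augmented by a trainable low-rank product B·A scaled by alpha/r, so only (d_out + d_in)·r parameters are trained instead of d_out·d_in. The sketch below uses toy dimensions and is not the authors' training code; real LLaMA layers and the PEFT tooling differ in scale and detail.

```python
# Hedged sketch of the Low-rank Adaptation (LoRA) update W' = W + (alpha/r)*B@A.
# Dimensions are toy values; the abstract uses rank r = 10 on LLaMA-7B.

def matmul(a, b):
    """Naive matrix product for small illustrative matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def lora_effective_weight(w, b_mat, a_mat, alpha, r):
    """Adapted weight used at inference: W' = W + (alpha / r) * B @ A."""
    delta = matmul(b_mat, a_mat)
    s = alpha / r
    return [[w[i][j] + s * delta[i][j] for j in range(len(w[0]))]
            for i in range(len(w))]

d_out, d_in, r = 8, 8, 2            # toy sizes; LLaMA layers are far larger
full_params = d_out * d_in          # parameters updated by full fine-tuning
lora_params = (d_out + d_in) * r    # trainable parameters under LoRA
print(full_params, lora_params)     # LoRA trains fewer even at this tiny size
```

At LLaMA-7B scale the savings are dramatic: for a 4096 x 4096 layer with r = 10, LoRA trains roughly 82k parameters instead of about 16.8M.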
Procedia PDF Downloads 71
10024 Magnetohydrodynamic Flows in a Conduit with Multiple Channels under a Magnetic Field Applied Perpendicular to the Plane of Flow
Authors: Yang Luo, Chang Nyung Kim
Abstract:
This study numerically analyzes steady-state, three-dimensional liquid-metal magnetohydrodynamic flow in a conduit with multiple channels under a uniform magnetic field. The geometry of the conduit is a four-parallel-channel system comprising one inflow channel and three outflow channels. The liquid metal flows in through the inflow channel, turns 180° in the transition segment, and finally flows out through the three outflow channels simultaneously. This kind of channel system can induce counter-flow and co-flow, which has rarely been investigated before. The axial velocity is highest in the side layer near the first partitioning wall, located between the inflow channel and the first outflow channel. ‘M-shaped’ velocity profiles are obtained in the side layers of the inflow and outflow channels. The interdependence of the current, fluid velocity, pressure, and electric potential is examined in order to describe the electromagnetic characteristics of the liquid-metal flows.
Keywords: liquid-metal, multiple channels, magnetic field, magnetohydrodynamic
Procedia PDF Downloads 281
10023 Vibration Propagation in Structures Through Structural Intensity Analysis
Authors: Takhchi Jamal, Ouisse Morvan, Sadoulet-Reboul Emeline, Bouhaddi Noureddine, Gagliardini Laurent, Bornet Frederic, Lakrad Faouzi
Abstract:
Structural intensity is a technique that indicates both the magnitude and the direction of the power flow through a structure, from the excitation source to the dissipation sink. However, current analysis is limited to the low-frequency range. At medium and high frequencies, a rotational component appears in the field, masking the energy flow and making it difficult or impossible to understand. The objective of this work is to implement a methodology for filtering out the rotational component of the structural intensity field in order to fully understand the energy flow in complex structures. The approach is based on the Helmholtz decomposition, which splits the structural intensity field into rotational, irrotational, and harmonic components. Only the irrotational component is needed to describe the net power flow from a source to a dissipative zone in the structure. The methodology has been applied to academic structures, and it allows a good analysis of the energy transfer paths.
Keywords: structural intensity, power flow, Helmholtz decomposition, irrotational intensity
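The filtering step above can be sketched numerically: for a 2-D intensity field I, the Helmholtz decomposition writes I = grad(phi) + rotational part, where the scalar potential phi solves the Poisson equation laplacian(phi) = div(I); the gradient of phi is then the irrotational component carrying the net power flow. The grid, Jacobi solver, and boundary conditions below are illustrative assumptions (the paper does not specify its numerical scheme), not the authors' implementation.

```python
# Hedged sketch of extracting the irrotational part of a 2-D vector field
# via the Helmholtz decomposition: solve laplacian(phi) = div(I), then take
# grad(phi). Uniform grid, central differences, phi = 0 on the boundary.

def divergence(fx, fy, h):
    n = len(fx)
    d = [[0.0] * n for _ in range(n)]
    for i in range(1, n - 1):
        for j in range(1, n - 1):          # j indexes x, i indexes y
            d[i][j] = ((fx[i][j + 1] - fx[i][j - 1])
                       + (fy[i + 1][j] - fy[i - 1][j])) / (2 * h)
    return d

def solve_poisson(rhs, h, iters=4000):
    """Jacobi iteration for laplacian(phi) = rhs with zero Dirichlet boundary."""
    n = len(rhs)
    phi = [[0.0] * n for _ in range(n)]
    for _ in range(iters):
        new = [row[:] for row in phi]
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                new[i][j] = 0.25 * (phi[i + 1][j] + phi[i - 1][j]
                                    + phi[i][j + 1] + phi[i][j - 1]
                                    - h * h * rhs[i][j])
        phi = new
    return phi

def gradient(phi, h):
    n = len(phi)
    gx = [[0.0] * n for _ in range(n)]
    gy = [[0.0] * n for _ in range(n)]
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            gx[i][j] = (phi[i][j + 1] - phi[i][j - 1]) / (2 * h)
            gy[i][j] = (phi[i + 1][j] - phi[i - 1][j]) / (2 * h)
    return gx, gy

def irrotational_part(fx, fy, h):
    """The net-power-flow (curl-free) component of the field (fx, fy)."""
    return gradient(solve_poisson(divergence(fx, fy, h), h), h)
```

By construction the returned component is curl-free: with matching central-difference stencils, the discrete curl of a discrete gradient vanishes, which is exactly the property that removes the rotational mask from the intensity field.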
Procedia PDF Downloads 178
10022 Toward Indoor and Outdoor Surveillance using an Improved Fast Background Subtraction Algorithm
Authors: El Harraj Abdeslam, Raissouni Naoufal
Abstract:
The detection of moving objects in video image sequences is very important for object tracking, activity recognition, and behavior understanding in video surveillance. The most widely used approach to moving-object detection and tracking is background subtraction, and many background subtraction algorithms have been suggested. However, these are sensitive to illumination changes, and the solutions proposed to bypass this problem are time-consuming. In this paper, we propose a robust yet computationally efficient background subtraction approach and focus mainly on the ability to detect moving objects in dynamic scenes, for possible applications in monitoring complex and restricted-access areas, where moving and motionless persons must be reliably detected. It consists of three main phases: establishing invariance to illumination changes, background/foreground modeling, and morphological analysis for noise removal. We handle illumination changes using Contrast Limited Adaptive Histogram Equalization (CLAHE), which limits the intensity of each pixel to a user-determined maximum. Thus, it mitigates the degradation due to scene illumination changes and improves the visibility of the video signal. Initially, the background and foreground images are extracted from the video sequence. Then, the background and foreground images are separately enhanced by applying CLAHE. In order to form multi-modal backgrounds, we model each channel of a pixel as a mixture of K Gaussians (K = 5) using a Gaussian Mixture Model (GMM). Finally, we post-process the resulting binary foreground mask with morphological erosion and dilation transformations to remove possible noise.
For experimental tests, we used a standard dataset to evaluate the efficiency and accuracy of the proposed method on a diverse set of dynamic scenes.
Keywords: video surveillance, background subtraction, contrast limited adaptive histogram equalization, illumination invariance, object tracking, object detection, behavior understanding, dynamic scenes
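The per-pixel match-and-update principle behind the K = 5 GMM background model above can be illustrated with a deliberately simplified single-Gaussian version: a new pixel value within 2.5 standard deviations of its background Gaussian updates the model, otherwise it is flagged as foreground. This is a hedged teaching sketch, not the paper's five-component mixture or its CLAHE/morphology pipeline; the parameters are illustrative.

```python
# Hedged, simplified single-Gaussian version of the per-pixel background
# matching/update rule (the paper uses a K=5 mixture per channel).

import math

class PixelBackground:
    def __init__(self, mean=128.0, var=100.0, alpha=0.05):
        self.mean, self.var, self.alpha = mean, var, alpha

    def observe(self, value, k=2.5):
        """Return True if `value` is foreground; adapt the model on a match."""
        if abs(value - self.mean) <= k * math.sqrt(self.var):
            a = self.alpha                           # matched: update background
            self.mean = (1 - a) * self.mean + a * value
            self.var = (1 - a) * self.var + a * (value - self.mean) ** 2
            return False
        return True                                   # unmatched: foreground

px = PixelBackground(mean=100.0, var=25.0)
stream = [101, 99, 100, 102, 180, 100]                # 180 = passing object
mask = [px.observe(v) for v in stream]
print(mask)  # only the outlier frame is flagged foreground
```

A full GMM keeps K such Gaussians per channel with weights, matching against all of them and replacing the least probable component when nothing matches, which is what lets the model absorb multi-modal backgrounds such as swaying foliage.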
Procedia PDF Downloads 256
10021 Comparison of Number of Waves Surfed and Duration Using Global Positioning System and Inertial Sensors
Authors: João Madureira, Ricardo Lagido, Inês Sousa, Fraunhofer Portugal
Abstract:
Surfing is an increasingly popular sport, and its performance evaluation is often qualitative. This work aims at using a smartphone to collect and analyze GPS and inertial sensor data in order to obtain quantitative metrics of surfing performance. Two approaches are compared for the detection of wave rides, computing the number of waves ridden in a surfing session, the starting time of each wave, and its duration. The first approach is based on computing the velocity from the Global Positioning System (GPS) signal and finding the velocity thresholds that allow identifying the start and end of each wave ride. The second approach adds information from the Inertial Measurement Unit (IMU) of the smartphone to the velocity thresholds obtained from the GPS unit to determine the start and end of each wave ride. The two methods were evaluated using GPS and IMU data from two surfing sessions and validated against similar metrics extracted from video data collected from the beach. The second method, combining GPS and IMU data, was found to be more accurate in determining the number of waves, start times, and durations. This paper shows that it is feasible to use smartphones for the quantification of performance metrics during surfing. In particular, the waves ridden and their durations can be accurately determined using the smartphone GPS and IMU.
Keywords: inertial measurement unit (IMU), global positioning system (GPS), smartphone, surfing performance
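The first (GPS-only) approach above can be sketched as simple threshold segmentation of the speed signal: a wave ride is any interval where the speed stays above a threshold for a minimum duration. The threshold values and the synthetic speed trace below are illustrative assumptions, not the paper's calibrated parameters.

```python
# Hedged sketch of GPS-velocity-threshold wave-ride detection: count rides,
# their start times and durations. Thresholds and data are illustrative.

def detect_rides(speeds, dt=1.0, v_on=2.5, min_duration=3.0):
    """speeds: m/s samples at interval dt. Returns [(start_s, duration_s)]."""
    rides, start = [], None
    for k, v in enumerate(speeds + [0.0]):     # 0.0 sentinel closes open rides
        if v >= v_on and start is None:
            start = k                           # ride begins
        elif v < v_on and start is not None:
            duration = (k - start) * dt         # ride ends
            if duration >= min_duration:        # ignore short speed bursts
                rides.append((start * dt, duration))
            start = None
    return rides

# Synthetic session: paddling (~1 m/s), two rides (~4-5 m/s), rest.
trace = [1.0] * 5 + [4.5] * 6 + [1.2] * 8 + [5.0] * 4 + [0.8] * 5
rides = detect_rides(trace)
print(len(rides), rides)
```

The paper's second approach refines exactly these start/end decisions with IMU information, which is what improved the agreement with the video ground truth.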
Procedia PDF Downloads 401
10020 Spatial and Time Variability of Ambient Vibration H/V Frequency Peak
Authors: N. Benkaci, E. Oubaiche, J.-L. Chatelain, R. Bensalem, K. Abbes
Abstract:
The ambient vibration H/V technique is widely used nowadays in microzonation studies because of its easy field handling and its low cost compared to other geophysical methods. However, in the presence of complex geology, or of lateral heterogeneity evidenced by more than one frequency peak in the H/V curve, it is difficult to interpret the results, especially when soil information is lacking. In this work, we focus on the construction site of the Baraki 40,000-seat stadium, located on the north-east side of the Mitidja basin (Algeria), to identify the seismic wave amplification zones. H/V curve analysis leads to the observation of spatial and temporal variability of the H/V frequency peaks. The spatial variability allows dividing the studied area into three main zones: (1) one with a predominant frequency around 1.5 Hz showing an important amplification level, (2) a second exhibiting two peaks, at 1.5 Hz and in the 4 Hz - 10 Hz range, and (3) a third characterized by a plateau between 2 Hz and 3 Hz. These H/V curve categories reveal considerable lateral heterogeneity dividing the stadium site roughly in the middle. Furthermore, continuous ambient vibration recording over several weeks shows that the first peak at 1.5 Hz in the second zone completely disappears between 2 am and 4 am, and reaches its maximum amplitude around 12 am. Consequently, the anthropogenic noise source generating these important variations could be the Algiers Rocade Sud highway, located in the maximum-amplification azimuth direction of the H/V curves. This work points out that the H/V method is an important tool for performing nano-zonation studies prior to geotechnical and geophysical investigations, and that, in some cases, the H/V technique fails to reveal the resonance frequency in the absence of a strong anthropogenic source.
Keywords: ambient vibrations, amplification, fundamental frequency, lateral heterogeneity, site effect
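The H/V computation underlying the study above can be sketched as follows: the two horizontal amplitude spectra are combined (here by quadratic mean, one common convention), divided by the vertical spectrum, and the peak of the resulting curve gives the site's fundamental frequency. The synthetic spectra below are shaped to peak near 1.5 Hz, as at the stadium site, and are illustrative, not the study's recordings.

```python
# Hedged sketch of the H/V spectral ratio: H is the quadratic mean of the
# N-S and E-W amplitude spectra, divided per frequency by the vertical one.
# The spectra below are synthetic, with a horizontal resonance near 1.5 Hz.

import math

def hv_curve(north, east, vertical):
    """H/V ratio per frequency bin."""
    return [math.sqrt((n * n + e * e) / 2.0) / v
            for n, e, v in zip(north, east, vertical)]

def peak_frequency(freqs, hv):
    """Frequency of the largest H/V value."""
    return max(zip(hv, freqs))[1]

freqs = [0.5 + 0.25 * k for k in range(20)]          # 0.5 to 5.25 Hz
north = [1.0 + 3.0 * math.exp(-((f - 1.5) ** 2) / 0.1) for f in freqs]
east = [1.0 + 2.5 * math.exp(-((f - 1.5) ** 2) / 0.1) for f in freqs]
vertical = [1.0 for _ in freqs]

hv = hv_curve(north, east, vertical)
print(peak_frequency(freqs, hv))  # → 1.5
```

Tracking this peak over time windows, as the study does over several weeks, is what reveals the day/night disappearance of an anthropogenically driven peak.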
Procedia PDF Downloads 237
10019 The Use of Layered Neural Networks for Classifying Hierarchical Scientific Fields of Study
Authors: Colin Smith, Linsey S Passarella
Abstract:
Due to the proliferation and decentralized nature of academic publication, no widely accepted scheme exists, to the authors' best knowledge, for organizing papers by their scientific field of study (FoS). While many academic journals require author-provided keywords for papers, these keywords vary wildly in scope and are not consistent across papers, journals, or field domains, necessitating alternative approaches to paper classification. Past attempts to perform FoS classification on scientific texts have largely used non-hierarchical FoS schemas or ignored the schema's inherently hierarchical structure, e.g., by compressing the structure into a single layer for multi-label classification. In this paper, we introduce an application of a Layered Neural Network (LNN) to the problem of performing supervised hierarchical classification of scientific fields of study on research papers. In this approach, paper embeddings from a pretrained language model are fed into a top-down LNN. Beginning with a single neural network (NN) for the highest layer of the class hierarchy, each node uses a separate local NN to classify the subsequent subfield child node(s) for an input embedding of concatenated paper titles and abstracts. We compare our LNN-FoS method to other recent machine learning methods using the Microsoft Academic Graph (MAG) FoS hierarchy and find that LNN-FoS offers increased classification accuracy at each FoS hierarchical level.
Keywords: hierarchical classification, layered neural network, scientific field of study, scientific taxonomy
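The top-down routing that such a layered classifier performs can be sketched with one local classifier per hierarchy node: an embedding enters at the root, and each level's local model picks one child until a leaf is reached. The toy hierarchy, prototype-dot-product "classifiers", and embeddings below are illustrative stand-ins for the paper's trained neural networks and the MAG hierarchy.

```python
# Hedged sketch of top-down hierarchical classification: each node has a
# local classifier; an input embedding is routed root-to-leaf, collecting
# one label per level. Hierarchy, scores and embeddings are illustrative.

def make_prototype_classifier(prototypes):
    """A stand-in local classifier: pick the child with the best dot product."""
    def classify(embedding):
        def score(label):
            return sum(a * b for a, b in zip(embedding, prototypes[label]))
        return max(prototypes, key=score)
    return classify

hierarchy = {                      # parent label -> local classifier
    "root": make_prototype_classifier({
        "computer science": [1.0, 0.0], "biology": [0.0, 1.0]}),
    "computer science": make_prototype_classifier({
        "machine learning": [1.0, 0.2], "databases": [0.2, 1.0]}),
}

def classify_top_down(embedding, node="root"):
    """Descend the hierarchy, collecting one FoS label per level."""
    path = []
    while node in hierarchy:
        node = hierarchy[node](embedding)
        path.append(node)
    return path

print(classify_top_down([0.9, 0.1]))  # a paper embedding near CS/ML
```

The key design property this illustrates is that each local model only has to discriminate among siblings, rather than among every label in the flattened taxonomy at once.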
Procedia PDF Downloads 133
10018 Sport Motivation and the Control Center of Football Players of Iran
Authors: Khaidan Hatami, Mehran Nasiri
Abstract:
The aim of the following research was to analyze the relationship between sport motivation and the control center (locus of control) of football players in Iran. All players employed in Iran's football leagues were included in the population of the research. Accordingly, 360 players, 120 at each level (youth, U-21, and adult), playing in the first- and second-level professional football leagues of Guilan, Kurdistan, and Kermanshah provinces, were randomly selected as the sample. The current research is of the descriptive and correlational type. The instruments of measurement were personal questionnaires: the sport motivation scale (SMS) of Pelletier and colleagues (1995) and the control center scale of Berger (1986), whose content validity was confirmed by experts in the sport management field. The internal consistency of the questions was assessed with Cronbach's alpha: 0.82 for the sport motivation questionnaire and 0.86 for the control center questionnaire. To analyze and evaluate the data, the Kolmogorov-Smirnov test, Spearman correlation, Kruskal-Wallis test, Mann-Whitney U, Friedman, and Wilcoxon T tests were used at a significance level of P ≤ 0.05. The results showed a positive and significant relationship between the control center of football players at the youth, U-21, and adult levels and their sport motivation. It can therefore be concluded that players with internal control, compared with those with external control, have more internal sport motivation and pursue team goals with greater mental strength. It is thus recommended that coaches employ sport psychologists in their teams to internalize players' needs by scientific methods, taking into account mental issues and each person's type of control over life events.
Keywords: sport motivation, control center, internal, external, football players
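The Spearman correlation used in the analysis above is Pearson's formula applied to ranks, with ties sharing their average rank. The sketch below illustrates this on invented scores; it is not the study's data.

```python
# Hedged sketch of the Spearman rank correlation: rank both variables
# (ties get average ranks), then correlate the ranks. Scores are invented.

def ranks(values):
    """Average ranks (1-based), with tied values sharing their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1                       # extend the block of tied values
        avg = (i + j) / 2.0 + 1.0        # mean rank of the tied block
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Pearson correlation of the rank vectors of x and y."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

motivation = [12, 18, 25, 31, 40]   # illustrative internal-motivation scores
locus = [2, 3, 5, 6, 9]             # illustrative internal-control scores
print(spearman(motivation, locus))  # ~1.0 for perfectly monotonic data
```

Because it works on ranks, Spearman's rho suits questionnaire scores that fail the Kolmogorov-Smirnov normality check, which is consistent with the study's choice of nonparametric tests.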
Procedia PDF Downloads 481
10017 Laser Beam Bending via Lenses
Authors: Remzi Yildirim, Fatih. V. Çelebi, H. Haldun Göktaş, A. Behzat Şahin
Abstract:
This study concerns a single-component cylindrical lens with a gradient curve, which we used for bending laser beams. It operates under atmospheric conditions and bends the laser beam independently of temperature, pressure, polarity, polarization, magnetic field, electric field, radioactivity, and gravity. A single-piece cylindrical lens that can bend laser beams was invented. The lenses are made of transparent, tinted, or colored glass and can also be used for attenuating or absorbing the energy of the laser beams.
Keywords: laser, bending, lens, light, nonlinear optics
Procedia PDF Downloads 488
10016 Laser Light Bending via Lenses
Authors: Remzi Yildirim, Fatih V. Çelebi, H. Haldun Göktaş, A. Behzat Şahin
Abstract:
This study concerns a single-component cylindrical lens with a gradient curve, which we used for bending laser beams. It operates under atmospheric conditions and bends the laser beam independently of temperature, pressure, polarity, polarization, magnetic field, electric field, radioactivity, and gravity. A single-piece cylindrical lens that can bend laser beams was invented. The lenses are made of transparent, tinted, or colored glass and can also be used for attenuating or absorbing the energy of the laser beams.
Keywords: laser, bending, lens, light, nonlinear optics
Procedia PDF Downloads 703
10015 Effective Planning of Public Transportation Systems: A Decision Support Application
Authors: Ferdi Sönmez, Nihal Yorulmaz
Abstract:
Sound decision making on the planning of public transportation systems to serve potential users is a must for metropolitan areas. To attract travelers to the projected modes of transport, reasonably competitive overall travel times should be provided. In this fashion, other benefits such as lower traffic congestion, better road safety, and lower noise and atmospheric pollution may be gained. The congestion that comes with the increasing demand for public transportation is becoming a part of our lives and making residents' lives difficult; hence, regulations should be made to reduce it. To provide constructive and balanced regulation of public transportation systems, the right stations should be located in the right places. In this study, we aim to design and implement a Decision Support System (DSS) application to determine the optimal bus stop locations for public transport in Istanbul, one of the biggest and oldest cities in the world. The required information was gathered from IETT (Istanbul Electricity, Tram and Tunnel) Enterprises, which manages all public transportation services in the Istanbul Metropolitan Area. Cost assignments are made using the most realistic values available; the cost is calculated with the help of equations produced by a bi-level optimization model. For this study, 300 buses, 300 drivers, 10 lines, and 110 stops are used. The user cost of each station and the operator cost incurred on the lines are calculated. Components such as cost, security, and noise pollution are considered significant factors affecting the solution of the set covering problem, which is formulated to identify and locate the minimum number of possible bus stops. Preliminary research and model development for this study refer to a previously published article by the corresponding author.
Model results are presented with the intent of supporting specialists' decisions on locating stops effectively.
Keywords: operator cost, bi-level optimization model, user cost, urban transportation
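The set-covering step mentioned above can be illustrated with a standard greedy approximation (not necessarily the solution method used in the paper): repeatedly pick the candidate stop that serves the most still-unserved demand points. The stop names and coverage sets below are made-up toy data.

```python
# Hedged sketch of the set-covering step: choose a small number of
# candidate stops so that every demand point is within reach of some stop.
def greedy_set_cover(demand_points, coverage):
    """coverage: stop -> set of demand points it serves."""
    uncovered = set(demand_points)
    chosen = []
    while uncovered:
        # pick the stop covering the most still-uncovered points
        best = max(coverage, key=lambda s: len(coverage[s] & uncovered))
        if not coverage[best] & uncovered:
            break  # remaining points cannot be covered by any stop
        chosen.append(best)
        uncovered -= coverage[best]
    return chosen, uncovered

coverage = {
    "stop A": {1, 2, 3},
    "stop B": {3, 4},
    "stop C": {4, 5, 6},
}
chosen, leftover = greedy_set_cover({1, 2, 3, 4, 5, 6}, coverage)
print(chosen)   # a small set of stops covering all demand points
```

In the study's setting, the coverage sets would come from walking-distance thresholds, and the bi-level cost model would weight the candidates before selection.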
Procedia PDF Downloads 246
10014 Quantitative Evaluation of Efficiency of Surface Plasmon Excitation with Grating-Assisted Metallic Nanoantenna
Authors: Almaz R. Gazizov, Sergey S. Kharintsev, Myakzyum Kh. Salakhov
Abstract:
This work deals with background-signal suppression in tip-enhanced near-field optical microscopy (TENOM). The background appears because an optical signal is detected not only from the subwavelength area beneath the tip but also from the wider, diffraction-limited area of the laser waist, which might contain other substances. The background can be reduced by using a tapered probe with a grating on its lateral surface, where external illumination causes surface plasmon excitation. This requires a grating whose parameters are perfectly matched to the given incident light for effective light coupling. This work is devoted to an analysis of the light-grating coupling and a search for grating parameters that enhance the near field beneath the tip apex. The aim of this work is to find the figure of merit of plasmon excitation as a function of the grating period and the location of the grating with respect to the apex. In our treatment, the metallic grating on the lateral surface of the tapered plasmonic probe is illuminated by a plane wave whose electric field is perpendicular to the sample surface. The theoretical model of the efficiency of plasmon excitation and propagation toward the apex is tested by FDTD-based numerical simulation. The electric field of the incident light is enhanced at every slit of the grating due to the lightning-rod effect. Hence, the grating causes amplitude and phase modulation of the incident field in various ways depending on its geometry and material. The phase-modulating grating on the probe is a kind of metasurface that allows manipulation of the spatial frequencies of the incident field. The spatial-frequency-dependent electric field is found from the angular spectrum decomposition. If one of the components satisfies the phase-matching condition, then one can readily calculate the figure of merit of plasmon excitation, defined as the ratio of the intensities of the surface mode and the incident light.
During propagation toward the apex, the surface wave undergoes losses in the probe material, radiation losses, and mode compression. There is an optimal location of the grating with respect to the apex; its value is found by matching the quadratic law of mode compression against the exponential law of light extinction. Finally, the performed theoretical analysis and numerical simulations of plasmon excitation demonstrate that various surface waves can be effectively excited by using overtones of the grating period or by phase modulation of the incident field. Gratings with such periods are easy to fabricate. A tapered probe with the grating effectively enhances and localizes the incident field at the sample.
Keywords: angular spectrum decomposition, efficiency, grating, surface plasmon, taper nanoantenna
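The angular spectrum decomposition and phase-matching check described above can be sketched numerically: take the FFT of the grating-modulated field and compare the dominant spatial frequency with the SPP wavevector. All numbers below (wavelength, period, effective index) are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

# Sketch: angular-spectrum decomposition of a grating-modulated field.
wavelength = 0.633             # incident wavelength, um (assumed)
period = 0.6                   # grating period, um (assumed)
k0 = 2 * np.pi / wavelength
k_spp = k0 * 1.05              # assumed SPP wavevector (n_eff = 1.05)

x = np.linspace(0, 30.0, 4096)                 # coordinate along the probe, um
field = np.exp(1j * 2 * np.pi * x / period)    # phase modulation by the grating

spectrum = np.fft.fft(field)
kx = 2 * np.pi * np.fft.fftfreq(x.size, d=x[1] - x[0])

# Phase matching: the grating supplies 2*pi/period; compare the dominant
# spatial frequency of the modulated field against k_spp.
k_dominant = kx[np.argmax(np.abs(spectrum))]
mismatch = abs(k_dominant - k_spp)
print(k_dominant, mismatch)
```

When the mismatch is small, the corresponding angular-spectrum component couples efficiently to the surface mode, and the intensity ratio of that mode to the incident light gives the figure of merit.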
Procedia PDF Downloads 283
10013 Effect of Wetting Layer on the Energy Spectrum of One-Electron Non-Uniform Quantum Ring
Authors: F. A. Rodríguez-Prada, W. Gutierrez, I. D. Mikhailov
Abstract:
We study the spectral properties of a one-electron, non-uniform, crater-shaped quantum dot whose thickness increases linearly, with different slopes in different radial directions, between the central hole and the outer border, and which is deposited over a thin wetting layer in the presence of an external, vertically directed magnetic field. We show that in the adiabatic limit, when the crater thickness is much smaller than its lateral dimension, the one-particle wave functions of the electron confined in such a structure can be found exactly, in analytical form, in the zero-magnetic-field case. They can subsequently be used as the basis functions in the framework of the exact diagonalization method to study the effect of the wetting layer and of an external magnetic field applied along the growth axis on the energy levels of the one-electron non-uniform quantum dot. It is shown that both the structural non-uniformity and an increase in the thickness of the wetting layer quench the Aharonov-Bohm oscillations of the lower energy levels.
Keywords: electronic properties, quantum rings, volcano shaped, wetting layer
Procedia PDF Downloads 386
10012 Field-Observed Thermal Fractures during Reinjection and Its Numerical Simulation
Authors: Wen Luo, Phil J. Vardon, Anne-Catherine Dieudonne
Abstract:
One key process that partly controls the success of geothermal projects is fluid reinjection, which helps in dealing with wastewater, maintaining reservoir pressure, and supplying the heat-exchange medium. Thus, sustaining injectivity is of great importance for the efficiency and sustainability of geothermal production. However, injectivity is sensitive to the reinjection process: field experience has shown that injectivity can be either damaged or improved. In this paper, the focus is on how injectivity is improved. Since the injection pressure is far below the formation fracture pressure, hydraulic fracturing cannot be the mechanism contributing to the increase in injectivity. Instead, thermal stimulation has been identified as the main contributor. For low-enthalpy geothermal reservoirs, which are not fracture-controlled, thermal fracturing, rather than thermal shearing, is expected to be the mechanism for increasing injectivity. In this paper, field data from sedimentary low-enthalpy geothermal reservoirs in the Netherlands were analysed to show the occurrence of thermal fracturing due to cooling shock during reinjection. Injection data were collected and compared to show the effects of the thermal fractures on injectivity. Then, a thermo-hydro-mechanical (THM) model of the near-field formation was developed and solved by the finite element method to simulate the observed thermal fractures. It was compared with the HM model, decomposed from the THM model, to illustrate the thermal effects on thermal fracturing. Finally, the effects of operational parameters, i.e., injection temperature and pressure, on the changes in injectivity were studied on the basis of the THM model. The field-data analysis and simulation results illustrate that thermal fracturing occurred during reinjection and contributed to the increase in injectivity.
The injection temperature was identified as a key parameter contributing to thermal fracturing.
Keywords: injectivity, reinjection, thermal fracturing, thermo-hydro-mechanical model
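The injectivity comparison mentioned in the abstract is commonly based on an injectivity index, flow rate divided by the pressure difference driving injection. The sketch below uses invented numbers, not the Dutch field data, purely to show the comparison.

```python
# Minimal sketch of the injectivity index used to compare injection data
# before and after thermal stimulation; all values are illustrative.
def injectivity_index(rate_m3_per_h, p_injection_bar, p_reservoir_bar):
    """II = flow rate / (injection pressure - reservoir pressure)."""
    return rate_m3_per_h / (p_injection_bar - p_reservoir_bar)

before = injectivity_index(150.0, 85.0, 60.0)   # early in reinjection
after = injectivity_index(180.0, 80.0, 60.0)    # after prolonged cold injection
print(before, after)
```

A rising index at constant or falling injection pressure is the field signature that points to stimulation, here attributed to thermal fracturing.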
Procedia PDF Downloads 217
10011 Social Business Evaluation in Brazil: Analysis of Entrepreneurship and Investor Practices
Authors: Erica Siqueira, Adriana Bin, Rachel Stefanuto
Abstract:
This paper aims to identify and discuss the impact and results of ex-ante, mid-term, and ex-post evaluation initiatives in Brazilian social enterprises from the point of view of entrepreneurs and investors, highlighting the processes involved in these activities and their aftereffects. The study was conducted using a descriptive, primarily qualitative methodology. A multiple-case design was used: semi-structured interviews were conducted with ten entrepreneurs in the (i) social finance, (ii) education, (iii) health, (iv) citizenship, and (v) green tech fields, as well as with three representatives of impact investors in the (i) venture capital, (ii) loan, and (iii) equity interest areas. Convenience (non-probabilistic) sampling was adopted to select both the businesses and the investors, who contributed voluntarily to the research. Evaluation is still incipient in most of the studied businesses. Some stand out by adopting well-known methodologies such as the Global Impact Investing Rating System (GIIRS), but they still have much to improve in several respects. Most of these enterprises rely on non-experimental research conducted by their own employees, which is ordinarily not regarded as the 'gold standard' by some authors in the area. Nevertheless, from the entrepreneurs' point of view, most of them include such routines to some extent in their day-to-day activities, despite the general difficulties of the business. In turn, the investors give no overall directions for establishing evaluation initiatives in the enterprises they are funding; there is a mechanism of trust, and this is usually considered enough to prove the impact to all stakeholders. The work concludes that there is a large gap between what the literature states as best practice for these businesses and what the enterprises really do.
Evaluation initiatives should be included, to some extent, in all such enterprises in order to confirm the social impact they claim to realize. We recommend developing and adopting more flexible evaluation mechanisms that consider the complexity of these businesses' routines. The reflections of the research also suggest important implications for the field of social enterprises, whose practices are far from what the theory preaches. The research highlights a risk to the legitimacy of enterprises that identify themselves as having 'social impact', sometimes without proper proof based on causality data. This makes the field of social entrepreneurship fragile and susceptible to questioning, weakening the ecosystem as a whole. The top priorities of these enterprises must therefore be handled together with results- and impact-measurement activities. Likewise, further investigations are recommended that consider the trade-offs between impact and profit. In addition, gender, entrepreneurs' motivations for calling themselves social enterprises, and the possible unintended consequences of these businesses should also be investigated.
Keywords: evaluation practices, impact, results, social enterprise, social entrepreneurship ecosystem
Procedia PDF Downloads 119
10010 Abilitest Battery: Presentation of Tests and Psychometric Properties
Authors: Sylwia Sumińska, Łukasz Kapica, Grzegorz Szczepański
Abstract:
Introduction: Cognitive skills are a crucial part of everyday functioning. They include perception, attention, language, memory, executive functions, and higher cognitive skills. With the aging of societies, an increasing percentage of people experience a decline in cognitive skills, and cognitive skills affect work performance. An appropriate diagnosis of a worker's cognitive skills reduces the risk of errors and accidents at work, which is especially important for senior workers. The study aimed to prepare new cognitive tests for adults aged 20-60 and to assess their psychometric properties. The project responds to the need for reliable and accurate methods of assessing cognitive performance. Computer tests were developed to assess psychomotor performance, attention, and working memory. Method: Two hundred eighty people aged 20-60 will participate in the study, in four age groups. Inclusion criteria were: no subjective cognitive impairment and no history of severe head injuries, chronic diseases, or psychiatric and neurological diseases. The research will be conducted from February to June 2022. Cognitive tests: 1) measurement of psychomotor performance: reaction time, reaction time with a selective attention component; 2) measurement of sustained attention: visual search (dots), visual search (numbers); 3) measurement of working memory: remembering words, remembering letters. To assess validity and reliability, subjects will perform tests from the Vienna Test System, i.e., the 'Reaction Test' (reaction time), 'Signal Detection' (sustained attention), and 'Corsi Block-Tapping Test' (working memory), as well as the Perception and Attention Test (TUS), the Colour Trails Test (CTT), and Digit Span, a subtest of the Wechsler Adult Intelligence Scale. Eighty people will be invited to a session after three months to assess consistency over time. Results: As the research is ongoing, detailed results from the 280 participants will be shown at the conference, separately for each age group.
The results of the correlation analysis with the Vienna Test System will be demonstrated as well.
Keywords: aging, attention, cognitive skills, cognitive tests, psychomotor performance, working memory
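Reliability analyses like those planned above typically report Cronbach's alpha for internal consistency. A minimal sketch of the statistic, with fabricated scores purely for illustration:

```python
import numpy as np

# Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / variance of totals).
def cronbach_alpha(scores):
    """scores: (subjects x items) matrix of test scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                               # number of items
    item_vars = scores.var(axis=0, ddof=1).sum()      # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)        # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

scores = [[4, 5, 4], [2, 3, 2], [5, 5, 4], [3, 3, 3]]  # 4 subjects, 3 items
print(round(cronbach_alpha(scores), 2))
```

Values above roughly 0.7-0.8 are conventionally taken as acceptable internal consistency, which is the standard against which a new battery's scales are judged.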
Procedia PDF Downloads 105
10009 A Micro-Scale Electromechanical System Micro-Sensor Resonator Based on UNO-Microcontroller for Low Magnetic Field Detection
Authors: Waddah Abdelbagi Talha, Mohammed Abdullah Elmaleeh, John Ojur Dennis
Abstract:
This paper focuses on the simulation and implementation of a resonator micro-sensor for low-magnetic-field sensing based on a U-shaped cantilever in a piezoresistive configuration, which operates on the Lorentz-force physical phenomenon. The resonance frequency is an important parameter: it corresponds to the highest response and sensitivity in the frequency domain (frequency response) of any vibrating micro-electromechanical system (MEMS) device, and it is important for determining the direction of the detected magnetic field. The deflection of the cantilever is considered in vibration mode at different frequencies in the range of 0 Hz to 7000 Hz for the purpose of observing the frequency response. A simple electronic circuit based on polysilicon piezoresistors in a Wheatstone bridge configuration is used to transduce the response of the cantilever into electrical measurements at various voltages. An Arduino microcontroller program and the PROTEUS electronics software are used to analyze the output signals from the sensor. The highest output voltage amplitude, of about 4.7 mV, is observed at about 3 kHz, indicating the highest sensitivity, which can be called the resonant sensitivity. Based on the resonant frequency value, the mode of vibration is determined (up-down vibration), and from that, the vector of the magnetic field is also determined.
Keywords: resonant frequency, sensitivity, Wheatstone bridge, UNO-microcontroller
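Locating the resonance from a frequency sweep, as described above, can be sketched as follows. The Lorentzian line shape and its parameters are assumptions for illustration, loosely anchored to the reported peak (~4.7 mV near 3 kHz), not the device's measured response.

```python
import numpy as np

# Sketch: find the resonant frequency from a simulated sweep of the
# Wheatstone-bridge output over the 0-7000 Hz range discussed above.
f = np.linspace(0, 7000, 701)            # swept frequency grid, Hz
f0, width, peak_mv = 3000.0, 400.0, 4.7  # assumed resonance parameters
response = peak_mv / (1 + ((f - f0) / width) ** 2)   # Lorentzian line shape

f_res = f[np.argmax(response)]           # resonant frequency = peak location
sensitivity = response.max()             # highest output amplitude, mV
print(f_res, sensitivity)
```

On the real sensor the `response` array would come from the bridge-output samples read by the microcontroller, but the peak-picking step is the same.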
Procedia PDF Downloads 127
10008 An Investigation of the Quantitative Correlation between Urban Spatial Morphology Indicators and Block Wind Environment
Authors: Di Wei, Xing Hu, Yangjun Chen, Baofeng Li, Hong Chen
Abstract:
To achieve the research purpose of guiding the spatial-morphology design of blocks through indicators so as to obtain a good wind environment, it is necessary to find the most suitable type and value range of each urban spatial morphology indicator. At present, most of the relevant research is based on numerical simulation of idealized block shapes and rarely reports results based on complex actual block types. Therefore, this paper first makes theoretical conjectures about the main factors influencing the indicators' effectiveness by analyzing the physical significance and formulation principle of each indicator. These conjectures are then verified by field wind-environment measurement and statistical analysis, indicating that porosity (P₀) can be used as an important indicator to guide the design of the block wind environment in the case of deep street canyons, while frontal area density (λF) can be used as a supplement in the case of shallow street canyons with no height difference. Finally, computational fluid dynamics (CFD) is used to quantify the impact of block height difference and street-canyon depth on λF and P₀, finding suitable types and value ranges for λF and P₀. This paper provides a feasible wind-environment index system for urban designers.
Keywords: urban spatial morphology indicator, urban microclimate, computational fluid dynamics, block ventilation, correlation analysis
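The two indicators named above have simple standard definitions, which can be sketched directly. The building dimensions below are invented toy values, not the study's measured blocks.

```python
# Sketch of the two morphology indicators discussed above.
def frontal_area_density(frontal_areas_m2, site_area_m2):
    """lambda_F: total building frontal area facing the wind / site area."""
    return sum(frontal_areas_m2) / site_area_m2

def porosity(open_volume_m3, total_volume_m3):
    """P0: open (ventilable) volume of the block / total block volume."""
    return open_volume_m3 / total_volume_m3

lam_f = frontal_area_density([400.0, 250.0, 350.0], 10000.0)
p0 = porosity(60000.0, 100000.0)
print(lam_f, p0)
```

Lower λF and higher P₀ both indicate less blockage of the approaching wind, which is why the paper treats them as design levers for block ventilation.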
Procedia PDF Downloads 137
10007 Performance of an Automotive Engine Running on Gasoline-Condensate Blends
Authors: Md. Ehsan, Cyrus Ashok Arupratan Atis
Abstract:
Significantly lower cost, bulk availability, the absence of identification color additives, and the relative ease of mixing with fuels have made gas-field condensates a lucrative option as an adulterant for gasoline in Bangladesh. Since widespread adulteration of fuels with gas-field condensates is a problem existing mainly in developing countries such as Bangladesh and Nigeria, research on the effects of such fuel adulteration is very limited. Because the properties of gas-field condensates vary widely depending on geographical location, studies need to be based on local condensate feeds. This study quantitatively evaluates the effects of blending gas-field condensates with gasoline (octane) in terms of fuel properties, engine performance, and exhaust emissions. Condensate samples collected from the Kailashtila gas field were blended with octane, ranging from 30% to 75% by volume; however, for blends with more than 60% condensate, cold starting of the engine became difficult. Investigation revealed that the condensate samples had significantly higher distillation temperatures than octane, but were not far different in terms of heating value and carbon residue. Engine tests showed the Kailashtila blends performing quite similarly to octane in terms of power and thermal efficiency, and no noticeable knocking was observed in the in-cylinder pressure traces. For all the gasoline-condensate blends, the test engine ran with a relatively leaner air-fuel mixture, delivering slightly lower CO emissions, while HC and NOx emissions were similar to those with octane. Road trials of a test vehicle in real traffic conditions and on a standard gradient using a 50% (v/v) gasoline-condensate blend were also carried out. The test vehicle did not exhibit any noticeable difference in drivability compared to octane.
Keywords: condensates, engine performance, fuel adulteration, gasoline-condensate blends
Procedia PDF Downloads 251
10006 Long-Term Exposure Assessments for Cooking Workers Exposed to Polycyclic Aromatic Hydrocarbons and Aldehydes Contained in Cooking Fumes
Authors: Chun-Yu Chen, Kua-Rong Wu, Yu-Cheng Chen, Perng-Jy Tsai
Abstract:
Cooking fumes are known to contain polycyclic aromatic hydrocarbons (PAHs) and aldehydes, some of which have been proven carcinogenic or possibly carcinogenic to humans. Considering their chronic health effects, long-term exposure data are required for assessing cooking workers' lifetime health risks. Previous exposure assessment studies, due to both time and cost constraints, were mostly based on cross-sectional data. Therefore, establishing long-term exposure data has become an important issue for conducting health risk assessments for cooking workers, and an approach for doing so is proposed in this study. Here, the generation rates of both PAHs and aldehydes from a cooking process were determined by placing a sampling train exactly under the exhaust fan under a total-enclosure condition and under the normal operating condition, respectively. Subtracting the concentration collected under the latter condition (representing the hood-collected concentration) from that under the former (representing the total emitted concentration), the fugitive emitted concentration was determined. The above data were further converted to generation rates based on the flow rates specified for the exhaust fan. The determinations of the above generation rates were conducted in a testing chamber with a selected cooking process (deep-frying chicken nuggets in 3 L of peanut oil at 200°C). The sampling train installed under the exhaust fan consisted of an IOM inhalable sampler with a glass fiber filter for collecting particle-phase PAHs, followed by an XAD-2 tube for gas-phase PAHs. The same arrangement was used to sample aldehydes, but with a DNPH-pre-coated filter followed by a 2,4-DNPH cartridge for collecting particle-phase and gas-phase aldehydes, respectively. PAH and aldehyde samples were analyzed by GC/MS-MS (Agilent 7890B) and HPLC-UV (HITACHI L-7100), respectively.
The obtained generation rates of both PAHs and aldehydes were applied to a near-field/far-field exposure model to estimate the exposures of cooks (the estimated near-field concentration) and helpers (the estimated far-field concentration). For validation, PAH and aldehyde samples were collected simultaneously, using the same sampling train, at both near-field and far-field sites in the testing chamber. The sampling results, together with a mixed-effect model, were used to calibrate the estimated near-field/far-field exposures. In the present study, the obtained emission rates were further converted to emission factors for both PAHs and aldehydes according to the amount of cooking oil consumed. Applying long-term records of cooking-oil consumption, the emission rates for both PAHs and aldehydes were determined, and long-term exposure databanks for cooks (the estimated near-field concentration) and helpers (the estimated far-field concentration) were then established. The results show that the proposed approach was adequate for determining the generation rates of both PAHs and aldehydes under various fan exhaust flow rates. The estimated near-field/far-field exposures, though significantly different from those obtained in the field, could be calibrated using the mixed-effect model. Finally, the established long-term databank could provide a useful basis for conducting long-term exposure assessments for cooking workers exposed to PAHs and aldehydes.
Keywords: aldehydes, cooking oil fumes, long-term exposure assessment, modeling, polycyclic aromatic hydrocarbons (PAHs)
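The near-field/far-field estimation step above follows the standard steady-state two-box model: the far field sees the emission diluted by room ventilation, while the near field adds a local term governed by the air exchange between the two zones. The emission rate and airflows below are illustrative numbers, not the chamber data.

```python
# Steady-state near-field/far-field (two-box) exposure model sketch,
# used to estimate cook (near-field) and helper (far-field) exposures.
def nf_ff_steady_state(G_mg_min, Q_m3_min, beta_m3_min):
    """G: emission rate; Q: room ventilation; beta: NF/FF exchange airflow."""
    c_ff = G_mg_min / Q_m3_min               # far-field concentration, mg/m3
    c_nf = c_ff + G_mg_min / beta_m3_min     # near field adds the local term
    return c_nf, c_ff

c_nf, c_ff = nf_ff_steady_state(G_mg_min=2.0, Q_m3_min=20.0, beta_m3_min=5.0)
print(c_nf, c_ff)
```

As expected, the near-field concentration (the cook's exposure) exceeds the far-field one (the helper's), and both scale linearly with the measured generation rate G, which is why calibrating G against chamber measurements is the critical step.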
Procedia PDF Downloads 142
10005 Simulation Analysis and Control of the Temperature Field in an Induction Furnace Based on Various Parameters
Authors: Sohaibullah Zarghoon, Syed Yousaf, Cyril Belavy, Stanislav Duris, Samuel Emebu, Radek Matusu
Abstract:
Induction heating is extensively employed in industrial furnaces due to its swift response and high energy efficiency. Designing and optimising these furnaces necessitates the use of computer-aided simulation. This study aims to develop an accurate temperature-field model for a rectangular steel billet in an induction furnace by leveraging various parameters in the COMSOL Multiphysics software. The simulation analysis incorporated the temperature dynamics, considering the skin depth as well as both temperature-dependent and constant parameters of the steel billet. The resulting data-driven model was transformed into a state-space model using MATLAB's System Identification Toolbox for the purpose of designing a linear quadratic regulator (LQR). This controller was successfully implemented to regulate the core temperature of the billet from 1000°C to 1200°C, treating the furnace as a distributed-parameter system.
Keywords: induction heating, LQR controller, skin depth, temperature field
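The design path above (identified state-space model, then LQR) can be sketched on a toy scalar thermal model x[k+1] = a·x[k] + b·u[k], with x the deviation of core temperature from the 1200°C setpoint. The coefficients and weights below are assumptions, not the identified model from the study; the discrete Riccati equation is solved by simple fixed-point iteration.

```python
# LQR sketch on a first-order billet-temperature model (all values assumed).
a, b = 0.98, 0.05          # discrete-time thermal dynamics
q, r = 1.0, 0.1            # state and input weights

# Solve the scalar discrete algebraic Riccati equation by iteration:
# p = q + a*p*a - (a*p*b)^2 / (r + b*p*b)
p = q
for _ in range(500):
    p = q + a * p * a - (a * p * b) ** 2 / (r + b * p * b)
k_gain = (b * p * a) / (r + b * p * b)   # optimal state-feedback gain

# Regulate the deviation from the 1200 degC setpoint under u = -k_gain * x.
x = 1000.0 - 1200.0        # initial error: billet starts at 1000 degC
for _ in range(200):
    x = (a - b * k_gain) * x
print(k_gain, abs(x))      # gain and the near-zero residual error
```

In the study the same structure applies with matrices from the System Identification Toolbox in place of the scalars a, b, q, r.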
Procedia PDF Downloads 41
10004 Guided Energy Theory of a Particle: Answered Questions Arise from Quantum Foundation
Authors: Desmond Agbolade Ademola
Abstract:
This work aims to introduce a theory, called the Guided Energy Theory of a particle, that answers questions arising from quantum foundations, quantum mechanics theory, and its interpretations, such as: What is the nature of the wavefunction? Is the mathematical formalism of the wavefunction correct? Does the wavefunction collapse during measurement? Do quantum physical entanglement and the many-worlds interpretation really exist? In addition, is there uncertainty in the physical reality of nature, as concluded in quantum theory? We have been able to show, by the fundamental analysis presented in this work, that the way quantum mechanics theory and its interpretations describe nature is not correlated with physical reality, because we discovered, amongst other things, that: (1) The Guided Energy Theory of a particle fundamentally provides a complete, physically observable series of quantized measurements of a particle's momentum, force, energy, etc., at a given distance and time. In contrast, the quantum mechanical wavefunction describes nature as having inherently probabilistic and indeterministic physical quantities, resulting in unobservable physical quantities that lead to the many-worlds interpretation. (2) The Guided Energy Theory of a particle fundamentally predicts that it is mathematically possible to determine precise quantized measurements of the position and momentum of a particle simultaneously, because there is no uncertainty in nature; nature naturally guides itself against uncertainty. This is contrary to the conclusion in quantum mechanics theory that it is mathematically impossible to determine the position and the momentum of a particle simultaneously. Furthermore, we have been able to show by this theory that it is mathematically possible to determine quantized measurements of the force acting on a particle simultaneously, which is not possible on the premises of quantum mechanics theory.
(3) It is evident from our theory that guided energy does not collapse; it only describes the lopsided nature of a particle's behavior in motion. This offers insight into the gradual process of engagement (convergence) and disengagement (divergence) of guided-energy holders, which further illustrates how wave-like behavior returns to particle-like behavior and how particle-like behavior returns to wave-like behavior, and proves that a particle's behavior in motion is oscillatory in nature. The mathematical formalism of the Guided Energy Theory shows that nature is certain, whereas the mathematical formalism of quantum mechanics theory shows that nature is absolutely probabilistic. In addition, the nature of the wavefunction is the guided energy of the wave. In conclusion, the fundamental mathematical formalism of quantum mechanics theory is wrong.
Keywords: momentum, physical entanglement, wavefunction, uncertainty
Procedia PDF Downloads 295
10003 Detecting Natural Fractures and Modeling Them to Optimize Field Development Plan in Libyan Deep Sandstone Reservoir (Case Study)
Authors: Tarek Duzan
Abstract:
Fractures are a fundamental property of most reservoirs. Despite their abundance, they remain difficult to detect and quantify. The most effective characterization of fractured reservoirs is accomplished by integrating geological, geophysical, and engineering data. Detecting fractures and defining their relative contribution is crucial in the early stages of exploration and later in the production of any field, because fractures can completely change our thinking, efforts, and planning for producing a specific field properly. From the structural point of view, all reservoirs are fractured to some extent. The North Gialo field is thought to be a naturally fractured reservoir to some extent. Historically, naturally fractured reservoirs are more complicated in terms of exploration and production efforts, and most geologists tend to deny the presence of fractures as an effective variable. Our aim in this paper is to determine the degree of fracturing so that our evaluation and planning can be done properly and efficiently from day one. The challenging part of this field is that there are not enough data and no straightforward well tests that could make us completely comfortable with the idea of fracturing; however, we cannot ignore the fractures completely. Logging images, the available well tests, and limited core studies are our tools at this stage to evaluate, model, and predict possible fracture effects in this reservoir. The aims of this study are both fundamental and practical: to improve the prediction and diagnosis of natural-fracture attributes in the N. Gialo hydrocarbon reservoirs and to accurately simulate their influence on production. Moreover, the production of this field follows a two-phase plan: self-depletion of oil and then a gas injection period for pressure maintenance and for increasing the ultimate recovery factor. Therefore, a good understanding of the fracture network is essential before proceeding with the targeted plan.
New analytical methods will lead to a more realistic characterization of fractured and faulted reservoir rocks. These methods will produce data that can enhance well-test and seismic interpretations and that can readily be used in reservoir simulators.
Keywords: natural fracture, sandstone reservoir, geological, geophysical, and engineering data
Procedia PDF Downloads 93