Search results for: match filter (MF)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1277

197 Design, Simulation and Construction of 2.4GHz Microstrip Patch Antenna for Improved Wi-Fi Reception

Authors: Gabriel Ugalahi, Dominic S. Nyitamen

Abstract:

This project seeks to improve Wi-Fi reception by utilizing the properties of directional microstrip patch antennae. Where Wi-Fi signals are densely populated, several sources transmitting on the same frequency band, and indeed the same channel, interfere with one another. The time it takes for a request to be received, resolved and responded to between a user and the resource provider increases considerably. By deploying a directional patch antenna with a narrow bandwidth, the range of frequencies received is reduced, which helps limit the reception of signals from unwanted sources. A rectangular microstrip patch antenna (RMPA) is designed to operate in the Industrial, Scientific and Medical (ISM) band (2.4GHz) commonly used in Wi-Fi network deployment. The dimensions of the antenna are calculated, and these dimensions are used to generate a model in Advanced Design System (ADS), a microwave simulator. Simulation results are then analyzed and the necessary optimization is carried out to further enhance the radiation quality and achieve the desired results. Impedance matching at 50Ω is obtained by using the inset feed method. The final antenna dimensions obtained after simulation and optimization are then used for practical construction on an FR-4 double-sided copper-clad printed circuit board (PCB) through a chemical etching process using ferric chloride (FeCl3). Simulation results show an RMPA operating at a centre frequency of 2.4GHz with a bandwidth of 40MHz. A voltage standing wave ratio (VSWR) of 1.0725 is recorded at a return loss of -29.112dB at the input port, showing an appreciable impedance match to a 50Ω source. In addition, a gain of 3.23dBi and a directivity of 6.4dBi are observed during far-field analysis. On deployment, signal reception from wireless devices is improved due to the antenna gain. A test source with a received signal strength indication (RSSI) of -80dBm without the antenna installed on the receiver improved to an RSSI of -61dBm with it. In addition, the directional radiation property of the RMPA prioritizes signals by pointing in the direction of a preferred signal source, thus reducing interference from undesired signal sources. This was observed during testing, as rotation of the antenna on its axis resulted in signal gain in front of the patch and fading of signals away from the front.
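
For readers who want a starting point for the dimension-calculation step described above, the sketch below applies the standard transmission-line-model design equations for a rectangular patch. The substrate values (εr = 4.4, h = 1.6 mm) are typical FR-4 assumptions, not figures taken from the paper, and the result is only a first estimate before the kind of ADS optimization the authors describe.

```python
import math

c = 3e8          # speed of light, m/s
f0 = 2.4e9       # target resonant frequency, Hz
er = 4.4         # FR-4 relative permittivity (assumed; not stated in the abstract)
h = 1.6e-3       # substrate height, m (typical FR-4 board, assumed)

# Patch width for efficient radiation
W = c / (2 * f0) * math.sqrt(2 / (er + 1))

# Effective dielectric constant of the microstrip line
e_eff = (er + 1) / 2 + (er - 1) / 2 * (1 + 12 * h / W) ** -0.5

# Length extension caused by fringing fields
dL = 0.412 * h * ((e_eff + 0.3) * (W / h + 0.264)) / ((e_eff - 0.258) * (W / h + 0.8))

# Physical patch length
L = c / (2 * f0 * math.sqrt(e_eff)) - 2 * dL

print(f"W = {W*1e3:.2f} mm, L = {L*1e3:.2f} mm (starting point before EM optimization)")
```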

Keywords: advanced design system (ADS), inset feed, received signal strength indicator (RSSI), rectangular microstrip patch antenna (RMPA), voltage standing wave ratio (VSWR), wireless fidelity (Wi-Fi)

Procedia PDF Downloads 210
196 Algorithm for Improved Tree Counting and Detection through Adaptive Machine Learning Approach with the Integration of Watershed Transformation and Local Maxima Analysis

Authors: Jigg Pelayo, Ricardo Villar

Abstract:

The Philippines has long been considered a valuable producer of high-value crops globally. The country's employment and economy are dependent on agriculture, increasing the demand for efficient agricultural mechanisms. Remote sensing and geographic information technology have proven to effectively provide applications for precision agriculture through image-processing techniques, given the development of aerial scanning technology in the country. Accurate information concerning the spatial correlation within the field is very important for precision farming, especially of high-value crops. The availability of height information and high-spatial-resolution images obtained from aerial scanning, together with the development of new image analysis methods, offers a relevant contribution to precision agriculture techniques and applications. In this study, an algorithm was developed and implemented to detect and count high-value crops simultaneously through adaptive scaling of a support vector machine (SVM) algorithm within an object-oriented approach that combines watershed transformation and a local maxima filter to enhance tree counting and detection. The methodology is compared to cutting-edge template matching procedures to demonstrate its effectiveness on a demanding tree counting, recognition and delineation problem. Since common data and image processing techniques are utilized, the method can easily be implemented in production processes covering large agricultural areas. The algorithm was tested on high-value crops like palm, mango and coconut located in Misamis Oriental, Philippines, showing good performance, in particular for young adult and adult trees, with rates significantly above 90%. The results support resource inventories and database updating, allowing for the reduction of field work and manual interpretation tasks.
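
A minimal sketch of the watershed-plus-local-maxima stage described above, assuming a LiDAR-derived canopy height model (CHM) as input. The parameter values and the skimage-based pipeline are illustrative choices; the paper's adaptive SVM stage is not reproduced here.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def count_trees(chm, min_height=2.0, min_crown_px=5):
    """Detect and count tree crowns in a 2-D canopy height model (CHM)."""
    smoothed = ndi.gaussian_filter(chm, sigma=1.0)   # suppress small canopy noise

    # Local maxima above the height threshold are treated as individual tree tops
    tops = peak_local_max(smoothed, min_distance=min_crown_px,
                          threshold_abs=min_height)
    markers = np.zeros_like(smoothed, dtype=int)
    markers[tuple(tops.T)] = np.arange(1, len(tops) + 1)

    # Watershed on the inverted CHM grows each tree-top marker into a crown segment
    mask = smoothed > min_height                     # ignore ground / low vegetation
    crowns = watershed(-smoothed, markers, mask=mask)
    return len(tops), crowns
```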

Keywords: high value crop, LiDAR, OBIA, precision agriculture

Procedia PDF Downloads 392
195 An Artificially Intelligent Teaching-Agent to Enhance Learning Interactions in Virtual Settings

Authors: Abdulwakeel B. Raji

Abstract:

This paper introduces the concept of an intelligent virtual learning environment that involves communication between learners and an artificially intelligent teaching agent in an attempt to replicate classroom learning interactions. The benefit of this technology over current e-learning practices is that it creates a virtual classroom where real-time adaptive learning interactions are made possible. This is a move away from the static learning practices currently adopted by e-learning systems. Over the years, artificial intelligence has been applied to various fields, including but not limited to medicine, military applications, psychology and marketing. The purpose of e-learning applications is to ensure users are able to learn outside of the classroom, but a major limitation has been the inability to fully replicate classroom interactions between teacher and students. This study used comparative surveys to gain information about and understanding of current learning practices in Nigerian universities and how these practices compare to the use of the developed e-learning system. The study was conducted by attending several lectures and noting the interactions between lecturers and tutors; afterwards, software was developed that deploys an artificially intelligent teaching agent alongside an e-learning system to enhance the user learning experience and attempt to create learning interactions similar to those found in classroom and lecture hall settings. Dialogflow was used to implement the teaching agent, developed using JSON, which serves as a virtual teacher. Course content was created using HTML, CSS, PHP and JavaScript as a web-based application. This technology can run on handheld devices and Google-based home technologies to give learners access to the teaching agent at any time. It also implements definite clause grammars and natural language processing to match user inputs and requests with defined rules to replicate learning interactions, as the sketch below illustrates. The technology covers familiar classroom scenarios such as answering users' questions, asking 'do you understand' at regular intervals and answering subsequent requests, and taking advanced user queries to give feedback at other times. The software uses deep learning techniques to learn user interactions and patterns in order to subsequently enhance the user learning experience. System testing was undertaken by undergraduate students in the UK and Nigeria on the course 'Introduction to Database Development'. Test results and feedback from users show that this study and the developed software are a significant improvement on existing e-learning systems. Further experiments are to be run using the software with different students and more course content.
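
The abstract's exact Dialogflow intents and definite clause grammar rules are not given, so the toy Python matcher below only illustrates the rule-to-response mapping idea; all patterns and replies are invented for illustration.

```python
import re

# Illustrative only: a toy rule table standing in for the paper's Dialogflow/DCG rules.
RULES = [
    (re.compile(r"\bwhat is (a |an )?(?P<term>[\w\s]+)\??", re.I),
     lambda m: f"Definition lookup for: {m.group('term').strip()}"),
    (re.compile(r"\b(yes|i understand)\b", re.I),
     lambda m: "Great - moving on to the next topic."),
    (re.compile(r"\b(no|i don't understand)\b", re.I),
     lambda m: "Let's go over that section again with an example."),
]

def reply(user_input):
    """Return the response of the first rule whose pattern matches the input."""
    for pattern, action in RULES:
        match = pattern.search(user_input)
        if match:
            return action(match)
    return "Could you rephrase that?"

print(reply("What is a primary key?"))
print(reply("No, I don't understand"))
```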

Keywords: virtual learning, natural language processing, definite clause grammars, deep learning, artificial intelligence

Procedia PDF Downloads 126
194 The Affordances and Challenges of Online Learning and Teaching for Secondary School Students

Authors: Hahido Samaras

Abstract:

In many cases, especially with the pandemic playing a major role in fast-tracking the growth of the digital industry, online learning has become a necessity or even a standard educational model, reliably overcoming barriers such as location, time and cost, and it is frequently combined with a face-to-face format (e.g., in blended learning). This being the case, students in many parts of the world, as well as their parents, will increasingly need to become aware of the pros and cons of online versus traditional courses. This fast-growing mode of learning, accelerated during the years of the pandemic, presents an abundance of exciting options, especially suited to the many secondary school students in remote parts of the world where access to stimulating educational settings and opportunities for a variety of learning alternatives is scarce, adding advantages such as flexibility, affordability, engagement, flow and personalization of the learning experience. However, online learning can also present several challenges, such as a lack of student motivation, reduced social interaction in natural settings, digital literacy demands, and technical issues, to name a few. Therefore, educational researchers will need to conduct further studies focusing on the benefits and weaknesses of online versus traditional learning, while instructional designers propose ways of enhancing student motivation and engagement in virtual environments. Similarly, teachers will be required to become more and more technologically capable, while also developing their knowledge of their students' particular characteristics and needs so as to match them with the affordances the technology offers. And, of course, schools, education programs, and policymakers will have to invest in powerful tools and advanced courses for online instruction. By developing digital courses that incorporate intentional opportunities for community-building and interaction in the learning environment, and by taking care to include built-in design principles and strategies that align learning outcomes with learning assignments, activities, and assessment practices, rewarding academic experiences can result for all students. This paper raises various issues regarding the effectiveness of online learning for students by reviewing a large number of research studies on the usefulness and impact of online learning following the COVID-19-induced digital education shift. It also discusses what students, teachers, decision-makers, and parents have reported about this mode of learning to date. Best practices are proposed for parties involved in the development of online learning materials, particularly for secondary school students, as educators and developers need to be increasingly concerned about the impact of virtual learning environments on student learning and wellbeing.

Keywords: blended learning, online learning, secondary schools, virtual environments

Procedia PDF Downloads 91
193 Price Prediction Line, Investment Signals and Limit Conditions Applied for the German Financial Market

Authors: Cristian Păuna

Abstract:

In the first decades of the 21st century, in the electronic trading environment, algorithmic capital investments became the primary tool for making a profit by speculation in financial markets. A significant number of traders and private or institutional investors participate in the capital markets every day using automated algorithms. Autonomous trading software is today a considerable part of the business intelligence system of any modern financial activity. Trading decisions and orders are made automatically by computers using different mathematical models. This paper presents one of these models, called the Price Prediction Line. A mathematical algorithm is revealed for building a reliable trend line, which is the basis for limit conditions and automated investment signals and the core of a computerized investment system. The paper shows how to apply these tools to generate entry and exit investment signals and limit conditions that form a mathematical filter for investment opportunities, and presents the methodology for integrating all of these into automated investment software. The paper also presents trading results obtained for the leading German financial market index with the presented methods, in order to analyze and compare different automated investment algorithms. It was found that a specific mathematical algorithm can be optimized and integrated into an automated trading system with good and sustained results for the leading German market. Investment results are compared in order to qualify the presented model. In conclusion, a 1:6.12 risk-to-reward ratio was obtained by applying the trigonometric method to the DAX Deutscher Aktienindex over a 24-month investment period. These results are superior to those obtained with other similar models, as this paper reveals. The general idea sustained by this paper is that the Price Prediction Line model presented here is a reliable capital investment methodology that can be successfully applied to build an automated investment system with excellent results.
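
The exact trigonometric construction of the Price Prediction Line is not given in the abstract, so the sketch below substitutes a plain least-squares trend line with a deviation-band limit condition, purely to illustrate how a trend line, limit conditions and entry/exit signals fit together; all parameter values are illustrative.

```python
import numpy as np

def trend_line(prices, window=20):
    """Least-squares trend over the last `window` closes (generic stand-in
    for the paper's Price Prediction Line)."""
    y = np.asarray(prices[-window:], dtype=float)
    x = np.arange(window)
    slope, intercept = np.polyfit(x, y, 1)
    return slope, intercept + slope * window   # slope and next-step projection

def signal(prices, window=20, band=0.002):
    slope, projected = trend_line(prices, window)
    last = prices[-1]
    # Limit condition: trade only when price deviates enough from the trend line
    if slope > 0 and last < projected * (1 - band):
        return "BUY"
    if slope < 0 and last > projected * (1 + band):
        return "SELL"
    return "HOLD"
```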

Keywords: algorithmic trading, automated trading systems, high-frequency trading, DAX Deutscher Aktienindex

Procedia PDF Downloads 124
192 FMCW Doppler Radar Measurements with Microstrip Tx-Rx Antennas

Authors: Yusuf Ulaş Kabukçu, Si̇nan Çeli̇k, Onur Salan, Mai̇de Altuntaş, Mert Can Dalkiran, Gökseni̇n Bozdağ, Metehan Bulut, Fati̇h Yaman

Abstract:

This study presents a more compact implementation of the 2.4GHz MIT Coffee Can Doppler Radar for a 2.6GHz operating frequency. The main difference in our prototype is the use of microstrip antennas, which makes it possible to transport the system on a small robotic vehicle. We designed our radar system with two different channels: Tx and Rx. The system mainly consists of a voltage controlled oscillator (VCO) source, low noise amplifiers, microstrip antennas, a splitter, a mixer, a low pass filter, and the necessary RF connectors and cables. The two microstrip antennas, a single element for the transmitter and an array for the receiver channel, were designed, fabricated and verified by experiments. The system has two operation modes: speed detection and range detection. If the operation mode switch is 'off', only a CW signal is transmitted for speed measurement. When the switch is 'on', the CW signal is frequency-modulated and range detection is possible. In speed detection mode, the high-frequency signal (2.6 GHz) is generated by a VCO and then amplified to reach a reasonable level of transmit power. Before the amplified signal is transmitted through a microstrip patch antenna, a splitter is used so that the frequencies of the transmitted and received signals can be compared. Half of the amplified signal (LO) is forwarded to a mixer, which compares the frequencies of the transmitted and received (RF) signals and produces the IF output, in other words the Doppler frequency information. The IF output is then filtered and amplified so that the signal can be processed digitally. The filtered and amplified signal carrying the Doppler frequency is fed into the audio input of a computer. From this data, the Doppler frequency is shown as a speed change in a figure via a Matlab script. According to experimental field measurements, the accuracy of the speed measurement is approximately 90%. In range detection mode, the carrier is frequency-modulated to form an FM chirp. This FM chirp makes it possible to determine the range of the target, since the Doppler frequency measured with CW alone is not enough for range detection. Such an FMCW Doppler radar may be used in border security, since it is capable of both speed and range detection.
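
The two operating modes map onto two textbook relations: the Doppler shift f_d = 2·v·f0/c in CW mode, and the beat frequency f_b = 2·R·B/(c·T) in sawtooth FMCW mode. A minimal sketch follows; the 2.6 GHz carrier is from the paper, while the example numbers are illustrative.

```python
C = 3e8        # speed of light, m/s
F0 = 2.6e9     # carrier frequency of the prototype, Hz

def speed_from_doppler(fd_hz):
    """CW mode: radial speed from the measured Doppler shift f_d = 2*v*f0/c."""
    return fd_hz * C / (2 * F0)

def range_from_beat(fb_hz, bandwidth_hz, chirp_time_s):
    """FMCW mode: target range from the beat frequency f_b = 2*R*B/(c*T)."""
    return fb_hz * C * chirp_time_s / (2 * bandwidth_hz)

# Example: a 173 Hz Doppler shift at 2.6 GHz corresponds to ~10 m/s (36 km/h)
print(speed_from_doppler(173.3))
```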

Keywords: doppler radar, FMCW, range detection, speed detection

Procedia PDF Downloads 387
191 Modelling and Simulation of Hysteresis Current Controlled Single-Phase Grid-Connected Inverter

Authors: Evren Isen

Abstract:

In grid-connected renewable energy systems, the input power is controlled by an AC/DC converter and/or a DC/DC converter, depending on the output voltage of the input source. The power is injected into the DC link, and the DC-link voltage is regulated by the inverter controlling the grid current. Inverter performance matters considerably in grid-connected renewable energy systems if utility standards are to be met. In this paper, the modelling and simulation of a hysteresis current controlled single-phase grid-connected inverter, as utilized in renewable energy systems such as wind and solar systems, are presented. A 2 kW single-phase grid-connected inverter is simulated in Simulink and modeled in a Matlab m-file. Grid current synchronization is obtained by the phase locked loop (PLL) technique in the dq synchronous rotating frame. Although a dq-PLL can be easily implemented in three-phase systems, it is difficult to generate the β component of the grid voltage in a single-phase system, because only the single-phase grid voltage exists. An inverse-Park PLL with a low-pass filter is used to generate the β component for grid angle determination. As the grid current is controlled by the constant-bandwidth hysteresis current control (HCC) technique, the average switching frequency and the variation of the switching frequency over a fundamental period are considered. A total harmonic distortion of 3.56% in the grid current is achieved with a 0.5 A bandwidth. Curves of the average switching frequency and total harmonic distortion for different hysteresis bandwidths are obtained from the m-file model. The average switching frequency is 25.6 kHz, while the switching frequency varies between 14 kHz and 38 kHz within a fundamental period. The average and maximum frequency difference should be considered when selecting the solid-state switching device and designing the driver circuit. Steady-state and dynamic response performances of the inverter, depending on the input power, are presented with waveforms. The control algorithm regulates the DC-link voltage by adjusting the output power.
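
A minimal sketch of the constant-band hysteresis decision described above. The 0.5 A band matches the value reported in the abstract, while the single-function structure is an illustrative simplification of one inverter leg; it also shows why the switching frequency varies over the fundamental period, since switching instants depend on how fast the current error crosses the band.

```python
def hcc_step(i_ref, i_meas, switch_state, band=0.5):
    """Constant-band hysteresis current control decision for one inverter leg.

    band: total hysteresis bandwidth in amperes (0.5 A as in the paper).
    Returns the new switch state: True -> +Vdc applied, False -> -Vdc applied.
    """
    error = i_ref - i_meas
    if error > band / 2:       # current too low -> apply positive voltage
        return True
    if error < -band / 2:      # current too high -> apply negative voltage
        return False
    return switch_state        # inside the band -> keep the previous state
```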

Keywords: grid-connected inverter, hysteresis current control, inverter modelling, single-phase inverter

Procedia PDF Downloads 469
190 The Strategic Gas Aggregator: A Key Legal Intervention in an Evolving Nigerian Natural Gas Sector

Authors: Olanrewaju Aladeitan, Obiageli Phina Anaghara-Uzor

Abstract:

Despite the abundance of natural gas deposits in Nigeria and the immense potential this presents for both domestic and export-oriented revenue, there exists an imbalance in the preference for export as against the development and optimal utilization of natural gas for the domestic industry. Considerable amounts of gas are still being wasted by flaring in the country to this day. Although the government has set in place initiatives to harness gas at the flare and thereby reduce the volumes flared, gas producers would rather direct the gas produced to the export market, whereas gas apportioned to the domestic market is often marred by the low domestic gas price, which is discouraging to the producers. The exported fraction of gas production no doubt yields healthy revenues for the government and an encouraging return on investment for the gas producers, and for this reason export sales remain enticing and preferable to the domestic sale of gas. This export pull, if left unchecked, impacts negatively on the domestic market, which is in no position to match the price at the international markets. The issue of gas price remains critical to the optimal development of the domestic gas industry, in that it forms the basis for producers' investment decisions on the allocation of their scarce resources and on which projects to channel their output to in order to maximize profit. In order, then, to rebalance the domestic industry and streamline the market for gas, the Gas Aggregation Company of Nigeria, also known as the Strategic Aggregator, was proposed under the Nigerian Gas Master Plan of 2008 and then established pursuant to the National Gas Supply and Pricing Regulations of 2008 to implement the domestic gas supply obligation, which focuses on ramping up gas volumes for domestic utilization by mandatorily requiring each gas producer to dedicate a portion of its gas production to domestic utilization before having recourse to the export market. The 2008 Regulations further stipulate penalties in the event of non-compliance. This study, in the main, assesses the adequacy of the legal framework for the Nigerian gas industry, given that the operational laws are structured more for oil than for gas; examines the legal basis for the Strategic Aggregator in the light of the Domestic Gas Supply and Pricing Policy 2008 and the National Domestic Gas Supply and Pricing Regulations 2008; and makes a case for a review of the pivotal role of the Aggregator in the Nigerian gas market. In undertaking this assessment, a doctrinal research methodology was adopted. Findings reveal the reawakening of the Federal Government to the immense potential of its gas industry as a critical sector of its economy and the need for a sustainable domestic natural gas market. A case for reviewing the ownership structure of the Aggregator, so that it comprises a balanced mix of the Federal Government, gas producers and other key stakeholders, becomes all the more imperative in order to ensure the effective implementation of the domestic supply obligations.

Keywords: domestic supply obligations, natural gas, Nigerian gas sector, strategic gas aggregator

Procedia PDF Downloads 208
189 From Primer Generation to Chromosome Identification: A Primer Generation Genotyping Method for Bacterial Identification and Typing

Authors: Wisam H. Benamer, Ehab A. Elfallah, Mohamed A. Elshaari, Farag A. Elshaari

Abstract:

A challenge for laboratories is to provide bacterial identification and antibiotic sensitivity results within a short time. Hence, advancement in the required technology is desirable to improve timing, accuracy and quality. Even with the current advances in methods used for both phenotypic and genotypic identification of bacteria, there is still a need to develop method(s) that enhance the outcome of bacteriology laboratories in accuracy and time. The hypothesis introduced here is based on the assumption that the chromosome of any bacterium contains unique sequences that can be used for its identification and typing. The outcome of a pilot study designed to test this hypothesis is reported in this manuscript. Methods: The complete chromosome sequences of several bacterial species were downloaded to use as search targets for unique sequences. Visual Basic and SQL Server (2014) were used to generate a complete set of 18-base-long primers, a process that started with reverse translation of six randomly chosen amino acids in order to limit the number of generated primers. In addition, software was designed to scan the downloaded chromosomes for similarities using the generated primers, and the resulting hits were classified according to the number of similar chromosomal sequences, i.e., unique or otherwise. Results: All primers that had identical/similar sequences in the selected genome sequence(s) were classified according to the number of hits in the chromosome search. Those that were identical to a single site on a single bacterial chromosome were referred to as unique. On the other hand, most generated primer sequences were identical to multiple sites on a single chromosome or on multiple chromosomes. Following scanning, the generated primers were classified based on their ability to differentiate between medically important bacteria, and the initial results look promising. Conclusion: A simple strategy that starts by generating primers was introduced; the primers were used to screen bacterial genomes for matches. Primer(s) that were uniquely identical to a specific DNA sequence on a specific bacterial chromosome were selected. The identified unique sequences can be used in different molecular diagnostic techniques, possibly to identify bacteria. In addition, a single primer that identifies multiple sites in a single chromosome can be exploited for region or genome identification. Although draft genome sequences of isolates enable high-throughput primer design using an alignment strategy, which enhances diagnostic performance in comparison to traditional molecular assays, in this method the generated primers can be used to identify an organism before the draft sequence is completed. In addition, the generated primers can be used to build a bank for easy access to primers that can be used to identify bacteria.
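
A toy sketch of the two steps described above: reverse-translating a 6-amino-acid seed into 18-base primers, and counting each primer's hits on a chromosome string. The codon table excerpt and the peptide are invented for illustration; the authors' Visual Basic/SQL Server implementation is not reproduced.

```python
from itertools import product

# Codon table excerpt (assumed subset, for illustration only)
CODONS = {
    "M": ["ATG"],
    "W": ["TGG"],
    "F": ["TTT", "TTC"],
    "K": ["AAA", "AAG"],
}

def reverse_translate(peptide):
    """All 18-base primers encoding a 6-residue peptide."""
    choices = [CODONS[aa] for aa in peptide]
    return ["".join(combo) for combo in product(*choices)]

def count_hits(primer, chromosome):
    """Number of sites on a chromosome string matching the primer; 1 == unique."""
    hits, pos = 0, chromosome.find(primer)
    while pos != -1:
        hits += 1
        pos = chromosome.find(primer, pos + 1)
    return hits

primers = reverse_translate("MWFKMW")   # hypothetical 6-amino-acid seed
# unique = [p for p in primers if count_hits(p, genome_string) == 1]
```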

Keywords: bacteria chromosome, bacterial identification, sequence, primer generation

Procedia PDF Downloads 184
188 Dependence of the Photoelectric Exponent on the Source Spectrum of the CT

Authors: Rezvan Ravanfar Haghighi, V. C. Vani, Suresh Perumal, Sabyasachi Chatterjee, Pratik Kumar

Abstract:

The X-ray attenuation coefficient µ(E) of any substance, for energy E, is a sum of the contributions from Compton scattering, µCom(E), and the photoelectric effect, µPh(E). In terms of the electron density (ρe) and the effective atomic number (Zeff), µCom(E) is proportional to ρe·fKN(E), while µPh(E) is proportional to (ρe·Zeff^x)/E^y, with fKN(E) being the Klein-Nishina formula and x and y the photoelectric exponents. By taking the sample's HU at two different excitation voltages (V = V1, V2) of the CT machine, we can solve for X = ρe and Y = ρe·Zeff^x from these two independent equations, as is attempted in DECT inversion. Since µCom(E) and µPh(E) are both energy dependent, the coefficients of inversion also depend on (a) the source spectrum S(E,V) and (b) the detector efficiency D(E) of the CT machine. In the present paper we tabulate these coefficients of inversion for different practical manifestations of S(E,V) and D(E). The HU(V) values from the CT follow ⟨µ(V)⟩ = ⟨µw(V)⟩·[1 + HU(V)/1000], where the subscript 'w' refers to water and the averaging process ⟨…⟩ accounts for the source spectrum S(E,V) and the detector efficiency D(E). Linearity of µ(E) with respect to X and Y implies that (a) ⟨µ(V)⟩ is a linear combination of X and Y and (b) for inversion, X and Y can be written as linear combinations of two independent observations ⟨µ(V1)⟩ and ⟨µ(V2)⟩ with V1 ≠ V2. These coefficients of inversion naturally depend upon S(E,V) and D(E). We numerically investigate this dependence for some practical cases, taking V = 100, 140 kVp, as used for cardiological investigations. The S(E,V) are generated by using the Boone-Seibert source spectrum, superposed on aluminium filters of different thicknesses lAl with 7 mm ≤ lAl ≤ 12 mm, and D(E) is taken to be that of a typical Si[Li] solid-state or GdOS scintillator detector. In the values of X and Y found by using the calculated inversion coefficients, errors are below 2% for data with solutions of glycerol, sucrose and glucose. For low-Zeff materials like propionic acid, Zeff^x is overestimated by 20%, with X within 1%. For high-Zeff materials like KOH, the value of Zeff^x is underestimated by 22%, while the error in X is +15%. These results imply that the source may have additional filtering beyond the aluminium filter specified by the manufacturer. It is also found that the difference between the inversion coefficients for the two types of detectors is negligible: the type of detector does not affect the DECT inversion algorithm used to find the unknown chemical characteristics of the scanned materials, whereas the effect of the source is an important factor in calculating the coefficients of inversion.
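
Once the spectrum-averaged coefficients are known, the inversion itself is a 2×2 linear solve. A minimal sketch follows; the coefficient values in the comment are placeholders, not the tabulated ones from the paper.

```python
import numpy as np

def dect_invert(mu_100, mu_140, A):
    """Solve the 2x2 linear system  mu(V) = a(V)*X + b(V)*Y  for
    X = rho_e and Y = rho_e * Zeff**x, given spectrum-averaged
    coefficients A = [[a(100), b(100)], [a(140), b(140)]].

    The coefficients depend on the source spectrum S(E,V) and the detector
    efficiency D(E), which is exactly the dependence the paper tabulates.
    """
    X, Y = np.linalg.solve(np.asarray(A, dtype=float), [mu_100, mu_140])
    return X, Y, Y / X      # Y/X = Zeff**x

# Hypothetical coefficients, for illustration only:
# X, Y, zeff_x = dect_invert(mu_100, mu_140, [[0.95, 0.12], [0.90, 0.06]])
```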

Keywords: attenuation coefficient, computed tomography, photoelectric effect, source spectrum

Procedia PDF Downloads 391
187 One-Step Synthesis and Characterization of Biodegradable ‘Click-Able’ Polyester Polymer for Biomedical Applications

Authors: Wadha Alqahtani

Abstract:

In recent times, polymers have seen a great surge of interest in the field of medicine, particularly chemotherapeutics. One recent innovation is the conversion of polymeric materials into 'polymeric nanoparticles'. These nanoparticles can be designed and modified to encapsulate and transport drugs selectively to cancer cells, minimizing collateral damage to surrounding healthy tissues and improving patient quality of life. In this study, we synthesized pseudo-branched polyester polymers from bio-based small molecules, including sorbitol, glutaric acid and a propargylic acid derivative, the last of which modifies the polymer to make it 'click-able' with an azide-modified target ligand. The melt polymerization technique was used for this reaction, with the lipase enzyme catalyst NOVO 435, and the reaction was conducted at 90-95 °C for 72 hours. Polymer samples were collected in 24-hour increments for characterization and to monitor reaction progress. The resulting polymer was purified by dissolving in methanol and filtering through filter paper, then characterized via NMR, GPC, FTIR, DSC, TGA and MALDI-TOF. Following characterization, these polymers were converted into a polymeric nanoparticle drug delivery system using the solvent diffusion method, wherein DiI optical dye and the chemotherapeutic drug Taxol can be encapsulated simultaneously. The efficacy of the nanoparticles' apoptotic effects was analyzed in vitro by incubation with prostate cancer (LNCaP) and healthy (CHO) cells. MTT assays and fluorescence microscopy were used to assess cellular uptake and cell viability after 24 hours at 37 °C in a 5% CO2 atmosphere. Results of the assays and fluorescence imaging confirmed that the nanoparticles were successful in both selectively targeting and inducing apoptosis in 80% of the LNCaP cells within 24 hours, without affecting the viability of the CHO cells. These results show the potential of biodegradable polymers as a vehicle for receptor-specific drug delivery and a potential alternative to traditional systemic chemotherapy. Detailed experimental results will be discussed in the e-poster.

Keywords: chemotherapeutic drug, click chemistry, nanoparticle, prostate cancer

Procedia PDF Downloads 106
186 Quantum Mechanics as a Limiting Case of Relativistic Mechanics

Authors: Ahmad Almajid

Abstract:

The idea of unifying quantum mechanics with general relativity is still a dream for many researchers, as physics has only two paths, no more: Einstein's path, which is mainly based on particle mechanics, and the path of Paul Dirac and others, which is based on wave mechanics. The incompatibility of the two approaches is due to the radical difference in the initial assumptions and the mathematical nature of each approach. Logical thinking in modern physics leads us to two problems: (1) in quantum mechanics, despite its success, the problem of measurement and the problem of wave function interpretation remain obscure; (2) in special relativity, despite the success of the equivalence of rest mass and energy, the fact that the energy becomes infinite at the speed of light is contrary to logic, because the speed of light is not infinite, and the mass of the particle is not infinite either. These contradictions arise from the overlap of relativistic and quantum mechanics in the neighborhood of the speed of light, and in order to solve these problems, one must understand well how to move from relativistic mechanics to quantum mechanics, or rather, how to unify them in a way different from Dirac's method, in order to go along with God or Nature, since, as Einstein said, 'God doesn't play dice.' From De Broglie's hypothesis of wave-particle duality, Léon Brillouin's definition of the new proper time was deduced, and thus the quantum Lorentz factor was obtained. Finally, using the Euler-Lagrange equation, we arrive at new equations in quantum mechanics. In this paper, the two problems in modern physics mentioned above are solved; it can be said that this new approach to quantum mechanics will enable us to unify it with general relativity quite simply. If experiments prove the validity of the results of this research, we will be able in the future to transport matter at speeds close to the speed of light. Finally, this research yielded three important results: (1) the Lorentz quantum factor; (2) Planck energy as a limiting case of Einstein energy; (3) real quantum mechanics, in which new equations for quantum mechanics match and exceed Dirac's equations; these equations were reached in a completely different way from Dirac's method, and they show that quantum mechanics is a limiting case of relativistic mechanics. At the Solvay Conference in 1927, the debate about quantum mechanics between Bohr, Einstein, and others reached its climax: when Bohr suggested that if particles are not observed they are in a probabilistic state, Einstein made his famous claim ('God does not play dice'). Thus, Einstein was right, especially in not accepting the principle of indeterminacy in quantum theory, although experiments support quantum mechanics. However, the results of our research indicate that God really does not play dice; when the electron disappears, it turns into amicable particles or an elastic medium, according to the above equations. Likewise, Bohr was also right when he indicated that there must be a science like quantum mechanics to monitor and study the motion of subatomic particles, but the picture in front of him was blurry and not clear, so he resorted to the probabilistic interpretation.

Keywords: lorentz quantum factor, planck’s energy as a limiting case of einstein’s energy, real quantum mechanics, new equations for quantum mechanics

Procedia PDF Downloads 67
185 Estimation of Dynamic Characteristics of a Middle-Rise Steel Reinforced Concrete Building Using Long-Term Earthquake Observation Records

Authors: Fumiya Sugino, Naohiro Nakamura, Yuji Miyazu

Abstract:

In the earthquake-resistant design of buildings, the evaluation of vibration characteristics is important. In recent years, with the increase in super-high-rise buildings, the evaluation of response has become important not only for the first mode but also for higher modes. Knowledge of vibration characteristics in buildings is mostly limited to the first mode, and knowledge of higher modes is still insufficient. In this paper, using earthquake observation records of an SRC building and applying a frequency filter to an ARX model, the characteristics of the first and second modes were studied. First, we studied the change of the eigenfrequency and the damping ratio during the 3.11 earthquake. The eigenfrequency gradually decreases from the time of earthquake occurrence and is almost stable after about 150 seconds have passed. At this point, the decreasing rates of the 1st and 2nd eigenfrequencies are both about 0.7. Although the damping ratio carries a larger error than the eigenfrequency, both the 1st and 2nd damping ratios are 3 to 5%. Also, there is a strong correlation between the 1st and 2nd eigenfrequencies, and the regression line is y=3.17x; for the damping ratio, the regression line is y=0.90x, so the 1st and 2nd damping ratios are approximately the same. Next, we studied the eigenfrequency and damping ratio from 1998, through the 3.11 earthquake, to the final year, 2014, with all the considered earthquakes connected in order of occurrence. The eigenfrequency slowly declined from immediately after completion of the building and tended to stabilize after several years, although it declined greatly after the 3.11 earthquake. The decreasing rates of both the 1st and 2nd eigenfrequencies until about 7 years later are about 0.8. For the damping ratio, both the 1st and 2nd are about 1 to 6%; after the 3.11 earthquake, the 1st increases by about 1% and the 2nd by less than 1%. For the eigenfrequency, there is a strong correlation between the 1st and 2nd, and the regression line is y=3.17x; for the damping ratio, the regression line is y=1.01x, so the 1st and 2nd damping ratios are approximately the same. Based on the above results, changes in eigenfrequency and damping ratio are summarized as follows. In the long-term study of the eigenfrequency, both the 1st and 2nd gradually declined from immediately after completion, tended to stabilize after a few years, and declined further after the 3.11 earthquake; there is a strong correlation between the 1st and 2nd, and the declining period and the decreasing rate are of the same degree. In the long-term study of the damping ratio, both the 1st and 2nd are about 1 to 6%; after the 3.11 earthquake, the 1st increases by about 1% and the 2nd by less than 1%, and the two remain approximately the same.
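
A common way to extract an eigenfrequency and damping ratio from an identified ARX model is to map each discrete-time pole to the continuous-time plane. The sketch below shows that standard conversion; the sampling rate and the example pole are illustrative values, not figures from the study.

```python
import numpy as np

def modal_from_pole(z, dt):
    """Convert a discrete-time ARX pole z (sampled every dt seconds) into
    an eigenfrequency [Hz] and a damping ratio [-]."""
    s = np.log(z) / dt              # map to the continuous-time pole s
    wn = abs(s)                     # natural angular frequency
    return wn / (2 * np.pi), -s.real / wn

# Example: a pole at 0.995*exp(0.06j) identified from data sampled at 100 Hz
f, zeta = modal_from_pole(0.995 * np.exp(1j * 0.06), dt=0.01)
print(f"f = {f:.3f} Hz, damping ratio = {zeta:.3f}")
```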

Keywords: eigenfrequency, damping ratio, ARX model, earthquake observation records

Procedia PDF Downloads 209
184 Automatic Furrow Detection for Precision Agriculture

Authors: Manpreet Kaur, Cheol-Hong Min

Abstract:

The increasing advancement of robotics equipped with machine vision sensors applied to precision agriculture offers a sought-after solution to various problems on agricultural farms. An important issue related to the machine vision system concerns crop row and weed detection. This paper proposes an automatic furrow detection system based on real-time processing for identifying crop rows in maize fields in the presence of weeds. The vision system is designed to be installed on farming vehicles, that is, subjected to gyros, vibration and other undesired movements. The images are captured under perspective and are affected by the above undesired effects. The goal is to identify crop rows for vehicle navigation, which includes weed removal, where weeds are identified as plants outside the crop rows. Image quality is affected by different lighting conditions and by gaps along the crop rows due to lack of germination and wrong plantation. The proposed image processing method consists of four processes. First, image segmentation is performed based on an HSV (hue, saturation, value) decision tree: the algorithm uses the HSV color space to discriminate crops, weeds and soil, and the region of interest is defined by filtering each of the HSV channels between maximum and minimum threshold values. Second, noise in the images is eliminated by means of a hybrid median filter. Third, mathematical morphological processes are applied, i.e., erosion to remove smaller objects followed by dilation to gradually enlarge the boundaries of foreground regions, which enhances the image contrast. Finally, to accurately detect the position of the crop rows, the region of interest is defined by creating a binary mask, and edge detection and the Hough transform are applied to detect lines represented in polar coordinates, with furrow directions appearing as accumulations on the angle axis in the Hough space. The experimental results show that the method is effective.
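
A compact OpenCV sketch of the pipeline described above (HSV mask, median filtering, erosion/dilation, Hough transform). The threshold values are illustrative green-band choices, not the paper's HSV decision-tree thresholds.

```python
import cv2
import numpy as np

def detect_furrows(bgr_image, hsv_low=(35, 40, 40), hsv_high=(85, 255, 255)):
    """Crop-row line detection: HSV vegetation mask -> morphology -> Hough lines."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_low), np.array(hsv_high))

    mask = cv2.medianBlur(mask, 5)                     # noise suppression
    kernel = np.ones((3, 3), np.uint8)
    mask = cv2.erode(mask, kernel)                     # remove small objects (weeds)
    mask = cv2.dilate(mask, kernel, iterations=2)      # re-grow crop-row regions

    edges = cv2.Canny(mask, 50, 150)
    # Lines in polar (rho, theta) form; row directions accumulate on the theta axis
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 120)
    return lines
```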

Keywords: furrow detection, morphological, HSV, Hough transform

Procedia PDF Downloads 226
183 Choking among Babies, Toddlers and Children with Special Needs: A Review of Mechanisms, Implications, Incidence, and Recommendations of Professional Prevention Guidelines

Authors: Ella Abaev, Shany Segal, Miri Gabay

Abstract:

Background: Choking is a blockage of the airways that prevents efficient breathing and air flow to the lungs. Choking may be partial or complete and is an emergency situation: complete or prolonged choking leads to apnea and a lack of oxygen in the tissues of the body and brain, and can cause death. There are three mechanisms of choking: obstruction of the internal respiratory tract by aspirated food or objects; blockage or covering of the external air passages by any material; and external pressure on the neck or entrapment between objects. Children's airways are narrower than those of adults, and therefore the risk of choking through aspiration of food and other foreign bodies into the lungs is greater. The Child Development Center at Safra Children's Hospital, Tel Hashomer, Israel, treats infants, toddlers, and children aged 0-18 years with various developmental disabilities. Due to the increase in reports of 'almost an event' of choking in the past year and the serious consequences of a choking event, it was decided to give emphasis to the issue. Incidence and methods: The number of reports of 'almost an event' or of a choking event at the center during the years 2013-2018 was examined, and thorough research was conducted on the subject in order to build a prevention program. Findings: Between 2013 and 2018, the center reported about ten cases of 'almost choking events'; in mid-2018 alone, three cases of 'almost an event' were reported. Objective: Providing knowledge raises awareness and leads to changes in perception and behavior, and thus to prevention. The center employs more than 130 staff members from various sectors, so promoting the quality and safety of treatment is the work of multi-professional teams. The staff's familiarity with risk factors, prevention guidelines, identification of choking signs, and treatment is most important and significant in determining the outcome of a choking event. Conclusions and recommendations: In-depth research was carried out in cooperation with the Risk Management Unit on the subject of choking, including a description of the definitions, mechanisms, risk factors, treatment methods and extensive recommendations for prevention (e.g., using treatment and stimulation accessories bearing standards association stamps, and adjusting the type of food and the way it is served to match the child's age and swallowing ability). The expected stages of development, and an emphasis on the population of children with special needs, were taken into account. The research findings will be disseminated to staff and the parents of patients through professional publications and lectures, with the expectation of decreasing the number of choking events in the coming years.

Keywords: children with special needs, choking, educational system, prevention guidelines

Procedia PDF Downloads 163
182 Effect of Vitrification on Embryos Euploidy Obtained from Thawed Oocytes

Authors: Natalia Buderatskaya, Igor Ilyin, Julia Gontar, Sergey Lavrynenko, Olga Parnitskaya, Ekaterina Ilyina, Eduard Kapustin, Yana Lakhno

Abstract:

Introduction: It is known that cryopreservation of oocytes has peculiar features due to the complex structure of the oocyte. One of the most important features is that mature oocytes contain the meiotic division spindle, which is very sensitive even to the slightest variation in temperature. Thus, the main objective of this study is to analyse the euploid embryos obtained from thawed oocytes in comparison with preimplantation genetic screening (PGS) data from fresh embryo cycles. Material and Methods: The study was conducted at 'Medical Centre IGR' from January to July 2016. Data were analysed for 908 donor oocytes obtained in 67 cycles of assisted reproductive technologies (ART), of which 693 oocytes were used in 51 'fresh' cycles (group A) and 215 oocytes in 16 ART programs with vitrified female gametes (group B). The average ages of donors in the groups were 27.3±2.9 and 27.8±6.6 years, respectively. Stimulation of superovulation was conducted in the standard way. Vitrification was performed 1-2 hours after transvaginal puncture, and thawing of oocytes was carried out in accordance with the standard Cryotech protocol (Japan). ICSI was performed 4-5 hours after transvaginal follicle puncture for fresh oocytes, or after thawing for vitrified female gametes. For PGS, an embryonic biopsy was done on the third or the fifth day after fertilization. Diagnostic procedures were performed using fluorescence in situ hybridization for chromosomes 13, 16, 18, 21, 22, X and Y. Only morphologically high-quality blastocysts, graded according to the Gardner criteria, were used for transfer. Statistical hypotheses were tested using the t and χ² criteria at significance levels p<0.05, p<0.01, p<0.001. Results: The mean number of mature oocytes per cycle was 13.58±6.65 in group A and 13.44±6.68 in group B. The survival of oocytes after thawing totaled 95.3% (n=205), indicating highly effective vitrification. The proportion of zygotes was 91.1% (n=631) in group A and 80.5% (n=165) in group B, a statistically significant difference between the groups (p<0.001) explained by the elimination of non-viable oocytes after vitrification. This is confirmed by the fact that on the fifth day of embryo development there was no statistically significant difference in the number of blastocysts (p>0.05), which constituted 61.6% (n=389) and 63.0% (n=104) in the two groups, respectively. For PGS, 250 embryos were analyzed in group A and 72 embryos in group B. The results showed that 40.0% (n=100) of embryos in group A and 41.7% (n=30) in group B were euploid for the studied chromosomes, with no statistically significant difference (p>0.05). Clinical pregnancy rates in the groups amounted to 64.7% (22 pregnancies per 34 embryo transfers) and 61.5% (8 pregnancies per 13 embryo transfers), respectively, also with no significant difference between the groups (p>0.05). Conclusions: The results showed that vitrification does not affect the resulting euploid embryos in assisted reproductive technologies and is not reflected in their morphological characteristics in ART programs.

Keywords: euploid embryos, preimplantation genetic screening, thawing oocytes, vitrification

Procedia PDF Downloads 320
181 AI-Based Information System for Hygiene and Safety Management of Shared Kitchens

Authors: Jongtae Rhee, Sangkwon Han, Seungbin Ji, Junhyeong Park, Byeonghun Kim, Taekyung Kim, Byeonghyeon Jeon, Jiwoo Yang

Abstract:

The shared kitchen is a concept that transfers the value of the sharing economy to the kitchen. It is a type of kitchen equipped with cooking facilities that allows multiple companies or chefs to share time and space and use it jointly. Shared kitchens provide economic benefits and convenience, such as reduced investment costs and rent, but they also increase safety management risks, such as cross-contamination of food ingredients. Therefore, to manage the safety of food ingredients and finished products in a shared kitchen where several entities jointly use the kitchen and handle various types of food ingredients, it is critical to manage the following: the freshness of food ingredients, user hygiene and safety, and cross-contamination of cooking equipment and facilities. This study proposes a machine learning-based system for hygiene safety and cross-contamination management, which are highly difficult to manage. User clothing management and user access management, which are most relevant to the hygiene and safety of shared kitchens, are handled through a machine learning-based methodology, and cutting board usage management, which is most relevant to cross-contamination management, is implemented within an integrated safety management system based on artificial intelligence. First, to prevent cross-contamination of food ingredients, we use images collected through a real-time camera to determine whether the food ingredients match a given cutting board, based on a real-time object detection model, YOLO v7. To manage the hygiene of user clothing, we use a camera-based facial recognition model to recognize the user and a real-time object detection model to determine whether a sanitary hat and mask are worn. In addition, to manage access for users qualified to enter the shared kitchen, we utilize a machine learning-based signature recognition module: by comparing the pairwise distance between the contract signature and the signature given at the time of entrance to the shared kitchen, access permission is determined through a pre-trained signature verification model. These machine learning-based safety management tasks are integrated into a single information system, and each result is managed in an integrated database. Through this, users are warned of safety dangers via the tablet PC installed in the shared kitchen, and managers can track the causes of sanitary and safety accidents. System integration analysis shows that real-time safety management services can be provided continuously by artificial intelligence, and that machine learning-based methodologies support the integrated safety management of shared kitchens with dynamic contracts among various users. By solving this problem, we were able to secure the feasibility and safety of the shared kitchen business.
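
A minimal sketch of the signature-based access check described above, assuming the pre-trained verification model outputs fixed-length embeddings; the cosine-distance metric and the threshold value are illustrative assumptions, not details from the paper.

```python
import numpy as np

def verify_signature(entry_embedding, contract_embedding, threshold=0.35):
    """Grant access if the entrance signature is close enough to the signature
    stored at contract time. Embeddings are assumed to come from a pre-trained
    signature verification model; the threshold is an assumed tuning parameter.
    """
    a = np.asarray(entry_embedding, dtype=float)
    b = np.asarray(contract_embedding, dtype=float)
    # Cosine distance between the two signature embeddings
    distance = 1.0 - a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return distance < threshold
```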

Keywords: artificial intelligence, food safety, information system, safety management, shared kitchen

Procedia PDF Downloads 56
180 Carbonaceous Monolithic Multi-Channel Denuders as a Gas-Particle Partitioning Tool for the Occupational Sampling of Aerosols from Semi-Volatile Organic Compounds

Authors: Vesta Kohlmeier, George C. Dragan, Juergen Orasche, Juergen Schnelle-Kreis, Dietmar Breuer, Ralf Zimmermann

Abstract:

Aerosols from hazardous semi-volatile organic compounds (SVOC) may occur in workplace air and can be found simultaneously in the particle and gas phases. For health risk assessment, it is necessary to collect particles and gases separately. This can be achieved by using a denuder for gas phase collection, combined with a filter and an adsorber for particle collection. The study focused on the suitability of carbonaceous monolithic multi-channel denuders, so-called Novacarb™-Denuders (MastCarbon International Ltd., Guilford, UK), for achieving gas-particle separation. Particle transmission efficiency experiments were performed with polystyrene latex (PSL) particles (size range 0.51-3 µm), while the time-dependent gas phase collection efficiency was analysed for polar and nonpolar SVOC (mass concentrations 7-10 mg/m3) over 2 h at 5 or 10 l/min. The experimental gas phase collection efficiency was also compared with theoretical predictions. For n-hexadecane (C16), the gas phase collection efficiency was at most 91% for one denuder and at most 98% for two denuders, while for diethylene glycol (DEG) a maximal gas phase collection efficiency of 93% for one denuder and 97% for two denuders was observed. At 5 l/min, higher gas phase collection efficiencies were achieved than at 10 l/min. The deviations between the theoretical and experimental gas phase collection efficiencies were up to 5% for C16 and 23% for DEG. Since the theoretical efficiency depends on the geometric shape and length of the denuder, the flow rate and the diffusion coefficients of the tested substances, the calculated values define an upper limit which could be reached. Regarding particle transmission through the denuders, the use of one denuder showed transmission efficiencies around 98% for 1-3 µm particle diameters, while the use of three denuders resulted in transmission efficiencies of 93-97% for the same particle sizes. In summary, NovaCarb™-Denuders are well suited for sampling aerosols of polar/nonpolar substances with particle diameters ≤3 µm at flow rates of 5 l/min or lower. These properties and their compact size make them suitable for use in personal aerosol samplers. This work is supported by the German Social Accident Insurance (DGUV), research contract FP371.
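
The theoretical gas phase collection efficiency referred to above is commonly computed from the Gormley-Kennedy solution for diffusional deposition in a circular channel under laminar flow. The sketch below applies that textbook result under the simplifying assumption that each monolith channel behaves as an independent tube; the coefficients are the standard published ones, not values from this study.

```python
import math

def gormley_kennedy_efficiency(D, L, Q):
    """Diffusional collection efficiency (1 - penetration) of one circular
    channel under laminar flow, per the Gormley-Kennedy solution.

    D: gas diffusion coefficient [m^2/s]; L: channel length [m];
    Q: volumetric flow through the channel [m^3/s].
    """
    mu = math.pi * D * L / Q
    if mu < 0.0312:
        p = 1 - 2.5638 * mu ** (2 / 3) + 1.2 * mu + 0.1767 * mu ** (4 / 3)
    else:
        p = (0.8191 * math.exp(-3.6568 * mu)
             + 0.0975 * math.exp(-22.305 * mu)
             + 0.0325 * math.exp(-56.961 * mu))
    return 1 - p

# For a multi-channel monolith, split the total flow evenly:
# Q_channel = Q_total / N_channels
```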

Keywords: gas phase collection efficiency, particle transmission, personal aerosol sampler, SVOC

Procedia PDF Downloads 162
179 Performance of Different Spray Nozzles in the Application of Defoliant on Cotton Plants (Gossypium hirsutum L.)

Authors: Mohamud Ali Ibrahim, Ali Bayat, Ali Bolat

Abstract:

Defoliant spraying is an important link in the mechanized cotton harvest, because adequate and uniform spraying can improve defoliation quality and reduce cotton trash content. In defoliant application, application volume and spraying technology are extremely important. In this study, the effectiveness of defoliant application to cotton plants ready for harvest was determined using two application volumes and three types of nozzles with a standard field crop sprayer. Experiments were carried out in two phases: field trials and laboratory analysis. The application rates were 250 L/ha and 400 L/ha, and the spraying nozzles were (1) a standard flat fan nozzle (TP8006), (2) an air induction nozzle (AI 11002-VS), and (3) a dual pattern nozzle (AI307003VP). A tracer (BSF) and defoliant were applied to mature cotton with approximately 60% open bolls, and sampling for BSF deposition and spray coverage was done at two plant heights (upper and lower layers). Before and after spraying, the rates of open bolls and of leaves on the cotton plants were calculated; filter papers were used to detect BSF deposition, and water sensitive papers (WSP) were used to measure the coverage rate of the spraying methods used. A spectrofluorophotometer was used to detect the amount of tracer deposited on targets, and image-processing software was used to measure the coverage rate on the WSP. The analysis showed that the air induction nozzle (AI 11002-VS) achieved better results than the dual pattern and standard flat fan nozzles in terms of deposition, coverage, leaf defoliation, and boll opening rates. AI nozzles operating at the 250 L/ha application rate provided the highest deposition and coverage rates in defoliant application; in addition, BSF, as an indicator of the defoliant used, reached the undersides of leaves only with this spray nozzle. After defoliation, the boll opening rate was 85% on the 7th and 12th days after spraying, and the leaf fall rate was 76%, at the 250 L/ha application rate with the air induction (AI 11002) nozzle.

Keywords: cotton defoliant, air induction nozzle, dual pattern nozzle, standard flat fan nozzle, coverage rate, spray deposition, boll opening rate, leaves falling rate

Procedia PDF Downloads 178
178 Effect of Electromagnetic Fields at 27 GHz on Sperm Quality of Mytilus galloprovincialis

Authors: Carmen Sica, Elena M. Scalisi, Sara Ignoto, Ludovica Palmeri, Martina Contino, Greta Ferruggia, Antonio Salvaggio, Santi C. Pavone, Gino Sorbello, Loreto Di Donato, Roberta Pecoraro, Maria V. Brundo

Abstract:

Recently, a rise in the use of wireless internet technologies such as Wi-Fi and 5G routers/modems has been observed. These devices emit a considerable amount of electromagnetic radiation (EMR), which could interact with the male reproductive system either by thermal or non-thermal mechanisms. The aim of this study was to investigate the direct in vitro influence of 5G radiation on sperm quality in Mytilus galloprovincialis, considered an excellent model for reproduction studies. The experiments at 27 GHz were conducted using a non-commercial high-gain pyramidal horn antenna. To evaluate the specific absorption rate (SAR), a numerical simulation was performed. The resulting incident power density was significantly lower than the power density limit of 10 mW/cm2 set by the international guidelines as a limit for non-thermal effects above 6 GHz. Temperature measurements of the aqueous sample showed an increase of only 0.2°C compared to the control samples; this very low temperature increase could not have interfered with the experiments. For the experiments, sperm samples taken from sexually mature males of Mytilus galloprovincialis were placed in artificial seawater, salinity 30 ± 1‰ and pH 8.3, filtered with a 0.2 µm filter. After evaluating the number and quality of spermatozoa, sperm cells were exposed to electromagnetic fields at 27 GHz. The effect of exposure on sperm motility and quality was evaluated after 10, 20, 30 and 40 minutes with a light microscope, and the eosin test was also used to verify the vitality of the gametes. All samples were run in triplicate, and statistical analysis was carried out using one-way analysis of variance (ANOVA) with Tukey's test for multiple comparisons of means to determine differences in sperm motility. A significant decrease (30%) in sperm motility was observed after 10 minutes of exposure, and after 30 minutes all spermatozoa were immotile and non-vital. Given the scarcity of literature data on this topic, these results could be useful for further studies concerning the wide diffusion of these new technologies.

Keywords: mussel, spermatozoa, sperm motility, millimeter waves

Procedia PDF Downloads 152
177 Backwash Optimization for Drinking Water Treatment Biological Filters

Authors: Sarra K. Ikhlef, Onita Basu

Abstract:

Natural organic matter (NOM) removal efficiency in drinking water treatment biological filters can be highly influenced by backwashing conditions. Backwashing removes accumulated biomass and particles, regenerating the biological filters' removal capacity and preventing excessive headloss buildup. A lab-scale system consisting of three biological filters was used in this study to examine the implications of different backwash strategies for biological filtration performance. The backwash procedures were evaluated based on their impacts on dissolved organic carbon (DOC) removal, biological filter biomass, backwash water volume usage, and particle removal. Results showed that under nutrient-limited conditions, the simultaneous use of air and water under collapse-pulsing conditions led to a DOC removal of 22%, which was significantly higher (p < 0.05) than the 12% removal observed under water-only backwash conditions. Employing a bed expansion of 20% under nutrient-supplemented conditions, compared to a 30% reference bed expansion using the same water volume, led to similar DOC removals. On the other hand, a higher bed expansion (40%) led to significantly lower DOC removals (23%). Also, a backwash strategy that reduced backwash water volume usage by about 20% resulted in DOC removals similar to those observed with the reference backwash. The backwash procedures investigated in this study showed no consistent impact on biological filter biomass concentrations as measured by the phospholipid and adenosine triphosphate (ATP) methods; moreover, neither of these analyses showed a direct correlation with DOC removal. On the other hand, dissolved oxygen (DO) uptake correlated directly with DOC removal. The addition of the extended terminal subfluidization wash (ETSW) demonstrated no apparent impact on DOC removals, and ETSW successfully eliminated the filter ripening sequence (FRS). As a result, the additional water usage from implementing ETSW was compensated by water savings after restart. Results from this study provide insight for researchers and water treatment utilities on how to better optimize the backwashing procedure, and with it the overall biological filtration process.

Keywords: biological filtration, backwashing, collapse pulsing, ETSW

Procedia PDF Downloads 266
176 Rest Behavior and Restoration: Searching for Patterns through a Textual Analysis

Authors: Sandra Christina Gressler

Abstract:

Resting is, essentially, physical and mental relaxation. Can behaviors that go beyond merely physical relaxation be understood, to some extent, as restoration behaviors? Studies on restorative environments emphasize the physical, mental, and social benefits that some environments can provide, and suggest that activities in natural environments reduce the stress of daily life, promoting recovery from daily wear. These studies, though specific in their results, do not unify the different possibilities of restoration. Considering the importance of restorative environments in promoting well-being, this research aims to verify the applicability of the theory of restorative environments in a Brazilian context, inquiring into rest environments and behaviors. The research sought to achieve its goals by: a) identifying daily ways in which participants interact/connect with nature; b) identifying resting environments/behaviors; c) verifying whether rest strategies match the restorative environments suggested by restoration studies; and d) verifying different rest strategies related to time. Workers from different companies whose functions require focused attention, and high school students from different schools, participated in this study. Interviews were used to collect data, and the data obtained were compared with studies of attention restoration theory and stress recovery. The collected data were analyzed through basic descriptive statistics and the software ALCESTE® (Analyse Lexicale par Contexte d'un Ensemble de Segments de Texte). The open questions investigated the perception of nature on a daily basis (analyzed with ALCESTE); rest periods – daily, weekends, and holidays – (analyzed with ALCESTE's tri-croisé); and resting environments and activities (analyzed with simple descriptive statistics). According to the results, the requirements of restoration are met by environments with natural characteristics compatible with personal desires (physical aspects and distance) and by residential environments when they offer refuge, safety, and self-expression – the characteristics of a primary territory. The analyses suggest that the perception of nature spans a wide range that goes beyond nearby, touchable objects, as well as the observation and contemplation of details. The restoration processes described in studies of attention restoration theory occur gradually (hierarchically), starting with being away and followed by compatibility, fascination, and extent; they are also associated with the time available for rest. The relation between rest behaviors and the bio-demographic characteristics of the participants is noted. This reinforces the need for restoration studies to investigate not only the physical characteristics of the environment but also behavior, social relationships, subjective reactions, distance, and available time. The complexity of the theme indicates the need for multimethod studies. Practical contributions offer groundwork for developing strategies to promote the well-being of the population.

Keywords: attention restoration theory, environmental psychology, rest behavior, restorative environments

Procedia PDF Downloads 179
175 Understanding How to Increase Restorativeness of Interiors: A Qualitative Exploratory Study on Attention Restoration Theory in Relation to Interior Design

Authors: Hande Burcu Deniz

Abstract:

People in the U.S. spend a considerable portion of their time indoors, which makes it crucial to provide environments that support well-being. Restorative environments help people recover the cognitive resources depleted by intensive use of directed attention. Spending time in nature and taking a nap are two of the best ways to restore these resources; however, neither is possible most of the time. Many studies have revealed how nature and spending time in natural contexts can boost restoration, but fewer studies have examined how cognitive resources can be restored in interior settings. This study explores the question: which qualities of interiors increase the restorativeness of an interior setting, and how do they mediate it? To do this, a phenomenological qualitative study was conducted, focused on the definition of attention restoration and the experience of the phenomenon. As themes emerged, they were matched with Attention Restoration Theory (ART) components (being away, extent, fascination, compatibility) to examine how interior design elements mediate the restorativeness of an interior. The data were gathered from semi-structured interviews with international residents of Minnesota. The interviewees represent young professionals who work in Minnesota and often experience mental fatigue; they also have fewer emotional connections with places in Minnesota, which allowed the data to reflect the physical qualities of a space rather than emotional connections. In the interviews, participants were asked where they prefer to be when they experience mental fatigue. Next, they were asked to describe the physical qualities of the places they prefer to be, with reasons. Four themes were derived from the analysis of the interviews, listed here in order of frequency. The first and most common theme was "connection to outside": people need to be either physically or visually connected to the outside to recover from mental fatigue. A direct connection to nature was reported as preferable, with urban settings a secondary preference along with interiors. The second theme to emerge was "presence of artwork," which was experienced differently by the interviewees. The third theme was "amenities": interviews pointed out that people prefer amenities that support their desired activity during recovery from mental fatigue. The last theme was "aesthetics": interviewees stated that they prefer places that are pleasing to their eyes, and that they could not shake the feeling of being worn out in places that are not well designed. When the themes were matched with the four ART components (being away, extent, fascination, compatibility), some interior qualities overlapped, since they were experienced differently by the interviewees. In conclusion, this study showed that interior settings have restorative potential and that their experience is multidimensional.

Keywords: attention restoration, fatigue, interior design, qualitative study, restorative environments

Procedia PDF Downloads 244
174 Spatial Structure of First-Order Voronoi for the Future of Roundabout Cairo Since 1867

Authors: Ali Essam El Shazly

Abstract:

The Haussmannization plan of Cairo in 1867 formed a regular network of roundabout spaces, though it has deteriorated at present. The method of identifying the spatial structure of roundabout Cairo for conservation matches the Voronoi diagram with space syntax through their shared geometrical property of spatial convexity. In this initiative, the primary convex hull of the first-order Voronoi adopts the integration and control measurements of space syntax on Cairo's roundabout generators. The functional essence of the royal palaces optimizes the roundabout structure in terms of spatial measurements and the symbolic Voronoi projection of 'Tahrir Roundabout' over the Giza Nile and Pyramids. Some roundabouts of major public and commercial landmarks surround the pole of 'Ezbekia Garden' with higher control than integration measurements, which filters the new spatial structure from the adjacent traditional town. Nevertheless, the lowest integration and control measures correspond to the Voronoi contents of polluting workshops and the plateau of the old Cairo Citadel, with the visual compensation of new royal landmarks on top. Meanwhile, the extended suburbs of infinite Voronoi polygons arrange high-control generators of chateaux housing in the 'garden city' environs. The point pattern of roundabouts determines the geometrical characteristics of the Voronoi polygons: the measured lengths of Voronoi edges alternate between the zoned short range at the new poles of Cairo and the distributed structure of longer range. The shortest range of generator-vertex geometry concentrates at 'Ezbekia Garden', where the crossways of vast Cairo intersect, maximizing the variety of choice at different spatial resolutions. However, the symbolic 'Hippodrome', the largest public landmark, forms exclusive geometrical measurements while structuring a most integrative roundabout to parallel the royal syntax. An overview of the symbolic convex hull of the Voronoi with space syntax interconnects Parisian Cairo with the spatial chronology of scattered monuments to conceive one universal Cairo structure. Accordingly, the proposed 'Voronoi-syntax' methodology informs the future conservation of roundabout Cairo at the inferred city-level concept.
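As a sketch of the geometric step, a first-order Voronoi diagram over roundabout generator points, together with the lengths of its finite edges, can be computed with SciPy as below; the coordinates are illustrative placeholders, not survey data for Cairo.

```python
import numpy as np
from scipy.spatial import Voronoi

# Hypothetical planar coordinates of roundabout generators (not survey data).
generators = np.array([[0.0, 0.0], [2.0, 0.5], [1.0, 2.0],
                       [3.0, 2.5], [0.5, 3.0]])
vor = Voronoi(generators)

# Finite Voronoi edges and their lengths; an index of -1 marks a vertex at
# infinity, i.e. the "infinite polygons" of the extended suburbs.
for (p, q), (v1, v2) in zip(vor.ridge_points, vor.ridge_vertices):
    if v1 != -1 and v2 != -1:
        length = np.linalg.norm(vor.vertices[v1] - vor.vertices[v2])
        print(f"edge between generators {p} and {q}: length {length:.2f}")
```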

Keywords: roundabout Cairo, first-order Voronoi, space syntax, spatial structure

Procedia PDF Downloads 494
173 Improve Divers Tracking and Classification in Sonar Images Using Robust Diver Wake Detection Algorithm

Authors: Mohammad Tarek Al Muallim, Ozhan Duzenli, Ceyhun Ilguy

Abstract:

Harbor protection systems are very important, and the need for automatic protection systems has increased in recent years. Diver detection active sonar is of great significance: it is used to detect underwater threats such as divers and autonomous underwater vehicles. To detect such threats automatically, the sonar image is processed by algorithms that detect, track, and classify underwater objects. In this work, a diver tracking and classification algorithm is improved by proposing a robust wake detection method. To detect objects, the sonar image is normalized and then segmented with a fixed threshold. Next, the centroids of the segments are found and clustered based on a distance metric. A linear Kalman filter is then applied to track the objects. To reduce the effect of noise and the creation of false tracks, the Kalman tracker is fine-tuned based on our active sonar specifications. After the tracks are initialized and updated, they are subjected to a filtering stage to eliminate noisy and unstable tracks, as well as objects whose speed lies outside the diver speed range, such as buoys and fast boats. The resulting tracks are then subjected to a classification stage to decide the type of object being tracked: an open-circuit diver or a closed-circuit diver. At the classification stage, a small area around the object is extracted, a novel wake detection method is applied, and the morphological features of the object together with its wake are extracted. A support vector machine was used to find the best classifier. The sonar training and test images were collected by ARMELSAN Defense Technologies Company using the portable diver detection sonar ARAS-2023. After applying the algorithm to the test sonar data, we obtain fine, stable tracks of the divers, and the total classification accuracy for the diver type is 97%.
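As a sketch of the tracking stage, a linear Kalman filter with a constant-velocity model over 2D sonar centroids could look like the following; the state layout, noise covariances, and time step are illustrative assumptions, not the tuned parameters of the sonar system.

```python
import numpy as np

dt = 1.0  # assumed time step between sonar pings
# State [x, y, vx, vy]: constant-velocity transition, position-only measurement.
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
Q = 0.01 * np.eye(4)  # process noise (placeholder tuning)
R = 0.50 * np.eye(2)  # measurement noise (placeholder tuning)

def kalman_step(x, P, z):
    # Predict the next state and covariance.
    x, P = F @ x, F @ P @ F.T + Q
    # Update with the measured centroid z = [x, y].
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = np.zeros(4), np.eye(4)  # initial state and covariance
for z in [np.array([1.0, 1.1]), np.array([2.1, 2.0]), np.array([3.0, 3.2])]:
    x, P = kalman_step(x, P, z)
print("estimated state [x, y, vx, vy]:", x)
```

The estimated velocity components can then feed the speed-gating stage that rejects objects outside the diver speed range.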

Keywords: harbor protection, diver detection, active sonar, wake detection, diver classification

Procedia PDF Downloads 229
172 Stress, Anxiety and Its Associated Factors Within the Transgender Population of Delhi: A Cross-Sectional Study

Authors: Annie Singh, Ishaan Singh

Abstract:

Background: Transgender people have a gender identity different from the sex assigned to them at birth; their gender behaviour does not match their body anatomy. The community faces discrimination due to gender identity all across the world. Transgender is an umbrella term for many people who do not conform to their biological identity; note that it differs from gender dysphoria, a DSM-5 disorder defined as the distress faced by an individual due to their non-conforming gender identity. Transgender people have been part of Indian culture for ages yet have continued to face exclusion and discrimination in society, which has led to the community's low socio-economic status. Various studies across the world have established the role of discrimination, harassment, and exclusion in the development of psychological disorders. This study aims to assess the frequency of stress and anxiety in the transgender population and to understand the various factors affecting them. Methodology: A cross-sectional survey of self-consenting transgender individuals above the age of 18 residing in Delhi was conducted to assess their socioeconomic status and experiential ecology. Participants were recruited with the help of NGOs. The survey was built on GAD-7 and PSS-10, two well-known scales, to assess anxiety and stress levels. Medians, means, and ranges are used to report continuous data wherever required, while frequencies and percentages are used for categorical data. For associations and comparisons between groups in categorical data, the chi-square test was used, while the Kruskal-Wallis H test was employed for associations involving multiple ordinal groups. SPSS v28.0 was used to perform the statistical analysis for this study. Results: The survey showed a high frequency of stress and anxiety in the transgender population, and the demographic survey indicates a low socio-economic background. 44% of participants reported facing discrimination on a daily basis; the frequency of discrimination is higher in transwomen than in transmen, while stress and anxiety levels are similar in both groups. Only 34.5% of participants said they had receptive family or friends. The majority of participants (72.7%) reported a positive or neutral experience with healthcare workers. The prevalence of discrimination is significantly lower in the more highly educated groups. Analysis of the data shows a positive impact of acceptance and reception on mental health, while discrimination is correlated with higher levels of stress and anxiety. Conclusion: The prevalence of widespread transphobia and discrimination faced by the transgender community has culminated in high levels of stress and anxiety in the transgender population, varying with multiple socio-demographic factors. Educating people about the LGBT community, forming support groups, and enacting policies and laws are required to establish trust and promote integration.
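As a sketch of the association tests named above, the chi-square and Kruskal-Wallis H tests can be run with SciPy (in place of SPSS) as below; the contingency counts and scores are illustrative placeholders, not the survey data.

```python
from scipy.stats import chi2_contingency, kruskal

# Hypothetical contingency table: daily discrimination (yes/no) by gender identity.
table = [[30, 14],   # transwomen: yes, no
         [14, 22]]   # transmen:   yes, no
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square: chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")

# Hypothetical PSS-10 scores across three ordinal education groups.
low = [28, 30, 27, 29]
mid = [24, 26, 25]
high = [18, 20, 19, 17]
h_stat, p = kruskal(low, mid, high)
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p:.4f}")
```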

Keywords: transgender, gender, stress, anxiety, mental health, discrimination, exclusion

Procedia PDF Downloads 104
171 Additional Method for the Purification of Lanthanide-Labeled Peptide Compounds Pre-Purified by Weak Cation Exchange Cartridge

Authors: K. Eryilmaz, G. Mercanoglu

Abstract:

Aim: Purification of the final product, the last step in the synthesis of lanthanide-labeled peptide compounds, can be accomplished by different methods. The two most commonly used are C18 solid-phase extraction (SPE) and elution from a weak cation exchanger cartridge. The SPE C18 method yields a high-purity final product, while elution from the weak cation exchanger cartridge is pH-dependent and ineffective at removing colloidal impurities. The aim of this work is to develop an additional purification method for lanthanide-labeled peptide compounds in cases where the desired radionuclidic and radiochemical purity of the final product cannot be achieved because of pH problems or colloidal impurities. Material and Methods: For colloidal impurity formation, 3 mL of water for injection (WFI) was added to 30 mCi of 177LuCl3 solution and allowed to stand for 1 day. 177Lu-DOTATATE was synthesized using an EZAG ML-EAZY module (10 mCi/mL). After synthesis, the final product was mixed with the colloidal impurity solution (total volume: 13 mL, total activity: 40 mCi). The resulting mixture was trapped on an SPE C18 cartridge, and the cartridge was washed with 10 mL of saline to carry impurities to the waste vial. The product trapped in the cartridge was eluted with 2 mL of 50% ethanol and collected into the final product vial through a 0.22 µm filter. The final product was diluted with 10 mL of saline. Radiochemical purity before and after purification was analysed by HPLC (column: ACE C18-100A, 3 µm, 150 x 3.0 mm; mobile phase: water/acetonitrile/trifluoroacetic acid (75:25:1); flow rate: 0.6 mL/min). Results: The UV and radioactivity detector results of the HPLC analysis showed that colloidal impurities were completely removed from the 177Lu-DOTATATE/colloidal impurity mixture by the purification method. Conclusion: The improved purification method can be used as an additional step to remove impurities that may result from lanthanide-peptide syntheses in which the weak cation exchange purification technique is used as the last step. Purity of the final product and GMP compliance (final aseptic filtration and sterile disposable system components) are its two major advantages.

Keywords: lanthanide, peptide, labeling, purification, radionuclide, radiopharmaceutical, synthesis

Procedia PDF Downloads 152
170 The Impact of Anxiety on the Access to Phonological Representations in Beginning Readers and Writers

Authors: Regis Pochon, Nicolas Stefaniak, Veronique Baltazart, Pamela Gobin

Abstract:

Anxiety is known to have an impact on working memory: in reasoning or memory tasks, individuals with anxiety tend to show longer response times and poorer performance, and there is a memory bias for negative information in anxiety. Given the crucial role of working memory in lexical learning, anxious students may encounter more difficulties in learning to read and spell. Anxiety could even affect an earlier learning step, namely the activation of phonological representations, which is decisive for learning to read and write. The aim of this study is to compare the access to phonological representations of beginning readers and writers according to their level of anxiety, using an auditory lexical decision task. Eighty students aged 6 to 9 years completed the French version of the Revised Children's Manifest Anxiety Scale and were then divided into four anxiety groups according to their total score (Low, Median-Low, Median-High and High). Two sets of eighty-one stimuli (words and non-words) were presented auditorily to these students by means of a laptop computer. The stimulus words were selected according to their emotional valence (positive, negative, neutral). Students had to decide as quickly and accurately as possible whether the presented stimulus was a real word or not (lexical decision). Response times and accuracy were recorded automatically on each trial. It was anticipated that there would be: a) longer response times for the Median-High and High anxiety groups in comparison with the other two groups; b) faster response times for negative-valence words in comparison with positive- and neutral-valence words only for the Median-High and High anxiety groups; c) lower response accuracy for the Median-High and High anxiety groups in comparison with the other two groups; and d) better response accuracy for negative-valence words in comparison with positive- and neutral-valence words only for the Median-High and High anxiety groups. Concerning response times, our results showed no difference between the four groups; furthermore, within each group, average response times were very similar regardless of emotional valence. Group differences did appear, however, in the error rates: the Median-High and High anxiety groups made significantly more lexical decision errors than the Median-Low and Low groups. Better response accuracy for negative-valence words in comparison with positive- and neutral-valence words was, however, not found in the Median-High and High anxiety groups. Thus, these results showed lower response accuracy for the above-median anxiety groups than for the below-median groups, but without specificity for negative-valence words. This study suggests that anxiety can negatively impact lexical processing in young students: although lexical processing speed seems preserved, its accuracy may be altered in students with moderate or high levels of anxiety. This finding has important implications for the prevention of reading and spelling difficulties. Indeed, if anxiety affects the access to phonological representations during these learnings, anxious students could be disturbed when they have to match phonological representations with new orthographic representations, because of less efficient lexical representations. This study should be continued in order to specify the impact of anxiety on basic school learning.

Keywords: anxiety, emotional valence, childhood, lexical access

Procedia PDF Downloads 281
169 Structure-Guided Optimization of Sulphonamide as Gamma–Secretase Inhibitors for the Treatment of Alzheimer’s Disease

Authors: Vaishali Patil, Neeraj Masand

Abstract:

Alzheimer's disease (AD) is proving to be a lethal disease in older people. According to the amyloid hypothesis, aggregation of the amyloid β-protein (Aβ), particularly its 42-residue variant (Aβ42), plays a direct role in the pathogenesis of AD. Aβ is generated through sequential cleavage of the amyloid precursor protein (APP) by β-secretase (BACE) and γ-secretase (GS). Thus, in the treatment of AD, γ-secretase modulators (GSMs) are potentially disease-modifying, as they selectively lower pathogenic Aβ42 levels by shifting the enzyme cleavage sites without inhibiting γ-secretase activity; this may avoid the known adverse effects observed with complete inhibition of the enzyme complex. Virtual screening, via a drug-like ADMET filter, QSAR, and molecular docking analyses, was utilized to identify novel γ-secretase modulators with a sulphonamide nucleus. Based on the QSAR analyses and docking scores, some novel analogs were synthesized. The results obtained by the in silico studies were validated by in vivo analysis. In the first step, behavioral assessment was carried out using the scopolamine-induced amnesia methodology; the same series was later evaluated for neuroprotective potential against the oxidative stress induced by scopolamine. Biochemical estimation was performed to evaluate changes in biochemical markers of Alzheimer's disease, such as lipid peroxidation (LPO), glutathione reductase (GSH), and catalase. The scopolamine-induced amnesia model showed increased acetylcholinesterase (AChE) levels, and the inhibitory effect of the test compounds on brain AChE levels was evaluated. In all the studies, donepezil (dose: 50 µg/kg) was used as the reference drug. Reduced AChE activity was shown by compounds 3f, 3c, and 3e. At a later stage, the most potent compounds were evaluated for their Aβ42 inhibitory profile. It can be hypothesized that this series of alkyl-aryl sulphonamides exhibits anti-AD activity through inhibition of the acetylcholinesterase (AChE) enzyme as well as inhibition of plaque formation on prolonged dosage, along with neuroprotection from oxidative stress.
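As a toy illustration of the QSAR step, a descriptor-based regression on SMILES strings can be sketched with RDKit and scikit-learn as below; the compounds, descriptors, and activities are hypothetical placeholders, not the synthesized analogs or their measured data.

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import Descriptors
from sklearn.linear_model import LinearRegression

# Hypothetical sulphonamide-like SMILES with placeholder pIC50 values.
smiles = ["O=S(=O)(N)c1ccccc1",
          "O=S(=O)(NC)c1ccc(C)cc1",
          "O=S(=O)(NCC)c1ccc(Cl)cc1"]
activity = np.array([5.1, 5.8, 6.2])

def descriptor_row(smi):
    mol = Chem.MolFromSmiles(smi)
    # A small, common descriptor set; a real QSAR model would use many more.
    return [Descriptors.MolWt(mol), Descriptors.MolLogP(mol),
            Descriptors.TPSA(mol), Descriptors.NumHDonors(mol)]

X = np.array([descriptor_row(s) for s in smiles])
model = LinearRegression().fit(X, activity)
print("training R^2 (trivially high on a toy set):", model.score(X, activity))
```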

Keywords: gamma-secretase inhibitors, Alzheimer's disease, sulphonamides, QSAR

Procedia PDF Downloads 243
168 Automated Evaluation Approach for Time-Dependent Question Answering Pairs on Web Crawler Based Question Answering System

Authors: Shraddha Chaudhary, Raksha Agarwal, Niladri Chatterjee

Abstract:

This work demonstrates a web crawler-based, generalized, end-to-end open-domain Question Answering (QA) system. An efficient QA system requires a significant amount of domain knowledge to answer any question, with the aim of finding an exact and correct answer in the form of a number, a noun, a short phrase, or a brief piece of text for the user's question. Analysis of the question, searching the relevant documents, and choosing an answer are three important steps in a QA system. This work uses a web scraper (Beautiful Soup) to extract K documents from the web; the value of K can be calibrated based on a trade-off between time and accuracy. This is followed by a passage-ranking process, trained on 500K queries from the MS MARCO dataset, to extract the most relevant text passages and thereby shorten the lengthy documents. Further, a QA system is used to extract the answers from the shortened documents based on the query and return the top 3 answers. For the evaluation of such systems, accuracy is judged by the exact match between predicted answers and gold answers. But automatic evaluation methods fail due to the linguistic ambiguities inherent in the questions. Moreover, reference answers are often not exhaustive or are out of date; hence, correct answers predicted by the system are often judged incorrect by the automated metrics. One such scenario arises from the original Google Natural Questions (GNQ) dataset, which was collected and made available in the year 2016. Use of any such dataset proves to be inefficient with respect to questions that have time-varying answers. For illustration, consider the query 'Where will the next Olympics be?' The gold answer for this query in the GNQ dataset is "Tokyo". Since the dataset was collected in 2016, and the next Olympics after 2016 were held in 2020 in Tokyo, that answer was correct at the time. But if the same question is asked in 2022, the answer is "Paris, 2024". Consequently, any evaluation based on the GNQ dataset will be incorrect for such questions. Such erroneous predictions are usually given to human evaluators for further validation, which is quite expensive and time-consuming. To address this, the present work proposes an automated approach for evaluating time-dependent question-answer pairs; in particular, it proposes a metric using the current timestamp along with the top-n predicted answers from a given QA system. To test the proposed approach, the GNQ dataset was used, and the system achieved an accuracy of 78% on a test dataset comprising 100 QA pairs, automatically extracted using an analysis-based approach from 10K QA pairs of the GNQ dataset. The results obtained are encouraging. The proposed technique appears capable of developing into a useful scheme for gathering precise, reliable, and specific information in a real-time and efficient manner. Our subsequent experiments will be directed towards establishing the efficacy of the above system for a larger set of time-dependent QA pairs.
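The abstract does not give the metric's exact form; a minimal sketch of a time-aware top-n exact-match check, assuming gold answers are stored with validity windows, might look like this (the data layout and normalization are illustrative assumptions).

```python
from datetime import date

def normalize(answer):
    return " ".join(answer.lower().split())

def time_aware_match(predicted_top_n, gold_by_period, query_date=None):
    """Return True if any top-n prediction exactly matches the gold answer
    valid at query_date; gold_by_period maps (start, end) dates to answers."""
    query_date = query_date or date.today()
    for (start, end), gold in gold_by_period.items():
        if start <= query_date <= end:
            return any(normalize(p) == normalize(gold) for p in predicted_top_n)
    return False  # no gold answer is valid at this timestamp

# Hypothetical validity windows for "Where will the next Olympics be?"
gold = {(date(2016, 8, 22), date(2021, 8, 8)): "Tokyo",
        (date(2021, 8, 9), date(2024, 8, 11)): "Paris"}
print(time_aware_match(["Paris", "Los Angeles"], gold, date(2022, 6, 1)))  # True
```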

Keywords: web-based information retrieval, open domain question answering system, time-varying QA, QA evaluation

Procedia PDF Downloads 95