Search results for: instrumental variable estimation
2841 Transverse Vibration of Non-Homogeneous Rectangular Plates of Variable Thickness Using GDQ
Abstract:
The effect of non-homogeneity on the free transverse vibration of thin rectangular plates of bilinearly varying thickness has been analyzed using the generalized differential quadrature (GDQ) method. The non-homogeneity of the plate material is assumed to arise from linear variations in the Young’s modulus and density of the plate material with the in-plane coordinates x and y. Numerical results have been computed for fully clamped and fully simply supported boundary conditions. The solution procedure based on the GDQ method has been implemented in MATLAB code. The effect of various plate parameters has been investigated for the first three modes of vibration. A comparison of the results with those available in the literature is presented.
Keywords: rectangular, non-homogeneous, bilinear thickness, generalized differential quadrature (GDQ)
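As a quick illustration of the numerical machinery behind this abstract, the sketch below computes first-order GDQ weighting coefficients on an arbitrary grid and obtains the second-derivative operator by matrix multiplication; the grid choice, node count, and test polynomial are illustrative assumptions, not values from the paper.

```python
import numpy as np

def gdq_weights(x):
    """First-order differential quadrature weighting matrix for nodes x
    (polynomial-based formulation)."""
    n = len(x)
    diff = x[:, None] - x[None, :]
    np.fill_diagonal(diff, 1.0)
    M = diff.prod(axis=1)                 # M[i] = prod_{k != i} (x_i - x_k)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                D[i, j] = M[i] / ((x[i] - x[j]) * M[j])
    np.fill_diagonal(D, -D.sum(axis=1))   # row sums of D must vanish
    return D

# Illustrative use: Chebyshev-Gauss-Lobatto points on [0, 1]
n = 11
x = 0.5 * (1.0 - np.cos(np.pi * np.arange(n) / (n - 1)))
D1 = gdq_weights(x)                       # first-derivative operator
D2 = D1 @ D1                              # second-derivative operator (polynomial basis)

# Check: exact for polynomials up to degree n-1, e.g. d/dx x^3 = 3x^2
print(np.allclose(D1 @ x**3, 3 * x**2))
```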
Procedia PDF Downloads 387
2840 Spatial Evaluations of Haskoy: The Emperial Village
Authors: Yasemin Filiz-Kuruel, Emine Koseoglu
Abstract:
This study aims to evaluate the Haskoy district of the Beyoglu town of Istanbul. Haskoy is located in the Halic region, between the Kasimpasa and Kagithane districts. After the conquest of Istanbul, Fatih Sultan Mehmet (the Conqueror) set up his tent here. The area therefore takes its name Haskoy, 'imperial village', meaning a village that is special to the Sultan. Today, there are a shipyard and ateliers of various sizes in Haskoy. In this study, the legibility of Haskoy streets is investigated comparatively. As a research method, a semantic differential scale is used. Photos of streets that contain specific criteria are chosen. The questionnaire is directed to first- and third-year architecture students. The spatial evaluation of Haskoy streets is carried out through the survey.
Keywords: Haskoy, legibility, semantic differential scale, urban streets
Procedia PDF Downloads 568
2839 Comparative Analysis of Classical and Parallel Inpainting Algorithms Based on Affine Combinations of Projections on Convex Sets
Authors: Irina Maria Artinescu, Costin Radu Boldea, Eduard-Ionut Matei
Abstract:
The paper is a comparative study of two classical variants of parallel projection methods for solving the convex feasibility problem with their equivalents that involve variable weights in the construction of the solutions. We used a graphical representation of these methods for inpainting a convex area of an image in order to investigate their effectiveness in image reconstruction applications. We also present a numerical analysis of the convergence of these four algorithms in terms of the average number of steps and execution time in a classical CPU and, alternatively, in a parallel GPU implementation.
Keywords: convex feasibility problem, convergence analysis, inpainting, parallel projection methods
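A minimal sketch of a weighted parallel projection iteration for inpainting is given below; the two convex sets (data fidelity on known pixels and a band-limited, low-pass set) and the fixed weights are illustrative assumptions and not necessarily the sets or weighting scheme analyzed in the paper.

```python
import numpy as np

def project_data(x, known, mask):
    """Projection onto the set of images that agree with the known pixels."""
    y = x.copy()
    y[mask] = known[mask]
    return y

def project_lowpass(x, keep=0.25):
    """Projection onto band-limited images (keep a central low-frequency block)."""
    F = np.fft.fftshift(np.fft.fft2(x))
    h, w = x.shape
    win = np.zeros_like(F)
    dh, dw = int(h * keep / 2), int(w * keep / 2)
    win[h//2-dh:h//2+dh, w//2-dw:w//2+dw] = 1.0
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * win)))

def parallel_projection_inpaint(known, mask, weights=(0.5, 0.5), iters=200):
    """x_{k+1} = w1*P1(x_k) + w2*P2(x_k): a weighted average of projections."""
    x = known.copy()
    x[~mask] = known[mask].mean()          # initial guess inside the hole
    w1, w2 = weights
    for _ in range(iters):
        x = w1 * project_data(x, known, mask) + w2 * project_lowpass(x)
    return project_data(x, known, mask)    # enforce data fidelity at the end

# Illustrative use with a synthetic smooth image and a square hole
h = w = 64
yy, xx = np.mgrid[0:h, 0:w]
img = np.sin(xx / 10.0) + np.cos(yy / 12.0)
mask = np.ones((h, w), dtype=bool)
mask[20:40, 20:40] = False                 # False = missing pixels
restored = parallel_projection_inpaint(img, mask)
print(np.abs(restored[20:40, 20:40] - img[20:40, 20:40]).mean())
```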
Procedia PDF Downloads 176
2838 A Fuzzy Nonlinear Regression Model for Interval Type-2 Fuzzy Sets
Authors: O. Poleshchuk, E. Komarov
Abstract:
This paper presents a regression model for interval type-2 fuzzy sets based on the least squares estimation technique. Unknown coefficients are assumed to be triangular fuzzy numbers. The basic idea is to determine aggregation intervals for type-1 fuzzy sets whose membership functions are the lower and upper membership functions of the interval type-2 fuzzy set. These aggregation intervals were called weighted intervals. The lower and upper membership functions of the input and output interval type-2 fuzzy sets for the developed regression models are considered as piecewise linear functions.
Keywords: interval type-2 fuzzy sets, fuzzy regression, weighted interval
Procedia PDF Downloads 376
2837 Block Matching Based Stereo Correspondence for Depth Calculation
Authors: G. Balakrishnan
Abstract:
Stereo correspondence plays a major role in estimating the distance of an object from a stereo camera pair for various applications. In this paper, a stereo correspondence algorithm based on a block-matching technique is presented. Initially, an energy matrix is calculated for every disparity using a modified Sum of Absolute Differences (SAD). Higher energy-matrix errors are removed by using a threshold value in order to reduce mismatch errors. A smoothing filter is applied to eliminate unreliable disparity estimates across object boundaries. The purpose is to improve the reliability of the calculated disparity map. The experimental results show that the final depth map gives better results and can be used in all applications that employ stereo cameras.
Keywords: stereo matching, filters, energy matrix, disparity
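A minimal sketch of SAD-based block matching along the lines described above; the window size, disparity range, and the percentile-based rejection threshold are illustrative assumptions rather than the paper's settings.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sad_disparity(left, right, max_disp=32, win=5):
    """Winner-take-all disparity map from an SAD cost volume (left image as reference)."""
    h, w = left.shape
    cost = np.full((h, w, max_disp), np.inf)
    for d in range(max_disp):
        # absolute difference between the left image and the right image shifted by d
        diff = np.abs(left[:, d:] - right[:, :w - d])
        # mean over a win x win window is proportional to the SAD "energy"
        cost[:, d:, d] = uniform_filter(diff, size=win, mode="nearest")
    disp = np.argmin(cost, axis=2).astype(float)
    best = cost.min(axis=2)
    # reject high-cost (unreliable) matches, analogous to the energy-matrix thresholding step
    disp[best > np.percentile(best, 90)] = np.nan
    return disp  # depth then follows, up to calibration, from depth = f * B / disparity

# Illustrative use with a random texture shifted by a known disparity of 8 pixels
rng = np.random.default_rng(0)
right = rng.uniform(size=(60, 120))
left = np.roll(right, 8, axis=1)
print(np.nanmedian(sad_disparity(left, right)))   # close to 8
```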
Procedia PDF Downloads 216
2836 Detecting Potential Biomarkers for Ulcerative Colitis Using Hybrid Feature Selection
Authors: Mustafa Alshawaqfeh, Bilal Wajidy, Echin Serpedin, Jan Suchodolski
Abstract:
Inflammatory Bowel Disease (IBD) is a disease of the colon with characteristic inflammation. Clinically, IBD is detected using laboratory tests (blood and stool), radiology tests (CT and MRI imaging), capsule endoscopy, and endoscopy. There are two variants of IBD, referred to as Ulcerative Colitis (UC) and Crohn’s disease. This study employs a hybrid feature selection method that combines a correlation-based variable ranking approach with exhaustive-search wrapper methods in order to find potential biomarkers for UC. The proposed biomarkers showed accurate discriminatory power, identifying them as possible ingredients of UC therapeutics.
Keywords: ulcerative colitis, biomarker detection, feature selection, inflammatory bowel disease (IBD)
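A sketch of such a hybrid filter-plus-wrapper pipeline on synthetic data; the classifier, the number of top-ranked features, and the subset-size cap are assumptions for illustration and may differ from the study's actual ranking criterion and wrapper.

```python
import numpy as np
from itertools import combinations
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def hybrid_select(X, y, top_k=10, max_subset=4):
    """Correlation-based ranking followed by an exhaustive wrapper search."""
    # 1) filter stage: rank features by |correlation| with the class label
    corr = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
    ranked = np.argsort(corr)[::-1][:top_k]
    # 2) wrapper stage: exhaustively evaluate subsets of the top-ranked features
    best_score, best_subset = -np.inf, None
    for r in range(1, max_subset + 1):
        for subset in combinations(ranked, r):
            clf = LogisticRegression(max_iter=1000)
            score = cross_val_score(clf, X[:, subset], y, cv=5).mean()
            if score > best_score:
                best_score, best_subset = score, subset
    return best_subset, best_score

# Illustrative use with synthetic abundance data (binary UC vs. control labels)
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 50))
y = (X[:, 3] + 0.8 * X[:, 7] + rng.normal(scale=0.5, size=120) > 0).astype(int)
print(hybrid_select(X, y))
```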
Procedia PDF Downloads 404
2835 Improving Chest X-Ray Disease Detection with Enhanced Data Augmentation Using Novel Approach of Diverse Conditional Wasserstein Generative Adversarial Networks
Authors: Malik Muhammad Arslan, Muneeb Ullah, Dai Shihan, Daniyal Haider, Xiaodong Yang
Abstract:
Chest X-rays are instrumental in the detection and monitoring of a wide array of diseases, including viral infections such as COVID-19, tuberculosis, pneumonia, lung cancer, and various cardiac and pulmonary conditions. To enhance the accuracy of diagnosis, artificial intelligence (AI) algorithms, particularly deep learning models like Convolutional Neural Networks (CNNs), are employed. However, these deep learning models demand a substantial and varied dataset to attain optimal precision. Generative Adversarial Networks (GANs) can be employed to create new data, thereby supplementing the existing dataset and enhancing the accuracy of deep learning models. Nevertheless, GANs have their limitations, such as issues related to stability, convergence, and the ability to distinguish between authentic and fabricated data. In order to overcome these challenges and advance the detection and classification of normal and abnormal CXR images, this study introduces a technique known as DCWGAN (Diverse Conditional Wasserstein GAN) for generating synthetic chest X-ray (CXR) images. The study evaluates the effectiveness of the DCWGAN technique using the ResNet50 model and compares its results with those obtained using the traditional GAN approach. The findings reveal that the ResNet50 model trained on the DCWGAN-generated dataset outperformed the model trained on the classic GAN-generated dataset. Specifically, the ResNet50 model utilizing DCWGAN synthetic images achieved an accuracy of 0.961, precision of 0.955, recall of 0.970, and F1-measure of 0.963. These results indicate promising potential for the early detection of diseases in CXR images using this approach.
Keywords: CNN, classification, deep learning, GAN, Resnet50
Procedia PDF Downloads 89
2834 The Dialectic between Effectiveness and Humanity in the Era of Open Knowledge from the Perspective of Pedagogy
Authors: Sophia Ming Lee Wen, Chao-Ching Kuo, Yu-Line Hu, Yu-Lung Ho, Chih-Cheng Huang, Yi-Hwa Lee
Abstract:
Teaching and learning should involve social issues in which effectiveness and humanity are given due consideration as guidelines for sharing and co-creating knowledge. A qualitative method was used after a pioneer study to confirm pre-service teachers' awareness of open knowledge. Seventeen in-service teacher candidates were sampled from 181 schools in Taiwan. Two questions are addressed: a) how teachers changed their educational ideas, in particular their attitudes, to meet the needs of knowledge sharing and co-creativity; and b) how they acknowledged the necessity of working out an appropriate balance between educational efficiency and the nature of education for high-performance management. The interviews investigated teachers' attitudes toward sharing and co-creating knowledge. The results show two facts in Taiwan: a) individuals who are able to express themselves will be capable of taking part in an open learning environment; and b) teachers must lead the direction to inspire high performance and improve students' capacity via sharing and co-creating knowledge, according to the student-centered philosophy. The data collected from the interviews showed that the teachers were well aware of changing their teaching methods and made some improvements to balance educational efficiency and the nature of education. Almost all teachers acknowledge that ICT is helpful in motivating learning enthusiasm. Further, teaching integrated with ICT saves teachers' time and energy in teaching preparation and promotes effectiveness. Teachers are willing to co-create knowledge with students, though using information is not easy due to a lack of skills in operating websites and ICT. Some teachers are against co-creating knowledge in an informational setting since they hold that it is not feasible, there being a knowledge gap between teachers and students. Technology can easily mislead teachers and students toward the goal of instrumental rationality, which makes pedagogy dysfunctional and inhumane; however, any high-quality teaching should strike a dialectical balance between effectiveness and humanity.
Keywords: critical thinking, dialectic between effectiveness and humanity, open knowledge, pedagogy
Procedia PDF Downloads 358
2833 Estimation of Sediment Transport into a Reservoir Dam
Authors: Kiyoumars Roushangar, Saeid Sadaghian
Abstract:
Although accurate sediment load prediction is very important in the planning, design, operation and maintenance of water resources structures, the transport mechanism is complex, and deterministic transport models based on simplifying assumptions often lead to large prediction errors. In this research, two intelligent ANN methods, Radial Basis and General Regression Neural Networks, are first adopted to model the total sediment load transported into the Madani Dam reservoir (north of Iran) using the measured data, and then the applicability of the sediment transport methods developed by Engelund and Hansen, Ackers and White, Yang, and Toffaleti for predicting sediment load discharge is evaluated. Based on a comparison of the results, it is found that the GRNN model gives better estimates than the sediment rating curve and the aforementioned classic methods.
Keywords: sediment transport, dam reservoir, RBF, GRNN, prediction
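A General Regression Neural Network is essentially a kernel-weighted average of the training targets; the sketch below is a minimal numpy version on synthetic flow data, where the inputs, the target relation, and the smoothing parameter are illustrative assumptions only.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    """General Regression Neural Network (Specht): kernel-weighted average
    of training targets with a single smoothing parameter sigma."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma ** 2))          # pattern-layer activations
    return (w @ y_train) / w.sum(axis=1)          # summation / division layers

# Illustrative use: predict sediment load from scaled discharge and velocity
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(200, 2))
y = 3.0 * X[:, 0] ** 1.5 + 0.5 * X[:, 1] + rng.normal(0, 0.05, 200)
X_new = rng.uniform(0, 1, size=(5, 2))
print(grnn_predict(X, y, X_new, sigma=0.1))
```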
Procedia PDF Downloads 499
2832 Identifying Protein-Coding and Non-Coding Regions in Transcriptomes
Authors: Angela U. Makolo
Abstract:
Protein-coding and Non-coding regions determine the biology of a sequenced transcriptome. Research advances have shown that Non-coding regions are important in disease progression and clinical diagnosis. Existing bioinformatics tools have been targeted towards Protein-coding regions alone. Therefore, there are challenges associated with gaining biological insights from transcriptome sequence data. These tools are also limited to computationally intensive sequence alignment, which is inadequate and less accurate for identifying both Protein-coding and Non-coding regions. Alignment-free techniques can overcome this limitation. Therefore, this study was designed to develop an efficient sequence alignment-free model for identifying both Protein-coding and Non-coding regions in sequenced transcriptomes. Feature grouping and randomization procedures were applied to the input transcriptomes (37,503 data points). Successive iterations were carried out to compute the gradient vector that converged the developed Protein-coding and Non-coding Region Identifier (PNRI) model to the approximate coefficient vector. The logistic regression algorithm was used with a sigmoid activation function. A parameter vector was estimated for every sample in the 37,503 data points in a bid to reduce the generalization error and cost. Maximum Likelihood Estimation (MLE) was used for parameter estimation by taking the log-likelihood of six features and combining them into a summation function. Dynamic thresholding was used to classify the Protein-coding and Non-coding regions, and the Receiver Operating Characteristic (ROC) curve was determined. The generalization performance of PNRI was determined in terms of F1 score, accuracy, sensitivity, and specificity. The average generalization performance of PNRI was determined using a benchmark of multi-species organisms. The generalization error for identifying Protein-coding and Non-coding regions decreased from 0.514 to 0.508 and then to 0.378 over three iterations. The cost (the difference between the predicted and the actual outcome) also decreased from 1.446 to 0.842 and then to 0.718 for the first, second and third iterations, respectively. The iterations terminated at the 390th epoch, with an error of 0.036 and a cost of 0.316. The computed elements of the parameter vector that maximized the objective function were 0.043, 0.519, 0.715, 0.878, 1.157, and 2.575. The PNRI gave an ROC of 0.97, indicating an improved predictive ability. The PNRI identified both Protein-coding and Non-coding regions with an F1 score of 0.970, accuracy of 0.969, sensitivity of 0.966, and specificity of 0.973. Using 13 non-human multi-species model organisms, the average generalization performance of the traditional method was 74.4%, while that of the developed model was 85.2%, making the developed model better at identifying Protein-coding and Non-coding regions in transcriptomes. The developed Protein-coding and Non-coding region identifier model efficiently identified the Protein-coding and Non-coding transcriptomic regions. It could be used in genome annotation and in the analysis of transcriptomes.
Keywords: sequence alignment-free model, dynamic thresholding classification, input randomization, genome annotation
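The core classifier described above is a six-feature logistic regression fitted by maximum likelihood, followed by a dynamically chosen decision threshold; a compact sketch of that pipeline on synthetic features is given below. The learning rate, epoch count, and the Youden's-J threshold rule are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=390):
    """Maximum-likelihood logistic regression via gradient ascent on the log-likelihood."""
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])   # add intercept
    theta = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        p = sigmoid(Xb @ theta)
        grad = Xb.T @ (y - p) / len(y)              # gradient of the log-likelihood
        theta += lr * grad
    return theta

def dynamic_threshold(scores, y):
    """Pick the classification threshold that maximises Youden's J on a grid."""
    best_t, best_j = 0.5, -1.0
    for t in np.linspace(0.05, 0.95, 91):
        pred = scores >= t
        tpr = (pred & (y == 1)).sum() / max((y == 1).sum(), 1)
        fpr = (pred & (y == 0)).sum() / max((y == 0).sum(), 1)
        if tpr - fpr > best_j:
            best_t, best_j = t, tpr - fpr
    return best_t

# Illustrative use with six synthetic sequence-derived features
rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 6))
y = (X @ np.array([0.0, 0.5, 0.7, 0.9, 1.2, 2.5]) + rng.normal(0, 1, 1000) > 0).astype(int)
theta = fit_logistic(X, y)
scores = sigmoid(np.hstack([np.ones((1000, 1)), X]) @ theta)
print(theta, dynamic_threshold(scores, y))
```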
Procedia PDF Downloads 70
2831 Trace Elements in Yerba Mate from Brazil and Argentina by Inductively Coupled Plasma Mass Spectrometry
Authors: F. V. Matta, C. M. Donnelly, M. B. Jaafar, N. I. Ward
Abstract:
‘Yerba Mate’ (Ilex paraguariensis) is a native plant of South America, with the main producers being Argentina and Brazil. ‘Mate’ is widely consumed in Argentina, Brazil, Uruguay and Paraguay. The most popular format is an infusion made from dried leaves in a traditional cup, roasted material in tea bags, or iced tea infusions. There are many alleged health benefits resulting from mate consumption, even though there is a lack of conclusive research published in the international literature. The main objective of this study was to develop and evaluate the sample preparation and instrumental analysis stages involved in the determination of trace elements in yerba mate using inductively coupled plasma mass spectrometry (ICP-MS). Specific details on the methods of sample digestion and the validation of the ICP-MS analysis, especially the polyatomic ion correction and the matrix effects associated with the complex medium of mate, will be presented. More importantly, mate produced in Brazil and Argentina is subject to different soil conditions and methods of cultivation and production, especially for loose leaves and tea bags. The highest concentrations for loose mate leaf were (mg/kg, dry weight): aluminium (253.6 – 506.9 for Brazil (Bra), 230.0 – 541.8 for Argentina (Arg)), manganese (378.3 – 762.6 Bra; 440.8 – 879.9 Arg), iron (32.5 – 85.7 Bra; 28.2 – 132.9 Arg), zinc (28.2 – 91.1 Bra; 39.1 – 92.3 Arg), nickel (2.2 – 4.3 Bra; 2.9 – 10.8 Arg) and copper (4.8 – 9.1 Bra; 4.3 – 9.2 Arg), with lower levels of chromium, cobalt, selenium, molybdenum, cadmium, lead and arsenic. Elemental levels in mate leaf consumed in tea bags were found to be higher, mainly due to the use of leaf material only (as opposed to leaf and twig for the loose-packed product). Further implications of the way yerba mate is consumed will be presented, including the different infusion methods in Brazil and Argentina. This research provides for the first time an extensive evaluation of mate products from both countries and the possible implications of specific trace elements, especially Mn, Fe, Se, Cu and Zn, for the various health claims of consuming yerba mate.
Keywords: beverage analysis, ICP-MS, trace elements, yerba mate
Procedia PDF Downloads 232
2830 Statistical Estimation of Ionospheric Energy Dissipation Using Østgaard's Empirical Relation
Authors: M. A. Ahmadu, S. S. Rabia
Abstract:
During the past few decades, energy dissipation in the ionosphere resulting from geomagnetic activity has caused an increasing number of major disruptions of important power and communication services, malfunctions and losses of expensive facilities. Here, the electron precipitation energy, w(ep), and the Joule heating energy, w(jh), were used in the computation of this dissipation using Østgaard's empirical relation from hourly geomagnetic indices of 2012, under the assumption that the magnetosphere does not store any energy, so that the activity begins at t1 = 0 and ends at t2 = t. The statistical results obtained show that the ionospheric dissipation varies from month to month, day to day and hour to hour, and is estimated with a value of ~3.6 w(ep), which is in agreement with the experimental result.
Keywords: Østgaard's relation, ionospheric dissipation, joule heating, electron precipitation, geomagnetic indices, empirical relation
Procedia PDF Downloads 297
2829 Computation of Induction Currents in a Set of Dendrites
Authors: R. B. Mishra, Sudhakar Tripathi
Abstract:
In this paper, the cable model of dendrites has been considered. Dendrites are cylindrical cables of various segments having variable length and a radius that reduces from the starting point at the synapse to the end points. For a particular event signal received by a neuron, only some dendrites are active at a given instant. Initial current signals with different current flows in the dendrites are assumed. Due to overlapping and coupling, active dendrites induce currents in each other's segments at a particular instant. However, how these currents are induced in the various segments of active dendrites due to the coupling between them has not been presented in the literature. This paper presents a model for the induced currents in active dendrite segments due to mutual coupling at the starting instant of activity in a dendrite. The model is discussed further in the paper.
Keywords: currents, dendrites, induction, simulation
Procedia PDF Downloads 395
2828 Biomechanical Perspectives on the Urinary Bladder: Insights from the Hydrostatic Skeleton Concept
Authors: Igor Vishnevskyi
Abstract:
Introduction: The urinary bladder undergoes repeated strain during its working cycle, suggesting the presence of an efficient support system, force transmission, and mechanical amplification. The concept of a "hydrostatic skeleton" (HS) could contribute to our understanding of the functional relationships among bladder constituents. Methods: A multidisciplinary literature review was conducted to identify key features of the HS and to gather evidence supporting its applicability in urinary bladder biomechanics. The collected evidence was synthesized to propose a framework for understanding the potential hydrostatic properties of the urinary bladder based on existing knowledge and HS principles. Results: Our analysis revealed similarities in biomechanical features between living fluid-filled structures and the urinary bladder. These similarities include the geodesic arrangement of fibres, the role of enclosed fluid (urine) in force transmission, prestress as a determinant of stiffness, and the ability to maintain shape integrity during various activities. From a biomechanical perspective, urine may be considered an essential component of the bladder. The hydrostatic skeleton, with its autonomy and flexibility, may provide insights for researchers involved in bladder engineering. Discussion: The concept of a hydrostatic skeleton offers a holistic perspective for understanding bladder function by considering multiple mechanical factors as a single structure with emergent properties. Incorporating viewpoints from various fields on HS can help identify how this concept applies to live fluid-filled structures or organs and reveal its broader relevance to biological systems, both natural and artificial. Conclusion: The hydrostatic skeleton (HS) design principle can be applied to the urinary bladder. Understanding the bladder as a structure with HS can be instrumental in biomechanical modelling and engineering. Further research is required to fully elucidate the cellular and molecular mechanisms underlying HS in the bladder.
Keywords: hydrostatic skeleton, urinary bladder morphology, shape integrity, prestress, biomechanical modelling
Procedia PDF Downloads 81
2827 The Advantages of Using DNA-Barcoding for Determining the Fraud in Seafood
Authors: Elif Tugce Aksun Tumerkan
Abstract:
Although seafood is an important part of the human diet and a highly traded food commodity internationally, it generally remains overlooked in the context of global food security. Food product authentication is of main interest both to avoid commercial fraud and to consider the risks that might be harmful to human health and safety. In recent years, with increasing consumer demand for transparency regarding food content, instrumental analyses for determining food fraud have emerged that depend on analytical methodologies such as proteomics and metabolomics. While fish and seafood were previously consumed fresh, with advancing technology the consumption of processed or packaged seafood has increased. After processing or packaging seafood, morphological identification is impossible when some of the external features have been removed. The main fish and seafood quality-related issues are the authentication of seafood contents, such as mislabelled products, which may be contaminated or replaced, partly or completely, by lower-quality or cheaper ones. For all the mentioned reasons, truthful, consistent and easily applicable analytical methods are needed to assure the correct labelling and verification of seafood products. DNA-barcoding methods have become popular, robust tools used in taxonomic research on endangered or cryptic species in recent years; they are also used for determining food traceability. In this review, compared with proteomic and metabolomic analyses, DNA-based methods are shown to allow identification of all types of food, even raw, spiced and processed products. This advantage arises because DNA is a comparatively more stable molecule than proteins and other molecules. Furthermore, its sequence variation among species and its presence in all organisms make DNA-based analysis preferable. This review was performed to clarify the main advantages of using DNA-barcoding, among other techniques, for determining seafood fraud.
Keywords: DNA-barcoding, genetic analysis, food fraud, mislabelling, packaged seafood
Procedia PDF Downloads 170
2826 Supplier Selection by Considering Cost and Reliability
Authors: K. -H. Yang
Abstract:
The supplier selection problem is one of the important issues in supply chain management. Two categories of methodologies, qualitative and quantitative approaches, can be applied to supplier selection problems. However, due to the complexity of the problem and the lack of reliable quantitative data, qualitative approaches are used more often than quantitative approaches. This study considers operational cost and a supplier reliability factor and solves the problem using a quantitative approach. A mixed integer programming model is the primary analytic tool. Analyses of different scenarios with variable cost and reliability structures show the effectiveness of this approach for the supplier selection problem.
Keywords: mixed integer programming, quantitative approach, supplier’s reliability, supplier selection
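A toy version of such a cost-and-reliability mixed integer program is sketched below using the PuLP modelling library; the suppliers, cost figures, and the linear quantity-weighted reliability constraint are illustrative assumptions rather than the formulation used in the study.

```python
import pulp

# Illustrative data: unit cost, fixed ordering cost, capacity and reliability
# per supplier, plus total demand and a minimum average reliability level.
suppliers = ["S1", "S2", "S3"]
cost = {"S1": 9.0, "S2": 7.5, "S3": 8.2}
fixed = {"S1": 200, "S2": 350, "S3": 250}
cap = {"S1": 400, "S2": 300, "S3": 500}
rel = {"S1": 0.98, "S2": 0.90, "S3": 0.95}
demand, min_rel = 600, 0.94

m = pulp.LpProblem("supplier_selection", pulp.LpMinimize)
x = pulp.LpVariable.dicts("qty", suppliers, lowBound=0)       # order quantity
y = pulp.LpVariable.dicts("use", suppliers, cat="Binary")     # supplier used?

# objective: variable purchasing cost plus fixed cost of engaging a supplier
m += pulp.lpSum(cost[s] * x[s] + fixed[s] * y[s] for s in suppliers)

m += pulp.lpSum(x[s] for s in suppliers) == demand            # meet total demand
for s in suppliers:
    m += x[s] <= cap[s] * y[s]                                # capacity / linking
# quantity-weighted reliability must reach the required level (linear form)
m += pulp.lpSum(rel[s] * x[s] for s in suppliers) >= min_rel * demand

m.solve(pulp.PULP_CBC_CMD(msg=False))
print({s: x[s].value() for s in suppliers}, pulp.value(m.objective))
```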
Procedia PDF Downloads 385
2825 Advertising Incentives of National Brands against Private Labels: The Case of OTC Heartburn Drugs
Authors: Lu Liao
Abstract:
The worldwide expansion of private labels over the past two decades not only transformed the choice sets of consumers but also forced manufacturers of national brands to design new marketing strategies to maintain their market positions. This paper empirically analyzes the impact of private labels on advertising incentives of national brands. The paper first develops a consumer demand model that incorporates spillover effects of advertising and finds positive spillovers of national brands’ advertising on demand for private label products. With the demand estimates, the researcher simulates the equilibrium prices and advertising levels for leading national brands in a counterfactual where private labels are eliminated to quantify the changes in national brands’ advertising incentives in response to the rise of private labels.
Keywords: advertising, demand estimation, spillover effect, structural model
Procedia PDF Downloads 29
2824 A Bayesian Model with Improved Prior in Extreme Value Problems
Authors: Eva L. Sanjuán, Jacinto Martín, M. Isabel Parra, Mario M. Pizarro
Abstract:
In Extreme Value Theory, inference on the parameters of the distribution is made employing only a small part of the observed values. When block maxima are taken, many data are discarded. We developed a new Bayesian inference model to seize all the information provided by the data, introducing informative priors and using the relations between the baseline and limit parameters. Firstly, we studied the accuracy of the new model for three baseline distributions that lead to a Gumbel extreme distribution: Exponential, Normal and Gumbel. Secondly, we considered mixtures of Normal variables to simulate practical situations when data do not fit pure distributions because of perturbations (noise).
Keywords: bayesian inference, extreme value theory, Gumbel distribution, highly informative prior
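For orientation, the sketch below fits a Gumbel model to block maxima with Normal priors on the location and scale parameters via a plain Metropolis sampler; the prior hyperparameters, proposal steps, and the Exponential baseline example are illustrative assumptions, not the informative priors constructed in the paper.

```python
import numpy as np

def log_posterior(mu, beta, x, prior_mu=(50.0, 10.0), prior_beta=(10.0, 5.0)):
    """Gumbel log-likelihood for block maxima plus Normal priors on mu and beta."""
    if beta <= 0:
        return -np.inf
    z = (x - mu) / beta
    loglik = -len(x) * np.log(beta) - z.sum() - np.exp(-z).sum()
    lp_mu = -0.5 * ((mu - prior_mu[0]) / prior_mu[1]) ** 2
    lp_beta = -0.5 * ((beta - prior_beta[0]) / prior_beta[1]) ** 2
    return loglik + lp_mu + lp_beta

def metropolis(x, n_iter=20000, step=(0.5, 0.3), seed=0):
    rng = np.random.default_rng(seed)
    mu, beta = np.median(x), x.std()           # crude starting point
    samples = []
    for _ in range(n_iter):
        mu_p = mu + rng.normal(0, step[0])
        beta_p = beta + rng.normal(0, step[1])
        if np.log(rng.uniform()) < log_posterior(mu_p, beta_p, x) - log_posterior(mu, beta, x):
            mu, beta = mu_p, beta_p
        samples.append((mu, beta))
    return np.array(samples[n_iter // 2:])     # discard burn-in

# Illustrative use: block maxima of Exponential data converge to a Gumbel law
rng = np.random.default_rng(3)
maxima = rng.exponential(scale=10.0, size=(200, 50)).max(axis=1)
post = metropolis(maxima)
print(post.mean(axis=0))                       # posterior means of (mu, beta)
```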
Procedia PDF Downloads 199
2823 Spare Part Carbon Footprint Reduction with Reman Applications
Authors: Enes Huylu, Sude Erkin, Nur A. Özdemir, Hatice K. Güney, Cemre S. Atılgan, Hüseyin Y. Altıntaş, Aysemin Top, Muammer Yılman, Özak Durmuş
Abstract:
Remanufacturing (reman) applications allow manufacturers to contribute to the circular economy and help to introduce products of almost the same quality that are environment-friendly and lower in cost. The objective of this study is to show that the carbon footprint of automotive spare parts used in vehicles can be reduced by reman applications, based on a Life Cycle Analysis framed by ISO 14040 principles. The study aims to investigate reman applications for 21 parts in total. So far, research and calculations have been completed for the alternator, turbocharger, starter motor, compressor, manual transmission, automatic transmission, and DPF (diesel particulate filter) parts, respectively. Since Ford Motor Company and Ford OTOSAN aim to achieve net zero based on Science-Based Targets (SBT) and the European Green Deal, which sets out to make the EU climate neutral by 2050, the effects of reman applications are researched. First, remanufacturing articles available in the literature were surveyed, based on the high yearly volume of spare parts sold. From the review results related to material composition and the emissions released during initial production and remanufacturing, a base part was selected as a reference. Then, the data of the selected base part from the literature are used to make an approximate estimation of the carbon footprint reduction of the corresponding part used in Ford OTOSAN. The estimation model is based on the weight and material composition of the reman activity in the referenced paper. As a result of this study, it was seen that remanufacturing applications are technically and environmentally feasible to apply, since they have significant effects in reducing the emissions released during the production phase of vehicle components. For this reason, the research and calculations for the total number of targeted products in yearly volume have been completed to a large extent. Thus, based on the targeted parts whose research has been completed, and in line with the net zero targets of Ford Motor Company and Ford OTOSAN by 2050, if remanufacturing applications are preferred instead of current production methods, it is possible to reduce a significant amount of the greenhouse gas (GHG) emissions associated with spare parts used in vehicles. Besides, it is observed that remanufacturing helps to reduce the waste stream and causes less pollution than making products from raw materials, by reusing automotive components.
Keywords: greenhouse gas emissions, net zero targets, remanufacturing, spare parts, sustainability
Procedia PDF Downloads 83
2822 Estimation of Soil Nutrient Content Using Google Earth and Pleiades Satellite Imagery for Small Farms
Authors: Lucas Barbosa Da Silva, Jun Okamoto Jr.
Abstract:
Precision agriculture has long benefited from aerial imagery of crop fields. This important tool has allowed the identification of patterns in crop fields, generating useful information for production management. Reflectance intensity data in different ranges of the electromagnetic spectrum may indicate the presence or absence of nutrients in the soil of an area. Relations between the different light bands may generate even more detailed information. Knowledge of the nutrient content in the soil or in the crop during its growth is a valuable asset to the farmer who seeks to optimize yield. However, small farmers in Brazil often lack the resources to access this kind of information, and, even when they do, it is not presented in a comprehensive and/or objective way. So, the challenges of implementing this technology range from the sampling of the imagery using aerial platforms, building a mosaic with the images to cover the entire crop field, extracting the reflectance information and analyzing its relationship with the parameters of interest, to displaying the results in a manner that allows the farmer to take the necessary decisions more objectively. In this work, an analysis of soil nutrient content based on image processing of satellite imagery is proposed, comparing its outputs with a commercial laboratory's chemical analysis. Also, sources of satellite imagery are compared to assess the feasibility of using Google Earth data in this application, and the impact of doing so, versus the application of imagery from satellites like Landsat-8 and Pleiades. Furthermore, an algorithm for building mosaics is implemented using Google Earth imagery and, finally, the possibility of using unmanned aerial vehicles is analyzed. From the data obtained, some soil parameters are estimated, namely the content of potassium, phosphorus, boron and manganese, among others. The suitability of Google Earth imagery for this application is verified within a reasonable margin when compared to Pleiades satellite imagery and to the current commercial model. It is also verified that the mosaic construction method has little or no influence on the estimation results. Variability maps are created over the covered area, and the impacts of the image resolution and sample time frame are discussed, allowing easy assessment of the results. The final results show that easy and cheaper remote sensing and analysis methods are possible and feasible alternatives for the small farmer, with little access to technological and/or financial resources, to make more accurate decisions about soil nutrient management.
Keywords: remote sensing, precision agriculture, mosaic, soil, nutrient content, satellite imagery, aerial imagery
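A minimal sketch of the kind of calibration implied above: spectral indices computed from plot-averaged reflectances are regressed against laboratory-measured nutrient values. The chosen bands, the indices, and the synthetic potassium relation are illustrative assumptions only, not the bands or regression used in the study.

```python
import numpy as np

def band_indices(red, nir, green):
    """A few common reflectance relations used as predictors."""
    ndvi = (nir - red) / (nir + red + 1e-9)
    gndvi = (nir - green) / (nir + green + 1e-9)
    ratio = nir / (red + 1e-9)
    return np.column_stack([ndvi, gndvi, ratio])

# Illustrative calibration: plot-mean index values vs. synthetic lab analysis
rng = np.random.default_rng(4)
red, nir, green = rng.uniform(0.05, 0.4, (3, 40))      # plot-mean reflectances
X = band_indices(red, nir, green)
k_lab = 120 + 80 * X[:, 0] + rng.normal(0, 5, 40)      # e.g. potassium proxy

# least-squares calibration and in-sample prediction quality
A = np.hstack([np.ones((len(X), 1)), X])
coef, *_ = np.linalg.lstsq(A, k_lab, rcond=None)
pred = A @ coef
print(coef, np.corrcoef(pred, k_lab)[0, 1])
```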
Procedia PDF Downloads 177
2821 The Analysis of TRACE/PARCS in the Simulation of Ultimate Response Guideline for Lungmen ABWR
Authors: J. R. Wang, W. Y. Li, H. T. Lin, B. H. Lee, C. Shih, S. W. Chen
Abstract:
In this research, the TRACE/PARCS model of the Lungmen ABWR has been developed for verification of the ultimate response guideline (URG) efficiency. This ultimate measure was named the DIVing plan, abbreviated from system depressurization, water injection and containment venting. The simulation initial condition is 100% rated power/100% rated core flow. This research focuses first on estimating, with TRACE/PARCS, the time at which the fuel might be damaged with no water injection. Then, the effect of the reactor core isolation cooling system (RCIC), controlled depressurization and the ac-independent water addition system (ACIWA), which can provide injection at 950 gpm, is also estimated for the station blackout (SBO) transient.
Keywords: ABWR, TRACE, safety analysis, PARCS
Procedia PDF Downloads 456
2820 Analysis of Secondary Peak in Hα Emission Profile during Gas Puffing in Aditya Tokamak
Authors: Harshita Raj, Joydeep Ghosh, Rakesh L. Tanna, Prabal K. Chattopadhyay, K. A. Jadeja, Sharvil Patel, Kaushal M. Patel, Narendra C. Patel, S. B. Bhatt, V. K. Panchal, Chhaya Chavda, C. N. Gupta, D. Raju, S. K. Jha, J. Raval, S. Joisa, S. Purohit, C. V. S. Rao, P. K. Atrey, Umesh Nagora, R. Manchanda, M. B. Chowdhuri, Nilam Ramaiya, S. Banerjee, Y. C. Saxena
Abstract:
Efficient gas fueling is a critical aspect that needs to be mastered in order to maintain the plasma density required to carry out fusion. In the Aditya tokamak, multiple gas puffs are used in a precise and controlled manner for hydrogen fueling during the flat top of the plasma discharge, which has been instrumental in achieving discharges with enhanced density as well as energy confinement time. Following each gas puff, we observe peaks in the temporal profiles of Hα emission, soft X-ray (SXR) emission and chord-averaged electron density in a number of discharges, indicating efficient gas fueling. Interestingly, the Hα temporal profile exhibited an additional peak following the peak corresponding to each gas puff. These additional Hα peaks appeared in between two gas puffs, indicating the presence of a secondary hydrogen source apart from the gas puffs. A thorough investigation revealed that these secondary Hα peaks coincide with hard X-ray bursts, which come from the interaction of runaway electrons with the vessel limiters. This leads us to consider that the runaway electrons (REs) which hit the wall bring out the absorbed hydrogen and oxygen from the wall, making the interaction of REs with the limiter a secondary hydrogen source. These observations suggest that runaway-electron-induced recycling should also be included in the recycling particle source in particle balance calculations in tokamaks. The observation of two Hα peaks associated with one gas puff and their roles in enhancing and maintaining the plasma density in the Aditya tokamak will be discussed in this paper.
Keywords: fusion, gas fueling, recycling, Tokamak, Aditya
Procedia PDF Downloads 402
2819 Neural Reshaping: The Plasticity of Human Brain and Artificial Intelligence in the Learning Process
Authors: Seyed-Ali Sadegh-Zadeh, Mahboobe Bahrami, Sahar Ahmadi, Seyed-Yaser Mousavi, Hamed Atashbar, Amir M. Hajiyavand
Abstract:
This paper presents an investigation into the concept of neural reshaping, which is crucial for achieving strong artificial intelligence through the development of AI algorithms with very high plasticity. By examining the plasticity of both human and artificial neural networks, the study uncovers groundbreaking insights into how these systems adapt to new experiences and situations, ultimately highlighting the potential for creating advanced AI systems that closely mimic human intelligence. The uniqueness of this paper lies in its comprehensive analysis of the neural reshaping process in both human and artificial intelligence systems. This comparative approach enables a deeper understanding of the fundamental principles of neural plasticity, thus shedding light on the limitations and untapped potential of both human and AI learning capabilities. By emphasizing the importance of neural reshaping in the quest for strong AI, the study underscores the need for developing AI algorithms with exceptional adaptability and plasticity. The paper's findings have significant implications for the future of AI research and development. By identifying the core principles of neural reshaping, this research can guide the design of next-generation AI technologies that can enhance human and artificial intelligence alike. These advancements will be instrumental in creating a new era of AI systems with unparalleled capabilities, paving the way for improved decision-making, problem-solving, and overall cognitive performance. In conclusion, this paper makes a substantial contribution by investigating the concept of neural reshaping and its importance for achieving strong AI. Through its in-depth exploration of neural plasticity in both human and artificial neural networks, the study unveils vital insights that can inform the development of innovative AI technologies with high adaptability and potential for enhancing human and AI capabilities alike.
Keywords: neural plasticity, brain adaptation, artificial intelligence, learning, cognitive reshaping
Procedia PDF Downloads 54
2818 Impacts of Exchange Rate and Inflation Rate on Foreign Direct Investment in Pakistan
Authors: Saad Bin Nasir
Abstract:
The study identifies the impact of inflation and the foreign exchange rate on foreign direct investment in Pakistan. Inflation and exchange rates are used as independent variables, and foreign direct investment is taken as the dependent variable. Discrete time series data have been used for the period 1999 to 2009. The results of the regression analysis reveal that high inflation has a negative impact on foreign direct investment, while higher exchange rates have a positive impact on foreign direct investment in Pakistan. Both inflation and foreign exchange rates are, however, statistically insignificant in the analysis.
Keywords: inflation rate, foreign exchange rate, foreign direct investment, foreign assets
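The described analysis is a two-regressor OLS model; a sketch of that setup is shown below with synthetic placeholder series (the numbers are NOT the study's Pakistani data; substituting the actual 1999–2009 annual figures reproduces the analysis).

```python
import numpy as np
import statsmodels.api as sm

# Placeholder series for 1999-2009 (synthetic values, not the actual data)
rng = np.random.default_rng(5)
inflation = rng.uniform(3, 20, 11)          # % per year
exch_rate = rng.uniform(50, 85, 11)         # local currency per USD
fdi = 0.05 * exch_rate - 0.08 * inflation + rng.normal(0, 0.5, 11)

X = sm.add_constant(np.column_stack([inflation, exch_rate]))
model = sm.OLS(fdi, X).fit()
print(model.params)     # expected signs: inflation negative, exchange rate positive
print(model.pvalues)    # the study reports both regressors as insignificant
```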
Procedia PDF Downloads 422
2817 Copula-Based Estimation of Direct and Indirect Effects in Path Analysis Models
Authors: Alam Ali, Ashok Kumar Pathak
Abstract:
Path analysis is a statistical technique used to evaluate the direct and indirect effects of variables in path models. One or more structural regression equations are used to estimate a series of parameters in path models to find the best fit to the data. However, sometimes the assumptions of classical regression models, such as ordinary least squares (OLS), are violated by the nature of the data, resulting in insignificant direct and indirect effects of exogenous variables. This article aims to explore the effectiveness of a copula-based regression approach as an alternative to classical regression, specifically when variables are linked through an elliptical copula.
Keywords: path analysis, copula-based regression models, direct and indirect effects, k-fold cross validation technique
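As a rough illustration of the idea (not the authors' estimator), the sketch below fits one structural equation through a Gaussian, i.e. elliptical, copula: margins are sent to normal scores, the conditional mean of the fitted multivariate Normal gives the regression, and predictions are mapped back through the empirical quantile of the response. In a path model, indirect effects would then be obtained from products of such equation-level coefficients.

```python
import numpy as np
from scipy.stats import norm

def to_normal_scores(v):
    """Probability-integral transform via ranks, then the Normal quantile."""
    ranks = np.argsort(np.argsort(v)) + 1
    return norm.ppf(ranks / (len(v) + 1.0))

def gaussian_copula_regression(X, y, X_new):
    """Predict y from X through a Gaussian copula."""
    Zx = np.column_stack([to_normal_scores(X[:, j]) for j in range(X.shape[1])])
    zy = to_normal_scores(y)
    R = np.corrcoef(np.column_stack([Zx, zy]), rowvar=False)
    Rxx, rxy = R[:-1, :-1], R[:-1, -1]
    beta = np.linalg.solve(Rxx, rxy)               # conditional-mean weights
    # transform new covariates with the training margins (nearest-rank trick)
    Zx_new = np.column_stack([
        norm.ppf((np.searchsorted(np.sort(X[:, j]), X_new[:, j]) + 1) / (len(y) + 2.0))
        for j in range(X.shape[1])])
    zy_hat = Zx_new @ beta
    return np.quantile(y, np.clip(norm.cdf(zy_hat), 0.01, 0.99))

# Illustrative use with a skewed response
rng = np.random.default_rng(6)
X = rng.normal(size=(300, 2))
y = np.exp(0.8 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 0.3, 300))
print(gaussian_copula_regression(X, y, X[:5]), y[:5])
```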
Procedia PDF Downloads 43
2816 Ingratiation as a Moderator of the Impact of the Perception of Organizational Politics on Job Satisfaction
Authors: Triana Fitriastuti, Pipiet Larasatie, Alex Vanderstraten
Abstract:
Many scholars have demonstrated the negative impacts of the perception of organizational politics on organizational outcomes. The model proposed in this study analyzes the impact of the perception of organizational politics on job satisfaction, and ingratiation is tested as a moderator variable. We applied regression analysis to test the hypotheses. The findings of the current research, conducted with 240 employees in the public sector in Indonesia, show that the perception of organizational politics has a negative effect on job satisfaction. In contrast, ingratiation fully moderates the relationship between organizational politics and organizational outcomes and changes the correlation between the perception of organizational politics and job satisfaction. Employees who use ingratiation as a coping mechanism tend to do so when they perceive a high degree of organizational politics.
Keywords: ingratiation, impression management, job satisfaction, perception of organizational politics
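Moderation of this kind is commonly tested with an interaction term in a regression; the sketch below shows that setup on synthetic stand-in survey scores (the variable names, scales, and effect sizes are assumptions, not the study's data).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the survey (n = 240): perception of organizational
# politics (pop), ingratiation (ing) and job satisfaction (js) scale scores.
rng = np.random.default_rng(7)
n = 240
pop = rng.normal(3.0, 0.8, n)
ing = rng.normal(2.8, 0.7, n)
js = 4.5 - 0.4 * pop + 0.1 * ing + 0.3 * pop * ing + rng.normal(0, 0.5, n)

df = pd.DataFrame({"pop": pop - pop.mean(),     # mean-center before interacting
                   "ing": ing - ing.mean(),
                   "js": js})
model = smf.ols("js ~ pop * ing", data=df).fit()   # pop + ing + pop:ing
print(model.params)      # the pop:ing coefficient carries the moderation effect
print(model.pvalues)
```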
Procedia PDF Downloads 157
2815 Economic Assessment Methodology to Support Decisions for Transport Infrastructure Development
Authors: Dimitrios J. Dimitriou
Abstract:
The decades after the end of the Second World War provide evidence that infrastructure investments contribute to economic development in terms of productivity and income growth. In order to boost productivity and increase competitiveness, the financing of large transport infrastructure projects is at the top of the agenda in the strategic planning process. Such a decision may take from some days to some decades, and stakeholders as well as decision makers need tools in order to estimate the economic impact of such an investment on the national economy. The key question in such decisions is whether the effects caused by the new infrastructure could boost economic development on the one hand, and create new jobs and activities on the other. This paper reviews the estimation of the economic effects of mega transport infrastructure projects on the economy.
Keywords: economic impact, transport infrastructure, strategic planning, decision making
Procedia PDF Downloads 291
2814 Applying Sequential Pattern Mining to Generate Block for Scheduling Problems
Authors: Meng-Hui Chen, Chen-Yu Kao, Chia-Yu Hsu, Pei-Chann Chang
Abstract:
The main idea of this paper is to use sequential pattern mining to find information that is helpful for constructing high-performance solutions. This information is combined and defined as blocks. Using the blocks to generate artificial chromosomes (ACs) can improve the structure of solutions. Estimation of Distribution Algorithms (EDAs) are adapted to solve the combinatorial problems. Although many of these approaches are advantageous for this application, only some of them are used to enhance its efficiency. Generating ACs from the mined patterns within EDAs can increase diversity. According to the experimental results, the proposed algorithm performs better on permutation flow-shop problems.
Keywords: combinatorial problems, sequential pattern mining, estimation of distribution algorithms, artificial chromosomes
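A toy sketch of the block idea for permutation problems: frequent adjacent subsequences are mined from elite solutions and reused to seed artificial chromosomes. The block length, support threshold, and assembly rule are illustrative assumptions, not the paper's exact procedure.

```python
import random
from collections import Counter

def mine_blocks(elite, min_support=3, block_len=2):
    """Mine frequent adjacent job subsequences (blocks) from elite permutations."""
    counts = Counter()
    for perm in elite:
        for i in range(len(perm) - block_len + 1):
            counts[tuple(perm[i:i + block_len])] += 1
    return [list(b) for b, c in counts.most_common() if c >= min_support]

def artificial_chromosome(n_jobs, blocks, rng):
    """Assemble a permutation by laying down non-overlapping blocks first,
    then filling the remaining jobs in random order."""
    used, chrom = set(), []
    for block in blocks:
        if not used.intersection(block):
            chrom.extend(block)
            used.update(block)
    rest = [j for j in range(n_jobs) if j not in used]
    rng.shuffle(rest)
    return chrom + rest

# Illustrative use inside an EDA-style loop: elite permutations seed the blocks
rng = random.Random(0)
elite = [[0, 1, 2, 3, 4, 5, 6, 7], [0, 1, 3, 2, 4, 5, 7, 6], [1, 0, 2, 3, 4, 5, 6, 7]]
blocks = mine_blocks(elite, min_support=2)
print(blocks[:3])
print(artificial_chromosome(8, blocks, rng))
```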
Procedia PDF Downloads 613
2813 Unraveling the Mysteries of the Anahata Nada to Achieve Supreme Consciousness
Authors: Shanti Swaroop Mokkapati
Abstract:
The unstruck sound, or the Anahata Nada, holds the key to elevating the consciousness levels of the practitioner. This has been well established by the great saints of the eastern tradition over the past few centuries. This paper intends to explore in depth the common thread of the practice of Anahata Nada by the musical saints, examining the subtle mentions in their compositions as well as demystifying the musical experiences that give insight into elevated levels of consciousness. Mian Tansen, one of the greatest musicians in the North Indian Hindustani classical music tradition, who lived in the 15th century, is said to have brought rain through his singing of Raga Megh Malhar. The South Indian (Carnatic) musical saint Tyagaraja, who lived in the 18th century, composed hundreds of musical pieces full of love for the Supreme Being. Many of these compositions unravel the secrets of Anahata Nada, the chakras in the human body that hold the key to these practices, and the visions of elevated levels of consciousness that Saint Tyagaraja himself experienced through these practices. The spiritual practitioners of the Radhasoami Faith (Religion of Saints) in Dayalbagh, India, have adopted a practice called Surat Shabda Yoga (meditational practices that unite the all-pervasive sound current with the spirit current and elevate levels of consciousness). The practitioners of this yogic method submit that they have been able to hear mystic words including Om, Racing, Soham, Sat, and Radhasoami, along with instrumental sounds that accompany these mystic words in the form of a crescendo. These experiences of elevated consciousness among musical saints are numerous, and this paper intends to explore the more significant ones from many centuries in the past up to the present day, where the elevated consciousness levels of practitioners are being scientifically measured and analyzed using quantum computing.
Keywords: Anahata Nada, Nada Yoga, Tyagaraja, Radhasoami
Procedia PDF Downloads 179
2812 Using SNAP and RADTRAD to Establish the Analysis Model for Maanshan PWR Plant
Authors: J. R. Wang, H. C. Chen, C. Shih, S. W. Chen, J. H. Yang, Y. Chiang
Abstract:
In this study, we focus on the establishment of the analysis model for Maanshan PWR nuclear power plant (NPP) by using RADTRAD and SNAP codes with the FSAR, manuals, and other data. In order to evaluate the cumulative dose at the Exclusion Area Boundary (EAB) and Low Population Zone (LPZ) outer boundary, Maanshan NPP RADTRAD/SNAP model was used to perform the analysis of the DBA LOCA case. The analysis results of RADTRAD were similar to FSAR data. These analysis results were lower than the failure criteria of 10 CFR 100.11 (a total radiation dose to the whole body, 250 mSv; a total radiation dose to the thyroid from iodine exposure, 3000 mSv).
Keywords: RADionuclide, transport, removal, and dose estimation (RADTRAD), symbolic nuclear analysis package (SNAP), dose, PWR
Procedia PDF Downloads 465