Search results for: automatic recording
239 Dido: An Automatic Code Generation and Optimization Framework for Stencil Computations on Distributed Memory Architectures
Authors: Mariem Saied, Jens Gustedt, Gilles Muller
Abstract:
We present Dido, a source-to-source auto-generation and optimization framework for multi-dimensional stencil computations. It enables a large programmer community to easily and safely implement stencil codes on distributed-memory parallel architectures with Ordered Read-Write Locks (ORWL) as an execution and communication back-end. ORWL provides inter-task synchronization for data-oriented parallel and distributed computations. It has been proven to guarantee equity, liveness, and efficiency for a wide range of applications, particularly for iterative computations. Dido consists mainly of an implicitly parallel domain-specific language (DSL) implemented as a source-level transformer. It captures domain semantics at a high level of abstraction and generates parallel stencil code that leverages all ORWL features. The generated code is well-structured and lends itself to different possible optimizations. In this paper, we enhance Dido to handle both Jacobi and Gauss-Seidel grid traversals. We integrate temporal blocking into the Dido code generator in order to reduce the communication overhead and minimize data transfers. To increase data locality and improve intra-node data reuse, we couple the code generation technique with the polyhedral parallelizer Pluto. The accuracy and portability of the generated code are guaranteed thanks to a parametrized solution. The combination of ORWL features, the code generation pattern and the suggested optimizations makes Dido a powerful code generation framework for stencil computations in general, and for distributed-memory architectures in particular. We present a wide range of experiments over a number of stencil benchmarks.
Keywords: stencil computations, ordered read-write locks, domain-specific language, polyhedral model, experiments
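As a point of reference for the Jacobi traversals mentioned above (a minimal sequential sketch, not Dido's ORWL-based generated code), one sweep of the 2D 5-point Jacobi stencil replaces each interior point with the average of its four neighbors, reading only the values from the previous iteration:

```python
def jacobi_step(grid):
    """One Jacobi sweep of the 5-point stencil: every interior point
    becomes the average of its four neighbors; boundary values are fixed.
    Jacobi semantics: all reads come from the old grid, never the new one."""
    n, m = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for i in range(1, n - 1):
        for j in range(1, m - 1):
            new[i][j] = 0.25 * (grid[i - 1][j] + grid[i + 1][j]
                                + grid[i][j - 1] + grid[i][j + 1])
    return new

# A 3x3 grid with a hot boundary of 1.0 and a cold center:
g = [[1.0, 1.0, 1.0], [1.0, 0.0, 1.0], [1.0, 1.0, 1.0]]
g = jacobi_step(g)
print(g[1][1])  # the center becomes the average of its 4 neighbors: 1.0
```

A Gauss-Seidel traversal would instead update `grid` in place, so later points in the same sweep read already-updated neighbors; that data dependence is precisely what makes its parallelization harder than Jacobi's.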
Procedia PDF Downloads 126
238 Keeping Education Non-Confessional While Teaching Children about Religion
Authors: Tünde Puskás, Anita Andersson
Abstract:
This study is part of a research project on whether religion is considered part of Swedish cultural heritage in Swedish preschools. Our aim in this paper is to explore how teachers in a Swedish preschool with a religious profile balance between keeping education non-confessional and teaching children about a particular tradition with religious roots, Easter. The point of departure for the theoretical frame of our study is that practical considerations in pedagogical situations are inherently dilemmatic. The dilemmas of interest for our study evolve around formalized, intellectual ideologies, such as multiculturalism and secularism, that have an impact on everyday practice. Educational dilemmas may also arise in the intersections of the formalized ideology of non-confessionalism, prescribed in policy documents, and common-sense understandings of what is included in what is understood as Swedish cultural heritage. In this paper, religion is treated as a human worldview that, similarly to secular ideologies, can be understood as a system of thought. We make use of Ninian Smart's theoretical framework, according to which, in the modern Western world, religious and secular ideologies, as human worldviews, can be studied within the same analytical framework. In order to study the distinctive character of human worldviews, Smart introduced a multi-dimensional model within which the different dimensions interact with each other in various ways and to different degrees. The data for this paper are drawn from fieldwork carried out in 2015-2016 in the form of video ethnography. The empirical material chosen consists of a video recording of a specific activity during which the preschool group took part in an Easter play performed in the local church.
The analysis shows that the policy of non-confessionalism, together with the idea that teaching covering religious issues must be purely informational, leads in everyday practice to dilemmas about what is considered religious. At the same time, what the adults actually do with religion fulfills six of the seven dimensions common to religious traditions as outlined by Smart. What we can also conclude from the analysis is that whether it is religion or a cultural tradition that is taught through the performance the children watched in the church depends on how the concept of religion is defined. The analysis shows that the characters of the performance themselves understood religion as the doctrine of Jesus' resurrection from the dead. This narrow understanding of religion enabled them, indirectly, to teach about the traditions and narratives surrounding Easter while avoiding teaching religion as a belief system.
Keywords: non-confessional education, preschool, religion, tradition
Procedia PDF Downloads 158
237 Comparative Study of sLASER and PRESS Techniques in Magnetic Resonance Spectroscopy of Normal Brain
Authors: Shin Ku Kim, Yun Ah Oh, Eun Hee Seo, Chang Min Dae, Yun Jung Bae
Abstract:
Objectives: The commonly used PRESS technique in magnetic resonance spectroscopy (MRS) has the limitation of incomplete water suppression. The recently developed sLASER technique is known to suppress the water signal more effectively. However, no prior study has compared the two sequences in the normal human brain. In this study, we aimed to compare the performance of both techniques in brain MRS. Materials and methods: From January 2023 to July 2023, thirty healthy participants (mean age 38 years; 17 male, 13 female) without underlying neurological disease were enrolled in this study. All participants underwent single-voxel MRS using both the PRESS and sLASER techniques on 3T MRI. Two regions of interest were placed in the left medial thalamus and left parietal white matter (WM) by a single reader. SpectroView Analysis (SW5, Philips, Netherlands) provided automatic measurements, including the signal-to-noise ratio (SNR) and peak height of water, N-acetylaspartate (NAA)-water/choline (Cho)-water/creatine (Cr)-water ratios, and NAA-Cr/Cho-Cr ratios. The measurements from the PRESS and sLASER techniques were compared using paired t-tests and Bland-Altman methods, and variability was assessed using coefficients of variation (CV). Results: The SNR and peak height of water were significantly lower with sLASER than with PRESS (left medial thalamus: SNR/peak height 2092±475/328±85 for sLASER vs. 2811±549/440±105 for PRESS; left parietal WM: 5422±1016/872±196 vs. 7152±1305/1150±278; all P<0.001). Accordingly, the NAA-water/Cho-water/Cr-water ratios and NAA-Cr/Cho-Cr ratios were significantly higher with sLASER than with PRESS (all P<0.001). The variabilities of the NAA-water/Cho-water/Cr-water ratios and the Cho-Cr ratio in the left medial thalamus were lower with sLASER than with PRESS (CV, sLASER vs. PRESS: 19.9 vs. 58.1, 19.8 vs. 54.7, 20.5 vs. 43.9, and 11.5 vs. 16.2). Conclusion: The sLASER technique demonstrated enhanced background water suppression, resulting in increased metabolite signals and reduced variability in brain metabolite measurements with MRS. sLASER could therefore offer a more precise and stable method for identifying brain metabolites.
Keywords: magnetic resonance spectroscopy, brain, sLASER, PRESS
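The variability comparison above relies on the coefficient of variation, CV = (standard deviation / mean) × 100. A minimal sketch (with invented numbers, not the study's data) of how a lower CV signals less relative measurement variability:

```python
import statistics

def coefficient_of_variation(values):
    """CV (%) = sample standard deviation / mean * 100.
    A lower CV means less relative variability across repeated measurements."""
    return statistics.stdev(values) / statistics.mean(values) * 100.0

# Illustrative metabolite-ratio measurements (not from the study):
tight = [1.00, 1.02, 0.98, 1.01, 0.99]   # stable technique
loose = [1.00, 1.40, 0.60, 1.30, 0.70]   # noisy technique
print(coefficient_of_variation(tight))   # small CV
print(coefficient_of_variation(loose))   # much larger CV
```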
Procedia PDF Downloads 42
236 Queuing Analysis and Optimization of Public Vehicle Transport Stations: A Case of South West Ethiopia Region Vehicle Stations
Authors: Mequanint Birhan
Abstract:
Modern urban environments are a dynamically growing field in which, notwithstanding shared goals, several mutually conflicting interests frequently collide. Waiting lines and queues are common occurrences at public vehicle stations, with a significant impact on a city's socioeconomic standing. The result is extremely long lines of both vehicles and people on incongruous routes, service congestion, customer murmuring, unhappiness, complaints, and passengers seeking alternatives, sometimes illegally. A root cause of this is corruption, which leads to traffic jams, stopping, packing vehicles beyond their safe carrying capacity, and violating the rights and freedoms of passengers. This study focused on optimizing the time passengers have to wait at public vehicle stations. This applied research employed mixed data-gathering approaches; 166 key informants at transport stations were sampled using the Slovin sampling formula. The length of time vehicles, including their drivers and auxiliary drivers ('Weyala'), had to wait was also studied. To maximize the service level at vehicle stations, a queuing model, 'Menaharya', was subsequently devised. Time, cost, and quality encompass performance, scope, and suitability for the intended purposes. The minimal response time for passengers and vehicles queuing to reach their final destination was determined at the stations of the Tepi, Mizan, and Bonga towns. A new bus station system was modeled and simulated with Arena simulation software in the chosen study area, yielding an 84% improvement in service level: cost was reduced by 56.25% and waiting time fell from 4 hr to 1.5 hr, with quality, safety, and design-load performance calculations also carried out. Stakeholders are asked to put the model into practice and monitor the results obtained.
Keywords: Arena 14 automatic rockwell, queue, transport services, vehicle stations
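The abstract does not specify the queuing discipline behind 'Menaharya'. As a hedged textbook reference point only, the simplest single-server M/M/1 model (Poisson arrivals, exponential service) yields station waiting times directly from the arrival and service rates:

```python
def mm1_metrics(arrival_rate, service_rate):
    """Steady-state metrics of an M/M/1 queue (Poisson arrivals,
    exponential service, one server). Requires arrival_rate < service_rate."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrivals must be slower than service")
    rho = arrival_rate / service_rate          # server utilization
    return {
        "utilization": rho,
        "L": rho / (1 - rho),                  # mean number in system
        "W": 1.0 / (service_rate - arrival_rate),   # mean time in system
        "Wq": rho / (service_rate - arrival_rate),  # mean waiting time in queue
    }

# Illustrative rates (not the study's data): 12 vehicles/hour arrive,
# the station can serve 15/hour:
m = mm1_metrics(12, 15)
print(m["utilization"], m["Wq"])  # 0.8 utilization, ~0.267 h (16 min) wait
```

Raising the service rate or adding servers (an M/M/c extension) shrinks `Wq` sharply near saturation, which is the lever the simulated station redesign exploits.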
Procedia PDF Downloads 77
235 Identifying Artifacts in SEM-EDS of Fouled RO Membranes Used for the Treatment of Brackish Groundwater Through Raman and ICP-MS Analysis
Authors: Abhishek Soti, Aditya Sharma, Akhilendra Bhushan Gupta
Abstract:
Fouled reverse osmosis membranes are primarily characterized by Scanning Electron Microscopy (SEM) and Energy-Dispersive X-ray Spectroscopy (EDS) for a detailed investigation of foulants; however, this approach has severe limitations on several counts. Apart from inaccuracy in spectral properties and inevitable interferences and interactions between sample and instrument, misidentification of elements due to overlapping peaks is a significant drawback of EDS. This paper discusses this limitation by analyzing fouled polyamide RO membranes from community RO plants in Rajasthan treating brackish water, via a combination of results obtained from EDS and Raman spectroscopy, cross-corroborated with ICP-MS analysis of water samples prepared by dissolving the deposited salts. The anomalous behavior of different morphic forms of CaCO₃ in aqueous suspensions tends to introduce false reporting of certain heavy metals and rare-earth metals in the scales of fouled RO membranes used for treating brackish groundwater when analyzed using commonly adopted techniques like SEM-EDS or Raman spectrometry. Peaks of CaCO₃ reflected in the EDS spectra of the membrane were found to be misinterpreted as scandium due to the automatic assignment of elements by the software. Similarly, morphic forms merged with the dominant peak of CaCO₃ may be reflected as a single peak of molybdenum in the Raman spectrum. A subsequent ICP-MS analysis of the deposited salts showed that both Sc and Mo were below detectable levels. It is therefore always essential to cross-confirm the results through a destructive analysis method to avoid such interferences.
It is further recommended to study different morphic forms of CaCO₃ scales, as they exhibit anomalous properties like reverse solubility with temperature and hence altered precipitation tendencies, for an accurate description of the composition of scales, which is vital for the smooth functioning of RO systems.
Keywords: reverse osmosis, foulant analysis, groundwater, EDS, artifacts
Procedia PDF Downloads 103
234 The Effect of Penalizing Wrong Answers in the Computerized Modified Multiple Choice Testing System
Authors: Min Hae Song, Jooyong Park
Abstract:
Even though assessment using information and communication technology will most likely lead the future of educational assessment, there is little research on this topic. Computerized assessment will not only cut costs but also measure students' performance in ways not possible before. In this context, this study introduces a tool that can overcome the problems of multiple-choice tests. Multiple-choice (MC) tests are efficient for automatic grading; however, their structure allows students to find the correct answer from the options even when they do not know it. A computerized modified multiple-choice testing system (CMMT) was developed using the interactivity of computers: it presents the question first, and the options only later, for a short time, when the student requests them. This study was conducted to find out whether penalizing wrong answers in CMMT could reduce random guessing. We checked whether students knew the answers by having them respond to short-answer tests before choosing among the given options in the CMMT or MC format. Ninety-four students were tested with the instruction that they would be penalized for wrong answers, but not for no response. There were 4 experimental conditions: a high or a low penalty rate, each in the traditional multiple-choice or the CMMT format. In the low-penalty condition, the penalty rate equaled the probability of getting the correct answer by random guessing. In the high-penalty condition, students were penalized at twice the rate of the low-penalty condition. The results showed that the number of no-responses was significantly higher, and the number of random guesses significantly lower, for the CMMT format. There were no significant differences between the two penalty conditions. This result may be due to the fact that the actual score difference between the two conditions was too small.
In the discussion, the possibility of applying CMMT-format tests while penalizing wrong answers in actual testing settings was addressed.
Keywords: computerized modified multiple choice test format, multiple-choice test format, penalizing, test format
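The number of options per item is not stated in the abstract; assuming 4-option items, a short sketch shows the expected score of pure guessing under each penalty rate described above, and, for comparison, under the classic formula-scoring penalty of 1/(n-1), which makes guessing exactly score-neutral:

```python
def expected_guess_score(n_options, penalty, reward=1.0):
    """Expected score of random guessing on one item: the guesser is
    correct with probability 1/n (gaining `reward`) and wrong otherwise
    (losing `penalty`)."""
    p_correct = 1.0 / n_options
    return p_correct * reward - (1.0 - p_correct) * penalty

# Assumed 4-option items (the abstract does not state the option count):
low = expected_guess_score(4, 0.25)      # penalty = chance of a correct guess
high = expected_guess_score(4, 0.50)     # penalty doubled
neutral = expected_guess_score(4, 1 / 3) # classic 1/(n-1) penalty
print(low, high, neutral)  # 0.0625, -0.125, ~0.0
```

Note that with the study's low-penalty rule (penalty = 1/n) guessing still carries a small positive expectation, while the doubled rate makes it strictly losing; that gap is what the experiment probed.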
Procedia PDF Downloads 166
233 Analyzing Safety Incidents using the Fatigue Risk Index Calculator as an Indicator of Fatigue within a UK Rail Franchise
Authors: Michael Scott Evans, Andrew Smith
Abstract:
The feeling of fatigue at work can have devastating consequences. The aim of this study was to investigate whether the Fatigue Risk Index (FRI) calculator, the well-established objective indicator of fatigue used by the rail industry, is an effective indicator of the number of safety incidents in which fatigue could have been a contributing factor. The study received ethics approval from Cardiff University's Ethics Committee (EC.16.06.14.4547). A total of 901 safety incidents recorded in the Safety Management Information System (SMIS) were collected from a single British rail franchise for the period 1 June 2010 to 31 December 2016. The safety incident types identified as ones in which fatigue could have been a contributing factor were: Signal Passed at Danger (SPAD), Train Protection & Warning System (TPWS) activation, Automatic Warning System (AWS) slow to cancel, failed to call, and station overrun. For these 901 safety incidents, the scheduling system CrewPlan was used to extract the Fatigue Index (FI) score and Risk Index (RI) score of all train drivers on the day of the incident. Only the working rosters of 64.2% (N = 578) of the drivers (550 male and 28 female), ranging in age from 24 to 65 years (M = 47.13, SD = 7.30), were accessible for analysis. Analysis of all 578 train drivers involved in safety incidents revealed that 99.8% (N = 577) of FI scores fell within or below the guideline threshold of 45, and 97.9% (N = 566) of RI scores fell below the 1.6 threshold; these scores represent good practice within the rail industry. These findings indicate that the current objective indicator, i.e., the FRI calculator used by this British rail franchise, was not an effective predictor of safety incidents in which fatigue could have been a contributing factor, since only 0.2% of FI scores and 2.1% of RI scores exceeded the thresholds.
Further research is needed to determine whether other contributing factors could better explain why such a large proportion of the train drivers involved in safety incidents, in which fatigue could have been a contributing factor, had such low FI and RI scores.
Keywords: fatigue risk index calculator, objective indicator of fatigue, rail industry, safety incident
Procedia PDF Downloads 180
232 Wildland Fire in Terai Arc Landscape of Lesser Himalayas Threatning the Tiger Habitat
Authors: Amit Kumar Verma
Abstract:
The present study deals with a fire prediction model for the Terai Arc Landscape (TAL), one of the most dramatic ecosystems in Asia, where large, wide-ranging species such as the tiger, rhino, and elephant can thrive while bringing economic benefits to the local people. Forest fires cause huge economic and ecological losses, release considerable quantities of carbon into the air, and are an important factor inflating the global burden of carbon emissions. Forest fire is also an important factor in the behavioral and ecological habits of the tiger in the wild. Post-fire changes in micro- and macro-habitat directly affect tiger habitat. Fire vulnerability is reflected in changes in the microhabitat (humus, soil profile, litter, vegetation, grassland ecosystem). Organisms such as spiders, annelids, and arthropods are directly affected by forest fire, and indirectly all of these organisms contribute to the development of tiger (Panthera tigris) habitat. On the other hand, fire depletes prey species and drives tigers from the wild into human-dominated areas, which may lead to conflict dangerous for both tigers and human beings. Early forest-fire prediction, through mapping of risk zones, can help minimize fire frequency and manage forest fires, thereby minimizing losses. Satellite data play a vital role in identifying and mapping forest fire and recording the frequency with which different vegetation types are affected. Thematic hazard maps were generated using the IDW technique. A prediction model for fire occurrence was developed for TAL: fire occurrence records were collected from the state forest department for 2000 to 2014, random points for non-occurrence of fire were generated, and discriminant function models were used so that, based on the attributes of the points of occurrence and non-occurrence, the model predicts fire occurrence.
The map of predicted probabilities classified the study area into five classes based on the intensity of hazard: very high (12.94%), high (23.63%), moderate (25.87%), low (27.46%) and no fire (10.1%). The model is able to classify 78.73% of fire-occurrence points correctly and hence can be used for this purpose with confidence; overall, the model classifies almost 69% of all points correctly. This study exemplifies the usefulness of a prediction model of forest fire and offers a more effective way to manage forest fire. Overall, the study presents a model for the conservation of the tiger's natural habitat and for forest conservation, to the benefit of wildlife and human beings alike.
Keywords: fire prediction model, forest fire hazard, GIS, landsat, MODIS, TAL
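The abstract names discriminant function models. As an illustrative sketch only, with an invented single "dryness index" predictor rather than the study's attributes, a two-class linear discriminant with pooled, equal variances reduces to a threshold halfway between the class means:

```python
def fit_discriminant_1d(fire, no_fire):
    """Minimal one-predictor discriminant: classify a point by which class
    mean it is closer to. With pooled, equal variances this is equivalent
    to thresholding at the midpoint between the two class means."""
    mean = lambda xs: sum(xs) / len(xs)
    m_fire, m_none = mean(fire), mean(no_fire)
    threshold = (m_fire + m_none) / 2.0

    def predict(x):
        # 1 = fire predicted, 0 = no fire
        return 1 if (x > threshold) == (m_fire > m_none) else 0

    return predict

# Invented training values of a hypothetical "dryness index":
predict = fit_discriminant_1d(fire=[0.8, 0.9, 0.7], no_fire=[0.2, 0.3, 0.1])
print(predict(0.85), predict(0.15))  # 1 (fire likely), 0 (no fire)
```

The study's actual models use multiple attributes of occurrence and non-occurrence points; the one-variable case above only shows the decision rule's shape.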
Procedia PDF Downloads 350
231 Detection of Safety Goggles on Humans in Industrial Environment Using Faster-Region Based on Convolutional Neural Network with Rotated Bounding Box
Authors: Ankit Kamboj, Shikha Talwar, Nilesh Powar
Abstract:
To successfully deliver our products to the market, employees need to be in a safe environment, especially in industrial and manufacturing settings. The consequences of delinquency in wearing safety glasses while working in industrial plants can put employees at high risk, hence the need to develop a real-time automatic detection system that detects persons (violators) not wearing safety glasses. In this study, a convolutional neural network (CNN) algorithm called Faster Region-based CNN (Faster RCNN) with rotated bounding boxes has been used for detecting safety glasses on persons; the algorithm has the advantage of detecting safety glasses at different orientation angles on the persons. The proposed method first detects a person in the image and then detects whether the person is wearing safety glasses. The video data is captured at the entrance of restricted zones of the industrial environment (a manufacturing plant) and converted into images at 2 frames per second. In the first step, a CNN with weights pre-trained on the COCO dataset is used for person detection, and the detections are cropped as images. The safety goggles are then labelled on the cropped images using the image-labelling tool roLabelImg, which annotates the ground-truth values of rotated objects more accurately; the annotations obtained are further modified to give the four coordinates of the rotated rectangular bounding box. Next, Faster RCNN with rotated bounding boxes is used to detect safety goggles and is compared with traditional-bounding-box Faster RCNN in terms of detection accuracy (average precision), which shows the effectiveness of the proposed method for the detection of rotated objects.
The deep learning benchmarking is done on a Dell workstation with a 16GB Nvidia GPU.
Keywords: CNN, deep learning, faster RCNN, roLabelImg, rotated bounding box, safety goggle detection
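The annotations above are converted to four corner coordinates of a rotated rectangle. A small geometry sketch (the parameter names are our own, not roLabelImg's actual format) of converting a (center, width, height, angle) description into its four corner points:

```python
import math

def rotated_box_corners(cx, cy, w, h, angle_deg):
    """Corners of a rectangle centered at (cx, cy) with width w and height h,
    rotated by angle_deg: rotate each axis-aligned corner offset about the
    center with the standard 2D rotation matrix."""
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    corners = []
    for dx, dy in [(-w / 2, -h / 2), (w / 2, -h / 2),
                   (w / 2, h / 2), (-w / 2, h / 2)]:
        corners.append((cx + dx * cos_a - dy * sin_a,
                        cy + dx * sin_a + dy * cos_a))
    return corners

# A 2x1 box at the origin, rotated 90 degrees:
print(rotated_box_corners(0, 0, 2, 1, 90))
```

This is why a rotated representation fits elongated, tilted objects (like goggles on a turned head) more tightly than an axis-aligned box, whose area must grow to cover the same rotated extent.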
Procedia PDF Downloads 127
230 The Use of Punctuation by Primary School Students Writing Texts Collaboratively: A Franco-Brazilian Comparative Study
Authors: Cristina Felipeto, Catherine Bore, Eduardo Calil
Abstract:
This work aims to analyze and compare the punctuation marks (PM) in school texts by Brazilian and French students, and the comments on these PM made spontaneously by the students while the text was in progress. Assuming textual genetics as an investigative field within a dialogical and enunciative approach, we defined a common methodological design in two 1st-year classrooms (7-year-olds) of primary school, one in Brazil (Maceio) and the other in France (Paris). Through a multimodal capture system for writing processes in real time and space (Ramos System), we recorded the collaborative writing proposals in dyads in each of the classrooms. This system preserves the classroom's ecological characteristics and provides a video recording synchronized with the dialogues, gestures and facial expressions of the students, the stroke of the pen's ink on the sheet of paper, and the movement of the teacher and students in the classroom. The multimodal record of the writing process gave access to the text in progress and to the comments made by the students on what was being written. In each proposed text production, teachers organized their students in dyads and asked them to talk, plan and write a fictional narrative together. We selected one dyad of Brazilian students (BD) and one dyad of French students (FD), and filmed 6 proposals for each of the dyads. The proposals were collected during the 2nd term of 2013 (Brazil) and 2014 (France). In the 6 texts written by the BD, 39 PMs and 825 written words were identified (on average, one PM every 23 words); of these 39 PMs, 27 were highlighted orally and commented on by one of the students. In the texts written by the FD, 48 PMs and 258 written words were identified (on average, one PM every 5 words); of these 48 PMs, 39 were commented on by the French students. Unlike what studies on punctuation acquisition point out, the PMs that occurred most were hyphens (BD) and commas (FD).
Despite the significant difference between the types and quantities of PM in the written texts, the recognition of the need to write PM in the text in progress and the comments have some common characteristics: i) the writing of the PM was not anticipated in relation to the text in progress; rather, PMs were added after the end of a sentence or after the text was finished; ii) the need to add punctuation marks arose after one of the students 'remembered' that a particular sign was needed; iii) most of the PMs inscribed were related not to their linguistic functions but to graphic-visual features of the text; iv) the comments justify or explain the PM, indicating metalinguistic reflections made by the students. Our results indicate how the comments of the BD and FD express the dialogic and subjective nature of knowledge acquisition. Our study suggests that the initial learning of PM depends more on its graphic features and interactional conditions than on its linguistic functions.
Keywords: collaborative writing, erasure, graphic marks, learning, metalinguistic awareness, textual genesis
Procedia PDF Downloads 161
229 Seashore Debris Detection System Using Deep Learning and Histogram of Gradients-Extractor Based Instance Segmentation Model
Authors: Anshika Kankane, Dongshik Kang
Abstract:
Marine debris has a significant influence on coastal environments, damaging biodiversity and causing loss and damage to the marine and ocean sectors. A functional, cost-effective and automatic approach has been adopted to address this problem. Computer vision combined with a deep learning-based model is proposed to identify and categorize marine debris of seven kinds at different beach locations in Japan. This research compares state-of-the-art deep learning models with a suggested model architecture that is utilized as a feature extractor for debris categorization. The model is proposed to detect seven categories of litter using a manually constructed debris dataset, with the help of Mask R-CNN for instance segmentation and a shape-matching network called HOGShape; beaches can then be cleaned in time by clean-up organizations using the system's warning notifications. The manually constructed dataset for this system was created by annotating images taken by a fixed KaKaXi camera using the CVAT annotation tool with seven category labels. A HOG feature extractor pre-trained with LIBSVM is used, along with multiple template matching on HOG maps of images and HOG maps of templates, to improve the predicted masked images obtained via Mask R-CNN training. The system is intended to alert clean-up organizations in a timely manner through warning notifications based on live recorded beach debris data. The suggested network improves the misclassified debris masks for debris objects with different illuminations, shapes, viewpoints and occlusions that have vague visibility.
Keywords: computer vision, debris, deep learning, fixed live camera images, histogram of gradients feature extractor, instance segmentation, manually annotated dataset, multiple template matching
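The HOGShape network itself is not detailed in the abstract. As a toy illustration of the histogram-of-oriented-gradients idea it builds on, the sketch below computes one cell's unsigned orientation histogram directly from pixel intensities (real HOG pipelines add cell grids, block normalization and bin interpolation):

```python
import math

def gradient_histogram(patch, n_bins=9):
    """Toy HOG cell descriptor: per-pixel gradients via central differences,
    binned by unsigned orientation (0-180 degrees) and weighted by
    gradient magnitude. `patch` is a 2D list of intensities."""
    h, w = len(patch), len(patch[0])
    hist = [0.0] * n_bins
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            gx = patch[i][j + 1] - patch[i][j - 1]
            gy = patch[i + 1][j] - patch[i - 1][j]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0
            hist[int(ang / 180.0 * n_bins) % n_bins] += mag
    return hist

# A vertical edge: all gradients point horizontally, so only the 0-degree
# bin accumulates magnitude.
patch = [[0, 0, 1, 1]] * 4
print(gradient_histogram(patch))
```

Matching such histograms between an image region and a template (as the multiple-template-matching step does) compares shape while being largely insensitive to uniform illumination changes.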
Procedia PDF Downloads 104
228 Changes in Blood Pressure in a Longitudinal Cohort of Vietnamese Women
Authors: Anh Vo Van Ha, Yun Zhao, Luat Cong Nguyen, Tan Khac Chu, Phung Hoang Nguyen, Minh Ngoc Pham, Colin W. Binns, Andy H. Lee
Abstract:
This study aims to examine longitudinal changes in blood pressure (BP) during the 1-year postpartum period and to evaluate the influence of parity, maternal age at delivery, prepregnancy BMI, gestational weight gain, gestational age at delivery and postpartum maternal weight. A prospective longitudinal cohort study of 883 singleton Vietnamese women was conducted in Hanoi, Haiphong, and Ho Chi Minh City, Vietnam, during 2015-2017. Women diagnosed with gestational diabetes mellitus at 24-28 weeks of gestation, pre-eclampsia, or hypoglycemia were excluded from the analysis. BP was measured repeatedly at discharge and at 6 and 12 months postpartum using automatic blood pressure monitors. A linear mixed model with repeated measures was used to describe changes occurring from pregnancy to 1 year postpartum. Parity, self-reported prepregnancy BMI, gestational weight gain, maternal age and gestational age at delivery were treated as time-invariant variables, and measured maternal weight was treated as a time-varying variable in the models. Women with higher measured postpartum weight had higher mean systolic blood pressure (SBP), 0.20 mmHg, 95% CI [0.12, 0.28]. Similarly, women with higher measured postpartum weight had higher mean diastolic blood pressure (DBP), 0.15 mmHg, 95% CI [0.08, 0.23]. These differences were both statistically significant, P < 0.001. There were no differences in SBP and DBP by parity, maternal age at delivery, prepregnancy BMI, gestational weight gain or gestational age at delivery. Compared with the discharge measurement, SBP was significantly higher at 6 months postpartum, 6.91 mmHg, 95% CI [6.22, 7.59], and at 12 months postpartum, 6.39 mmHg, 95% CI [5.64, 7.15]. Similarly, DBP was also significantly higher at 6 and 12 months postpartum than at discharge, 10.46 mmHg, 95% CI [9.75, 11.17], and 11.33 mmHg, 95% CI [10.54, 12.12].
In conclusion, BP measured repeatedly during the postpartum period (at 6 and 12 months) showed a statistically significant increase compared with the measurement at discharge from the hospital. Maternal weight was a significant predictor of postpartum blood pressure over the 1-year postpartum period.
Keywords: blood pressure, maternal weight, postpartum, Vietnam
Procedia PDF Downloads 203
227 God, The Master Programmer: The Relationship Between God and Computers
Authors: Mohammad Sabbagh
Abstract:
Anyone who reads the Torah or the Quran learns that GOD created everything that is around us, seen and unseen, in six days. Within HIS plan of creation, HE placed for us a key proof of HIS existence, which is essentially computers and the ability to program them. Digital computer programming began with binary instructions, which eventually evolved into what are known as high-level programming languages. Any programmer in our modern time can attest that you are essentially giving the computer commands in words, and when the program is compiled, whatever is processed as output is limited to what the computer was given as an ability and, furthermore, as an instruction. So one can deduce that GOD created everything around us with HIS words, programming everything in six days, just like how we can program a virtual world on the computer. GOD did mention in the Quran that one day where GOD's throne is, is 1000 years of what we count; therefore, one might understand that GOD spoke non-stop for 6000 years of what we count, and gave everything its functions, attributes, classes, methods and interactions, similar to what we do in object-oriented programming. Of course, GOD has the higher example, and what HE created is much more than OOP. So when GOD said that everything is already predetermined, it is because for any input, whether physical, spiritual or by thought, produced by any of HIS creatures, the answer has already been programmed. Any path, any thought, any idea has already been laid out, with a reaction to any decision an inputter makes. Exalted is GOD! GOD refers to HIMSELF as The Fastest Accountant in The Quran; the Arabic word that was used is close to processor or calculator.
If you create a 3D simulation of a supernova explosion to understand how GOD produces certain elements and fuses protons together to spread more of HIS blessings around HIS skies, in 2022 you are going to require one of the strongest, fastest, most capable supercomputers in the world, with a theoretical speed of 50 petaFLOPS, to accomplish that (a petaFLOPS being one quadrillion, 10¹⁵, floating-point operations per second). A number a human cannot even fathom. To put it more in perspective, GOD is calculating while the computer is going through those 50 petaFLOPS of calculations per second, and HE is also calculating all the physics of every atom, and of what is smaller than that, in the actual explosion, and it is all in truth. When GOD said HE created the world in truth, one of the meanings a person can understand is that when certain things occur around you, whether how a car crashes or how a tree grows, there is a science and a way to understand it, and whatever programming or science you deduce from whatever event you observed can relate to other similar events. That is why GOD might have said in The Quran that it is the people of knowledge, scholars, or scientists who fear GOD the most! One thing that is essential for us, to keep up with what the computer is doing and to track our progress along with any errors, is to incorporate logging mechanisms and backups. GOD in The Quran said that 'WE used to copy what you used to do'. Essentially, as the world is running, think of it as an interactive movie being played out in front of you, in a fully immersive, non-virtual reality setting. GOD is recording it, from every angle, to every thought, to every action. This brings the idea of how scary the Day of Judgment will be, when one might realize that it is going to be a fully immersive video that we will be watching as we get and read our book.
Keywords: programming, the Quran, object orientation, computers and humans, GOD
Procedia PDF Downloads 106
226 Assessment of N₂ Fixation and Water-Use Efficiency in a Soybean-Sorghum Rotation System
Authors: Mmatladi D. Mnguni, Mustapha Mohammed, George Y. Mahama, Alhassan L. Abdulai, Felix D. Dakora
Abstract:
Industrial-based nitrogen (N) fertilizers are justifiably credited for the current state of food production across the globe, but their continued use is not sustainable and has an adverse effect on the environment. The search for greener and sustainable technologies has led to an increase in exploiting biological systems such as legumes and organic amendments for plant growth promotion in cropping systems. Although the benefits of legume rotation with cereal crops have been documented, the full benefits of soybean-sorghum rotation systems have not been properly evaluated in Africa. This study explored the benefits of soybean-sorghum rotation by assessing N₂ fixation and water-use efficiency of soybean in rotation with sorghum, with and without organic and inorganic amendments. The field trials were conducted from 2017 to 2020. Sorghum was grown on plots previously cultivated to soybean and vice versa. The succeeding sorghum crop received fertilizer amendments [organic fertilizer (5 tons/ha as poultry litter, OF); inorganic fertilizer (80N-60P-60K, IF); organic + inorganic fertilizer (OF+IF); half organic + inorganic fertilizer (HOF+IF); organic + half inorganic fertilizer (OF+HIF); half organic + half inorganic (HOF+HIF); and a control], arranged in a randomized complete block design. The soybean crop succeeding fertilized sorghum received a blanket application of triple superphosphate at 26 kg P ha⁻¹. Nitrogen fixation and water-use efficiency were assessed at the flowering stage using the ¹⁵N and ¹³C natural abundance techniques, respectively. The results showed that the shoot dry matter of soybean plants supplied with HOF+HIF was much higher (43.20 g plant⁻¹), followed by OF+HIF (36.45 g plant⁻¹) and HOF+IF (33.50 g plant⁻¹). Shoot N concentration ranged from 1.60 to 1.66%, and total N content from 339 to 691 mg N plant⁻¹.
The δ¹⁵N values of soybean shoots ranged from -1.17‰ to -0.64‰, with plants growing on plots previously treated with HOF+HIF exhibiting much higher δ¹⁵N values, and hence a lower percentage of N derived from N₂ fixation (%Ndfa). Shoot %Ndfa values varied from 70 to 82%. The high %Ndfa values obtained in this study suggest that the previous year’s organic and inorganic fertilizer amendments to sorghum did not inhibit N₂ fixation in the following soybean crop. The amount of N fixed by soybean ranged from 106 to 197 kg N ha⁻¹. The treatments showed marked variations in carbon (C) content, with the HOF+HIF treatment recording the highest C content. Although shoot δ¹³C varied from -29.32‰ to -27.85‰, shoot water-use efficiency, C concentration, and C:N ratio were not altered by previous fertilizer application to sorghum. This study provides strong evidence that previous HOF+HIF sorghum residues can enhance N nutrition and water-use efficiency in nodulated soybean.
Keywords: ¹³C and ¹⁵N natural abundance, N-fixed, organic and inorganic fertilizer amendments, shoot %Ndfa
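For reference, %Ndfa values like those quoted above are conventionally derived from shoot δ¹⁵N via the standard ¹⁵N natural-abundance equation. The sketch below illustrates only that calculation; the reference-plant δ¹⁵N and B value used are made-up illustrative numbers, not values from this study.

```python
def percent_ndfa(delta_legume, delta_ref, b_value):
    """Percent N derived from N2 fixation via the 15N natural-abundance method.

    delta_legume: shoot d15N of the N2-fixing legume (per mil)
    delta_ref:    d15N of a non-fixing reference plant (per mil)
    b_value:      d15N of the legume when fully dependent on fixation (per mil)
    """
    return 100.0 * (delta_ref - delta_legume) / (delta_ref - b_value)

# Illustrative numbers only (not from the study): a reference plant at +2.0 per mil,
# a B value of -1.5 per mil, and a soybean shoot at -0.64 per mil.
print(round(percent_ndfa(-0.64, 2.0, -1.5), 1))
```

As the formula shows, a shoot δ¹⁵N closer to the B value (more negative here) yields a higher %Ndfa, which is why the HOF+HIF plots with higher δ¹⁵N showed lower %Ndfa.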
Procedia PDF Downloads 167
225 Developing a Self-Healing Concrete Filler Using Poly(Methyl Methacrylate) Based Two-Part Adhesive
Authors: Shima Taheri, Simon Clark
Abstract:
Concrete is an essential building material used in the majority of structures. Degradation of concrete over time increases the life-cycle cost of an asset, with an estimated annual cost of billions of dollars to national economies. Most concrete failure occurs due to cracks, which propagate through a structure and cause weakening leading to failure. Stopping crack propagation is thus the key to protecting concrete structures from failure and is the best way to prevent inconveniences and catastrophes. Furthermore, the majority of cracks occur deep within the concrete, in inaccessible areas, and are invisible to normal inspection. Few materials intrinsically possess self-healing ability, but one that does is concrete. However, self-healing in concrete is limited to small dormant cracks in a moist environment and is difficult to control. In this project, we developed a method for self-healing of nascent fractures in concrete components through the automatic release of self-curing healing agents encapsulated in breakable nano- and micro-structures. The poly(methyl methacrylate) (PMMA) based two-part adhesive is encapsulated in core-shell structures with a brittle/weak inert shell, synthesized via miniemulsion/solvent evaporation polymerization. Stress fields associated with propagating cracks can break these capsules, releasing the healing agents at the point where they are needed. The shell thickness plays an important role in preserving the content until the final setting of the concrete. The capsules can also be surface-functionalized with carboxyl groups to overcome homogeneous mixing issues. Currently, this formulated self-healing system can replace up to 1% of cement in a concrete formulation. Increasing this amount to 5-7% of the concrete formulation, without compromising compression strength and shrinkage properties, is still under investigation.
This self-healing system will not only increase the durability of structures by stopping crack propagation but also allow the use of less cement in concrete construction, thereby adding to the global effort for CO₂ emission reduction.
Keywords: self-healing concrete, concrete crack, concrete deterioration, durability
Procedia PDF Downloads 115
224 Off-Line Text-Independent Arabic Writer Identification Using Optimum Codebooks
Authors: Ahmed Abdullah Ahmed
Abstract:
The task of recognizing the writer of a handwritten text has been an attractive research problem in the document analysis and recognition community, with applications in handwriting forensics, paleography, document examination and handwriting recognition. This research presents an automatic method for writer recognition from digitized images of unconstrained writings. Although a great effort has been made by previous studies to come up with various methods, their performances, especially in terms of accuracy, fall short, and room for improvement is still wide open. The proposed technique employs optimal codebook-based writer characterization, where each writing sample is represented by a set of features computed from two codebooks, beginning and ending. Unlike most classical codebook-based approaches, which segment the writing into graphemes, this study is based on fragmenting particular areas of the writing, namely the beginning and ending strokes. The proposed method starts with contour detection to extract significant information from the handwriting; curve fragmentation is then employed to split the beginning and ending zones of the handwriting into small fragments. Similar fragments of beginning strokes are grouped together to create a beginning cluster, and similarly, the ending strokes are grouped to create an ending cluster. These two clusters lead to the development of two codebooks (beginning and ending) by choosing the center of every group of similar fragments. Writings under study are then represented by computing the probability of occurrence of codebook patterns. The probability distribution is used to characterize each writer. Two writings are then compared by computing distances between their respective probability distributions. The evaluations were carried out on the standard ICFHR dataset of 206 writers using the beginning and ending codebooks separately.
Finally, the ending codebook achieved the highest identification rate of 98.23%, which is the best result so far on the ICFHR dataset.
Keywords: off-line text-independent writer identification, feature extraction, codebook, fragments
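The comparison step described above, representing each writing by the probability of occurrence of codebook patterns and measuring distances between the resulting distributions, can be sketched as follows. The chi-square distance and the toy fragment-to-codeword assignments are illustrative assumptions; the abstract does not specify which distance measure was used.

```python
from collections import Counter

def codebook_histogram(fragment_ids, codebook_size):
    """Probability of occurrence of each codebook pattern in one writing sample."""
    counts = Counter(fragment_ids)
    total = sum(counts.values())
    return [counts.get(i, 0) / total for i in range(codebook_size)]

def chi2_distance(p, q, eps=1e-12):
    """Chi-square distance between two probability distributions (an illustrative
    choice; any distance between distributions could be substituted)."""
    return 0.5 * sum((a - b) ** 2 / (a + b + eps) for a, b in zip(p, q))

# Toy example: two writings whose beginning-stroke fragments were mapped to a
# hypothetical 4-codeword beginning codebook.
writer_a = codebook_histogram([0, 0, 1, 2, 0, 3], codebook_size=4)
writer_b = codebook_histogram([1, 1, 2, 2, 3, 3], codebook_size=4)
print(chi2_distance(writer_a, writer_b))
```

Identical writings yield a distance of zero; the smaller the distance, the more likely the two samples share a writer.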
Procedia PDF Downloads 510
223 A Bayesian Approach for Analyzing Academic Article Structure
Authors: Jia-Lien Hsu, Chiung-Wen Chang
Abstract:
Research articles may follow a simple and succinct structure of organizational patterns, called moves. For example, considering extended abstracts, we observe that an extended abstract usually consists of five moves: Background, Aim, Method, Results, and Conclusion. As another example, when publishing articles in PubMed, authors are encouraged to provide a structured abstract, which is an abstract with distinct and labeled sections (e.g., Introduction, Methods, Results, Discussions) for rapid comprehension. This paper introduces a method for computational analysis of move structures (i.e., Background-Purpose-Method-Result-Conclusion) in the abstracts and introductions of research documents, replacing a manual analysis process that is time-consuming and labor-intensive. In our approach, sentences in a given abstract and introduction are automatically analyzed and labeled with a specific move (i.e., B-P-M-R-C in this paper) to reveal their rhetorical status. As a result, it is expected that this automatic analytical tool for move structures will help non-native speakers or novice writers to be aware of appropriate move structures and internalize relevant knowledge to improve their writing. In this paper, we propose a Bayesian approach to determine move tags for research articles. The approach consists of two phases, a training phase and a testing phase. In the training phase, we build a Bayesian model based on a couple of given initial patterns and a corpus, a subset of CiteSeerX. In the beginning, the prior probability of the Bayesian model relies solely on the initial patterns. Subsequently, with respect to the corpus, we process each document one by one: extract features, determine tags, and update the Bayesian model iteratively. In the testing phase, we compare our results with tags manually assigned by experts.
In our experiments, the accuracy of the proposed approach reaches a promising 56%.
Keywords: academic English writing, assisted writing, move tag analysis, Bayesian approach
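One minimal way to realize the training and tagging phases described above is a Laplace-smoothed naive Bayes classifier over bag-of-words sentence features; the feature choice and the tiny seed set standing in for the "initial patterns" below are illustrative assumptions, not details from the paper.

```python
import math
from collections import Counter

MOVES = ["B", "P", "M", "R", "C"]  # Background, Purpose, Method, Result, Conclusion

def train(labeled_sentences):
    """labeled_sentences: list of (sentence, move) pairs acting as initial patterns."""
    word_counts = {m: Counter() for m in MOVES}
    move_counts = Counter()
    for sentence, move in labeled_sentences:
        move_counts[move] += 1
        word_counts[move].update(sentence.lower().split())
    return word_counts, move_counts

def tag(sentence, word_counts, move_counts):
    """Assign the most probable move tag using Laplace-smoothed naive Bayes."""
    vocab = {w for c in word_counts.values() for w in c}
    best, best_lp = None, float("-inf")
    for m in MOVES:
        # log prior with add-one smoothing over the five moves
        lp = math.log((move_counts[m] + 1) / (sum(move_counts.values()) + len(MOVES)))
        total = sum(word_counts[m].values())
        for w in sentence.lower().split():
            lp += math.log((word_counts[m][w] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = m, lp
    return best

seed = [("previous studies examined this topic", "B"),
        ("this paper proposes a new approach", "P"),
        ("we trained a model on the corpus", "M"),
        ("accuracy reached fifty six percent", "R"),
        ("the tool helps novice writers", "C")]
wc, mc = train(seed)
print(tag("we propose a bayesian approach in this paper", wc, mc))  # P
```

In the paper's iterative scheme, each newly tagged document would be fed back into `train` to update the model; here a single pass is shown for brevity.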
Procedia PDF Downloads 330
222 Effects of Classroom-Based Intervention on Academic Performance of Pupils with Attention Deficit Hyperactivity Disorder in Inclusive Classrooms in Buea
Authors: John Njikem
Abstract:
Attention Deficit Hyperactivity Disorder (ADHD) is one of the most commonly diagnosed behavioral disorders in children; associated with this disorder are the core symptoms of inattention, hyperactivity and impulsivity. This study was intended to enlighten and inform teachers, policy makers and other professionals concerned with the education of this group of learners in inclusive schools in Buea, Cameroon. The major purpose of this study was to identify children with ADHD in elementary schools practicing inclusive education and to investigate the effect of classroom-based intervention on their academic performance. The research problem stems from the fact that the majority of children with ADHD in our schools have problems with classroom tasks, such as paying attention, being easily distracted, and difficulties with organization, and very little has been done to manage these conditions; therefore, it was necessary for the researcher to identify them and implement some inclusive strategies that teachers can use to better manage the behavior of this group of learners. There were four research questions in the study; the sample population used for the study was 27 pupils (3-7 years old) formally identified with key symptoms of ADHD from primary 3-6 in four primary inclusive schools in Buea. Two sub-types of ADHD were identified by using the recent DSM-IV behavioral checklist to record the children's behavior after teacher and peer nomination; the children were later assigned to three groups for classroom intervention. Data collection was done by using interviews, and other supportive methods such as document consultation, field notes and informal talks were also used as additional sources of information. Classroom intervention techniques were carried out by the teachers themselves for 8 weeks under the supervision of the researcher, and results were recorded for the 27 children's academic performance in the areas of mathematics, writing and reading.
Descriptive statistics were applied in analyzing the data in percentages, while tables and diagrams were used to represent the results. Findings obtained indicated that there was a significant increase in the level of attention and organization on classroom tasks in the areas of reading, writing and mathematics. Findings also show that a more significant improvement was made in their academic performance using the combined intervention approach, which proved to be the most effective intervention technique for pupils with ADHD in the study. Therefore, it is necessary that teachers in inclusive primary schools in Buea understand the needs of these children, learn how to identify them, and use these intervention approaches to accommodate them in classroom tasks in order to encourage inclusive educational classroom practices in the country. Recommendations were based on each research objective, and suggestions for further studies centered on other methods of classroom intervention for children with ADHD in inclusive settings.
Keywords: attention deficit hyperactivity disorder, inclusive classrooms, academic performance, impulsivity
Procedia PDF Downloads 250
221 Pollution Associated with Combustion in Stove to Firewood (Eucalyptus) and Pellet (Radiate Pine): Effect of UVA Irradiation
Authors: Y. Vásquez, F. Reyes, P. Oyola, M. Rubio, J. Muñoz, E. Lissi
Abstract:
In several cities in Chile there is significant urban pollution, particularly in Santiago and in cities in the south, where biomass is used as fuel for heating and cooking in a large proportion of homes. This has generated interest in knowing which factors can be modulated to control the level of pollution. In this project, a photochemical chamber (14 m³) was conditioned and set up, equipped with gas monitors (e.g., CO, NOₓ, O₃) and PM monitors (e.g., DustTrak, DMPS, Harvard impactors). This volume could be exposed to UVA lamps, producing a spectrum similar to that generated by the sun. In this chamber, PM and gas emissions associated with biomass burning were studied in the presence and absence of radiation. From the comparative analysis of a wood stove (Eucalyptus globulus) and a pellet stove (radiata pine), it can be concluded, to a first approximation, that 9-nitroanthracene, 4-nitropyrene, levoglucosan, water-soluble potassium and CO present the characteristics of tracers. However, some of them show properties that interfere with this possibility. For example, levoglucosan is decomposed by radiation. 9-Nitroanthracene and 4-nitropyrene are both emitted and formed under radiation. 9-Nitroanthracene has a vapor pressure that leads to partitioning between the gas phase and particulate matter. From this analysis, it can be concluded that K⁺ is the compound that best meets the properties expected of a tracer. The PM₂.₅ emission measured for the automatic pellet stove used in this thesis project was two orders of magnitude smaller than that registered for the manual wood stove. This has encouraged the use of pellet stoves for indoor heating, particularly in south-central Chile. It should be considered, however, that the use of pellets is not without problems, because pellet stoves generate high concentrations of nitro-PAHs (secondary organic contaminants).
In particular, 4-nitropyrene is a compound of high toxicity. Moreover, the primary and secondary particulate matter associated with pellet burning shows a decrease in the size distribution of the PM, which leads to deeper penetration of the particles and their toxic components into the respiratory system.
Keywords: biomass burning, photochemical chamber, particulate matter, tracers
Procedia PDF Downloads 192
220 Code-Switching as a Bilingual Phenomenon among Students in Prishtina International Schools
Authors: Festa Shabani
Abstract:
This paper aims at investigating bilingual speech in the International Schools of Prishtina. More particularly, it seeks to analyze bilingual phenomena among adolescent students highly exposed to English, the latter being the language of instruction at school, in naturally-occurring conversations within the school environment. Adolescence was deliberately chosen, since it is regarded as an age when peer influence on language choice is at its greatest. Driven by daily unsystematic observation and prior research, the hypothesis stated is that Albanian continues to be the dominant language among Prishtina international schools’ students, with many items code-switched from English. Furthermore, the students also use lexical borrowings, words already adapted into the receiving language from the language they have been in contact with, often in the absence of existing equivalents in Albanian or for other reasons. This is owing to the fact that the language of instruction at school is English, and any topic related to the language they have been exposed to will trigger them to use English. Therefore, this needs special attention in an attempt to identify patterns in their speech; in this way, linguistic and socio-pragmatic factors are considered when analyzing the motivations behind their language choice. The methodology for collecting data includes systematic participant observation and tape-recording. While observing the students in their natural conversations, the fieldworker also took notes, which helped transcribe details better. The paper starts by raising the question of whether code-switching is occurring among Prishtina International Schools’ students highly exposed to English. The data gathered from students in informal settings suggest that there are well-founded grounds for an affirmative answer. The participants in this study were observed to code-switch, although to differing degrees.
However, a generalization cannot be made on the basis of the findings, except insofar as it appears that English has, in turn, become a language to which they turn when identifying with the group or when discussing particular school topics. In particular, participants seemed to use intra-sentential CS when they found an English expression easier than an Albanian one, when repeating or emphasizing a point, or when urged to talk about educational issues, English being their language of instruction, and inter-sentential code-switching particularly when quoting others. Concerning the grammatical aspect of code-switching, intra-sentential CS is used more than inter-sentential CS. Speaking of gender, the results show no significant differences in quantity between male and female participants. However, a slight tendency for men to code-switch intra-sententially more than women was manifested. Similarly, a slight tendency for a difference to emerge concerns inter-sentential switching, which contributes 21% of the total number of switches for women, but 11% of the total number of switches for men.
Keywords: Albanian, code-switching, contact linguistics, bilingual phenomena, lexical borrowing, English
Procedia PDF Downloads 125
219 Automatic Generation of Census Enumeration Area and National Sampling Frame to Achieve Sustainable Development Goals
Authors: Sarchil H. Qader, Andrew Harfoot, Mathias Kuepie, Sabrina Juran, Attila Lazar, Andrew J. Tatem
Abstract:
The need for high-quality, reliable, and timely population data, including demographic information, to support the achievement of the Sustainable Development Goals (SDGs) in all countries was recognized by the United Nations' 2030 Agenda for Sustainable Development. However, many low- and middle-income countries lack reliable and recent census data. To achieve reliable and accurate census and survey outputs, up-to-date census enumeration areas and digital national sampling frames are critical. Census enumeration areas (EAs) are the smallest geographic units for collecting, disseminating, and analyzing census data and are often used as a national sampling frame to serve various socio-economic surveys. Even for countries that are wealthy and stable, creating and updating EAs is a difficult yet crucial step in preparing for a national census. This process is commonly done manually, either by digitizing small geographic units on high-resolution satellite imagery or by walking the boundaries of units, both of which are extremely expensive. We have developed a user-friendly tool that can be employed to generate draft EA boundaries automatically. The tool is based on high-resolution gridded population and settlement datasets, GPS household locations, and building footprints, and uses publicly available natural, man-made and administrative boundaries. Initial outputs were produced for Burkina Faso, Paraguay, Somalia, Togo, Niger, Guinea, and Zimbabwe. The results indicate that the EAs are in line with international standards, including boundaries that are easily identifiable and follow ground features, have no overlaps, are compact and free of pockets and disjoints, and are nested within administrative boundaries.
Keywords: enumeration areas, national sampling frame, gridded population data, preEA tool
Procedia PDF Downloads 142
218 Fabrication of High-Aspect Ratio Vertical Silicon Nanowire Electrode Arrays for Brain-Machine Interfaces
Authors: Su Yin Chiam, Zhipeng Ding, Guang Yang, Danny Jian Hang Tng, Peiyi Song, Geok Ing Ng, Ken-Tye Yong, Qing Xin Zhang
Abstract:
Brain-machine interfacing (BMI) is a field rich in exploration opportunities, in which manipulation of neural activity is used to interconnect with a myriad of external devices. This research and intensive development have evolved into various areas, from the medical field, through the gaming and entertainment industry, to the safety and security field. The technology has been extended to therapy for neurological disorders such as obsessive-compulsive disorder and Parkinson’s disease by introducing current pulses to specific regions of the brain. Nonetheless, developing a BMI system capable of observing, recording and altering neural signals in real time will require a significant amount of effort to overcome the obstacles to improving this system without delay in response. To date, the feature size of interface devices and the density of the electrode population remain limitations in achieving seamless performance of BMI. Currently, the sizes of BMI devices range from 10 to 100 microns in terms of electrode diameter. Hence, to accommodate precise monitoring at the single-cell level, smaller and denser nanoscale nanowire electrode arrays are vital. In this paper, we showcase the fabrication of high-aspect-ratio vertical silicon nanowire electrode arrays using microelectromechanical systems (MEMS) methods. Nanofabrication of the nanowire electrodes involves deep reactive ion etching, thermal oxide thinning, electron-beam lithography patterning, sputtering of metal targets and bottom anti-reflection coating (BARC) etching. Metallization of the nanowire electrode tip is a prominent process for optimizing the nanowire’s electrical conductivity, and this step remains a challenge during fabrication. Metal electrodes were lithographically defined, yet these metal contacts outline a size scale larger than nanometer-scale building blocks, further limiting potential advantages.
Therefore, we present an integrated contact solution that overcomes this size constraint through a self-aligned nickel silicidation process on the tips of the vertical silicon nanowire electrodes. A 4 x 4 array of vertical silicon nanowire electrodes with a diameter of 290 nm and a height of 3 µm has been successfully fabricated.
Keywords: brain-machine interfaces, microelectromechanical systems (MEMS), nanowire, nickel silicide
Procedia PDF Downloads 433
217 Gold Nano Particle as a Colorimetric Sensor of HbA0 Glycation Products
Authors: Ranjita Ghoshmoulick, Aswathi Madhavan, Subhavna Juneja, Prasenjit Sen, Jaydeep Bhattacharya
Abstract:
Type 2 diabetes mellitus (T2DM) is a very complex and multifactorial metabolic disease in which the blood sugar level goes up. One major consequence of this elevated blood sugar is the formation of AGEs (advanced glycation endproducts) from a series of chemical or biochemical reactions. AGEs are detrimental because they lead to severe pathogenic complications. They are a group of structurally diverse chemical compounds formed from nonenzymatic reactions between the free amino groups (-NH2) of proteins and the carbonyl groups (>C=O) of reducing sugars. The reaction is known as the Maillard reaction. It starts with the formation of a reversible Schiff base linkage, which after some time rearranges itself to form Amadori products along with dicarbonyl compounds. Amadori products are very unstable; hence rearrangement goes on until stable products are formed. During the course of the reaction, many chemically unknown intermediates and reactive byproducts are formed that can be termed early glycation products. When the reaction completes, structurally stable chemical compounds are formed, termed advanced glycation endproducts. Though not all glycation products have been characterized well, some fluorescent compounds, e.g., pentosidine, malondialdehyde (MDA) or carboxymethyllysine (CML), have been identified as AGEs, and α-dicarbonyls or oxoaldehydes such as 3-deoxyglucosone (3-DG) as the intermediates. In this work, gold nanoparticles (GNPs) were used as an optical indicator of glycation products. To achieve faster glycation kinetics and high AGE accumulation, fructose was used instead of glucose. Hemoglobin A0 (HbA0) was fructosylated by an in-vitro method. AGE formation was measured fluorimetrically by recording emission at 450 nm upon excitation at 350 nm. Thereafter, this fructosylated HbA0 was fractionated by column chromatography. Fractionation separated the proteinaceous substance from the AGEs.
The presence of the protein fraction was confirmed by measuring the intrinsic protein fluorescence and by the Bradford reaction. GNPs were synthesized using the templates of the chromatographically separated fractions of fructosylated HbA0. Each fraction gave rise to GNPs of a different color, indicating the presence of a distinct set of glycation products differing structurally and chemically. In some vials, clear solutions appeared due to the settling of particles. The reactive groups of the intermediates kept the GNP formation mechanism going and did not lead to stable particle formation until day 10, whereas the SPR of the GNPs showed a uniform colour for the fractions collected in the case of non-fructosylated HbA0. Our findings accentuate the use of GNPs as a simple colorimetric sensing platform for the identification of intermediates of the glycation reaction, which could be implicated in the prognosis of the associated health risks due to T2DM and other conditions.
Keywords: advance glycation endproducts, glycation, gold nano particle, sensor
Procedia PDF Downloads 302
216 Unsupervised Learning and Similarity Comparison of Water Mass Characteristics with Gaussian Mixture Model for Visualizing Ocean Data
Authors: Jian-Heng Wu, Bor-Shen Lin
Abstract:
The temperature-salinity relationship is one of the most important characteristics used for identifying water masses in marine research. Temperature-salinity characteristics, however, may change dynamically with respect to geographic location and are quite sensitive to depth at the same location. When depth is taken into consideration, it is not easy to compare the characteristics of different water masses efficiently over a wide range of ocean areas. In this paper, the Gaussian mixture model is proposed to analyze the temperature-salinity-depth characteristics of water masses, based on which comparisons between water masses may be conducted. A Gaussian mixture model can model the distribution of a random vector and is formulated as a weighted sum over a set of multivariate normal distributions. The temperature-salinity-depth data for different locations are first used to train a set of Gaussian mixture models individually. The distance between two Gaussian mixture models can then be defined as the weighted sum of pairwise Bhattacharyya distances among their component Gaussian distributions. Consequently, the distance between two water masses may be measured quickly, which allows automatic and efficient comparison of water masses over a wide area. The proposed approach not only approximates the distribution of temperature, salinity, and depth directly, without prior knowledge of an assumed regression family, but also restricts complexity by controlling the number of mixture components when the samples are unevenly distributed. In addition, it is critical for knowledge discovery in marine research to represent, manage and share the temperature-salinity-depth characteristics flexibly and responsively. The proposed approach has been applied to a real-time visualization system of ocean data, which may facilitate the comparison of water masses by aggregating the data without degrading the discriminating capabilities.
This system provides an interface for interactively querying geographic locations with similar temperature-salinity-depth characteristics and for tracking specific patterns of water masses, such as the Kuroshio near Taiwan or those in the South China Sea.
Keywords: water mass, Gaussian mixture model, data visualization, system framework
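The mixture-to-mixture distance described above, a weighted sum of pairwise Bhattacharyya distances between component Gaussians, can be sketched as follows for the diagonal-covariance case; the diagonal restriction and the toy water-mass parameters are simplifying assumptions, not values from the paper.

```python
import math

def bhattacharyya_diag(mu1, var1, mu2, var2):
    """Bhattacharyya distance between two diagonal-covariance Gaussians."""
    d = 0.0
    for m1, v1, m2, v2 in zip(mu1, var1, mu2, var2):
        v = 0.5 * (v1 + v2)                          # averaged variance per dimension
        d += 0.125 * (m1 - m2) ** 2 / v              # mean-separation term
        d += 0.5 * math.log(v / math.sqrt(v1 * v2))  # variance-mismatch term
    return d

def gmm_distance(gmm_a, gmm_b):
    """Weighted sum of pairwise Bhattacharyya distances between two mixtures.

    Each GMM is a list of (weight, means, variances) tuples over the
    (temperature, salinity, depth) vector, with diagonal covariances."""
    return sum(wa * wb * bhattacharyya_diag(ma, va, mb, vb)
               for wa, ma, va in gmm_a
               for wb, mb, vb in gmm_b)

# Toy water masses over (temperature degC, salinity psu, depth m); numbers illustrative.
mass_a = [(0.6, (18.0, 34.5, 100.0), (1.0, 0.10, 400.0)),
          (0.4, (6.0, 34.2, 800.0), (0.5, 0.05, 2500.0))]
mass_b = [(1.0, (18.2, 34.6, 120.0), (1.2, 0.10, 450.0))]
print(round(gmm_distance(mass_a, mass_b), 3))
```

Because the distance only requires the fitted mixture parameters, not the raw profiles, many locations can be compared pairwise very quickly, which is what makes the wide-area interactive queries above feasible.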
Procedia PDF Downloads 142
215 Semiotics of the New Commercial Music Paradigm
Authors: Mladen Milicevic
Abstract:
This presentation will address how the statistical analysis of digitized popular music influences music creation and emotionally manipulates consumers. Furthermore, it will deal with the semiological aspect of the uniformization of musical taste in order to predict the potential revenues generated by popular music sales. In the USA, we live in an age where most popular music (i.e., music that generates substantial revenue) has been digitized. It is safe to say that almost everything produced in the last 10 years is already digitized (available on iTunes, Spotify, YouTube, or some other platform). Depending on marketing viability and its potential to generate additional revenue, most of the 'older' music is still being digitized. Once music gets turned into a digital audio file, it can be computer-analyzed in all kinds of respects, and the same goes for the lyrics, because they also exist as a digital text file, to which any kind of NCapture-style analysis may be applied. So, by employing statistical examination of different popular music metrics such as tempo, form, pronouns, introduction length, song length, archetypes, subject matter, and repetition of the title, the commercial result may be predicted. Polyphonic HMI (Human Media Interface) introduced the concept of the hit song science computer program in 2003. The company asserted that machine learning could create a music profile to predict hit songs from its audio features. Thus, it has been established that a successful pop song must: have 100 bpm or more; have an 8-second intro; use the pronoun 'you' within 20 seconds of the start of the song; hit the middle-8 bridge between 2 minutes and 2 minutes 30 seconds; average 7 repetitions of the title; and create an expectation and fulfill that expectation in the title.
For a country song: 100 bpm or less for a male artist; a 14-second intro; use of the pronoun 'you' within the first 20 seconds of the intro; a middle-8 bridge between 2 minutes and 2 minutes 30 seconds; 7 repetitions of the title; and the creation of an expectation, fulfilled within 60 seconds. This approach to commercial popular music minimizes the human influence when it comes to which 'artist' a record label is going to sign and market. Twenty years ago, music experts in the A&R (Artists and Repertoire) departments of the record labels were making personal aesthetic judgments based on their extensive experience in the music industry. Now, computer music-analyzing programs are replacing them in an attempt to minimize the investment risk of the panicking record labels, in an environment where nobody can predict the future of the recording industry. The impact on consumers' taste, through the narrow bottleneck of the above-mentioned music selection by the record labels, has created some very peculiar effects, not only on the taste of popular music consumers but also on the creative chops of music artists. The meaning of this semiological shift is the main focus of this research and paper presentation.
Keywords: music, semiology, commercial, taste
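The heuristics quoted above amount to a simple rule check over a song's metadata. The sketch below illustrates the pop-song checklist only; the field names and the pass/fail scoring scheme are hypothetical illustrations, not Polyphonic HMI's actual software.

```python
def pop_hit_checklist(song):
    """Score a song against the pop heuristics quoted in the text.

    `song` is a dict with hypothetical field names (not a real industry schema)."""
    rules = {
        "tempo >= 100 bpm": song["bpm"] >= 100,
        "8-second intro": song["intro_seconds"] <= 8,
        "'you' within 20 s of the start": song["first_you_seconds"] <= 20,
        "bridge between 2:00 and 2:30": 120 <= song["bridge_seconds"] <= 150,
        "about 7 title repetitions": song["title_repetitions"] >= 7,
    }
    passed = [name for name, ok in rules.items() if ok]
    return len(passed), passed

song = {"bpm": 104, "intro_seconds": 8, "first_you_seconds": 12,
        "bridge_seconds": 135, "title_repetitions": 7}
score, hits = pop_hit_checklist(song)
print(score)  # 5
```

That such a crude checklist can stand in for an A&R expert's aesthetic judgment is precisely the semiological shift the presentation examines.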
Procedia PDF Downloads 392
214 Artificial Intelligence Protecting Birds against Collisions with Wind Turbines
Authors: Aleksandra Szurlej-Kielanska, Lucyna Pilacka, Dariusz Górecki
Abstract:
The dynamic development of wind energy requires the simultaneous implementation of effective systems minimizing the risk of collisions between birds and wind turbines. Wind turbines are installed in increasingly challenging locations, often close to the natural environment of birds. More and more countries and organizations are defining guidelines for the necessary functionality of such systems. The minimum bird detection distance, trajectory tracking, and shutdown time are key factors in eliminating collisions. Since 2020, we have continued the validation of successive versions of the BPS detection and reaction system. The bird protection system (BPS) is a fully automatic camera system that estimates the distance of a bird to the turbine, classifies its size, and autonomously undertakes various actions depending on the bird's distance and flight path. The BPS was installed and tested in a real environment at wind turbines in northern Poland and central Spain. The validation showed that at a distance of up to 300 m, the BPS performs at least as well as a skilled ornithologist, and large bird species are successfully detected from over 600 m. In addition, data collected by BPS systems installed in Spain showed that 60% of all birds of prey detected were individuals approaching the turbine, and these detections met the turbine shutdown criteria. Less than 40% of the detections of birds of prey took place at wind speeds below 2 m/s, while the turbines were not working. Analysis of the data collected by the system over 12 months shows that it correctly classified the size of birds with a wingspan of more than 1.1 m in 90% of cases, and the size of birds with a wingspan of 0.7-1 m in 80% of cases. The collected data also allow the conclusion that some species keep a certain distance from the turbines at wind speeds of over 8 m/s (Aquila sp., Buteo sp., Gyps sp.), but Gyps sp. and Milvus sp. remained active at this wind speed in the tested area. The data collected so far indicate that BPS is effective in detecting, and stopping wind turbines in response to, the presence of birds of prey with a wingspan of more than 1 m. Keywords: protecting birds, bird monitoring, wind farms, green energy, sustainable development
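A decision rule of the kind the abstract describes (shutdown criteria combining distance, trajectory, and size class) can be sketched as follows. The specific thresholds and function names are assumptions chosen for illustration, not the vendor's actual parameters:

```python
# Illustrative sketch of a distance/trajectory/size shutdown rule, loosely
# modeled on the behavior described in the abstract. All thresholds and
# names are hypothetical, not the real BPS configuration.

def turbine_action(distance_m, approaching, wingspan_m,
                   shutdown_distance_m=300):
    """Decide an action for a detected bird.

    distance_m: estimated bird-to-turbine distance
    approaching: True if the tracked trajectory heads toward the turbine
    wingspan_m: estimated wingspan from the size classifier
    """
    if wingspan_m >= 1.0 and approaching and distance_m <= shutdown_distance_m:
        return "shutdown"
    if approaching:
        return "track"        # keep following the trajectory
    return "monitor"          # log the detection only

print(turbine_action(250, True, 1.2))   # prints shutdown
print(turbine_action(500, True, 1.2))   # prints track
```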
Procedia PDF Downloads 74
213 Analysis of the Impact of Suez Canal on the Robustness of Global Shipping Networks
Abstract:
The Suez Canal plays an important role in global shipping networks and is one of the most frequently used waterways in the world. The obstruction by the ship Ever Given in March 2021, however, completely blocked the Suez Canal for a week and caused significant disruption to world trade. It is therefore very important to quantitatively analyze the impact of the accident on the robustness of the global shipping network. However, current research on maritime transportation networks is usually limited to local or small-scale networks in a certain region. Based on complex network theory, this study establishes a global shipping complex network covering 2713 nodes and 137830 edges, using real trajectory data from the global marine transport automatic identification system (AIS) in 2018. Two attack modes, deliberate (Suez Canal blocking) and random, are defined to calculate the changes in network node degree, eccentricity, clustering coefficient, network density, isolated nodes, betweenness centrality, and closeness centrality under each mode, and to quantitatively analyze the actual impact of the Suez Canal blocking on the robustness of the global shipping network. The results of the robustness analysis show that the Suez Canal blocking was more destructive to the shipping network than random attacks of the same scale. Network connectivity and accessibility decreased significantly, and the decline diminished with the distance between a port and the canal, showing a distance-attenuation effect. This study further analyzes the impact of the blocking on Chinese ports and finds that it significantly disturbed China's shipping network and seriously affected China's normal trade activities.
Finally, the impact on the global supply chain is analyzed, and it is found that blocking the canal seriously damages the normal operation of the global supply chain. Keywords: global shipping networks, ship AIS trajectory data, main channel, complex network, eigenvalue change
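The two attack modes can be sketched on a toy graph (not the 2713-node AIS network): deliberately removing a chokepoint node versus removing a random peripheral node, then counting connected components as a simple robustness metric. The graph layout and node names below are illustrative assumptions:

```python
# Minimal sketch of deliberate vs. random node-removal attacks on a toy
# shipping graph: two port clusters joined only through a "canal" node C.
import random
from collections import defaultdict

def components(edges, removed=frozenset()):
    """Count connected components after deleting the nodes in `removed`."""
    adj = defaultdict(set)
    nodes = set()
    for u, v in edges:
        if u in removed or v in removed:
            nodes.update(n for n in (u, v) if n not in removed)
            continue
        adj[u].add(v); adj[v].add(u); nodes.update((u, v))
    seen, count = set(), 0
    for n in nodes:
        if n in seen:
            continue
        count += 1
        stack = [n]
        while stack:            # depth-first flood fill of one component
            x = stack.pop()
            if x not in seen:
                seen.add(x)
                stack.extend(adj[x] - seen)
    return count

edges = [("A1", "A2"), ("A2", "A3"), ("A1", "A3"),   # western port cluster
         ("B1", "B2"), ("B2", "B3"), ("B1", "B3"),   # eastern port cluster
         ("A3", "C"), ("C", "B1")]                   # canal links
print(components(edges))                     # prints 1: fully connected
print(components(edges, removed={"C"}))      # prints 2: deliberate attack splits it
print(components(edges, removed={random.choice(["A1", "A2", "B2"])}))  # prints 1
```

On this toy graph the deliberate removal of the chokepoint disconnects the network, while a random removal of the same scale never does, mirroring the study's finding that targeted blocking is more destructive than random attacks.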
Procedia PDF Downloads 181
212 Use of Locomotor Activity of Rainbow Trout Juveniles in Identifying Sublethal Concentrations of Landfill Leachate
Authors: Tomas Makaras, Gintaras Svecevičius
Abstract:
Landfill waste is a common problem, as a landfill has an economic and environmental impact even after it is closed. Landfill waste contains a high density of various persistent compounds such as heavy metals and organic and inorganic materials. As persistent compounds are slowly degradable or even non-degradable in the environment, they often produce sublethal or even lethal effects on aquatic organisms. The aims of the present study were to estimate the sublethal effects of the Kairiai landfill (WGS: 55°55‘46.74“, 23°23‘28.4“) leachate on the locomotor activity of rainbow trout Oncorhynchus mykiss juveniles, using the original system package developed in our laboratory for automated monitoring, recording and analysis of aquatic organisms’ activity, and to determine patterns of fish behavioral response to sublethal effects of leachate. Four concentrations of leachate were chosen: 0.125, 0.25, 0.5 and 1.0 mL/L (0.0025, 0.005, 0.01 and 0.02 of the 96-hour LC50, respectively). Locomotor activity was measured after 5, 10 and 30 minutes of exposure during 1-minute test periods for each fish (7 fish per treatment). The threshold effect concentration amounted to 0.18 mL/L (0.0036 of the 96-hour LC50). This concentration was found to be 2.8-fold lower than the concentration generally assumed to be “safe” for fish. At higher concentrations, the landfill leachate solution elicited a behavioral response of test fish to sublethal levels of pollutants. The ability of the rainbow trout to detect and avoid contaminants appeared after 5 minutes of exposure. The intensity of locomotor activity reached a peak within 10 minutes, evidently decreasing after 30 minutes. This could be explained by the physiological and biochemical adaptation of fish to altered environmental conditions. It has been established that the locomotor activity of juvenile trout depends on leachate concentration and exposure duration.
Modeling of these parameters showed that the activity of juveniles increased at higher leachate concentrations but slightly decreased with increasing exposure duration. The experimental results confirm that the behavior of rainbow trout juveniles is a sensitive and rapid biomarker that, combined with the system for monitoring, recording and analyzing fish behavior, can be used to determine sublethal concentrations of pollutants in ambient water. Further research should focus on software improvements to include more parameters of aquatic organisms’ behavior and to investigate the most rapid and appropriate behavioral responses in different species. In practice, this study could be the basis for the development of biological early-warning systems (BEWS). Keywords: fish behavior biomarker, landfill leachate, locomotor activity, rainbow trout juveniles, sublethal effects
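The LC50 fractions above follow directly from the listed concentrations. As a small check, assuming the 96-hour LC50 implied by the abstract's own ratios (0.125 mL/L = 0.0025 × LC50, hence LC50 = 50 mL/L):

```python
# Verifying the leachate concentrations as fractions of the 96-hour LC50.
# LC50 value is derived from the abstract's ratios, not stated directly.
LC50_ML_PER_L = 50.0                       # 0.125 mL/L / 0.0025

concentrations = [0.125, 0.25, 0.5, 1.0]   # test concentrations, mL/L
fractions = [c / LC50_ML_PER_L for c in concentrations]
print(fractions)                           # [0.0025, 0.005, 0.01, 0.02]

threshold = 0.18                           # threshold effect concentration, mL/L
print(round(threshold / LC50_ML_PER_L, 4)) # 0.0036, as reported
```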
Procedia PDF Downloads 270
211 Attention and Memory in the Music Learning Process in Individuals with Visual Impairments
Authors: Lana Burmistrova
Abstract:
Introduction: The influence of visual impairments on several cognitive processes used in the music learning process is an increasingly important area in special education and cognitive musicology. Many children have visual impairments due to refractive errors and irreversible inhibitors. However, based on compensatory neuroplasticity and functional reorganization, congenitally blind (CB) and early blind (EB) individuals use several areas of the occipital lobe to perceive and process auditory and tactile information. CB individuals have greater memory capacity and memory reliability, and fewer false-memory mechanisms are engaged while executing several tasks; they also have better working memory (WM) and short-term memory (STM). Blind individuals use several strategies while executing tactile and working-memory n-back tasks: a verbalization strategy (mental recall), a tactile strategy (tactile recall) and combined strategies. Methods and design: The aim of the pilot study was to substantiate similar tendencies in blind and sighted individuals while executing the attention, memory and combined auditory tasks constructed for this study, and to investigate the attention, memory and combined mechanisms used in the music learning process. For this study, eight (n=8) blind and eight (n=8) sighted individuals aged 13-20 were chosen. All respondents had more than five years of music performance and music learning experience. In the attention task, all respondents had to identify pitch changes in tonal and randomized melodic pairs. The memory task was based on the mismatch negativity (MMN) proportion theory: 80 percent standard (unchanged) and 20 percent deviant (changed) stimuli (sequences). Every sequence was named (na-na, ra-ra, za-za) and several items (pencil, spoon, tealight) were assigned to each sequence. Respondents had to recall the sequences, associate them with the items, and detect possible changes.
While executing the combined task, all respondents had to focus attention on the pitch changes and had to detect and describe these during recall. Results and conclusion: The results support specific features in CB and EB individuals, and similarities between late blind (LB) and sighted individuals. While executing attention and memory tasks, CB and EB individuals tended to use more precise execution tactics and more advanced episodic memory while focusing on auditory and tactile stimuli. While executing memory and combined tasks, CB and EB individuals used passive working memory to recall standard sequences, active working memory to recall deviant sequences, and combined strategies. Based on the observation results, the assessment of blind respondents and recording specifics, the following attention and memory correlations were identified: reflective attention and STM, reflective attention and episodic memory, auditory attention and WM, tactile attention and WM, auditory-tactile attention and STM. The results and the summary of findings highlight the attention and memory features used in the music learning process in the context of blindness, and the tendency of the several attention and memory types to correlate based on the task, strategy and individual features. Keywords: attention, blindness, memory, music learning, strategy
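The 80/20 standard/deviant proportion described in the memory task can be sketched as an oddball stimulus stream. The sequence names follow the abstract; the shuffling scheme and function name are assumptions for illustration, not the study's actual protocol:

```python
# Hedged sketch: generating an oddball stimulus stream with the 80/20
# standard/deviant proportion from the MMN paradigm described above.
import random

def oddball_stream(standard, deviant, n_trials=50, deviant_ratio=0.2,
                   rng=None):
    """Return n_trials stimuli, ~20% deviants, in shuffled order."""
    rng = rng or random.Random(0)          # fixed seed for reproducibility
    n_dev = round(n_trials * deviant_ratio)
    stream = [deviant] * n_dev + [standard] * (n_trials - n_dev)
    rng.shuffle(stream)
    return stream

stream = oddball_stream("na-na", "na-NA")
print(stream.count("na-NA") / len(stream))   # prints 0.2
```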
Procedia PDF Downloads 183
210 A Study of Lapohan Traditional Pottery Making in Selakan Island, Semporna Sabah: An Initial Framework
Authors: Norhayati Ayob, Shamsu Mohamad
Abstract:
This paper aims to provide an initial background of the process of making traditional ceramic pottery, focusing on the materials and the influence of cultural heritage. Ceramic pottery is one of the hallmarks of Sabah’s heirloom: not only used for cooking and storage containers, it is also closely linked with folk cultures and heritage. The Bajau Laut ethnic community of Semporna, better known as the Sea Gypsies, are mostly boat dwellers who work as fishermen along the coast. This ethnic community is famous for its own artistic traditional heirloom, especially the traditional hand-made clay stove called Lapohan. In the daily life of the Bajau Laut community, the Lapohan (clay stove) is used to prepare meals and as a food warmer while they are at sea. Besides, Lapohan pottery conveys the symbolic meaning of natural objects, which portrays the identity and values of the Bajau Laut community. It is acknowledged that the basic process of making potterywares was much the same for people all across the world; nevertheless, it is crucial to consider that different ethnic groups may have their own styles and choices of raw materials. Furthermore, it is still unknown why and how the Bajau Laut of Semporna started making their own pottery and how the craft has survived until today by depending heavily on the raw materials available in Semporna. In addition, an emergent problem faced by pottery makers in Sabah is the absence of young successors to continue the heirloom legacy. Therefore, this research aims to explore traditional pottery making in Sabah by investigating the background history of Lapohan pottery and to propose a classification of Lapohan based on the designs and motifs of traditional pottery recognized throughout the study. It is postulated that different techniques and forms of making traditional pottery may produce different types of pottery in terms of surface decoration, shape, and size, portraying different cultures.
This study will be conducted on Selakan Island, Semporna, the only location where Lapohan making still survives. The study follows the chronological process of making pottery and the taboos surrounding preparing the clay, forming, decoration techniques, motif application and firing techniques. The relevant information will be gathered from a field study, including observation, in-depth interviews and video recording. In-depth interviews will be conducted with several potters, and the conversations and pottery-making process will be recorded in order to understand the actual process of making Lapohan. The findings are expected to distinguish several types of Lapohan based on different designs and cultures; for example, a stove with a flat-shaped or round-shaped top will be given a suitable name based on the makers' culture. In conclusion, it is hoped that this study will contribute to the conservation of traditional pottery making in Sabah and help preserve this culture and heirloom for future generations. Keywords: Bajau Laut, culture, Lapohan, traditional pottery
Procedia PDF Downloads 185