Search results for: decision making framework
788 Evotrader: Bitcoin Trading Using Evolutionary Algorithms on Technical Analysis and Social Sentiment Data
Authors: Martin Pellon Consunji
Abstract:
Due to the rise in popularity of Bitcoin and other crypto assets as a store of wealth and speculative investment, there is an ever-growing demand for automated trading tools, such as bots, to gain an advantage over the market. Traditionally, trading in the stock market was done by professionals with years of training who understood patterns and exploited market opportunities in order to make a profit. Nowadays, however, a larger portion of market participants are at minimum aided by market-data processing bots, which can generally generate more stable signals than the average human trader. The rise in trading bot usage can be attributed to the inherent advantages that bots have over humans in processing large amounts of data, their lack of emotions such as fear or greed, and their ability to predict market prices from past data using artificial intelligence; hence, a growing number of approaches have been brought forward to tackle this task. However, the general limitation of these approaches remains that limited historical data does not always determine the future, and that many market participants are still emotion-driven human traders. Moreover, developing markets such as the cryptocurrency space have even less historical data to interpret than most well-established markets. Because of this, some human traders have gone back to tried-and-tested traditional technical analysis tools for exploiting market patterns and simplifying the broader spectrum of data involved in making market predictions. This paper proposes a method that applies neuro-evolution techniques to both sentiment data and the more traditionally human-consumed technical analysis data in order to obtain a more accurate forecast of future market behavior and account for the way both automated bots and human traders affect the market prices of Bitcoin and other cryptocurrencies.
This study’s approach uses evolutionary algorithms to automatically develop increasingly improved populations of bots which, using the latest inflows of market analysis and sentiment data, evolve to efficiently predict future market price movements. The effectiveness of the approach is validated by testing the system in a simulated historical trading scenario and a live Bitcoin market trading scenario, and by testing its robustness in other cryptocurrency and stock market scenarios. Experimental results over a 30-day period show that this method outperformed the buy-and-hold strategy by over 260% in terms of net profits, even when taking standard trading fees into consideration.
Keywords: neuro-evolution, Bitcoin, trading bots, artificial neural networks, technical analysis, evolutionary algorithms
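The evolutionary loop described in this abstract can be sketched in miniature. The following is an illustrative sketch only, not the paper's system: each bot "genome" is a hypothetical weight vector over technical-analysis and sentiment signals, fitness is simulated profit over a price series, and truncation selection plus Gaussian mutation drives the evolution.

```python
import random

# Illustrative sketch (not the paper's system): evolve a population of
# "bots", each a weight vector combining hypothetical technical-analysis
# and sentiment signals into a buy/flat decision. Fitness = simulated profit.

def fitness(weights, signals, returns):
    # Go long for one period when the weighted signal score is positive.
    profit = 0.0
    for sig, ret in zip(signals, returns):
        score = sum(w * s for w, s in zip(weights, sig))
        if score > 0:
            profit += ret
    return profit

def evolve(signals, returns, pop_size=20, generations=30, n_inputs=3):
    population = [[random.uniform(-1, 1) for _ in range(n_inputs)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population,
                        key=lambda w: fitness(w, signals, returns),
                        reverse=True)
        parents = ranked[:pop_size // 2]            # truncation selection
        children = [[w + random.gauss(0, 0.1) for w in p]  # mutation
                    for p in parents]
        population = parents + children
    return max(population, key=lambda w: fitness(w, signals, returns))
```

A real system would replace the linear score with an evolved neural network and feed live market and sentiment streams, but the select-mutate-evaluate loop is the same.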
Procedia PDF Downloads 123
787 Influence of Spirituality on Health Outcomes and General Well-Being in Patients with End-Stage Renal Disease
Authors: Ali A Alshraifeen, Josie Evans, Kathleen Stoddart
Abstract:
End-stage renal disease (ESRD) introduces physical, psychological, social, emotional and spiritual challenges into patients’ lives. Spirituality has been found to contribute to improved health outcomes, mainly in the areas of quality of life (QOL) and well-being. No studies have explored the influence of spirituality on the health outcomes and general well-being of patients with end-stage renal disease receiving hemodialysis (HD) treatment in Scotland. This study was conducted to explore spirituality in the daily lives of these patients and how it may influence their QOL and general well-being. The study employed a qualitative method. Data were collected using semi-structured interviews with a sample of 21 patients. A thematic approach using Framework Analysis informed the qualitative data analysis. Participants were recruited from 11 dialysis units across four Health Boards in Scotland. The participants were regular patients attending the dialysis units three times per week. Four main themes emerged from the qualitative interviews: ‘Emotional and Psychological Turmoil’, ‘Life is Restricted’, ‘Spirituality’ and ‘Other Coping Strategies’. The findings suggest that patients’ QOL might be affected by physical challenges such as unremitting fatigue, disease unpredictability and being tied down to a dialysis machine, or by the emotional and psychological challenges the disease imposes on their lives, such as wholesale changes, dialysis as a forced choice and a sense of indebtedness. The findings also revealed that spirituality was an important coping strategy for the majority of participants who took part in the qualitative component (n=16). Different meanings of spirituality were identified, including connection with God or a Supernatural Being, and connection with the self, others and nature/environment.
Spirituality encouraged participants to accept their disease and offered them a sense of protection, instilled hope in them and helped them maintain a positive attitude to carry on with their daily lives, which may have had a positive influence on their health outcomes and general well-being. The findings also revealed that humor was another coping strategy that helped to defuse stress and anxiety for some participants and encouraged them to carry on with their lives. The findings from this study provide a significant contribution to a very limited body of work. The study contributes to our understanding of spirituality and how people receiving dialysis treatment use it to manage their daily lives. Spirituality is of particular interest due to its connection with health outcomes in patients with chronic illnesses. The link between spirituality and many chronic illnesses has gained some recognition, yet the identification of its influence on the health outcomes and well-being of patients with ESRD is still evolving. There is a need to understand patients’ experiences and examine the factors that influence their QOL and well-being to ensure that the services available are adequately tailored to them. Hence, further research is required to obtain a better understanding of the influence of spirituality on the health outcomes and general well-being of patients with ESRD.
Keywords: end-stage renal disease, general well-being, quality of life, spirituality
Procedia PDF Downloads 226
786 Clastic Sequence Stratigraphy of Late Jurassic to Early Cretaceous Formations of Jaisalmer Basin, Rajasthan
Authors: Himanshu Kumar Gupta
Abstract:
The Jaisalmer Basin is part of the Rajasthan basin in northwestern India. The presence of five major unconformities/hiatuses of varying span, i.e., at the tops of the Archean basement, Cambrian, Jurassic, Cretaceous, and Eocene, has created the foundation for constructing a sequence stratigraphic framework. Based on basin-formative tectonic events and their impact on sedimentation processes, three first-order sequences have been identified in the Rajasthan Basin: the Proterozoic-Early Cambrian rift sequence, the Permian to Middle-Late Eocene shelf sequence, and the Pleistocene-Recent sequence related to the Himalayan Orogeny. The Permian to Middle Eocene first-order sequence is further subdivided into three second-order sequences: Permian to Late Jurassic, Early to Late Cretaceous, and Paleocene to Middle-Late Eocene. In this study, the Late Jurassic to Early Cretaceous sequence was identified, and log-based interpretation of smaller-order T-R cycles was carried out. A log profile from the eastern margin to the western margin (up to the Shahgarh depression) has been taken. The depositional environments penetrated by the wells, interpreted from log signatures, gave three major facies associations: the blocky and coarsening-upward (funnel-shape), the blocky and fining-upward (bell-shape), and the erratic (zig-zag) facies, representing distributary mouth bar, distributary channel, and marine mud facies, respectively. The Late Jurassic formations (Baisakhi-Bhadasar) and Early Cretaceous formation (Pariwar) show fewer T-R cycles in shallower bathymetry and a higher number of T-R cycles in deeper bathymetry. The shallowest well has 3 T-R cycles in the Baisakhi-Bhadasar and 2 T-R cycles in the Pariwar, whereas a deeper well has 4 T-R cycles in the Baisakhi-Bhadasar and 8 T-R cycles in the Pariwar Formation. The maximum flooding surfaces observed from the stratigraphic analysis correspond to major shale breaks (high shale content).
The study area is dominated by alternating shale and sand lithologies, which occur in an approximate ratio of 70:30. A seismo-geological cross section has been prepared to understand the stratigraphic thickness variation and structural disposition of the strata. The formations are quite thick in the west, and their thickness reduces traversing towards the east. The folded and faulted strata indicate compressional tectonics followed by extensional tectonics. Our interpretation, supported by seismic data up to the second-order sequence, indicates that the Late Jurassic sequence is a Highstand Systems Tract (Baisakhi-Bhadasar formations) and the Early Cretaceous sequence is a Regressive to Lowstand Systems Tract (Pariwar Formation).
Keywords: Jaisalmer Basin, sequence stratigraphy, system tract, T-R cycle
Procedia PDF Downloads 134
785 Stability Analysis of Hossack Suspension Systems in High Performance Motorcycles
Authors: Ciro Moreno-Ramirez, Maria Tomas-Rodriguez, Simos A. Evangelou
Abstract:
A motorcycle's front end links the front wheel to the chassis and has two main functions: front wheel suspension and vehicle steering. To date, several suspension systems have been developed in order to achieve the best possible front end behavior, the telescopic fork being the most common and already the subject of several years of study in terms of its kinematics, dynamics, stability and control. A motorcycle telescopic fork suspension consists of a pair of outer tubes which contain the suspension components (coil springs and dampers) internally, and two inner tubes which slide into the outer ones, allowing the suspension travel. The outer tubes are attached to the frame through two triple trees, which connect the front end to the main frame through the steering bearings and allow the front wheel to turn about the steering axis. This system keeps the front wheel's displacement in a straight line parallel to the steering axis. However, there exist alternative suspension designs that allow different trajectories of the front wheel over the suspension travel. In this contribution, the authors investigate an alternative front suspension system (the Hossack suspension) and its influence on the motorcycle's nonlinear dynamics, to identify and reduce the stability risks that a new suspension system may introduce. Based on an existing high-fidelity motorcycle mathematical model, the front end geometry is modified to accommodate a Hossack suspension system. It is characterized by a double wishbone structure directly attached to the chassis, which varies the front end geometry in certain maneuvers and, consequently, the machine's behavior/response. Here, the kinematics of this system and its impact on the motorcycle's performance/stability are analyzed and compared to the well-known telescopic fork suspension system.
The framework of this research is mathematical modelling and numerical simulation. Full stability analyses are performed in order to understand how the motorcycle dynamics may be affected by the newly introduced front end design. This study is carried out by a combination of nonlinear dynamical simulation and root-loci methods. A modal analysis is performed in order to gain a deeper understanding of the different modes of oscillation and how the Hossack suspension system affects them. The results show that different kinematic designs of the double wishbone suspension system do not modify the motorcycle's general stability. The normal mode properties remain unaffected by the new geometrical configurations. However, these normal modes differ from one suspension system to the other. It is seen that the behaviour of the normal modes depends on various important dynamic parameters, such as the front frame flexibility, the steering damping coefficient and the centre of mass location.
Keywords: nonlinear mechanical systems, motorcycle dynamics, suspension systems, stability
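The root-loci and modal analysis mentioned above rest on a standard idea that can be sketched with a toy example (this is not the authors' high-fidelity motorcycle model): linearize the dynamics into state-space form x' = Ax, compute the eigenvalues of A, and read each mode's stability from the real part of its eigenvalue; sweeping a parameter such as a damping coefficient traces the root loci.

```python
import numpy as np

# Toy sketch of the modal-analysis idea (not the paper's motorcycle model):
# a single-DOF damped oscillator written in first-order state-space form.
# A mode is stable when the real part of its eigenvalue is negative.

def modes(mass, stiffness, damping):
    A = np.array([[0.0, 1.0],
                  [-stiffness / mass, -damping / mass]])
    return np.linalg.eigvals(A)

def is_stable(eigvals, tol=1e-12):
    return bool(np.all(eigvals.real < tol))

# Sweeping the damping coefficient mimics a root-locus study.
locus = [modes(1.0, 4.0, c) for c in (0.1, 1.0, 4.0)]
```

In the full motorcycle model the state vector is much larger and A comes from numerically linearizing the nonlinear equations about a trim condition, but the eigenvalue interpretation of the weave, wobble, and other modes is the same.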
Procedia PDF Downloads 223
784 Humic Acid and Azadirachtin Derivatives for the Management of Crop Pests
Authors: R. S. Giraddi, C. M. Poleshi
Abstract:
Organic cultivation of crops is gaining importance as consumer awareness of pesticide-residue-free foodstuffs increases globally. This is also because the high costs of synthetic fertilizers and pesticides are making conventional farming non-remunerative. In India, organic manures (such as vermicompost) are an important input in organic agriculture. Though vermicompost obtained through earthworm- and microbe-mediated processes is known to contain most of the crop nutrients, they occur in small amounts, necessitating enrichment so that crop nourishment is complete. Another characteristic of organic manures is that pest infestations are kept in check due to the induced resistance put up by the crop plants. In the present investigation, deoiled neem cake containing azadirachtin, copper ore tailings (COT) as a source of micro-nutrients, and microbial consortia were added to enrich the vermicompost. Neem cake is a by-product obtained during oil extraction from neem plant seeds. Three enriched vermicompost blends were prepared using vermicompost (at 70, 65 and 60%), deoiled neem cake (25, 30 and 35%), and microbial consortia plus COT wastes (5%). The enriched vermicompost was thoroughly mixed, moistened (25±5%), packed and incubated for 15 days at room temperature. In the crop response studies, field trials on chili (Capsicum annum var. longum) and soybean (Glycine max cv JS 335) were conducted during Kharif 2015 at the Main Agricultural Research Station, UAS, Dharwad-Karnataka, India. The vermicompost blend enriched with neem cake (known to possess higher amounts of nutrients) and plain vermicompost were applied to the crops at two dosages and at two intervals of the crop cycle (at sowing and 30 days after sowing) as per the treatment plan, along with 50% of the recommended dose of fertilizer (RDF). Ten plants selected randomly in each plot were studied for pest density and plant damage.
At maturity, the crops were harvested, the yields were recorded as per the treatments, and the data were analyzed using appropriate statistical tools and procedures. In both chili and soybean, crop nourishment with neem-enriched vermicompost reduced insect density and plant damage significantly compared to other treatments. These treatments registered as much yield (16.7 to 19.9 q/ha) as that realized in conventional chemical control (18.2 q/ha) in soybean, while 72 to 77 q/ha of green chili was harvested in the same treatments, comparable to the chemical control (74 q/ha). The yield superiority of the treatments was of the order neem-enriched vermicompost > conventional chemical control > neem cake > vermicompost > untreated control. The significant features of the result are that it reduces the use of inorganic manures by 50% and synthetic chemical insecticides by 100%.
Keywords: humic acid, azadirachtin, vermicompost, insect pest
Procedia PDF Downloads 277
783 Novel Framework for MIMO-Enhanced Robust Selection of Critical Control Factors in Auto Plastic Injection Moulding Quality Optimization
Authors: Seyed Esmail Seyedi Bariran, Khairul Salleh Mohamed Sahari
Abstract:
Apparent quality defects such as warpage, shrinkage, weld lines, etc. are an unavoidable phenomenon in the mass production of auto plastic appearance parts (APAP). These frequently occurring manufacturing defects must be addressed concurrently so as to achieve a final product with acceptable quality standards. Determining the significant control factors that simultaneously affect multiple quality characteristics can considerably improve the optimization results by eliminating the deviating effect of so-called ineffective outliers. Hence, a robust quantitative approach needs to be developed upon which the major control factors and their levels can be effectively determined, helping to improve the reliability of the optimal processing parameter design. The primary objective of the current study was therefore to develop a systematic methodology for the selection of significant control factors (SCF) relevant to multiple quality optimization of auto plastic appearance parts. An auto bumper was used as the specimen, having the quality and production characteristics closest to the APAP group. A preliminary failure modes and effects analysis (FMEA) was conducted to nominate a database of pseudo-significant control factors prior to the optimization phase. Later, CAE simulation (Moldflow) analysis was implemented to manipulate four rampant plastic injection quality defects concerning the APAP group: warpage deflection, volumetric shrinkage, sink marks and weld lines. Furthermore, a step-backward elimination searching method (SESME) was developed for systematic pre-optimization selection of SCF based on hierarchical orthogonal array design and priority-based one-way analysis of variance (ANOVA). The development of the robust parameter design in the second phase was based on the DOE module of Minitab v.16 statistical software.
Based on the F-test (F 0.05; 2, 14) one-way ANOVA results, it was concluded that for warpage deflection, material mixture percentage was the most significant control factor, yielding a 58.34% contribution, while for the other three quality defects, melt temperature was the most significant control factor, with 25.32%, 84.25%, and 34.57% contributions for sink mark, shrinkage and weld line strength, respectively. The results on the least significant control factors revealed injection fill time as the least significant factor for both warpage and sink mark, with respective contributions of 1.69% and 6.12%. On the other hand, for the shrinkage and weld line defects, the least significant control factors were holding pressure and mold temperature, with 0.23% and 4.05% overall contributions, respectively.
Keywords: plastic injection moulding, quality optimization, FMEA, ANOVA, SESME, APAP
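Percentage contributions such as those quoted above arise in Taguchi-style ANOVA as each factor's sum of squares divided by the total sum of squares. A minimal sketch with hypothetical figures (the sums of squares below are illustrative, not the study's values):

```python
# Sketch of how ANOVA percentage contributions are computed: each factor's
# sum of squares divided by the total sum of squares, times 100.
# The sums of squares here are hypothetical, for illustration only.

def contributions(ss_by_factor):
    total = sum(ss_by_factor.values())
    return {factor: 100.0 * ss / total for factor, ss in ss_by_factor.items()}

ss = {"melt temperature": 42.1, "holding pressure": 3.9,
      "fill time": 1.2, "error": 2.8}
pct = contributions(ss)
```

By construction the percentages sum to 100, which is why a dominant factor (like the 84.25% for shrinkage above) leaves little contribution for the remaining factors and the error term.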
Procedia PDF Downloads 349
782 Robust Inference with a Skew T Distribution
Authors: M. Qamarul Islam, Ergun Dogan, Mehmet Yazici
Abstract:
There is a growing body of evidence that non-normal data is more prevalent in nature than normal data. Examples can be quoted from, but are not restricted to, the areas of Economics, Finance and Actuarial Science. The non-normality considered here is expressed in terms of fat-tailedness and asymmetry of the relevant distribution. In this study, a skew t distribution that can be used to model data exhibiting inherently non-normal behavior is considered. This distribution has tails fatter than a normal distribution and also exhibits skewness. Although maximum likelihood estimates can be obtained by iteratively solving the likelihood equations, which are non-linear in form, this can be problematic in terms of convergence and in many other respects. Therefore, it is preferred to use the method of modified maximum likelihood, in which the estimates are derived by expressing the intractable non-linear likelihood equations in terms of standardized ordered variates and replacing the intractable terms by their linear approximations obtained from the first two terms of a Taylor series expansion about the quantiles of the distribution. These estimates, called modified maximum likelihood estimates, are obtained in closed form. Hence, they are easy to compute and to manipulate analytically. In fact, the modified maximum likelihood estimates are asymptotically equivalent to maximum likelihood estimates. Even in small samples, the modified maximum likelihood estimates are found to be approximately the same as the maximum likelihood estimates obtained iteratively. It is shown in this study that the modified maximum likelihood estimates are not only unbiased but substantially more efficient than the commonly used moment estimates or least squares estimates, which are known to be biased and inefficient in such cases.
Furthermore, in conventional regression analysis it is assumed that the error terms are normally distributed, and hence the well-known least squares method is considered a suitable and preferred method for making the relevant statistical inferences. However, a number of empirical studies have shown that non-normal errors are more prevalent, and even transforming and/or filtering techniques may not produce normally distributed residuals. Here, multiple linear regression models with random errors having a non-normal pattern are studied. Through an extensive simulation, it is shown that the modified maximum likelihood estimates of the regression parameters are plausibly robust to the distributional assumptions and to various data anomalies, as compared to the widely used least squares estimates. Relevant tests of hypotheses are developed and explored for desirable properties in terms of their size and power. The tests based upon modified maximum likelihood estimates are found to be substantially more powerful than the tests based upon least squares estimates. Several examples are provided from the areas of Economics and Finance where such distributions are interpretable in terms of the efficient market hypothesis with respect to asset pricing, portfolio selection, risk measurement, capital allocation, etc.
Keywords: least square estimates, linear regression, maximum likelihood estimates, modified maximum likelihood method, non-normality, robustness
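As a toy illustration of why robustness matters under fat tails (this is not the authors' modified-maximum-likelihood estimator, just a motivating simulation): with heavy-tailed errors, a least-squares-style location estimator (the sample mean) is far less efficient than even a simple robust alternative (the sample median).

```python
import numpy as np

# Motivating simulation only (not the paper's MML estimator): compare the
# sampling variability of the mean vs the median as location estimators
# when the data come from a fat-tailed Student t distribution (df=2).

rng = np.random.default_rng(42)

def sampling_variance(estimator, n=50, reps=2000, df=2):
    draws = rng.standard_t(df, size=(reps, n))  # fat-tailed samples
    return np.var(estimator(draws, axis=1))     # variance of the estimates

var_mean = sampling_variance(np.mean)
var_median = sampling_variance(np.median)
```

For t errors with 2 degrees of freedom the population variance is infinite, so the sample mean's estimates are wildly dispersed while the median stays well behaved; the modified maximum likelihood estimates described above achieve robustness of this kind while remaining nearly fully efficient.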
Procedia PDF Downloads 397
781 Automatic Adult Age Estimation Using Deep Learning of the ResNeXt Model Based on CT Reconstruction Images of the Costal Cartilage
Authors: Ting Lu, Ya-Ru Diao, Fei Fan, Ye Xue, Lei Shi, Xian-e Tang, Meng-jun Zhan, Zhen-hua Deng
Abstract:
Accurate adult age estimation (AAE) is a significant and challenging task in forensic and archeology fields. Attempts have been made to explore optimal adult age metrics, and the rib is considered a potential age marker. The traditional way is to extract age-related features designed by experts from macroscopic or radiological images followed by classification or regression analysis. Those results still have not met the high-level requirements for practice, and the limitation of using feature design and manual extraction methods is loss of information since the features are likely not designed explicitly for extracting information relevant to age. Deep learning (DL) has recently garnered much interest in imaging learning and computer vision. It enables learning features that are important without a prior bias or hypothesis and could be supportive of AAE. This study aimed to develop DL models for AAE based on CT images and compare their performance to the manual visual scoring method. Chest CT data were reconstructed using volume rendering (VR). Retrospective data of 2500 patients aged 20.00-69.99 years were obtained between December 2019 and September 2021. Five-fold cross-validation was performed, and datasets were randomly split into training and validation sets in a 4:1 ratio for each fold. Before feeding the inputs into networks, all images were augmented with random rotation and vertical flip, normalized, and resized to 224×224 pixels. ResNeXt was chosen as the DL baseline due to its advantages of higher efficiency and accuracy in image classification. Mean absolute error (MAE) was the primary parameter. Independent data from 100 patients acquired between March and April 2022 were used as a test set. The manual method completely followed the prior study, which reported the lowest MAEs (5.31 in males and 6.72 in females) among similar studies. CT data and VR images were used. 
The radiation density of the first costal cartilage was recorded using the CT data on the workstation. The osseous and calcified projections of the first to seventh costal cartilages were scored based on VR images using an eight-stage staging technique. According to the results of the prior study, the optimal models were a decision tree regression model in males and a stepwise multiple linear regression equation in females. Predicted ages for the test set were calculated separately using the different models by sex. A total of 2600 patients (training and validation sets, mean age=45.19 years±14.20 [SD]; test set, mean age=46.57±9.66) were evaluated in this study. In ResNeXt model training, MAEs of 3.95 in males and 3.65 in females were obtained. On the test set, DL achieved MAEs of 4.05 in males and 4.54 in females, far better than the MAEs of 8.90 and 6.42, respectively, for the manual method. These results showed that DL with the ResNeXt model outperformed the manual method in AAE based on CT reconstruction of the costal cartilage, and the developed system may be a supportive tool for AAE.
Keywords: forensic anthropology, age determination by the skeleton, costal cartilage, CT, deep learning
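The evaluation protocol described above, five-fold cross-validation with a 4:1 train/validation split per fold and mean absolute error (MAE) as the primary metric, can be sketched as follows; the fold construction and numbers here are illustrative, not the study's code.

```python
import random

# Sketch of the evaluation protocol: a five-fold split (4:1 train/validation
# per fold) and the MAE metric used to score age predictions.
# Data indices and values here are synthetic.

def five_fold_indices(n, seed=0):
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::5] for i in range(5)]      # five disjoint folds
    for k in range(5):
        val = folds[k]
        train = [i for j, fold in enumerate(folds) if j != k for i in fold]
        yield train, val                       # 4:1 train/validation split

def mae(y_true, y_pred):
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)
```

Each fold trains the network on 80% of the patients and reports MAE on the held-out 20%; the study then reports a separate MAE on an independent, later-acquired test set.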
Procedia PDF Downloads 73
780 Personalized Climate Change Advertising: The Role of Augmented Reality (A.R.) Technology in Encouraging Users for Climate Change Action
Authors: Mokhlisur Rahman
Abstract:
The growing consensus among scientists and world leaders indicates that immediate action should be taken regarding the climate change phenomenon. However, climate change is no longer only a global issue but also a personal one. Thus, individual participation is necessary to address such a significant issue. Studies show that individuals who perceive climate change as a personal issue are more likely to act on it. This abstract presents augmented reality (A.R.) technology in Facebook video advertising. The idea involves creating a video advertisement that enables users to interact with the video by navigating its features and experiencing the result in a unique and engaging way. The advertisement uses A.R. to bring about changes, such as people making changes in real-life scenarios through simple clicks on the video and hearing an instant rewarding fact about their choices. The video shows three options: room, lawn, and driveway. Users select one option and interact with it while holding the camera toward their personal spaces. Suppose users select the first option, room, and hold their camera toward spots such as by the windows, the balcony, corners, and even walls. In that case, the A.R. offers users different plants appropriate for those unoccupied spaces in the room. Users can change the plant options and see which space in their house deserves a plant that makes it more natural. When a user adds a natural element to the video, the video content explains beneficial information about how the user contributes to making the world more livable and why it is necessary. With the help of A.R., if users select the second option, lawn, and hold their camera toward their lawn, the options are various small trees to make the lawn more environmentally friendly and decorative. The video plays a beneficial explanation here too. Suppose users select the third option, driveway, and hold their camera toward their driveway.
In that case, the A.R. video offers unique recycle bin designs using A.I. measurement of spaces, and the video plays audio information on the anthropogenic contribution to greenhouse gas emissions. IoT embeds a tracking code in the video ad on Facebook, which stores the exact number of views in the cloud for data analysis. An online survey at the end collects short qualitative answers. This study helps to understand the number of users involved and willing to change their behavior, and it makes advertising in social media personalized. Considering the current state of climate change, the urgency for action is increasing. This ad increases the chance of making direct connections with individuals and gives them a sense of personal responsibility to act on climate change.
Keywords: motivations, climate, IoT, personalized advertising, action
Procedia PDF Downloads 73
779 Cloud Based Supply Chain Traceability
Authors: Kedar J. Mahadeshwar
Abstract:
Concept introduction: This paper discusses an innovative cloud-based, analytics-enabled solution that could address a major industry challenge approaching all of us globally faster than one might think. The world of the supply chain for drugs and devices is changing rapidly today. In the US, the Drug Supply Chain Security Act (DSCSA) is a new law for tracing, verification and serialization, phasing in starting Jan 1, 2015 for manufacturers, repackagers, wholesalers and pharmacies/clinics. Similarly, pressures are building up in Europe, China and many other countries that would require absolute end-to-end traceability of every drug and device. Companies (both manufacturers and distributors) can use this opportunity not only to be compliant but to differentiate themselves from the competition. Moreover, a country such as the UAE can be a leader in coming up with a global solution that brings innovation to this industry. Problem definition and timing: The problem of the counterfeit drug market, recognized by the FDA, causes billions of dollars in losses every year. Even in the UAE, the prevalence of counterfeit drugs entering through ports such as Dubai remains a big concern, as per the UAE pharma and healthcare report, Q1 2015. Distribution of drugs and devices involves multiple processes and systems that do not talk to each other. Consumer confidence is at risk due to this lack of traceability, and any leading provider is at risk of losing its reputation. Globally, there is increasing pressure from governments and regulatory bodies to trace the serial numbers and lot numbers of every drug and medical device throughout a supply chain. Though many large corporations use some form of ERP (enterprise resource planning) software, it is far from having the capability to trace lot and serial numbers beyond the enterprise and to make this information easily available in real time.
Solution: The proposed solution involves a service provider that allows all subscribers to take advantage of this service. It allows a service provider, regardless of its physical location, to host this cloud-based traceability and analytics solution over millions of distribution transactions that capture the lots of each drug and device. The platform will capture the movement of every medical device and drug end to end, from its manufacturer to a hospital or a doctor, through a series of distributor or retail networks. The platform also provides an advanced analytics solution for intelligent reporting online. Why Dubai? An opportunity exists given the huge investment made in Dubai Healthcare City, along with using technology and infrastructure to attract more FDI to provide such a service. The UAE and similar countries will face this pressure from regulators globally in the near future. More interestingly, Dubai can attract such innovators/companies to run and host such a cloud-based solution and become a global hub of traceability.
Keywords: cloud, pharmaceutical, supply chain, tracking
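The kind of per-movement record such a cloud platform would store can be sketched as follows; the field names and values are purely illustrative (loosely inspired by serialization-event data of the EPCIS style), not the proposed system's actual schema.

```python
import json
from datetime import datetime, timezone

# Hypothetical sketch of a traceability record such a cloud platform might
# store for each movement of a serialized drug or device. Field names and
# values are illustrative only, not the proposed system's schema.

def make_event(serial, lot, product, origin, destination):
    return {
        "serial_number": serial,
        "lot_number": lot,
        "product": product,
        "from": origin,
        "to": destination,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

event = make_event("SN-0001", "LOT-2015-07", "Drug-X 10mg",
                   "Manufacturer-A", "Wholesaler-B")
record = json.dumps(event)  # serialized form, ready to post to the cloud API
```

Chaining such events from manufacturer to wholesaler to pharmacy gives the end-to-end lot and serial trace that regulations like the DSCSA call for, and the accumulated events become the input to the platform's analytics layer.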
Procedia PDF Downloads 527
778 Augusto De Campos Translator: The Role of Translation in Brazilian Concrete Poetry Project
Authors: Juliana C. Salvadori, Jose Carlos Felix
Abstract:
This paper aims at discussing the role literary translation has played in the Brazilian Concrete Poetry Movement, an aesthetic, critical and pedagogical project which conceived translation as poiesis, i.e., as both creative and critical work in which the potency (dynamic) of the literary work is unfolded in the interpretive and critical act (energeia) that the translating practice demands. We argue that translation, for the concrete poets, is conceived within the framework provided by the reinterpretation, or deglutition, of Oswald de Andrade’s anthropophagy: a carefully selected feast from which the poets pick and model their Paideuma. As a case study, we propose to approach and analyze two of Augusto de Campos’s long-term translation projects: the translation of Emily Dickinson’s and E. E. Cummings’s works for Brazilian readers. Augusto de Campos is a renowned poet, translator, critic and one of the founding members of the Brazilian Concrete Poetry movement. Since the 1950s he has produced a consistent body of translated poetry from English-speaking poets in which the translator has explored creative translation processes, or transcreation, as the concrete poets have named it. Campos’s translation project regarding E. E. Cummings’s poetry spans forty years: it begins in 1956 with 10 poems and unfolds in 4 works: 20 poem(a)s, 40 poem(a)s, Poem(a)s, re-edited in 2011. His translations of Dickinson’s poetry are published in two works: O Anticrítico (1986), in which he translated 10 poems, and Emily Dickinson Não sou Ninguém (2008), in which the poet-translator added 35 more translated poems.
Both projects feature bilingual editions: contrary to common sense, Campos’s translations aim at being read as such: the target readers, to fully enjoy the experience, must be proficient readers of English and also acquainted with the poets in translation – Campos expects us to perform translation criticism, as Antoine Berman has proposed, by assessing the choices he, as both translator and poet, has presented in order to privilege aesthetic information (verse lines, word games, etc.). To readers not proficient in English, his translations play a pedagogical role of educating and preparing them to read both the target poets’ works as well as concrete poetry works – the detailed essays and prefaces in which the translator emphasizes the selection of works translated and the strategies adopted enlighten his project as translator: for Cummings, it has led to the obliteration of the more traditional and lyrical/romantic examples of his poetry while highlighting the more experimental aspects and poems; for Dickinson, his project has highlighted the more hermetic traits of her poems. In this work, we analyze Campos’s contribution to the domestic canons of both poets in the Brazilian literary system. Keywords: translation criticism, Augusto de Campos, E. E. Cummings, Emily Dickinson
Procedia PDF Downloads 295
777 Development of an EEG-Based Real-Time Emotion Recognition System on Edge AI
Authors: James Rigor Camacho, Wansu Lim
Abstract:
Over the last few years, the development of new wearable and processing technologies has accelerated in order to harness physiological data such as electroencephalograms (EEGs) for EEG-based applications. EEG has been demonstrated to be a source of emotion recognition signals with the highest classification accuracy among physiological signals. However, when emotion recognition systems are used for real-time classification, the training unit is frequently left to run offline or in the cloud rather than working locally on the edge. That strategy has hampered research, and the full potential of using an edge AI device has yet to be realized. Edge AI devices are high-performance computers that can process complex algorithms. They are capable of collecting, processing, and storing data on their own. They can also run complicated algorithms such as localization, detection, and recognition in a real-time application, making them powerful embedded devices. The NVIDIA Jetson series, specifically the Jetson Nano device, was used in the implementation. The cEEGrid, which is integrated with the open-source brain-computer interface platform (OpenBCI), is used to collect EEG signals. An EEG-based real-time emotion recognition system on edge AI is proposed in this paper. To perform graphical spectrogram categorization of EEG signals and to predict emotional states based on input data properties, machine learning-based classifiers were used. Until the emotional state was identified, the EEG signals were analyzed using the K-Nearest Neighbor (KNN) technique, a supervised learning method. In EEG signal processing, after each EEG signal has been received in real time and translated from the time to the frequency domain, the Fast Fourier Transform (FFT) technique is utilized to observe the frequency bands in each EEG signal. To appropriately represent the variance of each EEG frequency band, the power density, standard deviation, and mean are calculated and employed.
The next stage is to use the features that have been chosen to predict emotion in EEG data with the K-Nearest Neighbors (KNN) technique. Arousal and valence datasets are used to train the parameters defined by the KNN technique. Because classification and recognition of specific classes, as well as emotion prediction, are conducted both online and locally on the edge, the KNN technique increased the performance of the emotion recognition system on the NVIDIA Jetson Nano. Finally, this implementation aims to bridge the research gap on cost-effective and efficient real-time emotion recognition using a resource-constrained hardware device, like the NVIDIA Jetson Nano. EEG-based emotion identification on edge AI can be employed in applications that can rapidly expand research and industrial adoption. Keywords: edge AI device, EEG, emotion recognition system, supervised learning algorithm, sensors
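The pipeline described in this abstract (FFT-based band features followed by KNN classification) can be sketched in a few lines. The band boundaries, feature choices, and class labels below are illustrative assumptions, not the authors' exact configuration:

```python
import numpy as np

# Assumed EEG band boundaries in Hz (a common convention, not the paper's).
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(signal, fs):
    """FFT the signal and average the spectral power within each EEG band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    return {name: power[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}

def features(signal, fs):
    """Feature vector: per-band power plus the signal's mean and std."""
    bp = band_powers(signal, fs)
    return [bp[b] for b in BANDS] + [signal.mean(), signal.std()]

def knn_predict(train_X, train_y, x, k=3):
    """Plain K-Nearest Neighbors: majority vote among the k closest samples."""
    dists = [np.linalg.norm(np.asarray(tx) - np.asarray(x)) for tx in train_X]
    nearest = sorted(range(len(dists)), key=dists.__getitem__)[:k]
    votes = [train_y[i] for i in nearest]
    return max(set(votes), key=votes.count)
```

In a real-time loop, each windowed EEG segment would be converted to a feature vector and classified against labeled arousal/valence training vectors.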
Procedia PDF Downloads 105
776 Combination Therapies Targeting Apoptosis Pathways in Pediatric Acute Myeloid Leukemia (AML)
Authors: Ahlam Ali, Katrina Lappin, Jaine Blayney, Ken Mills
Abstract:
Leukaemia is the most frequently (30%) occurring type of paediatric cancer. Of these, approximately 80% are acute lymphoblastic leukaemia (ALL), with acute myeloid leukaemia (AML) cases making up the remaining 20% alongside other leukaemias. Unfortunately, children with AML do not have a promising prognosis, with only 60% surviving 5 years or longer. The need for age-specific therapies for AML patients has recently been highlighted, with paediatric AML cases having a different mutational landscape compared with AML diagnosed in adult patients. Drug repurposing is a recognized strategy in drug discovery and development where an already approved drug is used for diseases other than those originally indicated. We aim to identify novel combination therapies with the promise of providing alternative, more effective and less toxic induction therapy options. Our in-silico analysis highlighted ‘cell death and survival’ as an aberrant, potentially targetable pathway in paediatric AML patients. On this basis, 83 apoptosis-inducing compounds were screened. A preliminary single-agent screen was also performed to eliminate potentially toxic chemicals; the drugs were then constructed into a pooled library with 10 drugs per well over 160 wells, giving 45 possible pairs and 120 triples in each well. Seven cell lines were used during this study to represent the clonality of AML in paediatric patients (Kasumi-1, CMK, CMS, MV11-14, PL21, THP1, MOLM-13). Cytotoxicity was assessed up to 72 hours using CellTox™ Green reagent. Fluorescence readings were normalized to a DMSO control. A Z-score was assigned to each well based on the mean and standard deviation of all the data. Combinations with a Z-score <2 were eliminated, and the remaining wells were taken forward for further analysis. A well was considered ‘successful’ if each drug individually demonstrated a Z-score <2, while the combination exhibited a Z-score >2.
Each of the ten compounds in one well (155) had minimal or no effect as a single agent on cell viability; however, a combination of two or more of the compounds resulted in a substantial increase in cell death. The ten compounds were therefore de-convoluted to identify possible synergistic pair/triple combinations. The screen identified two possible ‘novel’ drug pairings, with the BCL2 inhibitor ABT-737 combined with either the CDK inhibitor Purvalanol A or the AKT/PI3K inhibitor LY294002 (ABT-737 100 nM + Purvalanol A 1 µM; ABT-737 100 nM + LY294002 2 µM). Three possible triple combinations were identified (LY2409881 + Akti-1/2 + Purvalanol A, SU9516 + Akti-1/2 + Purvalanol A, and ABT-737 + LY2409881 + Purvalanol A), which will be taken forward for examining their efficacy at varying concentrations and dosing schedules across multiple paediatric AML cell lines for optimisation of maximum synergy. We believe that our combination screening approach has potential for future use with a larger cohort of drugs, including FDA-approved compounds, and patient material. Keywords: AML, drug repurposing, ABT-737, apoptosis
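The hit-calling rule used in this screen (every single agent inert at Z-score < 2, the pooled combination active at Z-score > 2) can be expressed directly; the readings below are invented for illustration and are not the study's plate data:

```python
import statistics

def z_scores(readings):
    """Z-score of each well's normalized fluorescence reading against the
    mean and standard deviation of all the data, as in the screen."""
    mu = statistics.mean(readings)
    sd = statistics.stdev(readings)
    return [(r - mu) / sd for r in readings]

def is_successful_well(single_agent_z, combination_z, threshold=2.0):
    """A well is 'successful' when every single agent alone stays below the
    threshold but the pooled combination exceeds it, flagging possible synergy."""
    return all(z < threshold for z in single_agent_z) and combination_z > threshold
```

For example, a well whose single agents score Z = 0.3, 0.8, and 1.5 but whose combination scores Z = 2.7 would be flagged for de-convolution.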
Procedia PDF Downloads 203
775 The Relationship between Anthropometric Obesity Indices and Insulin in Children with Metabolic Syndrome
Authors: Mustafa M. Donma, Orkide Donma
Abstract:
The number of indices developed for the evaluation of obesity in both adult and pediatric populations is ever increasing. These indices are also used in cases with metabolic syndrome (MetS), mostly the ultimate form of morbid obesity. Aside from anthropometric measurements, formulas constituted from these parameters also find clinical use. These formulas can be listed in two groups: weight-dependent and weight-independent. Some are extremely sophisticated equations, and their utility in routine clinical practice is questionable. The aim of this study is to compare presently available obesity indices and find the most practical one. Their associations with MetS components were also investigated to determine their capacity in the differential diagnosis of morbid obesity with and without MetS. Children with normal body mass index (N-BMI) and morbid obesity were recruited for this study. Three groups were constituted. Age- and sex-dependent BMI percentiles for morbidly obese (MO) children were above 99 according to World Health Organization tables. Of them, those with MetS findings were evaluated as the MetS group. Children whose values were between the 15th and 85th percentiles were included in the N-BMI group. The study protocol was approved by the Ethics Committee of the Institution. Parents filled out informed consent forms to participate in the study. Anthropometric measurements and blood pressure values were recorded. Body mass index, hip index (HI), conicity index (CI), triponderal mass index (TPMI), body adiposity index (BAI), body shape index (ABSI), body roundness index (BRI), abdominal volume index (AVI), waist-to-hip ratio (WHR) and waist circumference+hip circumference/2 ((WC+HC)/2) were the formulas examined within the scope of this study. Routine biochemical tests including fasting blood glucose (FBG), insulin (INS), triglycerides (TRG), and high density lipoprotein-cholesterol (HDL-C) were performed. The statistical package program SPSS was used for the evaluation of study data.
p<0.05 was accepted as the degree of statistical significance. Hip index did not differ among the groups. A statistically significant difference was noted between the N-BMI and MetS groups in terms of ABSI. All the other indices were capable of discriminating between the N-BMI-MO, N-BMI-MetS and MO-MetS groups. No correlation was found between FBG and any obesity index in any group. The same was true for INS in the N-BMI group. Insulin was correlated with BAI, TPMI, CI, BRI, AVI and (WC+HC)/2 in the MO group without MetS findings. In the MetS group, the only index correlated with INS was (WC+HC)/2. These findings point out that complicated formulas may not be required for the evaluation of the alterations among N-BMI and various obesity groups including MetS. The simple, easily computable, weight-independent index (WC+HC)/2 was unique because it was the only index exhibiting a valuable association with INS in the MetS group, and it did not exhibit any correlation with the other obesity indices showing associations with INS in the MO group. It was concluded that (WC+HC)/2 is a valuable, practicable index for the discrimination of MO children with and without MetS findings. Keywords: children, insulin, metabolic syndrome, obesity indices
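The weight-independent index favoured by this study, and the Pearson correlation used to relate an index to insulin, are simple enough to compute directly. This is a minimal sketch; the function names are ours and any sample values would be invented, not the study's patient data:

```python
from math import sqrt

def wc_hc_index(waist_cm, hip_cm):
    """The simple weight-independent index (WC + HC) / 2."""
    return (waist_cm + hip_cm) / 2.0

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples,
    e.g. an obesity index versus insulin (INS) within one group."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```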
Procedia PDF Downloads 77
774 Forging A Distinct Understanding of Implicit Bias
Authors: Benjamin D Reese Jr
Abstract:
Implicit bias is understood as unconscious attitudes, stereotypes, or associations that can influence the cognitions, actions, decisions, and interactions of an individual without intentional control. These unconscious attitudes or stereotypes are often targeted toward specific groups of people based on their gender, race, age, perceived sexual orientation or other social categories. Since the late 1980s, there has been a proliferation of research hypothesizing that the operation of implicit bias is the result of the brain needing to process millions of bits of information every second. Hence, one’s prior individual learning history provides ‘shortcuts’. As soon as one sees someone of a certain race, one has immediate associations based on one’s past learning and might make assumptions about their competence, skill, or danger. These assumptions are outside of conscious awareness. In recent years, an alternative conceptualization has been proposed. The ‘bias of crowds’ theory hypothesizes that a given context or situation influences the degree of accessibility of particular biases. For example, in certain geographic communities in the United States, there is a long-standing and deeply ingrained history of structures, policies, and practices that contribute to racial inequities and bias toward African Americans. Hence, negative biases toward African Americans are more accessible in such contexts or communities. This theory does not focus on individual brain functioning or cognitive ‘shortcuts.’ Therefore, attempts to modify individual perceptions or learning might have a negligible impact on the embedded environmental systems or policies within certain contexts or communities.
From the ‘bias of crowds’ perspective, high levels of racial bias in a community can be reduced by making fundamental changes in structures, policies, and practices to create a more equitable context or community rather than focusing on training or education aimed at reducing an individual’s biases. The current paper acknowledges and supports the foundational role of long-standing structures, policies, and practices that maintain racial inequities, as well as inequities related to other social categories, and highlights the critical need to continue organizational, community, and national efforts to eliminate those inequities. It also makes a case for providing individual leaders with a deep understanding of the dynamics of how implicit biases impact cognitions, actions, decisions, and interactions so that those leaders might more effectively develop structural changes in the processes and systems under their purview. This approach incorporates both the importance of an individual’s learning history as well as the important variables within the ‘bias of crowds’ theory. The paper also offers a model for leadership education, as well as examples of structural changes leaders might consider. Keywords: implicit bias, unconscious bias, bias, inequities
Procedia PDF Downloads 8
773 Optimization Principles of Eddy Current Separator for Mixtures with Different Particle Sizes
Authors: Cao Bin, Yuan Yi, Wang Qiang, Amor Abdelkader, Ali Reza Kamali, Diogo Montalvão
Abstract:
The study of the electrodynamic behavior of non-ferrous particles in time-varying magnetic fields is a promising area of research with wide applications, including recycling of non-ferrous metals, mechanical transmission, and space debris. The key technology for recovering non-ferrous metals is eddy current separation (ECS), which utilizes the eddy current force and torque to separate non-ferrous metals. ECS has several advantages, such as low energy consumption, large processing capacity, and no secondary pollution, making it suitable for processing various mixtures like electronic scrap, auto shredder residue, aluminum scrap, and incineration bottom ash. Improving the separation efficiency of mixtures with different particle sizes in ECS can create significant social and economic benefits. Our previous study investigated the influence of particle size on separation efficiency by combining numerical simulations and separation experiments. Pearson correlation analysis found a strong correlation between the eddy current force in simulations and the repulsion distance in experiments, which confirmed the effectiveness of our simulation model. The interaction effects between particle size and material type, rotational speed, and magnetic pole arrangement were examined. These offer valuable insights for the design and optimization of eddy current separators. The underlying mechanism behind the effect of particle size on separation efficiency was discovered by analyzing the eddy current and field gradient. The results showed that the magnitude and distribution heterogeneity of the eddy current and magnetic field gradient increase with particle size in eddy current separation. Based on this, we further found that increasing the curvature of the magnetic field lines within particles can also increase the eddy current force, providing an optimized method for improving the separation efficiency of fine particles.
By combining the results of the studies, a more systematic and comprehensive set of optimization guidelines can be proposed for mixtures with different particle size ranges. The separation efficiency of fine particles could be improved by increasing the rotational speed, curvature of magnetic field lines, and electrical conductivity/density of materials, as well as utilizing the eddy current torque. When designing an ECS, the particle size range of the target mixture should be investigated in advance, and the suitable parameters for separating the mixture can be fixed accordingly. In summary, these results can guide the design and optimization of ECS, and also expand the application areas for ECS. Keywords: eddy current separation, particle size, numerical simulation, metal recovery
Procedia PDF Downloads 89
772 Cognition in Crisis: Unravelling the Link Between COVID-19 and Cognitive-Linguistic Impairments
Authors: Celine Davis
Abstract:
The novel coronavirus 2019 (COVID-19) is an infectious disease caused by the virus SARS-CoV-2, which has detrimental respiratory, cardiovascular, and neurological effects, impacting over one million lives in the United States. New research has emerged indicating long-term neurologic consequences in those who survive COVID-19 infections, including more than seven million Americans and another 27 million people worldwide. These consequences include attentional deficits, memory impairments, executive function deficits, and aphasia-like symptoms, which fall within the purview of speech-language pathology. The National Health Interview Survey (NHIS) is a comprehensive annual survey conducted by the National Center for Health Statistics (NCHS), a branch of the Centers for Disease Control and Prevention (CDC) in the United States. The NHIS is one of the most significant sources of health-related data in the country and has been conducted since 1957. The longitudinal nature of the study allows for analysis of trends in various variables over the years, which is essential for understanding societal changes and making treatment recommendations. The current study will utilize NHIS data from 2020-2022, which contained interview questions specifically related to COVID-19. Adult cases of individuals between the ages of 18-50 diagnosed with COVID-19 in the United States during 2020-2022 will be identified using the NHIS. Multiple regression analysis of self-reported data confirming COVID-19 infection status and challenges with concentration, communication, and memory will be performed. Latent class analysis will be utilized to identify subgroups in the population and indicate whether certain demographic groups have higher susceptibility to cognitive-linguistic deficits associated with COVID-19.
Completion of this study will reveal whether there is an association between confirmed COVID-19 diagnosis and heightened incidence of cognitive deficits and subsequent implications, if any, on activities of daily living. This study is distinct in its aim to utilize national survey data to explore the relationship between confirmed COVID-19 diagnosis and the prevalence of cognitive-communication deficits with a secondary focus on resulting activity limitations. To the best of the author’s knowledge, this will be the first large-scale epidemiological study investigating the associations between cognitive-linguistic deficits, COVID-19 and implications on activities of daily living in the United States population. These findings will highlight the need for targeted interventions and support services to address the cognitive-communication needs of individuals recovering from COVID-19, thereby enhancing their overall well-being and functional outcomes. Keywords: cognition, COVID-19, language, limitations, memory, NHIS
Procedia PDF Downloads 53
771 Metamorphosis of Caste: An Examination of the Transformation of Caste from a Material to Ideological Phenomenon in Sri Lanka
Authors: Pradeep Peiris, Hasini Lecamwasam
Abstract:
The fluid, ambiguous, and often elusive existence of caste among the Sinhalese in Sri Lanka has inspired many scholarly endeavours. Originally, Sinhalese caste was organized according to the occupational functions assigned to various groups in society. Hence cultivators came to be known as Goyigama, washers Dobi, drummers Berava, smiths Navandanna and so on. During pre-colonial times the specialized services of various groups were deployed to build water reservoirs, cultivate the land, and/or sustain the Buddhist order by material means. However, as to how and why caste prevails today in Sinhalese society when labour is in ideal terms free to move where it wants, or in other words, occupation is no longer strictly determined or restricted by birth, is a question worth exploring. Hence this paper explores how, and perhaps more interestingly why, when the nexus between traditional occupations and caste status is fast disappearing, caste itself has managed to survive and continues to be salient in politics in Sri Lanka. In answer to this larger question, the paper looks at caste from three perspectives: 1) Buddhism, whose ethical project provides a justification of social stratifications that transcends economic bases 2) Capitalism that has reactivated and reproduced archaic relations in a process of 'accumulation by subordination', not only by reinforcing the marginality of peripheral caste groups, but also by exploiting caste divisions to hinder any realization of class interests and 3) Democracy whose supposed equalizing effect expected through its ‘one man–one vote’ approach has been subverted precisely by itself, whereby the aggregate ultimately comes down to how many such votes each ‘group’ in society has. This study draws from field work carried out in Dedigama (in the District of Kegalle, Central Province) and Kelaniya (in the District of Colombo, Western Province) in Sri Lanka over three years. 
The choice of field locations was encouraged by the need to capture rural and urban dynamics related to caste since caste is more apparently manifest in rural areas whose material conditions partially warrant its prevalence, whereas in urban areas it exists mostly in the ideological terrain. In building its analysis, the study has employed a combination of objectivist and subjectivist approaches to capture the material and ideological existence of caste and caste politics in Sinhalese society. Therefore, methods such as in-depth interviews, observation, and collection of demographical and interpretive data from secondary sources were used for this study. The paper has been situated in a critical theoretical framework of social inquiry in an attempt to question dominant assumptions regarding such meta-labels as ‘Capitalism’ and ‘Democracy’, and also the supposed emancipatory function of religion (focusing on Buddhism). Keywords: Buddhism, capitalism, caste, democracy, Sri Lanka
Procedia PDF Downloads 136
770 A Methodology of Using Fuzzy Logics and Data Analytics to Estimate the Life Cycle Indicators of Solar Photovoltaics
Authors: Thor Alexis Sazon, Alexander Guzman-Urbina, Yasuhiro Fukushima
Abstract:
This study outlines a method for developing a surrogate life cycle model based on fuzzy logic using three fuzzy inference methods: (1) the conventional Fuzzy Inference System (FIS), (2) the hybrid system of Data Analytics and Fuzzy Inference (DAFIS), which uses data clustering to define the membership functions, and (3) the Adaptive Neuro-Fuzzy Inference System (ANFIS), a combination of fuzzy inference and an artificial neural network. These methods were demonstrated with a case study in which the Global Warming Potential (GWP) and the Levelized Cost of Energy (LCOE) of solar photovoltaics (PV) were estimated using Solar Irradiation, Module Efficiency, and Performance Ratio as inputs. The effects of using different fuzzy inference types, either Sugeno- or Mamdani-type, and of changing the number of input membership functions on the error between the calibration data and the model-generated outputs were also illustrated. The solution spaces of the three methods were consequently examined with a sensitivity analysis. ANFIS exhibited the lowest error, while DAFIS gave slightly lower errors than FIS. Increasing the number of input membership functions helped with error reduction in some cases but, at times, resulted in the opposite. Sugeno-type models gave errors slightly lower than those of the Mamdani type. While ANFIS is superior in terms of error minimization, it could generate questionable solutions, e.g., negative GWP values for the solar PV system when the inputs were all at the upper end of their range. This shows that the applicability of ANFIS models highly depends on the range of cases on which they were calibrated. FIS and DAFIS generated more intuitive trends in the sensitivity runs. DAFIS demonstrated an optimal design point beyond which increasing the input values no longer improves the GWP and LCOE.
In the absence of data that could be used for calibration, conventional FIS presents a knowledge-based model that could be used for prediction. In the PV case study, conventional FIS generated errors that are just slightly higher than those of DAFIS. The inherent complexity of a Life Cycle study often hinders its widespread use in the industry and policy-making sectors. While the methodology does not guarantee a more accurate result compared to those generated by the Life Cycle Methodology, it does provide a relatively simpler way of generating knowledge- and data-based estimates that could be used during the initial design of a system. Keywords: solar photovoltaic, fuzzy logic, inference system, artificial neural networks
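As a concrete illustration of the fuzzy inference idea used in this study, the sketch below implements a zero-order Sugeno-type system with one input and two triangular membership functions. The membership breakpoints and LCOE consequents are invented for illustration and are not the calibrated values from the paper:

```python
def tri_mf(x, a, b, c):
    """Triangular membership function with peak at b and support [a, c];
    a == b or b == c yields a one-sided (shoulder) function."""
    if x < a or x > c:
        return 0.0
    if x <= b:
        return 1.0 if a == b else (x - a) / (b - a)
    return 1.0 if b == c else (c - x) / (c - b)

# Hypothetical membership functions over normalized solar irradiation (0-1).
IRRADIATION_MF = {"low": (0.0, 0.0, 0.6), "high": (0.4, 1.0, 1.0)}

# Hypothetical zero-order Sugeno rules: each maps a fuzzy label to a crisp
# LCOE consequent (USD/kWh).
RULES = [("low", 0.12), ("high", 0.05)]

def sugeno_lcoe(irradiation):
    """Zero-order Sugeno inference: the output is the firing-strength-weighted
    average of the crisp rule consequents."""
    weights = [tri_mf(irradiation, *IRRADIATION_MF[label]) for label, _ in RULES]
    total = sum(weights)
    if total == 0.0:
        raise ValueError("input fires no rule")
    return sum(w * out for w, (_, out) in zip(weights, RULES)) / total
```

A Mamdani-type system would instead use fuzzy output sets and a defuzzification step (e.g. centroid); ANFIS would tune the membership parameters against calibration data rather than fixing them by hand.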
Procedia PDF Downloads 164
769 Exploring the Benefits of Hiring Individuals with Disabilities in the Workplace
Authors: Rosilyn Sanders
Abstract:
This qualitative study examined the impact of hiring people with intellectual disabilities (ID). The research questions were: What defines a disability? What accommodations are needed to ensure the success of a person with a disability? As a leader, what benefits do people with intellectual disabilities bring to the organization? What are the benefits of hiring people with intellectual disabilities in retail organizations? Moreover, how might people with intellectual disabilities contribute to the organizational culture of retail organizations? A narrative strength approach was used as a theoretical framework to guide the discussion and uncover the benefits of hiring individuals with intellectual disabilities in various retail organizations. Using qualitative interviews, the following themes emerged: diversity and inclusion, accommodations, organizational culture, motivation, and customer service. These findings put to rest some negative stereotypes and perceptions of persons with ID as being unemployable or unable to perform tasks when employed, showing instead that persons with ID can work efficiently when given necessary work accommodations and support in an enabling organizational culture. All participants were recruited and selected through various forms of electronic communication via social media, email invitations, and phone; this was conducted through the methodology of snowball sampling with the following demographics: age, ethnicity, gender, number of years in retail, number of years in management, and number of direct reports. The sample population was employed in several retail organizations throughout Arkansas and Texas. The small sample size for qualitative research in this study helped the researcher develop, build, and maintain close relationships that encouraged participants to be forthcoming and honest with information (Clow & James, 2014).
Participants were screened to ensure they met the researcher's study criteria and were over 18 years of age. Participants were asked whether they recruit, interview, hire, and supervise individuals with intellectual disabilities. Individuals were given consent forms via email to indicate their interest in participating in this study. Due to COVID-19, all interviews were conducted via teleconferencing (Zoom or Microsoft Teams), lasted approximately 1 hour, and were transcribed, coded for themes, and grouped based on similar responses. Further, the participants were not privy to the interview questions beforehand, and demographic questions were asked at the end, including questions concerning age, education level, and job status. Each participant was assigned a random number using an app called ‘The Random Number Generator’ to ensure that all personal or identifying information of participants was removed. Regarding data storage, all documentation was stored on a password-protected external drive, inclusive of consent forms, recordings, transcripts, and researcher notes. Keywords: diversity, positive psychology, organizational development, leadership
Procedia PDF Downloads 67
768 Impact of Emotional Intelligence and Cognitive Intelligence on Radio Presenter's Performance in All India Radio, Kolkata, India
Authors: Soumya Dutta
Abstract:
This research paper aims at investigating the impact of emotional intelligence and cognitive intelligence on radio presenters’ performance at All India Radio, Kolkata (India’s public service broadcaster). The ancient concept of productivity is the ratio of what is produced to what is required to produce it, but the father of modern management, Peter F. Drucker (1909-2005), redefined the productivity of knowledge work and knowledge workers in a new form. On the other hand, the concept of Emotional Intelligence (EI) originated back in the 1920s, when Thorndike (1920) first proposed dividing intelligence into three dimensions, i.e., abstract intelligence, mechanical intelligence, and social intelligence. The contribution of Salovey and Mayer (1990) is substantive, as they proposed a model for emotional intelligence by defining EI as part of social intelligence, which measures the ability of an individual to regulate his/her personal and others’ emotions and feelings. Cognitive intelligence illustrates the specialization of general intelligence in the domain of cognition, shaped by experience and learning about cognitive processes such as memory. The outcomes of past research on emotional intelligence show that emotional intelligence has a positive effect on the social-mental factors of human resources; that emotional intelligence has positive effects on leaders and followers in terms of performance, results, work, and satisfaction; and that emotional intelligence has a positive and significant relationship with teachers' job performance. In this paper, we build a conceptual framework based on the theories of emotional intelligence proposed by Salovey and Mayer (1989-1990) and the compensatory model of emotional intelligence, cognitive intelligence, and job performance proposed by Stephen Cote and Christopher T. H. Miners (2006).
To investigate the impact of emotional intelligence and cognitive intelligence on radio presenters’ performance, the sample consists of 59 radio presenters (considering gender, academic qualification, instructional mood, age group, etc.) from the All India Radio, Kolkata station. Questionnaires were prepared based on cognitive items (henceforth called C-based and represented by C1, C2, ..., C5) as well as emotional intelligence items (henceforth called E-based and represented by E1, E2, ..., E20). These were sent to the 59 respondents (presenters) for their responses. Performance scores were collected from the report of the programme executive of All India Radio, Kolkata. A linear regression has been carried out using all the E-based and C-based variables as predictor variables. The possible problem of autocorrelation has been tested using the Durbin-Watson (DW) statistic; values of this statistic, almost within the range of 1.80-2.20, indicate the absence of any significant autocorrelation problem. The possible problem of multicollinearity has been tested using the Variance Inflation Factor (VIF); values of this statistic, around 2, indicate the absence of any significant multicollinearity problem. It is inferred that the performance scores can be statistically regressed linearly on the E-based and C-based scores, which can explain 74.50% of the variation in performance. Keywords: cognitive intelligence, emotional intelligence, performance, productivity
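The autocorrelation diagnostic used in this study is straightforward to compute from a fitted model's residuals. The sketch below fits a single-predictor OLS model in closed form and computes the Durbin-Watson statistic; the function names are ours, and any example data are invented, not the study's survey responses:

```python
def simple_ols(xs, ys):
    """Closed-form ordinary least squares for a single predictor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return slope, intercept

def durbin_watson(residuals):
    """DW = sum of squared successive residual differences over the sum of
    squared residuals; values near 2 indicate no significant autocorrelation,
    values toward 0 positive and toward 4 negative autocorrelation."""
    num = sum((residuals[i] - residuals[i - 1]) ** 2
              for i in range(1, len(residuals)))
    den = sum(r * r for r in residuals)
    return num / den
```

A DW value falling in roughly the 1.80-2.20 band, as reported in the abstract, supports the conclusion that autocorrelation is not a significant problem for the fitted regression.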
Procedia PDF Downloads 164
767 Data Quality as a Pillar of Data-Driven Organizations: Exploring the Benefits of Data Mesh
Authors: Marc Bachelet, Abhijit Kumar Chatterjee, José Manuel Avila
Abstract:
Data quality is a key component of any data-driven organization. Without data quality, organizations cannot effectively make data-driven decisions, which often leads to poor business performance. It is therefore important for an organization to ensure that the data it uses is of high quality. This is where the concept of data mesh comes in. Data mesh is a decentralized organizational and architectural approach to data management that can help organizations improve the quality of their data. The concept of data mesh was first introduced in 2020. Its purpose is to decentralize data ownership, making it easier for domain experts to manage the data. This can help organizations improve data quality by reducing reliance on centralized data teams and allowing domain experts to take charge of their data. This paper discusses how a set of elements, including data mesh, can serve as tools for increasing data quality. One of the key benefits of data mesh is improved metadata management. In a traditional data architecture, metadata management is typically centralized, which can lead to data silos and poor data quality. With data mesh, metadata is managed in a decentralized manner, ensuring accurate and up-to-date metadata and thereby improving data quality. Another benefit of data mesh is the clarification of roles and responsibilities. In a traditional data architecture, data teams are responsible for managing all aspects of data, which can lead to confusion and ambiguity about responsibilities. With data mesh, domain experts are responsible for managing their own data, which provides clarity in roles and responsibilities and improves data quality. Additionally, data mesh can contribute to a new form of organization that is more agile and adaptable. 
By decentralizing data ownership, organizations can respond more quickly to changes in their business environment, which in turn can improve overall performance by enabling better business insights through improved reports and visualization tools. Monitoring and analytics are also important aspects of data quality. With data mesh, monitoring and analytics are decentralized, allowing domain experts to monitor and analyze their own data. This helps identify and address data quality problems quickly, leading to improved data quality. Data culture is another major aspect of data quality. With data mesh, domain experts are encouraged to take ownership of their data, which can help create a data-driven culture within the organization, leading to improved data quality and better business outcomes. Finally, the paper explores the contribution of AI in the coming years. AI can enhance data quality by automating many data-related tasks, such as data cleaning and data validation. By integrating AI into data mesh, organizations can further enhance the quality of their data. The concepts mentioned above are illustrated by experience feedback from AEKIDEN, an international data-driven consultancy that has successfully implemented a data mesh approach. By sharing its experience, AEKIDEN can help other organizations understand the benefits and challenges of implementing data mesh and improving data quality.
Keywords: data culture, data-driven organization, data mesh, data quality for business success
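As a concrete, deliberately simplified sketch of the decentralization the paper describes, the snippet below models domain-owned data products whose owning teams maintain their own metadata and quality gates, with the "mesh" reduced to a federated catalog of these self-describing products. The class names, fields, and example domain are hypothetical illustrations, not a prescribed data mesh API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DataProduct:
    """A domain-owned data product: the domain team, not a central
    data team, is responsible for its data and its metadata."""
    name: str
    domain: str
    owner: str
    schema: dict
    updated_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def validate(self, record: dict) -> bool:
        """A minimal quality gate enforced at the source by the owning domain."""
        return all(
            key in record and isinstance(record[key], expected)
            for key, expected in self.schema.items())

# Each domain registers and maintains its own products; discovery
# happens through the shared catalog, not a central data team.
catalog = {}

def register(product: DataProduct):
    catalog[(product.domain, product.name)] = product

register(DataProduct(
    name="orders", domain="sales", owner="sales-team@example.com",
    schema={"order_id": str, "amount": float}))

orders = catalog[("sales", "orders")]
print(orders.validate({"order_id": "A-1", "amount": 19.9}))   # valid record
print(orders.validate({"order_id": "A-2"}))                   # missing field
```

The point of the sketch is the ownership boundary: validation and metadata live with the domain that produces the data, so quality problems surface where they can actually be fixed.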
Procedia PDF Downloads 136
766 Architecture, Politics and Religion Synthesis: Political Legitimacy in Early Islamic Iran
Authors: Fahimeh Ghorbani, Alam Saleh
Abstract:
Ideology, politics, and art have been omnipresent patterns of Islam since its early age. The Islamic empire, which expanded from China to Andalusia, instrumentalized art and architecture throughout its history to enhance the political legitimacy of different dynasties and states. Quranic verses were utilized to convey ideological messages in the major mosques and mausoleums. Iranians had already employed art and architecture to propagate their political legitimacy before Islam. The land of Iran and its art, with a strong pre-Islamic civilizational history, have been profoundly politicized since the rise of Islam in the region. The early Islamic period in Iran witnessed the introduction of a new architectural language, new formulas for the spatial configuration of built spaces, and a new system of architectural decoration. Studying Iran’s early Islamic architecture helps in better understanding the socio-political identity-making of Iranian-Islamic culture, and thus its art and architecture. This period also set the stage for the formation of glorious architectural movements throughout the Persianate world in later periods. During the early Islamic period in Iran, the innovative combination of Islamic ideology and Iranian architecture created formidable ideological tools for politicizing art in the region and beyond. As such, this paper investigates the political history and architectural legacy from the late Sassanid to the early Islamic period, delving into the ways in which early Islamic architecture played a role in transforming Persian concepts of kingship, administration, and social organization. In so doing, the study focuses on the Perso-Islamic architectural synthesis under the Samanid and Seljuk dynasties as case studies. The paper also explores how the newly introduced Islamic architecture was employed to address the question of political legitimacy and to propagate states’ political agendas in early Islamic Iran (650-1250). 
As for the existing literature, despite its uniqueness and significance, the early Islamic architecture of Iran has received little scholarly attention. However, there exists a sizeable body of scholarship on the socio-historic condition of the land of Iran during the early Islamic period, which provides a solid base for the project. Methodologically, the authors look at the subject through various lenses. They will conduct historical and archival research in libraries, private collections, and archives in Iran and the related neighbouring countries, in Persian, Arabic, and English. The methods of visual and formal analysis are applied to examine the architectural features of the period. There is also a large number of intriguing, yet poorly examined, published and unpublished documents, old plans, drawings, and photos of monuments preserved in the Cultural Heritage of Iran Organization, which will be consulted.
Keywords: Iran, Islamic architecture, early Islamic Iran, early Islamic architecture, politicized art, political legitimacy, propaganda, aesthetics
Procedia PDF Downloads 118
765 Hospital Wastewater Treatment by Ultrafiltration Membrane System
Authors: Selin Top, Raul Marcos, M. Sinan Bilgili
Abstract:
Although there have been several studies on the collection, temporary storage, handling, and disposal of solid wastes generated by hospitals, there are only a few studies on the liquid wastes, or wastewaters, that hospitals generate. Hospitals consume significant amounts of water: while minimum domestic water consumption is 100 L per person per day, water consumption in hospitals generally ranges between 400 and 1200 L per bed, and this high consumption produces a correspondingly large amount of wastewater. The quantity of wastewater produced in a hospital depends on several factors: the number of beds, the hospital’s age, accessibility to water, the general services present inside the structure (kitchen, laundry, laboratory, diagnosis, radiology, and air conditioning), the number and type of wards and units, institutional management policies and awareness in managing the structure in safeguarding the environment, climate, and cultural and geographic factors. In our country, hospital wastewaters have been characterized using classical parameters in only a very few studies; however, as noted above, this type of wastewater may contain compounds different from those in domestic wastewaters. Hospital wastewater (HWW) is wastewater generated from all activities of the hospital, medical and non-medical. Nowadays, hospitals are considered one of the biggest sources of wastewater, along with urban sources, agricultural effluents, and industrial sources. As a health-care waste, hospital wastewater has broadly the same quality as municipal wastewater but may also contain various hazardous components due to the use of disinfectants, pharmaceuticals, radionuclides, and solvents, making a direct connection of hospital wastewater to the municipal sewage network unsuitable. These characteristics may represent a serious health hazard, and children, adults, and animals all have the potential to come into contact with this water. 
Therefore, the treatment of hospital wastewater is an important current point of interest. This paper investigates hospital wastewater treatment by membrane systems. The study aims to determine the characteristics of hospital wastewater and to evaluate the efficiency of its treatment by pressure-driven membrane filtration systems such as ultrafiltration (UF). Hospital wastewater samples were taken directly from the sewage system of Şişli Etfal Training and Research Hospital, located in the district of Şişli, in the European part of Istanbul. The hospital is a 784-bed tertiary care center with a daily outpatient department of 3850 patients. An ultrafiltration membrane was used as the experimental treatment, and the influence of the pressure exerted on the membranes, ranging from 1 to 3 bar, was examined. The permeate flux across the membrane was observed to identify the points of membrane fouling. The global COD and BOD5 removal efficiencies were 54% and 75%, respectively, for ultrafiltration, all the suspended solids (SST) removal efficiencies were above 90%, and a successful removal of the pathogenic bacteria measured was achieved.
Keywords: hospital wastewater, membrane, ultrafiltration, treatment
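The removal efficiencies quoted above follow from the standard mass-balance formula, efficiency = (C_in − C_out) / C_in × 100. A small sketch, with hypothetical influent and effluent concentrations chosen only so that the results match the reported 54% COD and 75% BOD5 removals:

```python
def removal_efficiency(influent: float, effluent: float) -> float:
    """Percent removal of a pollutant across a treatment step."""
    return 100.0 * (influent - effluent) / influent

# Hypothetical concentrations in mg/L; the actual influent and effluent
# values are not reported in the abstract, only the efficiencies.
cod_in, cod_out = 500.0, 230.0
bod_in, bod_out = 200.0, 50.0

print(f"COD removal:  {removal_efficiency(cod_in, cod_out):.0f}%")   # 54%
print(f"BOD5 removal: {removal_efficiency(bod_in, bod_out):.0f}%")   # 75%
```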
Procedia PDF Downloads 304
764 Healthcare Fire Disasters: Readiness, Response and Resilience Strategies: A Real-Time Experience of a Healthcare Organization of North India
Authors: Raman Sharma, Ashok Kumar, Vipin Koushal
Abstract:
Healthcare facilities are usually seen as havens of protection for managing external incidents, but the situation becomes far more challenging when such facilities are themselves affected by internal hazards. Such internal hazards are arguably more disruptive than external incidents, as the patients affected are dependent on supportive measures and are neither in a position to respond to such a crisis nor aware of how to respond. The situation becomes even more arduous and exigent if critical care areas such as Intensive Care Units (ICUs) and Operating Rooms (ORs) are involved, because the critical condition of the patients housed there makes it difficult to move them at short notice. Healthcare organisations use different types of electrical equipment, inflammable liquids, and medical gases, often at a single point of use; hence, any error can spark a fire. Even though healthcare facilities face many fire hazards, damage caused by smoke rather than flames is often more severe. Besides burns, smoke inhalation is the primary cause of fatality in fire-related incidents. The greatest cause of illness and mortality in fire victims, particularly in enclosed places, appears to be the inhalation of fire smoke, which contains a complex mixture of gases in addition to carbon monoxide. Therefore, healthcare organizations are required to have a well-planned disaster mitigation strategy and proactive, well-prepared manpower to cater for all exigencies resulting from internal as well as external hazards. This case report delineates a true OR fire incident in the emergency operation theatre (OT) of a tertiary care multispecialty hospital and details the real-life challenges encountered by OR staff in preserving both life and property. 
No adverse event was reported during or after this fire incident; nevertheless, this case report aims to collate the lessons identified from the incident in a sequential and logical manner. Timely smoke evacuation, and prevention of the spread of smoke to adjoining patient care areas by adopting appropriate measures, viz. compartmentation, pressurisation, dilution, ventilation, buoyancy, and airflow, helped to reduce smoke-related fatalities. Henceforth, precautionary measures may be implemented to mitigate such incidents. Careful coordination, continuous training, and fire drill exercises can improve overall outcomes and minimize the possibility of these potentially fatal problems, thereby making the healthcare environment safer for every worker and patient.
Keywords: healthcare, fires, smoke, management, strategies
Procedia PDF Downloads 68
763 Comprehensive, Up-to-Date Climate System Change Indicators, Trends and Interactions
Authors: Peter Carter
Abstract:
Comprehensive climate change indicators and trends inform the state of the climate (system) with respect to present and future climate change scenarios and the urgency of mitigation and adaptation. With data records now going back many decades, indicator trends can complement model projections. They are provided as datasets by several climate monitoring centers, reviewed by state-of-the-climate reports, and documented by the IPCC assessments. Up-to-date indicators are provided here. Rates of change are instructive, as are extremes. The indicators include greenhouse gas (GHG) emissions (natural and synthetic), cumulative CO2 emissions, atmospheric GHG concentrations (including CO2 equivalent), stratospheric ozone, surface ozone, radiative forcing, global average temperature increase, land temperature increase, zonal temperature increases, carbon sinks, soil moisture, sea surface temperature, ocean heat content, ocean acidification, ocean oxygen, glacier mass, Arctic temperature, Arctic sea ice (extent and volume), northern hemisphere snow cover, permafrost indices, Arctic GHG emissions, ice sheet mass, and sea level rise. Global warming is not the most reliable single metric for the climate state; radiative forcing, atmospheric CO2 equivalent, and ocean heat content are more reliable. Global warming does not capture future commitment, whereas atmospheric CO2 equivalent does. Cumulative carbon is used for estimating carbon budgets. The forcing of aerosols is briefly addressed. Indicator interactions are included; in particular, indicators can provide insight into several crucial global warming amplifying feedback loops, which are explained. All indicators are increasing (adversely), most as fast as ever and some faster. One particularly pressing indicator is rapidly increasing global atmospheric methane; in this respect, methane emissions and sources are covered in more detail. 
In their application, indicators used in assessing safe planetary boundaries are included. Indicators are considered with respect to recently published papers on possible catastrophic climate change and climate system tipping thresholds. They are relevant to climate-change policy. In particular, relevant policies include the 2015 Paris Agreement on “holding the increase in the global average temperature to well below 2°C above pre-industrial levels and pursuing efforts to limit the temperature increase to 1.5°C above pre-industrial levels” and the 1992 UN Framework Convention on Climate Change, which aims at “stabilization of greenhouse gas concentrations in the atmosphere at a level that would prevent dangerous anthropogenic interference with the climate system.”
Keywords: climate change, climate change indicators, climate change trends, climate system change interactions
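As an illustration of why radiative forcing and atmospheric CO2 equivalent serve as compact state metrics, the sketch below uses the widely cited simplified expression RF = 5.35 ln(C/C0) W/m² for CO2 forcing (the Myhre et al. 1998 approximation) and inverts it to convert a total forcing into a CO2-equivalent concentration. The input concentration and the non-CO2 forcing term are illustrative assumptions, not the paper’s reported values.

```python
import math

def co2_forcing(c_ppm: float, c0_ppm: float = 278.0) -> float:
    """Simplified CO2 radiative forcing in W/m^2: RF = 5.35 * ln(C/C0),
    where C0 is the pre-industrial concentration."""
    return 5.35 * math.log(c_ppm / c0_ppm)

def co2_equivalent(total_forcing: float, c0_ppm: float = 278.0) -> float:
    """Invert the expression above: the CO2 concentration that alone
    would produce the given total forcing from all greenhouse gases."""
    return c0_ppm * math.exp(total_forcing / 5.35)

# Illustrative inputs, not official figures.
f_co2 = co2_forcing(420.0)      # forcing from CO2 alone at ~420 ppm
f_total = f_co2 + 0.95          # assumed (hypothetical) non-CO2 GHG forcing
print(f"CO2 forcing:    {f_co2:.2f} W/m^2")
print(f"CO2-equivalent: {co2_equivalent(f_total):.0f} ppm")
```

This is why CO2 equivalent carries future commitment in a way a temperature reading does not: it aggregates all gas concentrations into one number on the forcing scale.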
Procedia PDF Downloads 105
762 The Strategic Gas Aggregator: A Key Legal Intervention in an Evolving Nigerian Natural Gas Sector
Authors: Olanrewaju Aladeitan, Obiageli Phina Anaghara-Uzor
Abstract:
Despite the abundance of natural gas deposits in Nigeria and the immense potential this presents for both domestic and export-oriented revenue, there exists an imbalance: export is preferred over the development and optimal utilization of natural gas for the domestic industry. Considerable amounts of gas are still being wasted by flaring in the country to this day. Although the government has set in place initiatives to harness gas at the flare and thereby reduce the volumes flared, gas producers would rather direct the gas produced to the export market, whereas gas apportioned to the domestic market is often marred by low domestic gas prices that discourage producers. The exported fraction of gas production no doubt yields healthy revenues for the government and an encouraging return on investment for the gas producers, and for this reason export sales remain enticing and preferable to the domestic sale of gas. This export pull, if left unchecked, impacts negatively on the domestic market, which is in no position to match prices on the international markets. The issue of gas price remains critical to the optimal development of the domestic gas industry, in that it forms the basis for producers’ investment decisions on the allocation of their scarce resources and on which projects to channel their output to in order to maximize profit. In order to rebalance the domestic industry and streamline the market for gas, the Gas Aggregation Company of Nigeria, also known as the Strategic Aggregator, was proposed under the Nigerian Gas Master Plan of 2008 and then established pursuant to the National Gas Supply and Pricing Regulations of 2008 to implement the domestic gas supply obligation, which focuses on ramping up gas volumes for domestic utilization by mandatorily requiring each gas producer to dedicate a portion of its gas production to domestic utilization before having recourse to the export market. 
The 2008 Regulations further stipulate penalties in the event of non-compliance. This study, in the main, assesses the adequacy of the legal framework for the Nigerian gas industry, given that the operational laws are structured more for oil than for gas; examines the legal basis for the Strategic Aggregator in the light of the Domestic Gas Supply and Pricing Policy 2008 and the National Domestic Gas Supply and Pricing Regulations 2008; and makes a case for a review of the pivotal role of the Aggregator in the Nigerian gas market. In undertaking this assessment, a doctrinal research methodology was adopted. Findings from the research reveal the reawakening of the Federal Government to the immense potential of its gas industry as a critical sector of its economy and the need for a sustainable domestic natural gas market. A case for reviewing the ownership structure of the Aggregator, so that it comprises a balanced mix of the Federal Government, gas producers, and other key stakeholders in order to ensure the effective implementation of the domestic supply obligations, becomes all the more imperative.
Keywords: domestic supply obligations, natural gas, Nigerian gas sector, strategic gas aggregator
Procedia PDF Downloads 227
761 'Marching into the Classroom' a Second Career in Education for Ex-Military Personnel
Authors: Mira Karnieli, Shosh Veitzman
Abstract:
In recent years, due to transitions in teacher education, professional identities are changing, and in many countries the education system is absorbing ex-military personnel. The aim of this research is to investigate the phenomenon of retired officers in Israel who choose education as a second career, and the training provided to them. The phenomenon of retired military permanent-service officers pursuing a career in education is not unique to Israel. In the United States and the United Kingdom, for example, government-supported accelerated programs (Troops to Teachers) are run for ex-military personnel (soldiers and officers) with a view to their entry into the education system. These programs direct ex-military personnel to teacher education and training courses to obtain teaching certification. The present study, however, focused specifically on senior officers with a full academic education; most of the participants hold second degrees in a variety of fields. They all retired from rich military careers, including roles in command, counseling, training, guidance, and management. The research included 80 participants, men and women. Data were drawn from in-depth interviews and a questionnaire. The conceptual framework that guided this study was mixed methods: a qualitative-phenomenological methodology using in-depth interviews, together with a questionnaire. The study attempted to understand the motives and personal perceptions behind the choice of teaching. Were the participants able to identify prior skills that they had accumulated throughout their years of service? What were these skills, and which (if any) would stand them in good stead for a career in teaching? In addition, they were asked how they perceived the training program’s contribution to their professionalization and integration into the education system. The data were independently coded by the researchers. 
Subsequently, the data were discussed by both researchers, codes were developed, and conceptual categories were formed. Analysis of the data shows this population to be characterized by high motivation for studying, professionalization, contribution to society, and a deep sense of commitment to education. All of them had previously acquired a profession unrelated to education; however, their motives for choosing to teach are related to their wish to give expression to their leadership experience and ability, and their desire to have an influence and bring about change. This is derived from personal commitment, as well as from a worldview and value system supportive of education. In other words, they feel committed and act out of a sense of vocation. In conclusion, it should be emphasized that all the research participants began working in education immediately upon completing the training program. They perceived this path as a way of realizing a mission, despite the low status of the teaching profession in Israel and low teacher salaries.
Keywords: cross-boundary skills, lifelong learning, professional identities, teaching as a second career, training program
Procedia PDF Downloads 198
760 Developing Confidence of Visual Literacy through Using MIRO during Online Learning
Authors: Rachel S. E. Lim, Winnie L. C. Tan
Abstract:
Visual literacy is about making meaning through the interaction of images, words, and sounds. Graphic communication students typically develop visual literacy through the critique and production of studio-based projects for their portfolios. However, the abrupt switch to online learning during the COVID-19 pandemic made it necessary to consider new strategies of visualization and planning to scaffold teaching and learning. This study therefore investigated how MIRO, a cloud-based visual collaboration platform, could be used to develop the visual literacy confidence of 30 Diploma in Graphic Communication students attending a graphic design course at a Singapore arts institution. Due to COVID-19, the course was taught fully online throughout a 16-week semester. Guided by Kolb’s Experiential Learning Cycle, the two lecturers developed students’ engagement with visual literacy concepts through different activities that facilitated concrete experience, reflective observation, abstract conceptualization, and active experimentation. Throughout the semester, students created, collaborated, and centralized communication in MIRO, using its infinite canvas, smart frameworks, a robust set of widgets (e.g., sticky notes, freeform pen, shapes, arrows, smart drawing, emoticons), and platform capabilities that enable asynchronous and synchronous feedback and interaction. Students then drew upon these multimodal experiences to brainstorm, research, and develop their motion design project. A survey was used to examine students’ perceptions of engagement (E), confidence (C), and learning strategies (LS). Using multiple regression, it was found that the use of MIRO helped students develop confidence (C) with visual literacy, which predicted the performance score (PS) measured against their application of visual literacy to the creation of their motion design project. 
While students’ learning strategies (LS) with MIRO did not directly predict confidence (C) or performance score (PS), they fostered positive perceptions of engagement (E), which in turn predicted confidence (C). Content analysis of students’ open-ended survey responses about their learning strategies (LS) showed that MIRO provides organization and structure in documenting learning progress, in tandem with establishing standards and expectations as a preparatory ground for generating feedback. With the clarity and sequencing of these conditions set in place, the prerequisites lead to the next level of personal action: self-reflection, self-directed learning, and time management. The study results show that the affordances of MIRO can develop visual literacy and make up for the potential pitfalls of student isolation, communication, and engagement during online learning. How lecturers could use MIRO to orientate students for learning in visual literacy and studio-based projects in future is also discussed.
Keywords: design education, graphic communication, online learning, visual literacy
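The reported path, in which engagement (E) predicts confidence (C) and confidence predicts performance score (PS), can be sketched as two simple regressions. The generated scores and path coefficients below are hypothetical stand-ins for the survey data, intended only to show the shape of the analysis:

```python
import numpy as np

def ols_slope(x, y):
    """Simple-regression slope and R^2 of y on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return beta[1], 1.0 - resid.var() / y.var()

# Synthetic data mirroring the reported path E -> C -> PS.
rng = np.random.default_rng(7)
n = 30                                          # cohort size in the study
E = rng.normal(size=n)                          # engagement
C = 0.7 * E + rng.normal(scale=0.4, size=n)     # confidence, driven by E
PS = 0.8 * C + rng.normal(scale=0.4, size=n)    # performance, driven by C

b_ec, r2_ec = ols_slope(E, C)
b_cp, r2_cp = ols_slope(C, PS)
print(f"E -> C:  slope={b_ec:.2f}, R^2={r2_ec:.2f}")
print(f"C -> PS: slope={b_cp:.2f}, R^2={r2_cp:.2f}")
```

With data generated this way, both slopes come out positive and close to the true coefficients, the pattern a mediated effect of engagement on performance through confidence would show.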
Procedia PDF Downloads 114
759 Evidence-Based Policy Making to Improve Human Security in Pakistan
Authors: Ayesha Akbar
Abstract:
Pakistan is moving from a security state to a welfare state despite several internal and external security challenges. Human security signifies a varied approach in different regions, depending upon leadership and policy priorities. The link between human development and economic growth is not automatic; it has to be created consciously through forward-looking policies and strategies by national governments. There are seven components, or categories, of human security: economic security, personal security, health security, environmental security, food security, community security, and political security. The increasing interest of the international community in clearly understanding the dimensions of human security has also prompted Pakistani scholars to ponder the issue and delineate the lines of human security. A great deal of work has been done, or is in progress, to evaluate human security indicators in Pakistan. Notwithstanding this work, human security in Pakistan is not satisfactory. A range of deteriorating human development indicators that lie within the domain of human security leaves certain inquiries to be answered. What are the dimensions of human security in Pakistan? How are they being dealt with from the perspective of policy and institutions in terms of their operationalization in Pakistan? Does the human security discourse reflect evidence-based policy changes? The methodology is broadly based on qualitative methods, including interviews and content analysis of policy documents. Pakistan is among the most populous countries in the world and faces high vulnerability to climate change. The literacy rate has gone down, while a surging youth bulge must be accommodated in the job market. The increasing population is creating food problems, as resources have not been able to keep up with rising demands for food and other social amenities of life. A majority of the people face acute poverty. 
Health outcomes are also not satisfactory, with high infant and maternal mortality rates. Pakistan is on the verge of a water crisis, as water resources are depleting rapidly under high demand from the agriculture and energy sectors. Pakistan is striving hard to deal with the declining state of human security, but the dilemma is a lack of resources, which hinders it from meeting emerging demands. The government needs to bring about more change by scaling up avenues for economic growth and enhancing human resource capacity. A modern, performance-driven culture, with the integration of technology, is required to deliver efficient and effective services. As part of an already fast-tracked reform process, e-governance and evidence-based policy mechanisms are being instilled in government processes for better governance and evidence-based decisions.
Keywords: governance, human development index, human security, Pakistan, policy
Procedia PDF Downloads 253