Search results for: digital image
2595 Surgical Imaging in Ancient Egypt
Authors: Haitham Nabil Zaghlol Hasan
Abstract:
This research studies the science of surgery and imaging in ancient Egypt: how surgical cases were diagnosed, whether caused by injury or by disease requiring surgical intervention, and how they were treated. The ancient Egyptian physician tried to move beyond magical and theological thinking toward a stand-alone experimental science; physicians were able to distinguish between diseases and divided them into internal and external diseases, a division that persists in modern medicine to this day. Apart from skeletons, there is no evidence from which to gauge human knowledge of medicine and surgery in prehistory, though it is not unlikely that people of those times were familiar with some means of treatment. Surgery in the Stone Age was rudimentary: flint, trimmed in a certain way, was used as a lancet to slit and open the skin, and wooden tree branches were used to make splints for treating bone fractures. Surgery developed further once copper was discovered, which advanced Egyptian civilization and brought more modern tools into the operating theater, such as the knife or scalpel. There is evidence of surgery performed in ancient Egypt during the dynastic period (3200-323 BC). The climate and environmental conditions have preserved medical papyri and human remains that confirm knowledge of surgical methods, including sedation. The importance the ancient Egyptians attached to surgery is evidenced by scenes depicting pathological conditions and surgical procedures, although an image alone is not sufficient to prove a pathology, its presence in ancient Egypt, or its method of treatment. A number of medical papyri, especially the Edwin Smith and Ebers papyri, prove the ancient Egyptian surgeon's knowledge of the pathological conditions that required surgical intervention; otherwise, their diagnosis and treatment would not have been described with such accuracy in these texts.
Some surgeries are described in the surgical section of the Ebers papyrus (recipes 863 to 877). The level of surgery in ancient Egypt was high, and operations such as hernia and aneurysm repair were performed; however, no lengthy explanation of the various surgeries has reached us, and the surgeon usually noted only "treated surgically". It is evident from the Ebers papyrus that sharp surgical tools and cautery were used in operations where bleeding was expected, such as hernias, arterial sacs, and tumors.
Keywords: egypt, ancient_egypt, civilization, archaeology
Procedia PDF Downloads 70
2594 Inappropriate Effects Which the Use of Computer and Playing Video Games Have on Young People
Authors: Maja Ruzic-Baf, Mirjana Radetic-Paic
Abstract:
The use of computers by children has many positive aspects, including the development of memory, learning methods, problem-solving skills, and a feeling of one's own competence and self-confidence. Playing online video games can encourage spending time with peers who have similar interests, as well as communication; it develops coordination, spatial relations, and presentation. On the other hand, the Internet enables quick access to different information and the exchange of experiences. How children use computers, and what the negative effects of this can be, depends on various factors. ICT has improved and become easily accessible to everyone. In the past 12 years, so many video games have been made that some of them are even free to play. Young people, and even some adults, have simply started to forget about the real outside world, because in that other, digital world they have found something that makes them feel more worthy.
This article presents the use of ICT, forms of behavior, and addiction to online video games.
Keywords: addiction to video games, behaviour, ICT, young people
Procedia PDF Downloads 547
2593 Object-Scene: Deep Convolutional Representation for Scene Classification
Authors: Yanjun Chen, Chuanping Hu, Jie Shao, Lin Mei, Chongyang Zhang
Abstract:
Traditional image classification is based on encoding schemes (e.g., the Fisher Vector or the Vector of Locally Aggregated Descriptors) computed over low-level image features (e.g., SIFT, HoG). Compared to these low-level local features, deep convolutional features obtained at the mid-level layers of convolutional neural networks (CNNs) carry richer information but lack geometric invariance. In scene classification, objects are scattered across the scene with varying size, category, layout, and number, so it is crucial to find the distinctive objects in a scene as well as their co-occurrence relationships. In this paper, we propose a method that takes advantage of both deep convolutional features and the traditional encoding scheme while accounting for object-centric and scene-centric information. First, to exploit object-centric and scene-centric information, two CNNs trained separately on the ImageNet and Places datasets are used as pre-trained models to extract deep convolutional features at multiple scales, producing dense local activations. By analyzing the performance of the two CNNs at multiple scales, we find that each CNN works better in a different scale range; a scale-wise CNN adaptation is reasonable, since objects in a scene appear at their own specific scales. Second, a Fisher kernel is applied to aggregate a global representation at each scale, and these representations are merged into a single vector using a post-processing method called scale-wise normalization. The essence of the Fisher Vector lies in the accumulation of first- and second-order differences; hence, scale-wise normalization followed by average pooling balances the influence of each scale, since a different number of features is extracted at each one. Third, the Fisher Vector representation built on the deep convolutional features is fed to a linear Support Vector Machine, a simple yet efficient way to classify the scene categories.
Experimental results show that scale-specific feature extraction and normalization with CNNs trained on object-centric and scene-centric datasets boost the accuracy on MIT Indoor67 from 74.03% to 79.43% when only two scales are used (compared to results at a single scale). The result is comparable to state-of-the-art performance, which shows that the representation can be applied to other visual recognition tasks.
Keywords: deep convolutional features, Fisher Vector, multiple scales, scale-specific normalization
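The scale-merging step described above (scale-wise normalization followed by average pooling) can be sketched as follows. This is a minimal illustration that assumes the per-scale Fisher vectors have already been computed; the vectors and their dimensionality are illustrative, not taken from the paper.

```python
import math

def l2_normalize(vec):
    # Scale-wise normalization: L2-normalize the aggregated vector of one
    # scale, so scales that produce more local features do not dominate.
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec] if norm > 0 else list(vec)

def merge_scales(per_scale_vectors):
    # Normalize each scale's global representation, then average-pool the
    # normalized vectors into a single descriptor for the linear SVM.
    normalized = [l2_normalize(v) for v in per_scale_vectors]
    dim = len(normalized[0])
    return [sum(v[i] for v in normalized) / len(normalized) for i in range(dim)]

# Two illustrative "Fisher vectors" computed at two scales.
scale_a = [3.0, 4.0]  # L2 norm 5 -> [0.6, 0.8]
scale_b = [0.0, 2.0]  # L2 norm 2 -> [0.0, 1.0]
descriptor = merge_scales([scale_a, scale_b])
print(descriptor)  # ~ [0.3, 0.9]
```

In practice each per-scale vector would be a high-dimensional Fisher Vector over the CNN activations; the balancing effect comes entirely from the per-scale normalization before pooling.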
Procedia PDF Downloads 333
2592 Riesz Mixture Model for Brain Tumor Detection
Authors: Mouna Zitouni, Mariem Tounsi
Abstract:
This research introduces an application of the Riesz mixture model for medical image segmentation for accurate diagnosis and treatment of brain tumors. We propose a pixel classification technique based on the Riesz distribution, derived from an extended Bartlett decomposition. To our knowledge, this is the first study addressing this approach. The Expectation-Maximization algorithm is implemented for parameter estimation. A comparative analysis, using both synthetic and real brain images, demonstrates the superiority of the Riesz model over a recent method based on the Wishart distribution.
Keywords: EM algorithm, segmentation, Riesz probability distribution, Wishart probability distribution
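The Expectation-Maximization loop mentioned above can be sketched in miniature. Since the abstract does not give the Riesz density, this sketch substitutes a two-component 1-D Gaussian mixture as a stand-in; the E-step/M-step structure is the same, only the component density changes. The synthetic "intensity" data are hypothetical.

```python
import math
import random

def em_mixture_1d(data, iters=50):
    # EM for a two-component 1-D Gaussian mixture (a stand-in for the Riesz
    # mixture of the paper). E-step: pixel membership probabilities.
    # M-step: re-estimate mixing weights, means, and variances.
    mu = [min(data), max(data)]
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibilities of each component for each sample.
        resp = []
        for x in data:
            p = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M-step: closed-form parameter updates from the responsibilities.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, data)) / nk, 1e-6)
    return mu, var, pi

# Synthetic "tumor" and "background" intensity clusters around 2 and 10.
random.seed(0)
data = ([random.gauss(2, 0.5) for _ in range(200)]
        + [random.gauss(10, 0.5) for _ in range(200)])
mu, var, pi = em_mixture_1d(data)
print(sorted(mu))  # means recovered near 2 and 10
```

Pixel classification then assigns each pixel to the component with the highest responsibility; the Riesz version replaces the Gaussian density with the Riesz density inside the E-step.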
Procedia PDF Downloads 21
2591 The Impact of Social Customer Relationship Management on Brand Loyalty and Reducing Co-Destruction of Value by Customers
Authors: Sanaz Farhangi, Habib Alipour
Abstract:
The main objective of this paper is to explore how social media, as a critical platform, can increase interactions between the tourism sector and its stakeholders. Nowadays, human interaction through social media in many areas, especially tourism, provides various experiences and information that users share and discuss. Organizations and firms can gain customer loyalty through social media platforms even when consumers hold a negative image of the product or service; such a negative image can be reduced through constant communication between producers and consumers, especially with the availability of new technology. Effective management of customer relationships in social media therefore creates an extraordinary opportunity for organizations to enhance value and brand loyalty. In this study, we seek to develop a conceptual model addressing factors such as social media, SCRM, and customer engagement that affect brand loyalty and diminish co-destruction. To support this model, we scanned the relevant literature using a comprehensive category of ideas in the context of marketing and customer relationship management. This allows exploring whether there is any relationship between social media, customer engagement, social customer relationship management (SCRM), co-destruction, and brand loyalty. SCRM is explored as a moderating factor in the relationship between customer engagement and social media, securing brand loyalty and diminishing the co-destruction of the company’s value. Although numerous studies have been conducted on the impact of social media on customers and marketing behavior, there are limited studies investigating the relationship between SCRM, brand loyalty, and negative e-WOM, which results in the reduction of the co-destruction of value by customers. This study is an important contribution to the tourism and hospitality industry in orienting customer behavior in social media using SCRM.
This study revealed that, through social media platforms, management can generate discussion and engagement about products and services, which helps customers feel positive toward the firm and its products. The study also revealed that customers’ complaints through social media have a multi-purpose effect: they can degrade the value of the product, but at the same time they motivate the firm to overcome its weaknesses and correct its shortcomings. The study also has implications for managers and practitioners, especially in the tourism and hospitality sector. Future research directions and limitations of the research are also discussed.
Keywords: brand loyalty, co-destruction, customer engagement, SCRM, tourism and hospitality
Procedia PDF Downloads 117
2590 Low Cost LiDAR-GNSS-UAV Technology Development for PT Garam’s Three Dimensional Stockpile Modeling Needs
Authors: Mohkammad Nur Cahyadi, Imam Wahyu Farid, Ronny Mardianto, Agung Budi Cahyono, Eko Yuli Handoko, Daud Wahyu Imani, Arizal Bawazir, Luki Adi Triawan
Abstract:
Unmanned aerial vehicle (UAV) technology offers advantages in cost efficiency and data retrieval time. Technologies such as UAV, GNSS, and LiDAR can be combined into one integrated system in which each covers the others' deficiencies. This integration aims to increase the accuracy of calculating the volume of the land stockpiles of PT Garam (a salt company). The UAV is used to obtain geometric data and capture the textures that characterize the structure of objects. This study uses the Taror 650 Iron Man drone with four propellers, which can fly for 15 minutes. Point clouds can also be derived from the acquired images in software, using the principles of photogrammetry and Structure from Motion. LiDAR acquisition enables the creation of point clouds, three-dimensional models, digital surface models, contours, and orthomosaics with high accuracy, but it has the drawback that its coordinate data are in a local reference frame. Therefore, the researchers use multi-sensor GNSS, LiDAR, and drone technology to map the salt stockpiles in open fields and warehouses, a survey that PT Garam carries out twice a year and previously performed with terrestrial methods and manual counting of sacks. LiDAR needs to be combined with a UAV to overcome acquisition limitations, because a ground-based scan only covers the right and left sides of an object, particularly when applied to a salt stockpile. The UAV is flown to provide wide coverage, with the 200-gram LiDAR system integrated so that the scanning angle can be kept optimal during flight. Using LiDAR for low-cost mapping surveys will make it easier for surveyors and academics to obtain fairly accurate data at a more economical price: as a survey tool, LiDAR is available at a low price, around 999 USD, and the device can produce detailed data.
Therefore, to minimize operational costs, surveyors can use the low-cost LiDAR, GNSS, and UAV combination at a price of around 638 USD. The data generated by this sensor take the form of a three-dimensional visualization of an object's shape. This study aims to combine low-cost GPS measurements with low-cost LiDAR, processed using free software. The low-cost GPS generates position data as latitude and longitude coordinates, yielding X, Y, and Z values that help georeference the detected objects. The LiDAR detects objects, including the height of the entire environment at the location, and the resulting data are calibrated with pitch, roll, and yaw to obtain the vertical height of the existing contours. The study conducted an experiment on the roof of a building within a radius of approximately 30 meters.
Keywords: LiDAR, unmanned aerial vehicle, low-cost GNSS, contour
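Once the georeferenced point cloud is rasterized into a height grid above the stockpile base, the volume calculation itself is straightforward: each grid cell contributes its height times its footprint area. The abstract does not give the computation, so this is a minimal sketch with a hypothetical grid.

```python
def stockpile_volume(heights, cell_area):
    # Approximate stockpile volume from a rasterized point cloud:
    # each grid cell contributes (height above base) x (cell footprint area).
    return sum(h * cell_area for row in heights for h in row if h > 0)

# Hypothetical 3x3 grid of heights in metres on a 0.5 m x 0.5 m raster
# (cell area 0.25 m^2); zeros are cells outside the pile.
grid = [
    [0.0, 1.0, 0.0],
    [1.0, 2.0, 1.0],
    [0.0, 1.0, 0.0],
]
print(stockpile_volume(grid, 0.25))  # (4 * 1.0 + 2.0) * 0.25 = 1.5 m^3
```

A finer raster reduces the discretization error at the pile edges, which is where the LiDAR/photogrammetry point density matters most.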
Procedia PDF Downloads 97
2589 Assessment of Seeding and Weeding Field Robot Performance
Authors: Victor Bloch, Eerikki Kaila, Reetta Palva
Abstract:
Field robots are an important tool for enhancing efficiency and decreasing the climatic impact of food production. A number of commercial field robots exist; however, since this technology is still new, the robots' advantages and limitations, as well as methods for using them optimally, are still unclear. In this study, the performance of a commercial field robot for seeding and weeding was assessed. A 2-ha research sugar beet field with 0.5 m row width was used for testing, which included robotic sowing of sugar beet and weeding five times during the first two months of growth. About three and five percent of the field were used as untreated and chemically weeded control areas, respectively. Plant detection was based on the exact plant locations, without image processing. The robot was equipped with six seeding and weeding tools, including passive between-row harrow hoes and active hoes cutting within rows between the plants, and it moved at a maximum speed of 0.9 km/h. The robot's performance was assessed by image processing. Field images were collected by an action camera with a resolution of 27 Mpixels installed on the robot at a height of 2 m, and by a drone with a 16 Mpixel camera flying at 4 m height. To detect plants and weeds, a YOLO model was trained with transfer learning from two available datasets. A preliminary analysis of the entire field showed that, in the areas treated by the robot, the average weed density varied across the field from 6.8 to 9.1 weeds/m² (compared with 0.8 in the chemically treated area and 24.3 in the untreated area), the average weed density inside rows was 2.0-2.9 weeds/m (compared with 0 in the chemically treated area), and the emergence rate was 90-95%. This information about the robot's performance is highly important for the application of robotics to field tasks.
With the help of the developed method, performance can be assessed several times during growth, according to the robotic weeding frequency. Farmers using it can know the field condition and the efficiency of the robotic treatment over the whole field. Farmers and researchers can develop optimal strategies for using the robot, covering seeding and weeding timing, robot settings, and plant and field parameters and geometry. Robot producers can obtain quantitative information from an actual working environment and improve the robots accordingly.
Keywords: agricultural robot, field robot, plant detection, robot performance
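The weed-density figures reported above reduce to counting confident detector hits per imaged area. A minimal sketch, assuming the detector output is a list of (label, confidence) pairs and a hypothetical 0.5 confidence cutoff; labels and numbers are illustrative, not from the study.

```python
def weed_density(detections, area_m2, min_conf=0.5):
    # Weed density over an imaged patch, as used in the field assessment:
    # detections is a list of (class_label, confidence) pairs from the
    # YOLO-style detector; only confident "weed" hits are counted.
    weeds = sum(1 for label, conf in detections
                if label == "weed" and conf >= min_conf)
    return weeds / area_m2

# Hypothetical detector output for a 2 m^2 image patch.
dets = [("weed", 0.9), ("plant", 0.8), ("weed", 0.4), ("weed", 0.7)]
print(weed_density(dets, 2.0))  # 2 confident weeds / 2 m^2 = 1.0 weeds/m^2
```

Running the same computation on patches taken before and after each robotic pass gives the per-treatment density figures quoted in the abstract.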
Procedia PDF Downloads 88
2588 Chloroform-Formic Acid Solvent Systems for Nanofibrous Polycaprolactone Webs
Authors: I. Yalcin Enis, J. Vojtech, T. Gok Sadikoglu
Abstract:
In this study, polycaprolactone (PCL) was dissolved in a chloroform:ethanol solvent system at a concentration of 18 w/v%. 1, 2, 4, and 6 droplets of formic acid were added separately to the prepared 10 ml PCL-chloroform:ethanol solutions. Fibrous webs were produced by the electrospinning technique. The morphology of the webs was investigated using scanning electron microscopy (SEM), and fiber diameters were measured with the ImageJ software. The effect of adding formic acid to the commonly used chloroform solvent on fiber morphology was examined.
Keywords: chloroform, electrospinning, formic acid, polycaprolactone, fiber
Procedia PDF Downloads 278
2587 Gender Differences in Attitudes to Technology in Primary Education
Authors: Radek Novotný, Martina Maněnová
Abstract:
This article presents a summary of reviews on gender differences in the perception of information and communication technology (ICT) by pupils in primary education. The article outlines the role of ICT in primary education, then summarizes different studies on the use of ICT in primary education from the point of view of gender. It also presents specific gender differences in knowledge of the modalities of use of specialized digital tools and in the perception of, and value assigned to, ICT. Accordingly, the article provides insight into the background of gender differences in performance in relation to ICT, in order to determine the complex nature of pupils' attitudes toward ICT.
Keywords: ICT in primary education, attitudes to ICT, gender differences, gender and ICT
Procedia PDF Downloads 485
2586 Media Effects in Metamodernity
Authors: D. van der Merwe
Abstract:
Despite unprecedented changes in the media formats, typologies, delivery channels, and content that can be seen between Walter Benjamin’s writings from the era of modernity and those observable in the contemporary era of metamodernity, parallels can be drawn between the media effects experienced by audiences across the temporal divide. This paper will explore alignments between these two eras as evidenced by various media effects. First, convergence in the historical paradigm of film will be compared with the same effect as seen within the digital domain. Second, the uses and gratifications theory will be explored to delineate parallels in terms of user behaviours across both eras, regardless of medium. Third, cultivation theory and its role in manipulation via the media in both modernity and metamodernity will be discussed. Lastly, similarities between the archetypal personae populating each era will be unpacked.
Keywords: convergence, cultivation theory, media effects, metamodernity, uses and gratifications theory
Procedia PDF Downloads 21
2585 Integrating Cyber-Physical System toward Advance Intelligent Industry: Features, Requirements and Challenges
Authors: V. Reyes, P. Ferreira
Abstract:
In response to high levels of competitiveness, industrial systems have evolved to improve productivity. As a consequence, rapid increases in production volume, together with customization, require lower costs, more variety, and consistent product quality. Reducing production cycle time, enabling customizability, and ensuring continuous quality improvement are key features of the advanced intelligent industry. In this scenario, customers and producers will be able to participate in the ongoing production life cycle through real-time interaction. To achieve this vision, transparency, predictability, and adaptability are the key features that give industrial systems the capability to adapt to customer demands, modifying the manufacturing process through autonomous responses and acting preventively to avoid errors. The industrial system incorporates a diverse set of components that, in advanced industry, are expected to be decentralized, to communicate end to end, and to be capable of making their own decisions through feedback. The evolution toward advanced intelligent industry defines a set of stages that empower components with intelligence and enhance efficiency up to the decision-making stage. The integrated system follows an industrial cyber-physical system (CPS) architecture whose real-time integration, based on a set of enabler technologies, links the physical and virtual worlds, generating the digital twin (DT). This instance allows sensor data to be incorporated from the real world into the virtual one and provides the transparency required for real-time monitoring and control, contributing to important features of the advanced intelligent industry while also improving sustainability.
Taking the industrial CPS as the core technology of the latest advanced intelligent industry stage, this paper reviews and highlights the correlations and contributions of the enabler technologies for the operationalization of each stage on the path toward advanced intelligent industry. From this research, a real-time integration architecture for a cyber-physical system with applications to collaborative robotics is proposed, and the functionalities and issues required to endow the industrial system with adaptability are identified.
Keywords: cyber-physical systems, digital twin, sensor data, system integration, virtual model
Procedia PDF Downloads 119
2584 Second Time’s a Charm: The Intervention of the European Patent Office on the Strategic Use of Divisional Applications
Authors: Alissa Lefebre
Abstract:
It might seem intuitive to hope for a fast decision on a patent grant: after all, a granted patent provides a monopoly position, which allows you to prevent others from using your technology. However, this does not take into account the strategic advantages that can be obtained from keeping patent applications pending. First, there is the financial advantage of postponing certain fees, although many applicants would probably agree that this is not the main benefit. As the scope of patent protection is only decided at grant, the pendency period introduces uncertainty among rivals. This uncertainty entails not knowing whether the patent will actually be granted and what the scope of protection will be; consequently, rivals can rely only on limited and uncertain information when deciding which technology is worth pursuing. One way to keep patent applications pending is the use of divisional applications. These applications can be filed out of a parent application as long as that parent application is still pending. This allows the applicant to pursue (part of) the content of the parent application in another application, as the divisional application cannot exceed the scope of the parent application. In a fast-moving and complex market such as tele- and digital communications, this can give applicants an effective monopoly position, as competitors are discouraged from pursuing a certain technology. Nevertheless, this practice also has downsides. First, it affects the workload of the examiners at the patent office: as the number of patent filings has been increasing over the last decades, strategies that increase this number even further are undesirable from the patent examiners' point of view. Second, a pending patent does not provide the protection of a granted patent, thus creating uncertainty not only for rivals but also for the applicant.
Consequently, the European Patent Office (EPO) came up with a "raising the bar" initiative in which it decided to tackle the strategic use of divisional applications. Over the past years, two rules have been implemented. The first rule, in 2010, introduced a time limit under which divisional applications could only be filed within 24 months of the first communication from the patent office. However, after carrying out a user feedback survey, the EPO abolished the rule again in 2014 and replaced it with a fee mechanism. The fee mechanism is still in place today, which might be an indication of a better result than the first rule change. This study tests the impact of these rules on the strategic use of divisional applications in the tele- and digital communication industry and provides empirical evidence on their success. Using three different survival models, we find overall evidence that divisional applications prolong pendency time and that only the second rule is able to tackle the strategic patenting and thus decrease pendency time.
Keywords: divisional applications, regulatory changes, strategic patenting, EPO
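The survival models mentioned above treat pendency time as the time-to-event variable, with still-pending applications right-censored. The abstract does not name the three models used; as a minimal illustration, here is the standard nonparametric Kaplan-Meier estimator of the pendency "survival" curve, applied to hypothetical data.

```python
def kaplan_meier(times, events):
    # Kaplan-Meier survival estimate: 'times' are pendency durations (months);
    # events[i] is 1 if the application's outcome (grant/withdrawal) was
    # observed, 0 if it was still pending at observation (right-censored).
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    surv, curve = 1.0, []
    for i in order:
        if events[i] == 1:
            # At each observed event, the survival probability drops by
            # the fraction of the risk set that experienced the event.
            surv *= (at_risk - 1) / at_risk
            curve.append((times[i], surv))
        at_risk -= 1  # censored cases leave the risk set without a drop
    return curve

# Hypothetical pendency times in months; the last application is censored.
print(kaplan_meier([12, 24, 36, 48], [1, 1, 1, 0]))
# -> [(12, 0.75), (24, 0.5), (36, 0.25)]
```

Comparing such curves for applications filed before and after each rule change (or a semiparametric model such as Cox regression with the rule as a covariate) is the usual way to test whether a regulation shortened pendency.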
Procedia PDF Downloads 132
2583 NFTs, between Opportunities and Absence of Legislation: A Study on the Effect of the Rulings of the OpenSea Case
Authors: Andrea Ando
Abstract:
The development of blockchain has been a major innovation in the technology field, opening the door to the creation of novel cyberassets and currencies. More recently, non-fungible tokens (NFTs) have moved to the centre of media attention. Their popularity has been increasing since 2021, and they represent the latest development in the world of distributed ledger technologies and cryptocurrencies. It seems increasingly likely that NFTs will play a more important role in our online interactions; they are taking a growing part in the arts and technology sectors. Their impact on society and the market is still very difficult to define, but they may well mark a turning point in the world of digital assets. There are some examples of their peculiar behaviour and effect in the contemporary tech market: the former CEO of the famous social media site Twitter sold an NFT of his first tweet for around £2.1 million ($2.5 million), and the National Basketball Association has created a platform to sell unique moments and memorabilia from the history of basketball through non-fungible token technology. Their growth, as might be expected, paved the way for civil disputes, mostly regarding their position under the current intellectual property law of each jurisdiction. In April 2022, the High Court of England and Wales ruled in the OpenSea case that non-fungible tokens can be considered property. The judge concluded that the cryptoasset had all the indicia of property under common law (National Provincial Bank v. Ainsworth). The research has demonstrated that the ruling of the High Court does not provide enough answers to the dilemma of whether minting an NFT violates intellectual property and/or property rights.
Indeed, if, on the one hand, the technology follows the framework set by the case law (e.g., the four criteria of Ainsworth), on the other hand, the question that arises is what is effectively protected and owned by both the creator and the purchaser. Does a person own the cryptographic code itself, which is indeed definable, identifiable, intangible, distinct, and has a degree of permanence, or what is attached to this blockchain entry, possibly even a physical object or piece of art? A simple code would not have any financial importance if it were not attached to something widely recognised as valuable. This was demonstrated first through an analysis of the expectations of intellectual property law; then, after having laid this foundation, the paper examined the OpenSea case and finally analysed whether those expectations were met.
Keywords: technology, technology law, digital law, cryptoassets, NFTs, NFT, property law, intellectual property law, copyright law
Procedia PDF Downloads 91
2582 Parallel Version of Reinhard’s Color Transfer Algorithm
Authors: Abhishek Bhardwaj, Manish Kumar Bajpai
Abstract:
An image, with its content and color scheme, is an effective medium for sharing and processing information. By changing an image's color scheme, users discover different views and perspectives of it. This phenomenon of color transfer is used by social media and other entertainment channels. The algorithm of Reinhard et al. was the first to solve this color transfer problem. In this paper, we make this algorithm efficient by introducing domain parallelism among different processors, and we comment on the factors that affect the speedup of this problem. Finally, by analyzing the experimental data, we propose a novel and efficient parallel Reinhard's algorithm.
Keywords: Reinhard et al.'s algorithm, color transferring, parallelism, speedup
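The core of Reinhard et al.'s transfer is per-channel statistics matching, and domain parallelism arises naturally because the channels are independent. A minimal sketch follows, with two simplifications clearly flagged: it works on plain per-channel values rather than the decorrelated l-alpha-beta space of the original algorithm, and it uses threads over channels as one possible realization of the paper's processor-level domain decomposition. The tiny "images" are illustrative.

```python
from concurrent.futures import ThreadPoolExecutor
import statistics

def transfer_channel(args):
    # One channel of Reinhard-style transfer: shift the source channel to
    # the target's mean and rescale it to the target's spread.
    src, tgt = args
    s_mean, t_mean = statistics.mean(src), statistics.mean(tgt)
    s_std = statistics.pstdev(src) or 1.0  # guard against a flat channel
    t_std = statistics.pstdev(tgt)
    return [(x - s_mean) * (t_std / s_std) + t_mean for x in src]

def parallel_reinhard(source, target):
    # Domain parallelism: each color channel is an independent subdomain,
    # so the three channel transfers can run concurrently.
    with ThreadPoolExecutor(max_workers=3) as pool:
        return list(pool.map(transfer_channel, zip(source, target)))

# Tiny illustrative "images": three channels of four pixels each.
source = [[0.0, 2.0, 4.0, 6.0]] * 3
target = [[10.0, 12.0, 14.0, 16.0]] * 3
print(parallel_reinhard(source, target)[0])  # [10.0, 12.0, 14.0, 16.0]
```

For large images the more profitable decomposition is usually over pixel tiles within each channel, since the per-pixel rescaling loop dominates the runtime once the channel statistics are known.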
Procedia PDF Downloads 616
2581 Emoji, the Language of the Future: An Analysis of the Usage and Understanding of Emoji across User-Groups
Authors: Sakshi Bhalla
Abstract:
On the one hand, given their seemingly simplistic, near-universal usage and understanding, emoji are dismissed as a potential step backward in the evolution of communication. On the other, their effectiveness, pervasiveness, and adaptability across and within contexts are undeniable. In this study, the responses of 40 people (categorized by age) were recorded using a uniform two-part questionnaire in which they were required to (a) identify the meaning of 15 emoji presented in isolation, and (b) interpret the meaning of the same 15 emoji placed in a context-defining posting on Twitter. Their responses were analysed for deviation both from their own responses to the emoji in isolation and from the originally intended meaning ascribed to each emoji. The analysis showed that each of the five age categories uses, understands, and perceives emoji differently, which could be attributed to their degree of exposure. For example, the youngest category (aged under 20) was the least accurate at correctly identifying emoji in isolation (~55%), and their tendency to change their response in light of context was also the lowest (~31%). However, an analysis of their individual responses showed that these first-borns of social media have reached a point where emoji no longer carry only their most literal meanings: the meaning and implication of these emoji have evolved toward their context-derived meanings, even when presented in isolation. These trends carry forward meaningfully for the other four groups as well. In the oldest category (aged over 35), however, the trends indicated lower accuracy and, therefore, a higher tendency to change responses. Viewed as a continuum, the responses indicate that, slowly and steadily, emoji are evolving from pictograms into ideograms.
That is to say, they no longer indicate a one-to-one relation between a singular form and a singular meaning; they communicate increasingly complicated ideas. This is much like the evolution of ancient hieroglyphics on papyrus reed or cuneiform on Sumerian clay tablets, which developed from simple pictograms into progressively more complex ideograms. This evolution within communication is parallel to, and contingent on, the simultaneous evolution of the platforms that carry it. What is astounding is the capacity of humans to leverage different platforms to facilitate such changes. Twitterese, as it is now called, is one instance of language adapting to the demands of the digital world. That it has no spoken component, no ostensible grammar, and no standardization of use and meaning may seem, as some suggest, to disqualify it as the 'language' of the digital world. However, that kind of declaration remains a function of time, and time alone. Keywords: communication, emoji, language, Twitter
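The comparison described above, isolated-meaning accuracy versus context-induced response change per age group, can be sketched in a few lines. The record layout and sample data below are illustrative assumptions, not the study's actual dataset:

```python
# Hypothetical sketch: scoring emoji-survey responses per age group.
from collections import defaultdict

def score_responses(records):
    """records: (age_group, correct_in_isolation, changed_in_context) tuples.
    Returns per-group accuracy on isolated emoji and context change-rate."""
    totals = defaultdict(lambda: [0, 0, 0])  # n, n_correct, n_changed
    for group, correct, changed in records:
        t = totals[group]
        t[0] += 1
        t[1] += int(correct)
        t[2] += int(changed)
    return {g: {"accuracy": c / n, "change_rate": ch / n}
            for g, (n, c, ch) in totals.items()}

# Toy responses standing in for the 40-participant questionnaire.
sample = [("<20", True, False), ("<20", False, False),
          (">35", False, True), (">35", True, True)]
stats = score_responses(sample)
```

Deviation from the intended meaning and deviation from one's own isolated reading are tracked separately, mirroring the study's two-part comparison.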
Procedia PDF Downloads 972580 Remote Sensing Application in Environmental Researches: Case Study of Iran Mangrove Forests Quantitative Assessment
Authors: Neda Orak, Mostafa Zarei
Abstract:
Environmental assessment is an important step in environmental management, and various methods and techniques have been produced and implemented for it. Remote sensing (RS) is widely used in many scientific and research fields, such as geology, cartography, geography, agriculture, forestry, land use planning, and the environment. It can show cyclical changes in earth surface objects, and it can delineate the limits of earth phenomena on the basis of recorded changes and deviations in electromagnetic reflectance. This research assessed mangrove forests by RS techniques; its aim was a quantitative analysis of the mangrove forests in the Basatin and Bidkhoon estuaries, carried out with Landsat satellite images from 1975 to 2013 matched to ground control points. These mangroves are the last distribution of the species in the northern hemisphere, so the work can provide a good background for better management of this important ecosystem. Landsat has provided valuable images for researchers to detect earth changes; this research used the MSS, TM, ETM+, and OLI sensors from 1975, 1990, 2000, and 2003-2013. Changes were studied by maximum likelihood supervised classification and the IPVI index after essential corrections such as error fixing, band combination, and georeferencing, with the 2012 image as the base image. A 2004 Google Earth image and GPS ground points (2010-2012) were used to validate the changes obtained from the satellite images. Results showed that the mangrove area in Bidkhoon in 2012 was 1119072 m2 by GPS, 1231200 m2 by maximum likelihood supervised classification, and 1317600 m2 by IPVI. The Basatin areas were, respectively, 466644 m2, 88200 m2, and 63000 m2. The final results show that the forests have declined; in Basatin this is due to human activities. The loss was offset by planting over many years, although the trend has turned downward again in recent years. Overall, satellite images show a high ability to estimate such environmental processes.
This research also showed a high correlation between image-derived indexes such as IPVI and NDVI and the ground control points. Keywords: IPVI index, Landsat sensor, maximum likelihood supervised classification, Nayband National Park
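The IPVI used above has a simple closed form, NIR / (NIR + Red), and is a rescaling of NDVI. A minimal sketch follows; the reflectance values are illustrative, not the study's Landsat data:

```python
import numpy as np

def ipvi(nir, red):
    """Infrared Percentage Vegetation Index: NIR / (NIR + Red), in [0, 1]."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return nir / (nir + red)

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

# IPVI is NDVI rescaled to [0, 1]: ipvi == (ndvi + 1) / 2,
# which avoids negative values and one subtraction per pixel.
v_ipvi = ipvi(0.6, 0.2)
v_ndvi = ndvi(0.6, 0.2)
```

Because the two indexes are affinely related, classifications thresholded on one can be mapped directly onto the other.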
Procedia PDF Downloads 2942579 Detecting Tomato Flowers in Greenhouses Using Computer Vision
Authors: Dor Oppenheim, Yael Edan, Guy Shani
Abstract:
This paper presents an image analysis algorithm to detect and count yellow tomato flowers in a greenhouse with uneven illumination conditions, complex growth conditions and different flower sizes. The algorithm is designed to be employed on a drone that flies in greenhouses to accomplish several tasks such as pollination and yield estimation. Detecting the flowers can provide useful information for the farmer, such as the number of flowers in a row, and the number of flowers that were pollinated since the last visit to the row. The developed algorithm is designed to handle the real-world difficulties in a greenhouse, which include varying lighting conditions, shadowing, and occlusion, while considering the computational limitations of the simple processor in the drone. The algorithm identifies flowers using an adaptive global threshold, segmentation over the HSV color space, and morphological cues. The adaptive threshold divides the images into darker and lighter images; then, segmentation on hue, saturation and value is performed accordingly, and classification is done according to the size and location of the flowers. 1069 images of greenhouse tomato flowers were acquired in a commercial greenhouse in Israel, using two different RGB cameras: an LG G4 smartphone and a Canon PowerShot A590. The images were acquired from multiple angles and distances and were sampled manually at various periods along the day to obtain varying lighting conditions. Ground truth was created by manually tagging approximately 25,000 individual flowers in the images. Sensitivity analyses on the acquisition angle of the images, periods throughout the day, different cameras and thresholding types were performed. Precision, recall and their derived F1 score were calculated. Results indicate better performance for the view angle facing the flowers than for any other angle. Acquiring images in the afternoon resulted in the best precision and recall.
Applying a global adaptive threshold improved the median F1 score by 3%. Results showed no difference between the two cameras used. Using hue values of 0.12-0.18 in the segmentation process provided the best results in precision and recall, and the best F1 score. The precision and recall average for all the images when using these values was 74% and 75%, respectively, with an F1 score of 0.73. Further analysis showed a 5% increase in precision and recall when analyzing images acquired in the afternoon and from the front viewpoint. Keywords: agricultural engineering, image processing, computer vision, flower detection
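The hue-band segmentation reported to work best (hue 0.12-0.18) can be sketched as a simple HSV mask. The saturation/value floors and the toy pixels below are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def flower_mask(hsv, hue_range=(0.12, 0.18), min_sat=0.3, min_val=0.2):
    """hsv: H*W*3 array with all channels in [0, 1] (hue as a fraction of the
    color circle). Returns a boolean mask of pixels whose hue falls in the
    yellow band found to perform best, with loose saturation/value floors."""
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    return ((h >= hue_range[0]) & (h <= hue_range[1])
            & (s >= min_sat) & (v >= min_val))

# A 2x2 toy image: one yellow "flower" pixel, one green "foliage" pixel.
hsv = np.zeros((2, 2, 3))
hsv[0, 0] = (0.15, 0.8, 0.9)   # yellow: inside the 0.12-0.18 hue band
hsv[0, 1] = (0.33, 0.8, 0.9)   # green: outside the band
mask = flower_mask(hsv)
```

In the full pipeline this mask would be refined with the morphological cues and size/location classification the abstract describes.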
Procedia PDF Downloads 3312578 Machine Learning Approach for Automating Electronic Component Error Classification and Detection
Authors: Monica Racha, Siva Chandrasekaran, Alex Stojcevski
Abstract:
Engineering programs focus on promoting students' personal and professional development by ensuring that students acquire technical and professional competencies during their four-year studies. The traditional engineering laboratory provides an opportunity for students to "practice by doing," and laboratory facilities aid them in obtaining insight and understanding of their discipline. Due to rapid technological advancements and the COVID-19 outbreak, traditional labs have been transforming into virtual learning environments. Aim: To address the limitations of the physical laboratory, this research study uses a Machine Learning (ML) algorithm that interfaces with the Augmented Reality HoloLens and analyzes the captured images to classify and detect electronic components. The automated error classification and detection system detects and classifies the position of all components on a breadboard using the ML algorithm. This research will assist first-year undergraduate engineering students in conducting laboratory practices without any supervision. With the help of the HoloLens and the ML algorithm, students will reduce component placement errors on a breadboard and increase the efficiency of simple laboratory practices performed virtually. Method: Images of breadboards, resistors, capacitors, transistors, and other electrical components will be collected using the HoloLens 2 and stored in a database. The collected image dataset will then be used for training a machine learning model. The raw images will be cleaned, processed, and labeled to facilitate further analysis for component error classification and detection. For instance, when students conduct laboratory experiments, the HoloLens captures images of students placing different components on a breadboard; the images are forwarded to the server for detection in the background.
A hybrid Convolutional Neural Networks (CNNs) and Support Vector Machines (SVMs) algorithm will be used to train on the dataset for object recognition and classification: the convolution layers extract image features, which are then classified by a Support Vector Machine (SVM). With adequately labeled training data, the model will predict and categorize component placements and assess whether students place components correctly. The data acquired through the HoloLens thus includes images of students assembling electronic components, and the system constantly checks whether students position components appropriately on the breadboard and connect them so that the circuit functions. When students misplace any component, the HoloLens predicts the error before the user fixes the component in the incorrect position and encourages students to correct their mistakes. This hybrid CNN-SVM approach to automating electronic component error classification and detection eliminates component connection problems and minimizes the risk of component damage. Conclusion: These augmented reality smart glasses powered by machine learning provide a wide range of benefits to supervisors, professionals, and students. They help customize the learning experience, which is particularly beneficial in large classes with limited time, and they determine the accuracy with which machine learning algorithms can forecast whether students are making the correct decisions and completing their laboratory tasks. Keywords: augmented reality, machine learning, object recognition, virtual laboratories
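A minimal sketch of such a hybrid pipeline follows, with a crude mean-pooling stand-in for the CNN feature extractor feeding a linear SVM. The synthetic "images", the pooling scheme, and the brightness-based labels are assumptions for illustration, not the authors' model:

```python
import numpy as np
from sklearn.svm import SVC

def pooled_features(img, patch=4):
    """Downsample an H*W grayscale image by averaging non-overlapping
    patch*patch blocks, a crude stand-in for CNN feature maps."""
    h, w = img.shape
    img = img[:h - h % patch, :w - w % patch]
    blocks = img.reshape(img.shape[0] // patch, patch,
                         img.shape[1] // patch, patch)
    return blocks.mean(axis=(1, 3)).ravel()

rng = np.random.default_rng(0)
# Toy data: "component present" images are brighter than "empty" ones.
images = [rng.random((8, 8)) + (0.0 if i < 4 else 1.0) for i in range(8)]
labels = [0, 0, 0, 0, 1, 1, 1, 1]

X = np.array([pooled_features(im) for im in images])
clf = SVC(kernel="linear").fit(X, labels)   # SVM on the pooled features
pred = clf.predict(X)
```

In practice the features would come from a trained network's convolution layers rather than fixed pooling, but the division of labor (convolutional features, SVM decision) is the same.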
Procedia PDF Downloads 1372577 Pay Per Click Attribution: Effects on Direct Search Traffic and Purchases
Authors: Toni Raurich-Marcet, Joan Llonch-Andreu
Abstract:
This research focuses on the relationship between Search Engine Marketing (SEM) and traditional advertising. The dominant assumption is that SEM does not help brand awareness and only converts within the session, as if it were merely a cost of manufacturing the product being sold. The study is methodologically developed using an experiment designed to measure the billboard effect. The research allowed the cross-linking of theoretical and empirical knowledge on digital marketing. By measuring brand awareness and its improvements, this paper validates that SEM generates brand retention as traditional advertising would. This changes the way performance and brand campaigns are split within marketing departments, effectively rebalancing budgets moving forward. Keywords: attribution, performance marketing, SEM, marketplaces
Procedia PDF Downloads 1322576 Celebrity Culture and Social Role of Celebrities in Türkiye during the 1990s: The Case of Türkiye, Newspaper, Radio, Televison (TGRT) Channel
Authors: Yelda Yenel, Orkut Acele
Abstract:
In a media-saturated world, celebrities have become ubiquitous figures, encountered both in public spaces and within the privacy of our homes, seamlessly integrating into daily life. From Alexander the Great to contemporary media personalities, the image of the celebrity has persisted throughout history, manifesting in various forms and contexts. Over time, as the relationship between society and the market evolved, so too did the roles and behaviors of celebrities. These transformations offer insights into the cultural climate, revealing shifts in habits and worldviews. In Türkiye, the emergence of private television channels brought an influx of celebrities into everyday life, making them a pervasive part of daily routines. To understand modern celebrity culture, it is essential to examine the ideological functions of media within political, economic, and social contexts. Within this framework, celebrities serve as both reflections and creators of cultural values and, at times, act as intermediaries, offering insights into the society of their era. Starting its broadcasting life in 1992 with religious films and religious conversation programs, the Türkiye Newspaper, Radio, Television channel (TGRT) later changed its appearance, slogan, and the celebrities it featured in response to the political atmosphere. Celebrities played a critical role in the transformation from the existing slogan 'Peace has come to the screen' to 'Watch and see what will happen'. Celebrities hold significant roles in society, and their images are produced and circulated by various actors, including media organizations and public relations teams; understanding these dynamics is crucial for analyzing their influence and impact. This study aims to explore Turkish society in the 1990s, focusing on TGRT and its visual and discursive characteristics regarding celebrity figures such as Seda Sayan.
The first section examines the historical development of celebrity culture and its transformations, guided by the conceptual framework of celebrity studies. The complex and interconnected image of celebrity, as introduced by post-structuralist approaches, plays a fundamental role in making sense of existing relationships. This section traces the existence and functions of celebrities from antiquity to the present day. The second section explores the economic, social, and cultural contexts of 1990s Türkiye, focusing on the media landscape and visibility that became prominent in the neoliberal era following the 1980s. This section also discusses the political factors underlying TGRT's transformation, such as the 1997 military memorandum. The third section analyzes TGRT as a case study, focusing on its significance as an Islamic television channel and the shifts in its public image, categorized into two distinct periods. The channel's programming, which aligned with Islamic teachings, and the celebrities who featured prominently during these periods became the public face of both TGRT and the broader society. In particular, the transition to a more 'secular' format during TGRT's second phase is analyzed, focusing on changes in celebrity attire and program formats. This study reveals that celebrities are used as indicators of ideology, benefiting from this instrumentalization by enhancing their own fame and reflecting the prevailing cultural hegemony in society. Keywords: celebrity culture, media, neoliberalism, TGRT
Procedia PDF Downloads 342575 IoT Continuous Monitoring Biochemical Oxygen Demand Wastewater Effluent Quality: Machine Learning Algorithms
Authors: Sergio Celaschi, Henrique Canavarro de Alencar, Claaudecir Biazoli
Abstract:
Effluent quality is of the highest priority for compliance with the permit limits of environmental protection agencies and ensures the protection of the local water system. Of the pollutants monitored, biochemical oxygen demand (BOD) poses one of the greatest challenges: BOD5 results from the lab take 7 to 8 days of analysis, hindering a wastewater treatment plant's (WWTP's) ability to react to different situations and meet treatment goals. Reducing BOD turnaround time from days to hours is our quest. This work presents a solution based on a system of two BOD bioreactors associated with Digital Twin (DT) and Machine Learning (ML) methodologies via an Internet of Things (IoT) platform, used to monitor and control a WWTP and support decision making. A DT is a virtual and dynamic replica of a production process. It requires the ability to collect and store real-time sensor data related to the operating environment; furthermore, it integrates and organizes the data on a digital platform and applies analytical models, allowing a deeper understanding of the real process so that anomalies are caught sooner. In our system for continuous-time monitoring of the BOD suppressed by the effluent treatment process, the DT algorithm for analyzing the data uses ML on a parameterized chemical kinetic model. The continuous BOD monitoring system, capable of providing results in a fraction of the time required by BOD5 analysis, is composed of two thermally isolated batch bioreactors.
Each bioreactor contains input/output access to the wastewater sample (influent and effluent); hydraulic conduction tubes, pumps, and valves for the batch sample and dilution water; an air supply for dissolved oxygen (DO) saturation; a cooler/heater for sample thermal stability; an optical DO sensor based on fluorescence quenching; pH, ORP, temperature, and atmospheric pressure sensors; and a local PLC/CPU with a TCP/IP data transmission interface. The dynamic BOD system monitoring range covers 2 mg/L < BOD < 2,000 mg/L. In addition to the BOD monitoring system, there are many other operational WWTP sensors. The CPU data is transmitted to and received from the digital platform, which in turn performs analyses at periodic intervals, aiming to feed the learning process. BOD bulletins and their credibility intervals are made available to web users at 12-hour intervals. The chemical kinetics ML algorithm is composed of a coupled system of four first-order ordinary differential equations for the molar masses of DO, the organic material present in the sample, biomass, and the products (CO₂ and H₂O) of the reaction. This system is solved numerically from its initial conditions: DO saturated, and the products of the kinetic oxidation process initially zero (CO₂ = H₂O = 0). The initial values for organic matter and biomass are estimated by minimizing the mean square deviations. A real case of continuous monitoring of BOD wastewater effluent quality is being conducted by deploying an IoT application on a large wastewater purification system located in S. Paulo, Brazil. Keywords: effluent treatment, biochemical oxygen demand, continuous monitoring, IoT, machine learning
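The coupled first-order kinetics described above can be sketched as follows. The rate constant, oxygen/yield coefficients, and initial conditions are illustrative assumptions, not the system's calibrated parameters:

```python
# Hedged sketch of a four-ODE BOD kinetic model: first-order oxidation of
# organic matter S consumes dissolved oxygen DO, grows biomass X, and forms
# products P (CO2 + H2O). k (1/day), a, and y are illustrative values.
import numpy as np
from scipy.integrate import solve_ivp

def bod_kinetics(t, state, k=0.23, a=1.0, y=0.4):
    do, s, x, p = state          # DO, organic matter, biomass, products
    r = k * s                    # first-order oxidation rate
    return [-a * r,              # DO consumed
            -r,                  # organic matter oxidized
            y * r,               # biomass yield
            (1 - y) * r]         # products formed

do_sat, s0 = 9.0, 5.0            # mg/L: saturated DO, initial organic load
sol = solve_ivp(bod_kinetics, (0, 5), [do_sat, s0, 0.1, 0.0],
                t_eval=np.linspace(0, 5, 50))

# Oxygen consumed so far = BOD exerted; approaches a*s0 as t grows.
bod_exerted = do_sat - sol.y[0]
```

Fitting k and the initial organic-matter/biomass values to the measured DO curve by least squares is the step the abstract assigns to the ML layer.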
Procedia PDF Downloads 762574 Motion Effects of Arabic Typography on Screen-Based Media
Authors: Ibrahim Hassan
Abstract:
Motion typography is one of the most important types of display-based visual communication. Through digital display media, we can control the properties of text (size, direction, thickness, color, etc.). The use of motion typography in visual communication has produced several distinct forms, so the terminology needs to be adjusted and the differences between them clarified; relying on the term motion typography (considered a general term) is not enough to separate the different communicative functions of moving text. In this paper, we discuss the different effects of motion typography on Arabic writing and how harmony can be achieved between the movement and the letterform, and during our experiments we present a new type of text movement. Keywords: Arabic typography, motion typography, kinetic typography, fluid typography, temporal typography
Procedia PDF Downloads 1642573 Application of Low-order Modeling Techniques and Neural-Network Based Models for System Identification
Authors: Venkatesh Pulletikurthi, Karthik B. Ariyur, Luciano Castillo
Abstract:
System identification from turbulent wakes can provide a tactical advantage: it allows one to prepare for, and predict the trajectory of, an opponent's movements. A low-order modeling technique, proper orthogonal decomposition (POD), is used to predict the object from its wake pattern and is compared with a pre-trained image recognition neural network (NN) that classifies the wake patterns into objects. It is demonstrated that the low-order POD model predicts the objects better than the pre-trained NN, by ~30%. Keywords: bluff body wakes, low-order modeling, neural network, system identification
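The POD step on a snapshot matrix can be sketched via the singular value decomposition: each column is one flow-field snapshot, and the SVD yields spatial modes ranked by captured energy. The synthetic wake-like data below are an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 2 * np.pi, 40)    # 40 snapshots in time
x = np.linspace(0, 1, 100)           # 100 spatial points

# Two coherent structures plus weak noise, mimicking a wake's dominant modes.
snapshots = (np.outer(np.sin(2 * np.pi * x), np.sin(t))
             + 0.3 * np.outer(np.sin(4 * np.pi * x), np.cos(3 * t))
             + 0.01 * rng.standard_normal((100, 40)))

# POD modes are the left singular vectors; squared singular values give the
# energy captured by each mode.
modes, sing_vals, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = sing_vals**2 / np.sum(sing_vals**2)
```

Truncating to the few highest-energy modes yields the low-order model whose coefficients can then identify the wake-generating object.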
Procedia PDF Downloads 1852572 Cooperative Learning Promotes Successful Learning. A Qualitative Study to Analyze Factors that Promote Interaction and Cooperation among Students in Blended Learning Environments
Authors: Pia Kastl
Abstract:
Potentials of blended learning are the flexibility of learning and the possibility to get in touch with lecturers and fellow students on site. By combining face-to-face sessions with digital self-learning units, the learning process can be optimized and learning success increased. To examine whether blended learning outperforms online and face-to-face teaching, a theory-based questionnaire survey was conducted. The results show that interaction and cooperation among students are poorly supported in blended learning, and face-to-face teaching performs better in this respect. The aim of this article is to identify concrete suggestions students have for improving cooperation and interaction in blended learning courses. For this purpose, interviews were conducted with students from various academic disciplines in face-to-face, online, or blended learning courses (N = 60). The questions referred to opinions and suggestions for improvement regarding the course design of the respective learning environment. The analysis was carried out by qualitative content analysis. The results show that students perceive the interaction as beneficial to their learning: they verbalize their knowledge and are exposed to different perspectives. In addition, emotional support is particularly important in exam phases. Interaction and cooperation were primarily enabled in the face-to-face component of the courses studied, while there was very limited contact with fellow students in the asynchronous component. The forums offered were hardly used, or not used at all, because the barrier to asking a question publicly is too high, and students prefer private channels for communication. This is accompanied by the disadvantage that interaction occurs only among people who already know each other; creating new contacts is not fostered in the blended learning courses.
Students see optimization as a task for the lecturers in the face-to-face sessions: here, interaction and cooperation should be encouraged through get-to-know-you rounds or group work. It is important to group the participants randomly so that they establish contact with new people. In addition, sufficient time for interaction is desired in the lecture, e.g., in the context of discussions or partner work. In the digital component, students prefer synchronous exchange at a fixed time, for example, in breakout rooms or an MS Teams channel. The results provide an overview of how interaction and cooperation can be implemented in blended learning courses. Positive design possibilities are partly dependent on subject area and course. Future studies could tie in here with a course-specific analysis. Keywords: blended learning, higher education, hybrid teaching, qualitative research, student learning
Procedia PDF Downloads 722571 The Ethics of Documentary Filmmaking Discuss the Ethical Considerations and Responsibilities of Documentary Filmmakers When Portraying Real-life Events and Subjects
Authors: Batatunde Kolawole
Abstract:
Documentary filmmaking stands as a distinctive medium within the cinematic realm, commanding a unique responsibility: the portrayal of real-life events and subjects. This research delves into the profound ethical considerations and responsibilities that documentary filmmakers shoulder as they embark on the quest to unveil truth and weave compelling narratives. In this exploration, a comprehensive review of ethical frameworks and real-world case studies illuminates the intricate web of challenges that documentarians confront. These challenges encompass an array of ethical intricacies, from securing informed consent to safeguarding privacy, maintaining unwavering objectivity, and sidestepping the snares of narrative manipulation when crafting stories from reality. The study also dissects the contemporary ethical terrain, acknowledging the emergence of novel dilemmas in the digital age, such as deepfakes and digital alterations. Through a meticulous analysis of ethical quandaries faced by distinguished documentary filmmakers and their strategies for ethical navigation, this study offers invaluable insights into the evolving role of documentaries in molding public discourse. It underscores the indispensable significance of transparency, integrity, and an indomitable commitment to encapsulating the intricacies of reality within the realm of ethical documentary filmmaking. In a world increasingly reliant on visual narratives, an understanding of the subtle ethical dimensions of documentary filmmaking holds relevance not only for those behind the camera but also for the diverse audiences who engage with and interpret the realities unveiled on screen. This research stands as a rigorous examination of the moral compass that steers this potent form of cinematic expression.
It emphasizes the capacity of ethical documentary filmmaking to enlighten, challenge, and inspire, all while unwaveringly upholding the core principles of truthfulness and respect for the human subjects under scrutiny. Through this holistic analysis, the study illuminates the enduring significance of upholding ethical integrity while uncovering the truths that shape our world. Ethical documentary filmmaking, as exemplified by "Rape" and countless other powerful narratives, serves as a testament to the enduring potential of cinema to inform, challenge, and drive meaningful societal discourse. Keywords: filmmaking, documentary, human rights, film
Procedia PDF Downloads 682570 Community Observatory for Territorial Information Control and Management
Authors: A. Olivi, P. Reyes Cabrera
Abstract:
Ageing and urbanization are two of the main trends that characterize the twenty-first century, and both are especially accelerated in the emerging countries of Asia and Latin America. Chile is one of the countries in the Latin American region where the demographic transition to ageing is becoming increasingly visible. The challenges that the new demographic scenario poses to urban administrators call for innovative solutions that maximize the functional and psycho-social benefits derived from the relationship between older people and the environment in which they live. Although mobility is central to people's everyday practices and social relationships, it is not distributed equitably; on the contrary, it can be considered another factor of inequality in our cities, and older people are a group particularly sensitive and vulnerable in terms of mobility. In this context, based on the ageing-in-place strategy and following a social innovation approach within a spatial context, the "Community Observatory of Territorial Information Control and Management" project aims at the collective search for, and validation of, solutions for satisfying the specific mobility and accessibility needs of older urban people. Specifically, the Observatory intends to: i) promote the direct participation of the aged population in order to generate relevant information on the territorial situation and the satisfaction of the mobility needs of this group; ii) co-create dynamic and efficient mechanisms for reporting and updating territorial information; iii) increase the capacity of the local administration to plan and manage solutions to environmental problems at the neighborhood scale. Based on a participatory mapping methodology and the application of digital technology, the Observatory designed and developed, together with aged people, a crowdsourcing smartphone platform, called DIMEapp, for reporting environmental problems affecting mobility and accessibility.
DIMEapp has been tested at the prototype level in two neighborhoods of the city of Valparaiso. The results achieved in the testing phase have shown high potential to i) contribute to establishing coordination mechanisms between the local government and the local community; and ii) improve a local governance system that guides and regulates the allocation of goods and services destined to solve those problems. Keywords: accessibility, ageing, city, digital technology, local governance
Procedia PDF Downloads 1332569 OCR/ICR Text Recognition Using ABBYY FineReader as an Example Text
Authors: A. R. Bagirzade, A. Sh. Najafova, S. M. Yessirkepova, E. S. Albert
Abstract:
This article describes a text recognition method based on Optical Character Recognition (OCR); the features of the OCR method were examined using the ABBYY FineReader program. OCR performs automatic text recognition in images and is necessary because optical input devices can deliver only raster graphics. Text recognition is the task of identifying letters depicted as images and assigning each an numerical value in accordance with a standard text encoding (ASCII, Unicode). Using ABBYY FineReader as the example, the authors confirmed and showed in practice the improvement of digital text recognition platforms developed for electronic publication. Keywords: ABBYY FineReader system, algorithm symbol recognition, OCR/ICR techniques, recognition technologies
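The core matching step inside an OCR engine, comparing a rasterized glyph against stored templates and emitting the matched character's code point, can be illustrated with a toy example. Real engines such as ABBYY FineReader or Tesseract use far richer feature models; the 3x3 bitmaps here are purely illustrative:

```python
import numpy as np

# Toy glyph templates: 3x3 binary bitmaps for three letters.
TEMPLATES = {
    "I": np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]]),
    "L": np.array([[1, 0, 0], [1, 0, 0], [1, 1, 1]]),
    "T": np.array([[1, 1, 1], [0, 1, 0], [0, 1, 0]]),
}

def recognize(glyph):
    """Return (character, Unicode code point) of the closest template,
    scored by the number of differing pixels."""
    best = min(TEMPLATES, key=lambda c: int(np.abs(TEMPLATES[c] - glyph).sum()))
    return best, ord(best)

noisy_T = np.array([[1, 1, 1], [0, 1, 0], [0, 0, 0]])  # one pixel missing
char, codepoint = recognize(noisy_T)
```

The final assignment of a numerical value per recognized letter is exactly the encoding step (ASCII/Unicode) the abstract describes.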
Procedia PDF Downloads 1712568 Learning Resources as Determinants for Improving Teaching and Learning Process in Nigerian Universities
Authors: Abdulmutallib U. Baraya, Aishatu M. Chadi, Zainab A. Aliyu, Agatha Samson
Abstract:
Learning resources is the field of study that investigates the process of analyzing, designing, developing, implementing, and evaluating learning materials, learners, and the learning process in order to improve teaching and learning in university-level education, which is essential for empowering students and the various sectors of Nigeria's economy to succeed in a fast-changing global economy. An innovation of the information age of the 21st century is the use of educational technologies in the classroom for instructional delivery; it involves the use of appropriate educational technologies, like smart boards, computers, projectors and other projected materials, to facilitate learning and improve performance. The study examined learning resources as determinants for improving the teaching and learning process in Abubakar Tafawa Balewa University (ATBU), Bauchi, Bauchi State, Nigeria. Three objectives, three research questions and three null hypotheses guided the study, which adopted a survey research design. The population of the study was 880 lecturers. A sample of 260 was obtained using the Research Advisors' table for determining sample size, and 250 of the sample were proportionately selected from the seven faculties. The instrument used for data collection was a structured questionnaire, which was validated by two experts; its reliability stood at 0.81, which is acceptable. The researchers, assisted by six research assistants, distributed and collected the questionnaire with a 75% return rate. Data were analyzed using means and standard deviations to answer the research questions, whereas simple linear regression was used to test the null hypotheses at a 0.05 level of significance. The findings revealed that physical facilities and digital technology tools significantly improved the teaching and learning process, whereas consumables, supplies and equipment did not significantly improve it in the faculties.
It was recommended that lecturers in the various faculties strengthen and sustain the use of digital technology tools and continue to properly maintain the available physical facilities. The university management should also, as a matter of priority, continue to adequately fund and frequently upgrade equipment, consumables and supplies to enhance the effectiveness of the teaching and learning process. Keywords: education, facilities, learning-resources, technology-tools
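The hypothesis test named above, simple linear regression at the 0.05 significance level, can be sketched as follows. The scores are synthetic stand-ins, not the survey data:

```python
# Hedged sketch: regress a teaching-process score on a learning-resource
# predictor and compare the slope's p-value to the 0.05 level.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
facility_use = rng.uniform(1, 5, 60)       # e.g. Likert-scale item means
teaching_score = 1.2 * facility_use + rng.normal(0, 0.5, 60)

res = stats.linregress(facility_use, teaching_score)
# Null hypothesis: no linear relationship (slope = 0).
reject_null = res.pvalue < 0.05
```

A rejected null at 0.05 corresponds to the study's conclusion that a resource "significantly improved" the teaching and learning process.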
Procedia PDF Downloads 262567 Post 2014 Afghanistan and Its Implications on Pakistan
Authors: Naad-E-Ali Sulehria
Abstract:
This paper unfolds the facts and findings of the Afghan scenario, particularly its implications for Pakistan. The post-2014 withdrawal of US and ISAF combat forces from Afghan soil is currently one of the most pressing issues among analysts of international relations. The paper moves from the current situation of Afghanistan to its future prospects, discussing the elements shaping Afghanistan's internal dynamics as well as the exploitation of its resources by other states and non-state actors, and the reasons behind such a paradigm shift in US foreign policy are examined with first-hand knowledge. The core areas of discussion are: 'What is the current image of Afghanistan in today's world?', 'What are its future prospects?', and 'What sort of Afghanistan does Pakistan foresee?' Keywords: Afghanistan, Pakistan, new great game, Taliban
Procedia PDF Downloads 301
2566 Impact of Non-Parental Early Childhood Education on Digital Friendship Tendency
Authors: Sheel Chakraborty
Abstract:
Modern society in developed countries has distanced itself from the earlier norm of joint-family living, and with increasing economic pressure, parents' availability for their children during the infant years has consistently decreased over the past three decades. During the same period, the pre-primary education system, built mainly on the developmental psychology frameworks of Jean Piaget and Lev Vygotsky, has been promoted in the US through legislation and funding. Early care and education may have a positive impact on young minds, but the growing number of children who face social challenges in forming friendships in their teenage years raises serious concerns about its effectiveness. The survey-based primary research presented here shows that a statistically significant number of millennials between the ages of 10 and 25 prefer to build friendships virtually rather than through face-to-face interaction. Moreover, many teenagers depend more on virtual friends whom they have never met. Contrary to the belief that early social interaction in a non-home setup makes children confident and better prepared for the real world, many shy-natured children seem to develop a sense of shakiness in forming social relationships, resulting in loneliness by the time they are young adults. Reflecting on George Mead’s theory of the self as made up of “I” and “Me”, most functioning homes provide the freedom and forgiving, congenial environment required for building a toddler's "I"; daycare or preschool can barely match that. The social images created from the expectations perceived by a preschooler's “Me" in a non-home setting may interfere with and greatly overpower the formation of a confident "I", creating a crisis around the inability to form friendships face to face as these children grow older.
Though the pervasive nature of social media cannot be ignored, the non-parental early care and education practices adopted largely by the urban population have created a favorable platform of teen psychology on which social media's popularity thrived, especially by providing refuge to shy Gen-Z teenagers. This may explain why young adults today perceive social media as their preferred outlet of expression and a place to form dependable friendships, despite the risk of being cyberbullied.
Keywords: digital socialization, shyness, developmental psychology, friendship, early education
Procedia PDF Downloads 129